Compare commits

...

92 Commits

Author SHA1 Message Date
David Peter
36ba9707ca Remove TODO message 2024-11-27 13:19:23 +01:00
David Peter
4aba390838 [red-knot] No panic for illegal self-referential f-string annotations 2024-11-27 10:33:22 +01:00
Lokejoke
82c01aa662 [pylint] Implement len-test (PLC1802) (#14309)
## Summary

This PR implements [`use-implicit-booleaness-not-len` /
`C1802`](https://pylint.pycqa.org/en/latest/user_guide/messages/convention/use-implicit-booleaness-not-len.html)
> For sequences, (strings, lists, tuples), use the fact that empty
sequences are false.
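
For illustration, a minimal sketch of the pattern the rule targets (hedged; the names below are made up and not taken from the PR's fixtures):

```py
fruits = ["apple", "orange"]

if len(fruits):  # PLC1802: `len(fruits)` used as an implicit truth value
    ...

if fruits:  # preferred: empty sequences are already falsy
    ...
```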

---------

Co-authored-by: xbrtnik1 <524841@mail.muni.cz>
Co-authored-by: xbrtnik1 <xbrtnik1@mail.muni.cz>
2024-11-26 13:30:17 -06:00
Brent Westbrook
9f446faa6c [pyflakes] Avoid false positives in @no_type_check contexts (F821, F722) (#14615) 2024-11-26 19:13:43 +00:00
David Peter
b94d6cf567 [red-knot] Fix panic related to f-strings in annotations (#14613)
## Summary

Fix panics related to expressions without inferred types in invalid
syntax examples like:
```py
x: f"Literal[{1 + 2}]" = 3
```
where the `1 + 2` expression (and its sub-expressions) inside the
annotation did not have an inferred type.

## Test Plan

Added new corpus test.
2024-11-26 16:35:44 +01:00
David Peter
cd0c97211c [red-knot] Update KNOWN_FAILURES (#14612)
## Summary

Remove entry that was previously fixed in
5a30ec0df6.

## Test Plan

```sh
cargo test -p red_knot_workspace -- --ignored linter_af linter_gz
```
2024-11-26 15:56:42 +01:00
David Peter
0e71c9e3bb [red-knot] Fix unit tests in release mode (#14604)
## Summary

This is about the easiest patch that I can think of. It has a drawback
in that there is no real guarantee this won't happen again. I think this
might be acceptable, given that all of this is a temporary thing.

We also add a new CI job to prevent regressions like this in the
future.

For the record though, I'm listing alternative approaches I thought of:

- We could get rid of the debug/release distinction and just add `@Todo`
type metadata everywhere. This has possible effects on runtime. The main
reason I didn't follow through with this is that the size of `Type`
increases. We would either have to adapt the `assert_eq_size!` test or
get rid of it. Even if we add messages everywhere and get rid of the
file-and-line-variant in the enum, it's not enough to get back to the
current release-mode size of `Type`.
- We could generally discard `@Todo` meta information when using it in
tests. I think this would be a huge drawback. I like that we can have
the actual messages in the mdtest and make sure we get the expected
`@Todo` type, not just any `@Todo`. It's also helpful when debugging
tests.

closes #14594

## Test Plan

```sh
cargo nextest run --release
```
2024-11-26 15:40:02 +01:00
Dylan
24c90d6953 [pylint] Do not wrap function calls in parentheses in the fix for unnecessary-dunder-call (PLC2801) (#14601) 2024-11-26 06:47:01 -06:00
Tzu-ping Chung
fbff4dec3a [airflow] Avoid implicit DAG schedule (AIR301) (#14581) 2024-11-26 13:38:18 +01:00
Dhruv Manilawala
f3dac27e9a Fix f-string formatting in assignment statement (#14454)
## Summary

fixes: #13813

This PR fixes a bug in formatting an assignment statement when the
value is an f-string.

This is resolved by using custom best fit layouts if the f-string is (a)
not already a flat f-string (thus, cannot be multiline) and (b) is not a
multiline string (thus, cannot be flattened). So, it is used in cases
like the following:
```py
aaaaaaaaaaaaaaaaaa = f"testeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee{
    expression}moreeeeeeeeeeeeeeeee"
```
Which is (a) `FStringLayout::Multiline` and (b) not a multiline.

There are various other examples in the PR diff along with additional
explanation and context as code comments.

## Test Plan

Add multiple test cases for various scenarios.
2024-11-26 15:07:18 +05:30
Simon Brugman
e4cefd9bf9 Extend test cases for flake8-pyi (#14280) 2024-11-26 09:10:38 +01:00
Lokejoke
9e4ee98109 [ruff] Implement invalid-assert-message-literal-argument (RUF040) (#14488)
## Summary

This PR implements a new rule discussed
[here](https://github.com/astral-sh/ruff/discussions/14449).
In short, it searches for assert messages which were unintentionally
used as an expression to be matched against.

## Test Plan

`cargo test` and review of `ruff-ecosystem`
2024-11-25 17:41:07 -06:00
Shaygan Hooshyari
557d583e32 Support typing.NoReturn and typing.Never (#14559)
Fix #14558 
## Summary

- Add `typing.NoReturn` and `typing.Never` to known instances and infer
them as `Type::Never`
- Add `is_assignable_to` cases for `Type::Never`

I skipped emitting a diagnostic for the case where a function is annotated as
`NoReturn` but actually returns.
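
For context, a minimal hedged sketch of the annotations this covers (illustrative only; the names below are not from the added tests):

```py
from typing import NoReturn, Never

def stop() -> NoReturn:  # the `NoReturn` annotation is now inferred as `Type::Never`
    raise RuntimeError

def consume(x: Never) -> None:  # `Never` is the bottom type: assignable to every other type
    ...
```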

## Test Plan

Added tests from
https://github.com/python/typing/blob/main/conformance/tests/specialtypes_never.py,
except for generics and checking whether the return value of the function
matches the annotation.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
2024-11-25 21:37:55 +00:00
cake-monotone
f98eebdbab [red-knot] Fix Leaking Narrowing Constraint in ast::ExprIf (#14590)
## Summary

Closes #14588


```py
x: Literal[42, "hello"] = 42 if bool_instance() else "hello"
reveal_type(x)  # revealed: Literal[42] | Literal["hello"]

_ = ... if isinstance(x, str) else ...

# The `isinstance` test incorrectly narrows the type of `x`.
# As a result, `x` is revealed as Literal["hello"], but it should remain Literal[42, "hello"].
reveal_type(x)  # revealed: Literal["hello"]
```

## Test Plan
mdtest included!

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-11-25 10:36:37 -08:00
Simon Brugman
c606bf014e [flake8-pyi] Improve autofix safety for redundant-none-literal (PYI061) (#14583)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-11-25 17:40:57 +00:00
Simon Brugman
e8fce20736 [ruff] Improve autofix safety for never-union (RUF020) (#14589) 2024-11-25 18:35:07 +01:00
Dhruv Manilawala
5a30ec0df6 Avoid inferring invalid expr types for string annotation (#14447)
## Summary

fixes: #14440

## Test Plan

Add a test case with all the invalid expressions in a string annotation
context.
2024-11-25 21:27:03 +05:30
Alex Waygood
fab1b0d546 fuzz-parser: catch exceptions from pysource-minimize (#14586) 2024-11-25 15:14:01 +00:00
Connor Skees
66abef433b red-knot: adapt fuzz-parser to work with red-knot (#14566)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-11-25 13:12:28 +00:00
Harutaka Kawamura
fa22bd604a Fix pytest.mark.parametrize rules to check calls instead of decorators (#14515)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-11-25 13:55:18 +01:00
Dhruv Manilawala
0c9165fc3a Use Result for failed text document retrieval in LSP requests (#14579)
## Summary

Ref:
https://github.com/astral-sh/ruff-vscode/issues/644#issuecomment-2496588452

## Test Plan

Not sure how to test this as this is mainly to get more context on the
panic that the server is raising.
2024-11-25 15:14:30 +05:30
renovate[bot]
9f6147490b Update NPM Development dependencies (#14577)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:54:51 +01:00
renovate[bot]
b7571c3e24 Update Rust crate syn to v2.0.89 (#14573)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 07:46:06 +00:00
renovate[bot]
d178d115f3 Update dependency mdformat to v0.7.19 (#14576)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:40:53 +01:00
renovate[bot]
6501782678 Update Rust crate libcst to v1.5.1 (#14570)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:39:22 +01:00
renovate[bot]
bca4341dcc Update Rust crate hashbrown to v0.15.2 (#14569)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:38:34 +01:00
renovate[bot]
31ede11774 Update Rust crate quick-junit to v0.5.1 (#14572)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:38:12 +01:00
renovate[bot]
ba9f881687 Update Rust crate proc-macro2 to v1.0.92 (#14571)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:38:00 +01:00
renovate[bot]
4357a0a3c2 Update Rust crate unicode-ident to v1.0.14 (#14574)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:36:50 +01:00
renovate[bot]
c18afa93b3 Update Rust crate url to v2.5.4 (#14575)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:36:28 +01:00
renovate[bot]
8f04202ee4 Update Rust crate dir-test to 0.4.0 (#14578)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-25 08:36:06 +01:00
Dhruv Manilawala
efe54081d6 Remove FormatFStringPart (#14448)
## Summary

This is just a small refactor to remove `FormatFStringPart`, as it's
only used when the f-string is not implicitly concatenated, in which
case the only part is going to be `FString`. For implicitly concatenated
f-strings, we use `StringLike` instead.
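
A hedged illustration of the distinction this relies on (the snippet is made up, not taken from the PR):

```py
name = "world"

single = f"hello {name}"       # a single f-string: the only part is an `FString`
joined = f"hello " f"{name}!"  # implicit concatenation: handled via `StringLike` instead
```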
2024-11-25 10:29:22 +05:30
Alex Waygood
ac23c99744 [ruff] Mark fixes for unsorted-dunder-all and unsorted-dunder-slots as unsafe when there are complex comments in the sequence (RUF022, RUF023) (#14560) 2024-11-24 12:49:29 +00:00
InSync
e5c7d87461 Add @astropy/astropy to ecosystem checks (#14565) 2024-11-24 12:47:11 +01:00
Charlie Marsh
de62e39eba Use truthiness check in auto_attribs detection (#14562) 2024-11-23 22:06:10 -05:00
InSync
d285717da8 [ruff] Handle attrs's auto_attribs correctly (RUF009) (#14520)
## Summary

Resolves #14519.

## Test Plan

`cargo nextest run` and `cargo insta test`.
2024-11-23 21:46:38 -05:00
InSync
545e9deba3 [flake8-builtins] Exempt private built-in modules (A005) (#14505)
## Summary

Resolves #12949.

## Test Plan

`cargo nextest run` and `cargo insta test`.
2024-11-23 21:39:04 -05:00
Harutaka Kawamura
e3d792605f [flake8-bugbear] Fix mutable-contextvar-default (B039) to resolve annotated function calls properly (#14532)
## Summary

<!-- What's the purpose of the change? What does it do, and why? -->

Fix #14525

## Test Plan

<!-- How was it tested? -->

New test cases

---------

Signed-off-by: harupy <hkawamura0130@gmail.com>
2024-11-23 21:29:25 -05:00
Harutaka Kawamura
1f303a5eb6 Simplify flake8_pytest_style::rules::fail_call implementation (#14556) 2024-11-23 15:14:28 +01:00
Nikolas Hearp
07d13c6b4a [B028-doc-update] Update documentation for B028 (#14338)
## Summary
Resolves #14289
The documentation for B028 (`no_explicit_stacklevel`) is updated to be
clearer.

---------

Co-authored-by: dylwil3 <dylwil3@gmail.com>
2024-11-23 07:45:28 +00:00
Dylan
e1838aac29 Ignore more rules for stub files (#14541)
This PR causes the following rules to ignore stub files, on the grounds
that appeasing these lints is not within the stub author's control:

- `PLR0904` https://docs.astral.sh/ruff/rules/too-many-public-methods/
- `PLR0913` https://docs.astral.sh/ruff/rules/too-many-arguments/
- `PLR0917`
https://docs.astral.sh/ruff/rules/too-many-positional-arguments/
- `PLW3201` https://docs.astral.sh/ruff/rules/bad-dunder-method-name/
- `SLOT` https://docs.astral.sh/ruff/rules/#flake8-slots-slot
- `FBT` https://docs.astral.sh/ruff/rules/#flake8-boolean-trap-fbt
(except for FBT003 since that involves a function call.)

Progress towards #14535
2024-11-23 07:41:10 +00:00
Carl Meyer
4ba847f250 [red-knot] remove wrong typevar attribute implementations (#14540) 2024-11-22 13:17:16 -08:00
renovate[bot]
13e9fc9362 Update dependency smol-toml to v1.3.1 (#14542)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-22 21:16:05 +00:00
Dylan
3fda2d17c7 [ruff] Auto-add r prefix when string has no backslashes for unraw-re-pattern (RUF039) (#14536)
This PR adds a sometimes-available, safe autofix for [unraw-re-pattern
(RUF039)](https://docs.astral.sh/ruff/rules/unraw-re-pattern/#unraw-re-pattern-ruf039),
which prepends an `r` prefix. It is used only when the string in
question has no backslashes (and also does not have a `u` prefix, since
that would cause a syntax error).
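
For illustration, a minimal sketch of the fix (assuming a typical `re` call; not taken from the test fixture):

```py
import re

pattern = re.compile("colou?r")   # RUF039: regex pattern is not a raw string
pattern = re.compile(r"colou?r")  # after the autofix: `r` prefix prepended (no backslashes, so safe)
```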

Closes #14527

Notes: 
- Test fixture unchanged, but snapshot changed to include fix messages.
- This fix is automatically preview-only, since the rule itself is in
preview
2024-11-22 15:09:53 -06:00
Harutaka Kawamura
931fa06d85 Extend invalid-envvar-default (PLW1508) to detect os.environ.get (#14512) 2024-11-22 19:13:58 +00:00
Micha Reiser
e53ac7985d Enable logging for directory-renamed test (#14533) 2024-11-22 16:41:46 +00:00
David Salvisberg
e25e7044ba [flake8-type-checking] Adds implementation for TC006 (#14511)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-11-22 15:22:59 +01:00
Micha Reiser
b80de52592 Consider quotes inside format-specs when choosing the quotes for an f-string (#14493) 2024-11-22 12:43:53 +00:00
Alex Waygood
2917534279 Fix broken link to PYI063 (#14526) 2024-11-22 12:27:52 +00:00
David Peter
f6b2cd5588 [red-knot] Semantic index: handle invalid breaks (#14522)
## Summary

This fix addresses panics related to invalid syntax like the following
where a `break` statement is used in a nested definition inside a
loop:

```py
while True:

    def b():
        x: int

        break
```

closes #14342

## Test Plan

* New corpus regression tests.
* New unit test to make sure we handle nested while loops correctly.
This test is passing on `main`, but can easily fail if the
`is_inside_loop` state isn't properly saved/restored.
2024-11-22 13:13:55 +01:00
Micha Reiser
302fe76c2b Fix unnecessary space around power op in overlong f-string expressions (#14489) 2024-11-22 13:01:22 +01:00
David Peter
a90e404c3f [red-knot] PEP 695 type aliases (#14357)
## Summary

Add support for (non-generic) type aliases. The main motivation behind
this was to get rid of panics involving expressions in (generic) type
aliases. But it turned out the best way to fix it was to implement
(partial) support for type aliases.

```py
type IntOrStr = int | str

reveal_type(IntOrStr)  # revealed: typing.TypeAliasType
reveal_type(IntOrStr.__name__)  # revealed: Literal["IntOrStr"]

x: IntOrStr = 1

reveal_type(x)  # revealed: Literal[1]

def f() -> None:
    reveal_type(x)  # revealed: int | str
```

## Test Plan

- Updated corpus test allow list to reflect that we don't panic anymore.
- Added Markdown-based test for type aliases (`type_alias.md`)
2024-11-22 08:47:14 +01:00
Micha Reiser
8358ad8d25 Ruff 0.8 release (#14486)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: David Salvisberg <dave@daverball.com>
2024-11-22 08:45:19 +01:00
Alex Waygood
2b8b1ef178 Improve docs for some pycodestyle rules (#14517) 2024-11-21 17:26:06 +00:00
Dylan
2efa3fbb62 [flake8-import-conventions] Syntax check aliases supplied in configuration for unconventional-import-alias (ICN001) (#14477)
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-11-21 15:54:49 +00:00
cmp0xff
b9da4305e6 doc(B024): #14455 add annotated but unassigned class variables (#14502)
# Summary

Closes #14455, migrated from https://github.com/astral-sh/docs/pull/106.
2024-11-21 09:08:02 -06:00
Micha Reiser
87043a2415 Limit type size assertion to 64bit (#14514) 2024-11-21 12:49:55 +00:00
David Peter
f684b6fff4 [red-knot] Fix: Infer type for typing.Union[..] tuple expression (#14510)
## Summary

Fixes a panic related to sub-expressions of `typing.Union` where we fail
to store a type for the `int, str` tuple-expression in code like this:
```
x: Union[int, str] = 1
```

relates to [my
comment](https://github.com/astral-sh/ruff/pull/14499#discussion_r1851794467)
on #14499.

## Test Plan

New corpus test
2024-11-21 11:49:20 +01:00
David Peter
47f39ed1a0 [red-knot] Meta data for Type::Todo (#14500)
## Summary

Adds meta information to `Type::Todo`, allowing developers to easily
trace back the origin of a particular `@Todo` type they encounter.

Instead of `Type::Todo`, we now write either `type_todo!()` which
creates a `@Todo[path/to/source.rs:123]` type with file and line
information, or using `type_todo!("PEP 604 unions not supported")`,
which creates a variant with a custom message.

`Type::Todo` now contains a `TodoType` field. In release mode, this is
just a zero-sized struct, in order not to create any overhead. In debug
mode, this is an `enum` that contains the meta information.

`Type` implements `Copy`, which means that `TodoType` also needs to be
copyable. This limits the design space. We could intern `TodoType`, but
I discarded this option, as it would require us to have access to the
salsa DB everywhere we want to use `Type::Todo`. And it would have made
the macro invocations less ergonomic (requiring us to pass `db`).

So for now, the meta information is simply a `&'static str` / `u32` for
the file/line variant, or a `&'static str` for the custom message.
Anything involving a chain/backtrace of several `@Todo`s or similar is
therefore currently not implemented, also because we currently don't see
any direct use cases for this and because all of this will eventually
go away.

Note that the size of `Type` increases from 16 to 24 bytes, but only in
debug mode.

## Test Plan

- Observed the changes in Markdown tests.
- Added custom messages for all `Type::Todo`s that were revealed in the
tests
- Ran red knot in release and debug mode on the following Python file:
  ```py
  def f(x: int) -> int:
      reveal_type(x)
  ```
Prints `@Todo` in release mode and `@Todo(function parameter type)` in
debug mode.
2024-11-21 09:59:47 +01:00
Shaygan Hooshyari
aecdb8c144 [red-knot] support typing.Union in type annotations (#14499)
Fix #14498

## Summary

This PR adds `typing.Union` support
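
A small hedged example, following the pattern of the PEP 695 type-alias mdtest elsewhere in this log (the actual tests added here may differ):

```py
from typing import Union

x: Union[int, str] = 1

reveal_type(x)  # revealed: Literal[1]

def f() -> None:
    reveal_type(x)  # revealed: int | str
```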

## Test Plan

I created new tests in mdtest.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2024-11-20 21:55:33 +00:00
Zanie Blue
3c52d2d1bd Improve the performance of the formatter instability check job (#14471)
We should probably get rid of this entirely and subsume its
functionality in the normal ecosystem checks? I don't think we're using
the black comparison tests anymore, but maybe someone wants it?

There are a few major parts to this:

1. Making the formatter script idempotent, so it can be run repeatedly
and is robust to changing commits
2. Reducing the overhead of the git operations, minimizing the data
transfer
3. Parallelizing all the git operations by repository

This reduces the setup time from 80s to 16s (locally).

The initial motivation for idempotency was to include the repositories
in the GitHub Actions cache. I'm not sure it's worth it yet — they're
about 1GB and would consume our limited cache space. Regardless, it
improves correctness for local invocations.

The total runtime of the job is reduced from ~4m to ~3m.

I also made some cosmetic changes to the output paths and such.
2024-11-20 08:55:10 -06:00
Micha Reiser
942d6eeb9f Stabilize A004 (#14480) 2024-11-20 13:11:51 +01:00
Alex Waygood
4ccacc80f9 [ruff-0.8] [FAST] Further improve docs for fast-api-non-annotated-dependency (FAST002) (#14467) 2024-11-20 13:11:51 +01:00
Micha Reiser
b2bb119c6a Fix failing tests for Ruff 0.8 branch (#14482) 2024-11-20 13:11:51 +01:00
Alex Waygood
cef12f4925 [ruff-0.8] Spruce up docs for newly stabilised rules (#14466)
## Summary

- Expand some docs where they're unclear about the motivation, or assume
some knowledge that hasn't been introduced yet
- Add more links to external docs
- Rename PYI063 from `PrePep570PositionalArgument` to
`Pep484StylePositionalOnlyParameter`
- Rename the file `parenthesize_logical_operators.rs` to
`parenthesize_chained_operators.rs`, since the rule is called
`ParenthesizeChainedOperators`, not `ParenthesizeLogicalOperators`

## Test Plan

`cargo test`
2024-11-20 13:11:51 +01:00
Alex Waygood
aa7ac2ce0f [ruff-0.8] [ruff] Stabilise unsorted-dunder-all and unsorted-dunder-slots (#14468)
## Summary

These rules were implemented in January, have been very stable, and have
no open issues about them. They were highly requested by the community
prior to being implemented. Let's stabilise them!

## Test Plan

Ecosystem check on this PR.
2024-11-20 13:11:51 +01:00
Zanie Blue
70d9c90827 Use XDG (i.e. ~/.local/bin) instead of the Cargo home directory in the installer (#14457)
Closes https://github.com/astral-sh/ruff/issues/13927
2024-11-20 13:11:51 +01:00
Micha Reiser
adfa723464 Stabilize multiple rules (#14462) 2024-11-20 13:11:51 +01:00
Zanie Blue
844c07f1f0 Upgrade cargo-dist from 0.22.1 => 0.25.2-prerelease.3 (#14456)
Needed to prevent updater failures when doing
https://github.com/astral-sh/ruff/issues/13927

See 

- https://github.com/axodotdev/axoupdater/issues/210
- https://github.com/axodotdev/cargo-dist/pull/1538
- https://github.com/astral-sh/uv/pull/8958
2024-11-20 13:11:51 +01:00
Alex Waygood
11d20a1a51 [ruff-0.8] [ruff] Stabilise parenthesize-chained-operators (RUF021) (#14450) 2024-11-20 13:11:51 +01:00
Micha Reiser
e9079e7d95 Remove the deprecated E999 rule code (#14428) 2024-11-20 13:11:51 +01:00
Alex Waygood
c400725713 [ruff 0.8] [flake8-pytest-style] Remove deprecated rules PT004 and PT005 (#14385)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-11-20 13:11:51 +01:00
Alex Waygood
1081694140 [ruff 0.8] [flake8-annotations] Remove deprecated rules ANN101 and ANN102 (#14384)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-11-20 13:11:51 +01:00
Micha Reiser
52f526eb38 Warn instead of error when removed rules are used in ignore (#14435)
Closes https://github.com/astral-sh/ruff/issues/13505
2024-11-20 13:11:51 +01:00
David Salvisberg
dc05b38165 [ruff 0.8][flake8-type-checking] Rename TCH to TC (#14438)
Closes #9573
2024-11-20 13:11:51 +01:00
renovate[bot]
8c3c5ee5e3 Update Rust crate unicode-width to 0.2.0 (#13473)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-20 13:11:51 +01:00
konsti
b46cc6ac0b Update pyproject-toml to support PEP 639 (#13902)
Fixes #13869
2024-11-20 13:11:51 +01:00
Dylan
8b925ea626 [pycodestyle] Stabilize behavior to ignore stub files in ambiguous-variable-name (E741) (#14405) 2024-11-20 13:11:51 +01:00
yataka
1b180c8342 Change default for Python version from 3.8 to 3.9 (#13896)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-11-20 13:11:51 +01:00
Dylan
afeb217452 [pyupgrade] Stabilize behavior to show diagnostic even when unfixable in printf-string-formatting (UP031) (#14406) 2024-11-20 13:11:51 +01:00
Dylan
c0b3dd3745 [ruff] Stabilize unsafe fix for zip-instead-of-pairwise (RUF007) (#14401)
This PR stabilizes the unsafe fix for [zip-instead-of-pairwise
(RUF007)](https://docs.astral.sh/ruff/rules/zip-instead-of-pairwise/#zip-instead-of-pairwise-ruf007),
which replaces the use of zip with that of itertools.pairwise and has
been available under preview since version 0.5.7.

There are no open issues regarding RUF007 at the time of this writing.
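
For reference, a brief sketch of what the rule and its (unsafe) fix look like (illustrative only):

```py
from itertools import pairwise

letters = ["a", "b", "c", "d"]

pairs = zip(letters, letters[1:])  # RUF007: prefer `itertools.pairwise`
pairs = pairwise(letters)          # what the now-stable unsafe fix produces
```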
2024-11-20 13:11:51 +01:00
Alex Waygood
5f6607bf54 [ruff 0.8] Remove deprecated rule UP027 (#14382) 2024-11-20 13:11:51 +01:00
Zanie Blue
a6deca44b5 Rename the parser fuzz job for consistency with the rest of CI (#14479) 2024-11-20 07:54:42 +00:00
Zanie Blue
0dbceccbc1 Only build the fuzz crate on main (#14478)
It's not actually doing anything per pull request and it's pretty slow?
xref #14469

It seems useful to build on `main` still to find build regressions? e.g.
https://github.com/astral-sh/ruff/issues/9368
2024-11-19 23:07:48 -06:00
Dhruv Manilawala
48680e10b6 Watch for changes to the generated file during documentation serve (#14476)
## Summary

Similar to https://github.com/astral-sh/uv/pull/9244, but we need to use
the `mkdocs.generated.yml` file because the `scripts/generate_mkdocs.py`
uses the `mkdocs.template.yml` to generate the final config.
2024-11-20 04:51:20 +00:00
Zanie Blue
b0c88a2a42 Only test release builds on main (#14475)
This is one of the slowest remaining jobs in the pull request CI. We
could use a larger runner for a trivial speed-up (in exchange for $$),
but I don't think this is going to break often enough to merit testing
on every pull request commit? It's not a required job, so I don't feel
strongly about it, but it feels like a bit of a waste of compute.

Originally added in https://github.com/astral-sh/ruff/pull/11182
2024-11-19 22:47:46 -06:00
InSync
b9c53a74f9 [pycodestyle] Exempt pytest.importorskip() calls (E402) (#14474)
## Summary

Resolves #13537.
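
A hedged sketch of the exemption, assuming it works like the existing allowances for pre-import setup code (module names are illustrative):

```py
import pytest

pytest.importorskip("numpy")  # skip the whole module if numpy is missing

import numpy as np  # was reported as E402 before this change; now exempt
```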

## Test Plan

`cargo nextest run` and `cargo insta test`.
2024-11-19 22:08:15 -05:00
cake-monotone
6a4d207db7 [red-knot] Refactoring the inference logic of lexicographic comparisons (#14422)
## Summary

closes #14279

### Limitations of the Current Implementation
#### Incorrect Error Propagation

In the current implementation of lexicographic comparisons, if the
result of an `Eq` operation is `Ambiguous`, the comparison stops
immediately, returning a `bool` instance. While this may yield correct
inferences, it fails to capture unsupported-operation errors that might
occur in subsequent comparisons.
```py
class A: ...

(int_instance(), A()) < (int_instance(), A())  # should error
```

#### Weak Inference in Specific Cases

> Example: `(int_instance(), "foo") == (int_instance(), "bar")`
> Current result: `bool`
> Expected result: `Literal[False]`

`Eq` and `NotEq` have unique behavior in lexicographic comparisons
compared to other operators. Specifically:
- For `Eq`, if any non-equal pair exists within the tuples being
compared, we can immediately conclude that the tuples are not equal.
- For `NotEq`, if any equal pair exists, we can conclude that the tuples
are unequal.

```py
a = (str_instance(), int_instance(), "foo")

reveal_type(a == a)  # revealed: bool
reveal_type(a != a)  # revealed: bool

b = (str_instance(), int_instance(), "bar")

reveal_type(a == b)  # revealed: bool  # should be Literal[False]
reveal_type(a != b)  # revealed: bool  # should be Literal[True]
```
#### Incorrect Support for Non-Boolean Rich Comparisons

In CPython, aside from `==` and `!=`, tuple comparisons return a
non-boolean result as-is. Tuples do not convert the value into `bool`.

Note: If all pairwise `==` comparisons between elements in the tuples
return Truthy, the comparison then considers the tuples' lengths.
Regardless of the return type of the dunder methods, the final result
can still be a boolean.

```py
from __future__ import annotations

class A:
    def __eq__(self, o: object) -> str:
        return "hello"

    def __ne__(self, o: object) -> bytes:
        return b"world"

    def __lt__(self, o: A) -> float:
        return 3.14

a = (A(), A())

reveal_type(a == a)  # revealed: bool
reveal_type(a != a)  # revealed: bool
reveal_type(a < a)  # revealed: bool # should be: `float | Literal[False]`

```

### Key Changes
One of the major changes is that comparisons no longer end with a `bool`
result when a pairwise `Eq` result is `Ambiguous`. Instead, the function
attempts to infer all possible cases and unions the results. This
improvement allows for more robust type inference and better error
detection.

Additionally, as the function is now optimized for tuple comparisons,
the name has been changed from the more general
`infer_lexicographic_comparison` to `infer_tuple_rich_comparison`.

## Test Plan

mdtest included
2024-11-19 17:32:43 -08:00
Zanie Blue
42c35b6f44 Use larger GitHub runner for testing on Windows (#14461)
Reduces to 3m 50s (extra large) or 6m 5s (large) vs 9m 7s (standard)
2024-11-19 18:00:59 -06:00
Zanie Blue
9e79d64d62 Use Depot 16-core runner for testing on Linux (#14460)
Reduces Linux test CI to 1m 40s (16 core) or 2m 56s (8 core), from 4m
25s. Times are approximate, as runner performance is pretty variable.

In uv, we use the 16 core runners.
2024-11-19 18:00:51 -06:00
Zanie Blue
582857f292 Use Depot 8-core runner for ecosystem tests (#14463)
I noticed this was exceedingly slow.

Reduces to 3m from 14m
2024-11-19 18:00:38 -06:00
Zanie Blue
9bbeb793e5 Specify the wasm-pack version explicitly (#14465)
There is an upstream bug with latest version detection
https://github.com/jetli/wasm-pack-action/issues/23

This causes random flakes of the wasm build job.
2024-11-19 18:00:27 -06:00
496 changed files with 10927 additions and 10388 deletions

@@ -115,7 +115,7 @@ jobs:
cargo-test-linux:
name: "cargo test (linux)"
runs-on: ubuntu-latest
runs-on: depot-ubuntu-22.04-16
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
timeout-minutes: 20
@@ -157,9 +157,36 @@ jobs:
name: ruff
path: target/debug/ruff
cargo-test-linux-release:
name: "cargo test (linux, release)"
runs-on: depot-ubuntu-22.04-16
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
env:
NEXTEST_PROFILE: "ci"
run: cargo insta test --release --all-features --unreferenced reject --test-runner nextest
cargo-test-windows:
name: "cargo test (windows)"
runs-on: windows-latest
runs-on: windows-latest-xlarge
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
timeout-minutes: 20
@@ -197,6 +224,8 @@ jobs:
cache: "npm"
cache-dependency-path: playground/package-lock.json
- uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.13.1
- uses: Swatinem/rust-cache@v2
- name: "Test ruff_wasm"
run: |
@@ -210,8 +239,7 @@ jobs:
cargo-build-release:
name: "cargo build (release)"
runs-on: macos-latest
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
if: ${{ github.ref == 'refs/heads/main' }}
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
@@ -255,11 +283,11 @@ jobs:
NEXTEST_PROFILE: "ci"
run: cargo +${{ steps.msrv.outputs.value }} insta test --all-features --unreferenced reject --test-runner nextest
cargo-fuzz:
name: "cargo fuzz"
cargo-fuzz-build:
name: "cargo fuzz build"
runs-on: ubuntu-latest
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
if: ${{ github.ref == 'refs/heads/main' }}
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
@@ -278,7 +306,7 @@ jobs:
- run: cargo fuzz build -s none
fuzz-parser:
name: "Fuzz the parser"
name: "fuzz parser"
runs-on: ubuntu-latest
needs:
- cargo-test-linux
@@ -307,7 +335,7 @@ jobs:
# Make executable, since artifact download doesn't preserve this
chmod +x ${{ steps.download-cached-binary.outputs.download-path }}/ruff
python scripts/fuzz-parser/fuzz.py 0-500 --test-executable ${{ steps.download-cached-binary.outputs.download-path }}/ruff
python scripts/fuzz-parser/fuzz.py --bin ruff 0-500 --test-executable ${{ steps.download-cached-binary.outputs.download-path }}/ruff
scripts:
name: "test scripts"
@@ -331,7 +359,7 @@ jobs:
ecosystem:
name: "ecosystem"
runs-on: ubuntu-latest
runs-on: depot-ubuntu-latest-8
needs:
- cargo-test-linux
- determine_changes
@@ -561,12 +589,12 @@ jobs:
run: rustup show
- name: "Cache rust"
uses: Swatinem/rust-cache@v2
- name: "Formatter progress"
- name: "Run checks"
run: scripts/formatter_ecosystem_checks.sh
- name: "Github step summary"
run: cat target/progress_projects_stats.txt > $GITHUB_STEP_SUMMARY
run: cat target/formatter-ecosystem/stats.txt > $GITHUB_STEP_SUMMARY
- name: "Remove checkouts from cache"
run: rm -r target/progress_projects
run: rm -r target/formatter-ecosystem
check-ruff-lsp:
name: "test ruff-lsp"

@@ -49,7 +49,7 @@ jobs:
# but this is outweighed by the fact that a release build takes *much* longer to compile in CI
run: cargo build --locked
- name: Fuzz
run: python scripts/fuzz-parser/fuzz.py $(shuf -i 0-9999999999999999999 -n 1000) --test-executable target/debug/ruff
run: python scripts/fuzz-parser/fuzz.py --bin ruff $(shuf -i 0-9999999999999999999 -n 1000) --test-executable target/debug/ruff
create-issue-on-failure:
name: Create an issue if the daily fuzz surfaced any bugs

@@ -1,4 +1,4 @@
# This file was autogenerated by cargo-dist: https://opensource.axo.dev/cargo-dist/
# This file was autogenerated by dist: https://opensource.axo.dev/cargo-dist/
#
# Copyright 2022-2024, axodotdev
# SPDX-License-Identifier: MIT or Apache-2.0
@@ -6,7 +6,7 @@
# CI that:
#
# * checks for a Git Tag that looks like a release
# * builds artifacts with cargo-dist (archives, installers, hashes)
# * builds artifacts with dist (archives, installers, hashes)
# * uploads those artifacts to temporary workflow zip
# * on success, uploads the artifacts to a GitHub Release
#
@@ -24,10 +24,10 @@ permissions:
# must be a Cargo-style SemVer Version (must have at least major.minor.patch).
#
# If PACKAGE_NAME is specified, then the announcement will be for that
# package (erroring out if it doesn't have the given version or isn't cargo-dist-able).
# package (erroring out if it doesn't have the given version or isn't dist-able).
#
# If PACKAGE_NAME isn't specified, then the announcement will be for all
# (cargo-dist-able) packages in the workspace with that version (this mode is
# (dist-able) packages in the workspace with that version (this mode is
# intended for workspaces with only one dist-able package, or with all dist-able
# packages versioned/released in lockstep).
#
@@ -48,7 +48,7 @@ on:
type: string
jobs:
# Run 'cargo dist plan' (or host) to determine what tasks we need to do
# Run 'dist plan' (or host) to determine what tasks we need to do
plan:
runs-on: "ubuntu-20.04"
outputs:
@@ -62,16 +62,16 @@ jobs:
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Install cargo-dist
- name: Install dist
# we specify bash to get pipefail; it guards against the `curl` command
# failing. otherwise `sh` won't catch that `curl` returned non-0
shell: bash
run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/axodotdev/cargo-dist/releases/download/v0.22.1/cargo-dist-installer.sh | sh"
- name: Cache cargo-dist
run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/axodotdev/cargo-dist/releases/download/v0.25.2-prerelease.3/cargo-dist-installer.sh | sh"
- name: Cache dist
uses: actions/upload-artifact@v4
with:
name: cargo-dist-cache
path: ~/.cargo/bin/cargo-dist
path: ~/.cargo/bin/dist
# sure would be cool if github gave us proper conditionals...
# so here's a doubly-nested ternary-via-truthiness to try to provide the best possible
# functionality based on whether this is a pull_request, and whether it's from a fork.
@@ -79,8 +79,8 @@ jobs:
# but also really annoying to build CI around when it needs secrets to work right.)
- id: plan
run: |
cargo dist ${{ (inputs.tag && inputs.tag != 'dry-run' && format('host --steps=create --tag={0}', inputs.tag)) || 'plan' }} --output-format=json > plan-dist-manifest.json
echo "cargo dist ran successfully"
dist ${{ (inputs.tag && inputs.tag != 'dry-run' && format('host --steps=create --tag={0}', inputs.tag)) || 'plan' }} --output-format=json > plan-dist-manifest.json
echo "dist ran successfully"
cat plan-dist-manifest.json
echo "manifest=$(jq -c "." plan-dist-manifest.json)" >> "$GITHUB_OUTPUT"
- name: "Upload dist-manifest.json"
@@ -124,12 +124,12 @@ jobs:
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Install cached cargo-dist
- name: Install cached dist
uses: actions/download-artifact@v4
with:
name: cargo-dist-cache
path: ~/.cargo/bin/
- run: chmod +x ~/.cargo/bin/cargo-dist
- run: chmod +x ~/.cargo/bin/dist
# Get all the local artifacts for the global tasks to use (for e.g. checksums)
- name: Fetch local artifacts
uses: actions/download-artifact@v4
@@ -140,8 +140,8 @@ jobs:
- id: cargo-dist
shell: bash
run: |
cargo dist build ${{ needs.plan.outputs.tag-flag }} --output-format=json "--artifacts=global" > dist-manifest.json
echo "cargo dist ran successfully"
dist build ${{ needs.plan.outputs.tag-flag }} --output-format=json "--artifacts=global" > dist-manifest.json
echo "dist ran successfully"
# Parse out what we just built and upload it to scratch storage
echo "paths<<EOF" >> "$GITHUB_OUTPUT"
@@ -174,12 +174,12 @@ jobs:
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Install cached cargo-dist
- name: Install cached dist
uses: actions/download-artifact@v4
with:
name: cargo-dist-cache
path: ~/.cargo/bin/
- run: chmod +x ~/.cargo/bin/cargo-dist
- run: chmod +x ~/.cargo/bin/dist
# Fetch artifacts from scratch-storage
- name: Fetch artifacts
uses: actions/download-artifact@v4
@@ -191,7 +191,7 @@ jobs:
- id: host
shell: bash
run: |
cargo dist host ${{ needs.plan.outputs.tag-flag }} --steps=upload --steps=release --output-format=json > dist-manifest.json
dist host ${{ needs.plan.outputs.tag-flag }} --steps=upload --steps=release --output-format=json > dist-manifest.json
echo "artifacts uploaded and released successfully"
cat dist-manifest.json
echo "manifest=$(jq -c "." dist-manifest.json)" >> "$GITHUB_OUTPUT"

@@ -1,5 +1,30 @@
# Breaking Changes
## 0.8.0
- **Default to Python 3.9**
Ruff now defaults to Python 3.9 instead of 3.8 if no explicit Python version is configured using [`ruff.target-version`](https://docs.astral.sh/ruff/settings/#target-version) or [`project.requires-python`](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#python-requires) ([#13896](https://github.com/astral-sh/ruff/pull/13896))
- **Changed location of `pydoclint` diagnostics**
[`pydoclint`](https://docs.astral.sh/ruff/rules/#pydoclint-doc) diagnostics now point to the first line of the problematic docstring. Previously, this was not the case.
If you've opted into these preview rules but have them suppressed using
[`noqa`](https://docs.astral.sh/ruff/linter/#error-suppression) comments in
some places, this change may mean that you need to move the `noqa` suppression
comments. Most users should be unaffected by this change.
- **Use XDG (i.e. `~/.local/bin`) instead of the Cargo home directory in the standalone installer**
Previously, Ruff's installer used `$CARGO_HOME` or `~/.cargo/bin` for its target install directory. Now, Ruff will be installed into `$XDG_BIN_HOME`, `$XDG_DATA_HOME/../bin`, or `~/.local/bin` (in that order).
This change is only relevant to users of the standalone Ruff installer (using the shell or PowerShell script). If you installed Ruff using uv or pip, you should be unaffected.
- **Changes to the line width calculation**
Ruff now uses a new version of the [unicode-width](https://github.com/unicode-rs/unicode-width) Rust crate to calculate the line width. In very rare cases, this may lead to lines containing Unicode characters being reformatted, or being considered too long when they were not before ([`E501`](https://docs.astral.sh/ruff/rules/line-too-long/)).
## 0.7.0
- The pytest rules `PT001` and `PT023` now default to omitting the decorator parentheses when there are no arguments

@@ -1,5 +1,104 @@
# Changelog
## 0.8.0
Check out the [blog post](https://astral.sh/blog/ruff-v0.8.0) for a migration guide and overview of the changes!
### Breaking changes
See also the "Remapped rules" section, which may result in disabled rules.
- **Default to Python 3.9**
Ruff now defaults to Python 3.9 instead of 3.8 if no explicit Python version is configured using [`ruff.target-version`](https://docs.astral.sh/ruff/settings/#target-version) or [`project.requires-python`](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#python-requires) ([#13896](https://github.com/astral-sh/ruff/pull/13896))
- **Changed location of `pydoclint` diagnostics**
[`pydoclint`](https://docs.astral.sh/ruff/rules/#pydoclint-doc) diagnostics now point to the first line of the problematic docstring. Previously, this was not the case.
If you've opted into these preview rules but have them suppressed using
[`noqa`](https://docs.astral.sh/ruff/linter/#error-suppression) comments in
some places, this change may mean that you need to move the `noqa` suppression
comments. Most users should be unaffected by this change.
- **Use XDG (i.e. `~/.local/bin`) instead of the Cargo home directory in the standalone installer**
Previously, Ruff's installer used `$CARGO_HOME` or `~/.cargo/bin` for its target install directory. Now, Ruff will be installed into `$XDG_BIN_HOME`, `$XDG_DATA_HOME/../bin`, or `~/.local/bin` (in that order).
This change is only relevant to users of the standalone Ruff installer (using the shell or PowerShell script). If you installed Ruff using uv or pip, you should be unaffected.
- **Changes to the line width calculation**
Ruff now uses a new version of the [unicode-width](https://github.com/unicode-rs/unicode-width) Rust crate to calculate the line width. In very rare cases, this may lead to lines containing Unicode characters being reformatted, or being considered too long when they were not before ([`E501`](https://docs.astral.sh/ruff/rules/line-too-long/)).
### Removed Rules
The following deprecated rules have been removed:
- [`missing-type-self`](https://docs.astral.sh/ruff/rules/missing-type-self/) (`ANN101`)
- [`missing-type-cls`](https://docs.astral.sh/ruff/rules/missing-type-cls/) (`ANN102`)
- [`syntax-error`](https://docs.astral.sh/ruff/rules/syntax-error/) (`E999`)
- [`pytest-missing-fixture-name-underscore`](https://docs.astral.sh/ruff/rules/pytest-missing-fixture-name-underscore/) (`PT004`)
- [`pytest-incorrect-fixture-name-underscore`](https://docs.astral.sh/ruff/rules/pytest-incorrect-fixture-name-underscore/) (`PT005`)
- [`unpacked-list-comprehension`](https://docs.astral.sh/ruff/rules/unpacked-list-comprehension/) (`UP027`)
### Remapped rules
The following rules have been remapped to new rule codes:
- [`flake8-type-checking`](https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc): `TCH` to `TC`
### Stabilization
The following rules have been stabilized and are no longer in preview:
- [`builtin-import-shadowing`](https://docs.astral.sh/ruff/rules/builtin-import-shadowing/) (`A004`)
- [`mutable-contextvar-default`](https://docs.astral.sh/ruff/rules/mutable-contextvar-default/) (`B039`)
- [`fast-api-redundant-response-model`](https://docs.astral.sh/ruff/rules/fast-api-redundant-response-model/) (`FAST001`)
- [`fast-api-non-annotated-dependency`](https://docs.astral.sh/ruff/rules/fast-api-non-annotated-dependency/) (`FAST002`)
- [`dict-index-missing-items`](https://docs.astral.sh/ruff/rules/dict-index-missing-items/) (`PLC0206`)
- [`pep484-style-positional-only-parameter`](https://docs.astral.sh/ruff/rules/pep484-style-positional-only-parameter/) (`PYI063`)
- [`redundant-final-literal`](https://docs.astral.sh/ruff/rules/redundant-final-literal/) (`PYI064`)
- [`bad-version-info-order`](https://docs.astral.sh/ruff/rules/bad-version-info-order/) (`PYI066`)
- [`parenthesize-chained-operators`](https://docs.astral.sh/ruff/rules/parenthesize-chained-operators/) (`RUF021`)
- [`unsorted-dunder-all`](https://docs.astral.sh/ruff/rules/unsorted-dunder-all/) (`RUF022`)
- [`unsorted-dunder-slots`](https://docs.astral.sh/ruff/rules/unsorted-dunder-slots/) (`RUF023`)
- [`assert-with-print-message`](https://docs.astral.sh/ruff/rules/assert-with-print-message/) (`RUF030`)
- [`unnecessary-default-type-args`](https://docs.astral.sh/ruff/rules/unnecessary-default-type-args/) (`UP043`)
The following behaviors have been stabilized:
- [`ambiguous-variable-name`](https://docs.astral.sh/ruff/rules/ambiguous-variable-name/) (`E741`): Violations in stub files are now ignored. Stub authors typically don't control variable names.
- [`printf-string-formatting`](https://docs.astral.sh/ruff/rules/printf-string-formatting/) (`UP031`): Report all `printf`-like usages even if no autofix is available
The following fixes have been stabilized:
- [`zip-instead-of-pairwise`](https://docs.astral.sh/ruff/rules/zip-instead-of-pairwise/) (`RUF007`)
### Preview features
- \[`flake8-datetimez`\] Exempt `min.time()` and `max.time()` (`DTZ901`) ([#14394](https://github.com/astral-sh/ruff/pull/14394))
- \[`flake8-pie`\] Mark fix as unsafe if the following statement is a string literal (`PIE790`) ([#14393](https://github.com/astral-sh/ruff/pull/14393))
- \[`flake8-pyi`\] New rule `redundant-none-literal` (`PYI061`) ([#14316](https://github.com/astral-sh/ruff/pull/14316))
- \[`flake8-pyi`\] Add autofix for `redundant-numeric-union` (`PYI041`) ([#14273](https://github.com/astral-sh/ruff/pull/14273))
- \[`ruff`\] New rule `map-int-version-parsing` (`RUF048`) ([#14373](https://github.com/astral-sh/ruff/pull/14373))
- \[`ruff`\] New rule `redundant-bool-literal` (`RUF038`) ([#14319](https://github.com/astral-sh/ruff/pull/14319))
- \[`ruff`\] New rule `unraw-re-pattern` (`RUF039`) ([#14446](https://github.com/astral-sh/ruff/pull/14446))
- \[`pycodestyle`\] Exempt `pytest.importorskip()` calls (`E402`) ([#14474](https://github.com/astral-sh/ruff/pull/14474))
- \[`pylint`\] Autofix suggests using sets when possible (`PLR1714`) ([#14372](https://github.com/astral-sh/ruff/pull/14372))
### Rule changes
- [`invalid-pyproject-toml`](https://docs.astral.sh/ruff/rules/invalid-pyproject-toml/) (`RUF200`): Updated to reflect the provisionally accepted [PEP 639](https://peps.python.org/pep-0639/).
- \[`flake8-pyi`\] Avoid panic in unfixable case (`PYI041`) ([#14402](https://github.com/astral-sh/ruff/pull/14402))
- \[`flake8-type-checking`\] Correctly handle quotes in subscript expression when generating an autofix ([#14371](https://github.com/astral-sh/ruff/pull/14371))
- \[`pylint`\] Suggest correct autofix for `__contains__` (`PLC2801`) ([#14424](https://github.com/astral-sh/ruff/pull/14424))
### Configuration
- Ruff now emits a warning instead of an error when a configuration [`ignore`](https://docs.astral.sh/ruff/settings/#lint_ignore)s a rule that has been removed ([#14435](https://github.com/astral-sh/ruff/pull/14435))
- Ruff now validates that `lint.flake8-import-conventions.aliases` only uses valid module names and aliases ([#14477](https://github.com/astral-sh/ruff/pull/14477))
## 0.7.4
### Preview features
@@ -978,7 +1077,7 @@ The following deprecated CLI commands have been removed:
### Preview features
- \[`flake8-bugbear`\] Implement `return-in-generator` (`B901`) ([#11644](https://github.com/astral-sh/ruff/pull/11644))
- \[`flake8-pyi`\] Implement `PYI063` ([#11699](https://github.com/astral-sh/ruff/pull/11699))
- \[`flake8-pyi`\] Implement `pep484-style-positional-only-parameter` (`PYI063`) ([#11699](https://github.com/astral-sh/ruff/pull/11699))
- \[`pygrep_hooks`\] Check blanket ignores via file-level pragmas (`PGH004`) ([#11540](https://github.com/astral-sh/ruff/pull/11540))
### Rule changes

Cargo.lock (generated)

@@ -212,6 +212,12 @@ dependencies = [
"generic-array",
]
[[package]]
name = "boxcar"
version = "0.2.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f839cdf7e2d3198ac6ca003fd8ebc61715755f41c1cad15ff13df67531e00ed"
[[package]]
name = "bstr"
version = "1.11.0"
@@ -407,7 +413,7 @@ dependencies = [
"heck",
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -687,7 +693,7 @@ dependencies = [
"proc-macro2",
"quote",
"strsim 0.10.0",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -698,7 +704,7 @@ checksum = "a668eda54683121533a393014d8692171709ff57a7d61f187b6e782719f8933f"
dependencies = [
"darling_core",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -752,23 +758,23 @@ dependencies = [
[[package]]
name = "dir-test"
version = "0.3.0"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c44bdf9319ad5223afb7eb15a7110452b0adf0373ea6756561b2c708eef0dd1"
checksum = "b12781621d53fd9087021f5a338df5c57c04f84a6231c1f4726f45e2e333470b"
dependencies = [
"dir-test-macros",
]
[[package]]
name = "dir-test-macros"
version = "0.3.0"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "644f96047137dfaa7a09e34d4623f9e52a1926ecc25ba32ad2ba3fc422536b25"
checksum = "1340852f50b2285d01a7f598cc5d08b572669c3e09e614925175cc3c26787b91"
dependencies = [
"glob",
"proc-macro2",
"quote",
"syn 1.0.109",
"syn",
]
[[package]]
@@ -820,7 +826,7 @@ checksum = "97369cbbc041bc366949bc74d34658d6cda5621039731c6310521892a3a20ae0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -1062,9 +1068,9 @@ dependencies = [
[[package]]
name = "hashbrown"
version = "0.15.1"
version = "0.15.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a9bfc1af68b1726ea47d3d5109de126281def866b33970e10fbab11b5dafab3"
checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289"
[[package]]
name = "hashlink"
@@ -1240,7 +1246,7 @@ checksum = "1ec89e9337638ecdc08744df490b221a7399bf8d164eb52a665454e60e075ad6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -1313,7 +1319,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "707907fe3c25f5424cce2cb7e1cbcafee6bdbe735ca90ef77c29e84591e5b9da"
dependencies = [
"equivalent",
"hashbrown 0.15.1",
"hashbrown 0.15.2",
"serde",
]
@@ -1414,7 +1420,7 @@ dependencies = [
"heck",
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -1520,9 +1526,9 @@ checksum = "433bfe06b8c75da9b2e3fbea6e5329ff87748f0b144ef75306e674c3f6f7c13f"
[[package]]
name = "libcst"
version = "1.5.0"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1586dd7a857d8a61a577afde1a24cc9573ff549eff092d5ce968b1ec93cc61b6"
checksum = "fa3e60579a8cba3d86aa4a5f7fc98973cc0fd2ac270bf02f85a9bef09700b075"
dependencies = [
"chic",
"libcst_derive",
@@ -1540,7 +1546,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2ae40017ac09cd2c6a53504cb3c871c7f2b41466eac5bc66ba63f39073b467b"
dependencies = [
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -1704,9 +1710,9 @@ checksum = "308d96db8debc727c3fd9744aac51751243420e46edf401010908da7f8d5e57c"
[[package]]
name = "newtype-uuid"
version = "1.1.0"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3526cb7c660872e401beaf3297f95f548ce3b4b4bdd8121b7c0713771d7c4a6e"
checksum = "4c8781e2ef64806278a55ad223f0bc875772fd40e1fe6e73e8adbf027817229d"
dependencies = [
"uuid",
]
@@ -1935,18 +1941,6 @@ version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e3aeb8f54c078314c2065ee649a7241f46b9d8e418e1a9581ba0546657d7aa3a"
[[package]]
name = "pep440_rs"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e0c29f9c43de378b4e4e0cd7dbcce0e5cfb80443de8c05620368b2948bc936a1"
dependencies = [
"once_cell",
"regex",
"serde",
"unicode-width 0.1.13",
]
[[package]]
name = "pep440_rs"
version = "0.7.2"
@@ -1956,22 +1950,29 @@ dependencies = [
"serde",
"unicode-width 0.2.0",
"unscanny",
"version-ranges",
]
[[package]]
name = "pep508_rs"
version = "0.3.0"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "910c513bea0f4f833122321c0f20e8c704e01de98692f6989c2ec21f43d88b1e"
checksum = "8c2feee999fa547bacab06a4881bacc74688858b92fa8ef1e206c748b0a76048"
dependencies = [
"boxcar",
"indexmap",
"itertools 0.13.0",
"once_cell",
"pep440_rs 0.4.0",
"pep440_rs",
"regex",
"rustc-hash 2.0.0",
"serde",
"smallvec",
"thiserror 1.0.67",
"tracing",
"unicode-width 0.1.13",
"unicode-width 0.2.0",
"url",
"urlencoding",
"version-ranges",
]
[[package]]
@@ -2011,7 +2012,7 @@ dependencies = [
"pest_meta",
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -2126,46 +2127,47 @@ dependencies = [
[[package]]
name = "proc-macro2"
version = "1.0.89"
version = "1.0.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f139b0662de085916d1fb67d2b4169d1addddda1919e696f3252b740b629986e"
checksum = "37d3544b3f2748c54e147655edb5025752e2303145b5aefb3c3ea2c78b973bb0"
dependencies = [
"unicode-ident",
]
[[package]]
name = "pyproject-toml"
version = "0.9.0"
version = "0.13.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95c3dd745f99aa3c554b7bb00859f7d18c2f1d6afd749ccc86d60b61e702abd9"
checksum = "643af57c3f36ba90a8b53e972727d8092f7408a9ebfbaf4c3d2c17b07c58d835"
dependencies = [
"indexmap",
"pep440_rs 0.4.0",
"pep440_rs",
"pep508_rs",
"serde",
"thiserror 1.0.67",
"toml",
]
[[package]]
name = "quick-junit"
version = "0.5.0"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62ffd2f9a162cfae131bed6d9d1ed60adced33be340a94f96952897d7cb0c240"
checksum = "3ed1a693391a16317257103ad06a88c6529ac640846021da7c435a06fffdacd7"
dependencies = [
"chrono",
"indexmap",
"newtype-uuid",
"quick-xml",
"strip-ansi-escapes",
"thiserror 1.0.67",
"thiserror 2.0.3",
"uuid",
]
[[package]]
name = "quick-xml"
version = "0.36.1"
version = "0.37.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96a05e2e8efddfa51a84ca47cec303fac86c8541b686d37cac5efc0e094417bc"
checksum = "f22f29bdff3987b4d8632ef95fd6424ec7e4e0a57e2f4fc63e489e75357f6a03"
dependencies = [
"memchr",
]
@@ -2264,7 +2266,7 @@ dependencies = [
"compact_str",
"countme",
"dir-test",
"hashbrown 0.15.1",
"hashbrown 0.15.2",
"indexmap",
"insta",
"itertools 0.13.0",
@@ -2370,7 +2372,7 @@ dependencies = [
"glob",
"insta",
"notify",
"pep440_rs 0.7.2",
"pep440_rs",
"rayon",
"red_knot_python_semantic",
"red_knot_vendored",
@@ -2487,7 +2489,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.7.4"
version = "0.8.0"
dependencies = [
"anyhow",
"argfile",
@@ -2674,7 +2676,7 @@ dependencies = [
"serde",
"static_assertions",
"tracing",
"unicode-width 0.1.13",
"unicode-width 0.2.0",
]
[[package]]
@@ -2706,7 +2708,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.7.4"
version = "0.8.0"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2729,7 +2731,7 @@ dependencies = [
"natord",
"path-absolutize",
"pathdiff",
"pep440_rs 0.7.2",
"pep440_rs",
"pyproject-toml",
"quick-junit",
"regex",
@@ -2760,7 +2762,7 @@ dependencies = [
"toml",
"typed-arena",
"unicode-normalization",
"unicode-width 0.1.13",
"unicode-width 0.2.0",
"unicode_names2",
"url",
]
@@ -2773,7 +2775,7 @@ dependencies = [
"proc-macro2",
"quote",
"ruff_python_trivia",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3021,7 +3023,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.7.4"
version = "0.8.0"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -3060,7 +3062,7 @@ dependencies = [
"matchit",
"path-absolutize",
"path-slash",
"pep440_rs 0.7.2",
"pep440_rs",
"regex",
"ruff_cache",
"ruff_formatter",
@@ -3070,6 +3072,7 @@ dependencies = [
"ruff_python_ast",
"ruff_python_formatter",
"ruff_python_semantic",
"ruff_python_stdlib",
"ruff_source_file",
"rustc-hash 2.0.0",
"schemars",
@@ -3192,7 +3195,7 @@ dependencies = [
"heck",
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
"synstructure",
]
@@ -3226,7 +3229,7 @@ dependencies = [
"proc-macro2",
"quote",
"serde_derive_internals",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3275,7 +3278,7 @@ checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3286,7 +3289,7 @@ checksum = "330f01ce65a3a5fe59a60c82f3c9a024b573b8a6e875bd233fe5f934e71d54e3"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3309,7 +3312,7 @@ checksum = "6c64451ba24fc7a6a2d60fc75dd9c83c90903b19028d4eff35e88fc1e86564e9"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3350,7 +3353,7 @@ dependencies = [
"darling",
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3458,7 +3461,7 @@ dependencies = [
"proc-macro2",
"quote",
"rustversion",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3469,20 +3472,9 @@ checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
[[package]]
name = "syn"
version = "1.0.109"
version = "2.0.89"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b64191b275b66ffe2469e8af2c1cfe3bafa67b529ead792a6d0160888b4237"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "syn"
version = "2.0.87"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25aa4ce346d03a6dcd68dd8b4010bcb74e54e62c90c573f394c46eae99aba32d"
checksum = "44d46482f1c1c87acd84dea20c1bf5ebff4c757009ed6bf19cfd36fb10e92c4e"
dependencies = [
"proc-macro2",
"quote",
@@ -3497,7 +3489,7 @@ checksum = "c8af7666ab7b6390ab78131fb5b0fce11d6b7a6951602017c35fa82800708971"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3560,7 +3552,7 @@ dependencies = [
"cfg-if",
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3571,7 +3563,7 @@ checksum = "5c89e72a01ed4c579669add59014b9a524d609c0c88c6a585ce37485879f6ffb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
"test-case-core",
]
@@ -3601,7 +3593,7 @@ checksum = "b607164372e89797d78b8e23a6d67d5d1038c1c65efd52e1389ef8b77caba2a6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3612,7 +3604,7 @@ checksum = "f077553d607adc1caf65430528a576c757a71ed73944b66ebb58ef2bbd243568"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3734,7 +3726,7 @@ checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -3873,9 +3865,9 @@ dependencies = [
[[package]]
name = "unicode-ident"
version = "1.0.13"
version = "1.0.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e91b56cd4cadaeb79bbf1a5645f6b4f8dc5bde8834ad5894a8db35fda9efa1fe"
checksum = "adb9e6ca4f869e1180728b7950e35922a7fc6397f7b641499e8f3ef06e50dc83"
[[package]]
name = "unicode-normalization"
@@ -3950,9 +3942,9 @@ dependencies = [
[[package]]
name = "url"
version = "2.5.3"
version = "2.5.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d157f1b96d14500ffdc1f10ba712e780825526c03d9a49b4d0324b0d9113ada"
checksum = "32f8b686cadd1473f4bd0117a5d28d36b1ade384ea9b5069a1c40aefed7fda60"
dependencies = [
"form_urlencoded",
"idna",
@@ -3960,6 +3952,12 @@ dependencies = [
"serde",
]
[[package]]
name = "urlencoding"
version = "2.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "daf8dba3b7eb870caf1ddeed7bc9d2a049f3cfdfae7cb521b087cc33ae4c49da"
[[package]]
name = "utf16_iter"
version = "1.0.5"
@@ -3998,7 +3996,7 @@ checksum = "6b91f57fe13a38d0ce9e28a03463d8d3c2468ed03d75375110ec71d93b449a08"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -4007,6 +4005,15 @@ version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "830b7e5d4d90034032940e4ace0d9a9a057e7a45cd94e6c007832e39edb82f6d"
[[package]]
name = "version-ranges"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8d079415ceb2be83fc355adbadafe401307d5c309c7e6ade6638e6f9f42f42d"
dependencies = [
"smallvec",
]
[[package]]
name = "version_check"
version = "0.9.4"
@@ -4084,7 +4091,7 @@ dependencies = [
"once_cell",
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
"wasm-bindgen-shared",
]
@@ -4118,7 +4125,7 @@ checksum = "26c6ab57572f7a24a4985830b120de1594465e5d500f24afe89e16b4e833ef68"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
@@ -4152,7 +4159,7 @@ checksum = "c97b2ef2c8d627381e51c071c2ab328eac606d3f69dd82bcbca20a9e389d95f0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -4455,7 +4462,7 @@ checksum = "28cc31741b18cb6f1d5ff12f5b7523e3d6eb0852bbbad19d73905511d9849b95"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
"synstructure",
]
@@ -4476,7 +4483,7 @@ checksum = "9ce1b18ccd8e73a9321186f97e46f9f04b778851177567b1975109d26a08d2a6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]
@@ -4496,7 +4503,7 @@ checksum = "0ea7b4a3637ea8669cedf0f1fd5c286a17f3de97b8dd5a70a6c167a1730e63a5"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
"synstructure",
]
@@ -4525,7 +4532,7 @@ checksum = "6eafa6dfb17584ea3e2bd6e76e0cc15ad7af12b09abdd1ca55961bed9b1063c6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.87",
"syn",
]
[[package]]

View File

@@ -65,7 +65,7 @@ compact_str = "0.8.0"
criterion = { version = "0.5.1", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
dir-test = { version = "0.3.0" }
dir-test = { version = "0.4.0" }
dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" }
env_logger = { version = "0.11.0" }
@@ -111,7 +111,7 @@ pathdiff = { version = "0.2.1" }
pep440_rs = { version = "0.7.1" }
pretty_assertions = "1.3.0"
proc-macro2 = { version = "1.0.79" }
pyproject-toml = { version = "0.9.0" }
pyproject-toml = { version = "0.13.4" }
quick-junit = { version = "0.5.0" }
quote = { version = "1.0.23" }
rand = { version = "0.8.5" }
@@ -151,7 +151,7 @@ tracing-tree = { version = "0.4.0" }
typed-arena = { version = "2.0.2" }
unic-ucd-category = { version = "0.9" }
unicode-ident = { version = "1.0.12" }
unicode-width = { version = "0.1.11" }
unicode-width = { version = "0.2.0" }
unicode_names2 = { version = "1.2.2" }
unicode-normalization = { version = "0.1.23" }
ureq = { version = "2.9.6" }
@@ -248,10 +248,10 @@ debug = 1
[profile.dist]
inherits = "release"
# Config for 'cargo dist'
# Config for 'dist'
[workspace.metadata.dist]
# The preferred cargo-dist version to use in CI (Cargo.toml SemVer syntax)
cargo-dist-version = "0.22.1"
# The preferred dist version to use in CI (Cargo.toml SemVer syntax)
cargo-dist-version = "0.25.2-prerelease.3"
# CI backends to support
ci = "github"
# The installers to generate for each app
@@ -282,13 +282,13 @@ targets = [
]
# Whether to auto-include files like READMEs, LICENSEs, and CHANGELOGs (default true)
auto-includes = false
# Whether cargo-dist should create a GitHub Release or use an existing draft
# Whether dist should create a Github Release or use an existing draft
create-release = true
# Which actions to run on pull requests
pr-run-mode = "skip"
# Whether CI should trigger releases with dispatches instead of tag pushes
dispatch-releases = true
# Which phase cargo-dist should use to create the GitHub release
# Which phase dist should use to create the GitHub release
github-release = "announce"
# Whether CI should include auto-generated code to build local artifacts
build-local-artifacts = false
@@ -297,14 +297,10 @@ local-artifacts-jobs = ["./build-binaries", "./build-docker"]
# Publish jobs to run in CI
publish-jobs = ["./publish-pypi", "./publish-wasm"]
# Post-announce jobs to run in CI
post-announce-jobs = [
"./notify-dependents",
"./publish-docs",
"./publish-playground",
]
post-announce-jobs = ["./notify-dependents", "./publish-docs", "./publish-playground"]
# Custom permissions for GitHub Jobs
github-custom-job-permissions = { "build-docker" = { packages = "write", contents = "read" }, "publish-wasm" = { contents = "read", id-token = "write", packages = "write" } }
# Whether to install an updater program
install-updater = false
# Path that installers should place binaries in
install-path = "CARGO_HOME"
install-path = ["$XDG_BIN_HOME/", "$XDG_DATA_HOME/../bin", "~/.local/bin"]

View File

@@ -136,8 +136,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.7.4/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.7.4/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.8.0/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.8.0/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -170,7 +170,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.7.4
rev: v0.8.0
hooks:
# Run the linter.
- id: ruff
@@ -238,8 +238,8 @@ exclude = [
line-length = 88
indent-width = 4
# Assume Python 3.8
target-version = "py38"
# Assume Python 3.9
target-version = "py39"
[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.

View File

@@ -4,7 +4,6 @@ use std::io::Write;
use std::time::Duration;
use anyhow::{anyhow, Context};
use red_knot_python_semantic::{resolve_module, ModuleName, Program, PythonVersion, SitePackages};
use red_knot_workspace::db::{Db, RootDatabase};
use red_knot_workspace::watch;
@@ -14,7 +13,7 @@ use red_knot_workspace::workspace::WorkspaceMetadata;
use ruff_db::files::{system_path_to_file, File, FileError};
use ruff_db::source::source_text;
use ruff_db::system::{OsSystem, SystemPath, SystemPathBuf};
use ruff_db::testing::setup_logging;
use ruff_db::testing::{setup_logging, setup_logging_with_filter};
use ruff_db::Upcast;
struct TestCase {
@@ -47,6 +46,8 @@ impl TestCase {
}
fn try_stop_watch(&mut self, timeout: Duration) -> Option<Vec<watch::ChangeEvent>> {
tracing::debug!("Try stopping watch with timeout {:?}", timeout);
let watcher = self
.watcher
.take()
@@ -56,8 +57,11 @@ impl TestCase {
.changes_receiver
.recv_timeout(timeout)
.unwrap_or_default();
watcher.flush();
tracing::debug!("Flushed file watcher");
watcher.stop();
tracing::debug!("Stopping file watcher");
for event in &self.changes_receiver {
all_events.extend(event);
@@ -600,6 +604,8 @@ fn directory_moved_to_trash() -> anyhow::Result<()> {
#[test]
fn directory_renamed() -> anyhow::Result<()> {
let _tracing = setup_logging_with_filter("file_watching=TRACE,red_knot=TRACE");
let mut case = setup([
("bar.py", "import sub.a"),
("sub/__init__.py", ""),
@@ -640,6 +646,10 @@ fn directory_renamed() -> anyhow::Result<()> {
let changes = case.stop_watch();
for event in &changes {
tracing::debug!("Event: {:?}", event);
}
case.apply_changes(changes);
// `import sub.a` should no longer resolve

View File

@@ -0,0 +1,62 @@
# NoReturn & Never
`NoReturn` is used to annotate the return type for functions that never return. `Never` is the
bottom type, representing the empty set of Python objects. These two annotations can be used
interchangeably.
## Function Return Type Annotation
```py
from typing import NoReturn
def stop() -> NoReturn:
raise RuntimeError("no way")
# revealed: Never
reveal_type(stop())
```
## Assignment
```py
from typing import NoReturn, Never, Any
# error: [invalid-type-parameter] "Type `typing.Never` expected no type parameter"
x: Never[int]
a1: NoReturn
# TODO: Test `Never` is only available in python >= 3.11
a2: Never
b1: Any
b2: int
def f():
# revealed: Never
reveal_type(a1)
# revealed: Never
reveal_type(a2)
# Never is assignable to all types.
v1: int = a1
v2: str = a1
# Other types are not assignable to Never except for Never (and Any).
v3: Never = b1
v4: Never = a2
v5: Any = b2
# error: [invalid-assignment] "Object of type `Literal[1]` is not assignable to `Never`"
v6: Never = 1
```
## Typing Extensions
```py
from typing_extensions import NoReturn, Never
x: NoReturn
y: Never
def f():
# revealed: Never
reveal_type(x)
# revealed: Never
reveal_type(y)
```

View File

@@ -9,10 +9,10 @@ Ts = TypeVarTuple("Ts")
def append_int(*args: *Ts) -> tuple[*Ts, int]:
# TODO: should show some representation of the variadic generic type
reveal_type(args) # revealed: @Todo
reveal_type(args) # revealed: @Todo(function parameter type)
return (*args, 1)
# TODO should be tuple[Literal[True], Literal["a"], int]
reveal_type(append_int(True, "a")) # revealed: @Todo
reveal_type(append_int(True, "a")) # revealed: @Todo(full tuple[...] support)
```

View File

@@ -125,7 +125,6 @@ def f7() -> "\x69nt":
def f8() -> """int""":
return 1
# error: [annotation-byte-string] "Type expressions cannot use bytes literal"
def f9() -> "b'int'":
return 1
@@ -189,3 +188,31 @@ reveal_type(d) # revealed: Foo
## Parameter
TODO: Add tests once parameter inference is supported
## Invalid expressions
The expressions in these string annotations aren't valid in this context, but we shouldn't panic.
```py
a: "1 or 2"
b: "(x := 1)"
c: "1 + 2"
d: "lambda x: x"
e: "x if True else y"
f: "{'a': 1, 'b': 2}"
g: "{1, 2}"
h: "[i for i in range(5)]"
i: "{i for i in range(5)}"
j: "{i: i for i in range(5)}"
k: "(i for i in range(5))"
l: "await 1"
# error: [forward-annotation-syntax-error]
m: "yield 1"
# error: [forward-annotation-syntax-error]
n: "yield from 1"
o: "1 < 2"
p: "call()"
r: "[1, 2]"
s: "(1, 2)"
```

View File

@@ -0,0 +1,61 @@
# Union
## Annotation
`typing.Union` can be used to construct union types, the same as the `|` operator.
```py
from typing import Union
a: Union[int, str]
a1: Union[int, bool]
a2: Union[int, Union[float, str]]
a3: Union[int, None]
a4: Union[Union[float, str]]
a5: Union[int]
a6: Union[()]
def f():
# revealed: int | str
reveal_type(a)
# Since bool is a subtype of int, we simplify to int here. But we do allow assigning boolean values (see below).
# revealed: int
reveal_type(a1)
# revealed: int | float | str
reveal_type(a2)
# revealed: int | None
reveal_type(a3)
# revealed: float | str
reveal_type(a4)
# revealed: int
reveal_type(a5)
# revealed: Never
reveal_type(a6)
```
## Assignment
```py
from typing import Union
a: Union[int, str]
a = 1
a = ""
a1: Union[int, bool]
a1 = 1
a1 = True
# error: [invalid-assignment] "Object of type `Literal[b""]` is not assignable to `int | str`"
a = b""
```
## Typing Extensions
```py
from typing_extensions import Union
a: Union[int, str]
def f():
# revealed: int | str
reveal_type(a)
```

View File

@@ -51,12 +51,12 @@ reveal_type(c) # revealed: tuple[str, int]
reveal_type(d) # revealed: tuple[tuple[str, str], tuple[int, int]]
# TODO: homogeneous tuples, PEP-646 tuples
reveal_type(e) # revealed: @Todo
reveal_type(f) # revealed: @Todo
reveal_type(g) # revealed: @Todo
reveal_type(e) # revealed: @Todo(full tuple[...] support)
reveal_type(f) # revealed: @Todo(full tuple[...] support)
reveal_type(g) # revealed: @Todo(full tuple[...] support)
# TODO: support more kinds of type expressions in annotations
reveal_type(h) # revealed: @Todo
reveal_type(h) # revealed: @Todo(full tuple[...] support)
reveal_type(i) # revealed: tuple[str | int, str | int]
reveal_type(j) # revealed: tuple[str | int]

View File

@@ -317,7 +317,7 @@ reveal_type(1 + A()) # revealed: int
reveal_type(A() + "foo") # revealed: A
# TODO should be `A` since `str.__add__` doesn't support `A` instances
# TODO overloads
reveal_type("foo" + A()) # revealed: @Todo
reveal_type("foo" + A()) # revealed: @Todo(return type)
reveal_type(A() + b"foo") # revealed: A
# TODO should be `A` since `bytes.__add__` doesn't support `A` instances
@@ -325,7 +325,7 @@ reveal_type(b"foo" + A()) # revealed: bytes
reveal_type(A() + ()) # revealed: A
# TODO this should be `A`, since `tuple.__add__` doesn't support `A` instances
reveal_type(() + A()) # revealed: @Todo
reveal_type(() + A()) # revealed: @Todo(return type)
literal_string_instance = "foo" * 1_000_000_000
# the test is not testing what it's meant to be testing if this isn't a `LiteralString`:
@@ -334,7 +334,7 @@ reveal_type(literal_string_instance) # revealed: LiteralString
reveal_type(A() + literal_string_instance) # revealed: A
# TODO should be `A` since `str.__add__` doesn't support `A` instances
# TODO overloads
reveal_type(literal_string_instance + A()) # revealed: @Todo
reveal_type(literal_string_instance + A()) # revealed: @Todo(return type)
```
## Operations involving instances of classes inheriting from `Any`

View File

@@ -16,7 +16,7 @@ async def get_int_async() -> int:
return 42
# TODO: we don't yet support `types.CoroutineType`, should be generic `Coroutine[Any, Any, int]`
reveal_type(get_int_async()) # revealed: @Todo
reveal_type(get_int_async()) # revealed: @Todo(generic types.CoroutineType)
```
## Generic
@@ -36,6 +36,8 @@ from typing import Callable
def foo() -> int:
return 42
# TODO: This should be resolved once we understand `Callable` annotations
# error: [annotation-with-invalid-expression]
def decorator(func) -> Callable[[], int]:
return foo
@@ -44,7 +46,7 @@ def bar() -> str:
return "bar"
# TODO: should reveal `int`, as the decorator replaces `bar` with `foo`
reveal_type(bar()) # revealed: @Todo
reveal_type(bar()) # revealed: @Todo(return type)
```
## Invalid callable

View File

@@ -58,7 +58,9 @@ reveal_type(c >= d) # revealed: Literal[True]
#### Results with Ambiguity
```py
def bool_instance() -> bool: ...
def bool_instance() -> bool:
return True
def int_instance() -> int:
return 42
@@ -134,23 +136,158 @@ reveal_type(c >= c) # revealed: Literal[True]
#### Non Boolean Rich Comparisons
Rich comparison methods defined in a class affect tuple comparisons as well. Proper type inference
should be possible even in cases where these methods return non-boolean types.
Note: Tuples use lexicographic comparisons. If the `==` result is truthy for every paired element,
the comparison then falls back to comparing the tuples' lengths. So even when the dunder methods
return non-boolean types, the final result can still be a plain boolean value.
(In CPython, `==` and `!=` on tuples always produce boolean results, regardless of the return type
of the element dunder methods.)
```py
from __future__ import annotations
class A:
def __eq__(self, o) -> str: ...
def __ne__(self, o) -> int: ...
def __lt__(self, o) -> float: ...
def __le__(self, o) -> object: ...
def __gt__(self, o) -> tuple: ...
def __ge__(self, o) -> list: ...
def __eq__(self, o: object) -> str:
return "hello"
def __ne__(self, o: object) -> bytes:
return b"world"
def __lt__(self, o: A) -> float:
return 3.14
def __le__(self, o: A) -> complex:
return complex(0.5, -0.5)
def __gt__(self, o: A) -> tuple:
return (1, 2, 3)
def __ge__(self, o: A) -> list:
return [1, 2, 3]
a = (A(), A())
reveal_type(a == a) # revealed: bool
reveal_type(a != a) # revealed: bool
reveal_type(a < a) # revealed: float | Literal[False]
reveal_type(a <= a) # revealed: complex | Literal[True]
reveal_type(a > a) # revealed: tuple | Literal[False]
reveal_type(a >= a) # revealed: list | Literal[True]
# If the lexicographic comparison is decided before the `A()` elements are compared:
b = ("1_foo", A())
c = ("2_bar", A())
reveal_type(b == c) # revealed: Literal[False]
reveal_type(b != c) # revealed: Literal[True]
reveal_type(b < c) # revealed: Literal[True]
reveal_type(b <= c) # revealed: Literal[True]
reveal_type(b > c) # revealed: Literal[False]
reveal_type(b >= c) # revealed: Literal[False]
class B:
def __lt__(self, o: B) -> set:
return set()
reveal_type((A(), B()) < (A(), B())) # revealed: float | set | Literal[False]
```
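As a sanity check on the CPython note above, the runtime behaviour can be reproduced with a small standalone script (illustrative only, not part of the test file): tuple `==`/`!=` always yield a `bool`, while the ordering operators return whatever the deciding dunder method returns.
```py
class Eq:
    # `__eq__` returns a truthy non-bool, so tuple comparison treats the pair as equal.
    def __eq__(self, other):
        return "truthy, but not a bool"
    def __lt__(self, other):
        return 3.14

class Ne:
    # `__eq__` returns a falsy value, so the pair counts as unequal.
    def __eq__(self, other):
        return False
    def __lt__(self, other):
        return 3.14

print((Eq(), Eq()) == (Eq(), Eq()))  # True  -- tuple `==` always yields a bool
print((Eq(), Eq()) < (Eq(), Eq()))   # False -- all pairs "equal", equal lengths
print((Ne(), Ne()) < (Ne(), Ne()))   # 3.14  -- first unequal pair decides; the `__lt__` result is returned as-is
```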
#### Special Handling of Eq and NotEq in Lexicographic Comparisons
> Example: `(int_instance(), "foo") == (int_instance(), "bar")`
`Eq` and `NotEq` have unique behavior compared to other operators in lexicographic comparisons.
Specifically, for `Eq`, a single pair of elements that is definitely non-equal lets us immediately
conclude that the result is `False`. Likewise, for `NotEq`, such a pair lets us conclude that the
result is `True`.
In contrast, with operators like `<` and `>`, the comparison must consider each pair of elements
sequentially, and the final outcome might remain ambiguous until all pairs are compared.
```py
def str_instance() -> str:
return "hello"
def int_instance() -> int:
return 42
reveal_type("foo" == "bar") # revealed: Literal[False]
reveal_type(("foo",) == ("bar",)) # revealed: Literal[False]
reveal_type((4, "foo") == (4, "bar")) # revealed: Literal[False]
reveal_type((int_instance(), "foo") == (int_instance(), "bar")) # revealed: Literal[False]
a = (str_instance(), int_instance(), "foo")
reveal_type(a == a) # revealed: bool
reveal_type(a != a) # revealed: bool
reveal_type(a < a) # revealed: bool
reveal_type(a <= a) # revealed: bool
reveal_type(a > a) # revealed: bool
reveal_type(a >= a) # revealed: bool
b = (str_instance(), int_instance(), "bar")
reveal_type(a == b) # revealed: Literal[False]
reveal_type(a != b) # revealed: Literal[True]
reveal_type(a < b) # revealed: bool
reveal_type(a <= b) # revealed: bool
reveal_type(a > b) # revealed: bool
reveal_type(a >= b) # revealed: bool
c = (str_instance(), int_instance(), "foo", "different_length")
reveal_type(a == c) # revealed: Literal[False]
reveal_type(a != c) # revealed: Literal[True]
reveal_type(a < c) # revealed: bool
reveal_type(a <= c) # revealed: bool
reveal_type(a > c) # revealed: bool
reveal_type(a >= c) # revealed: bool
```
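The same reasoning can be observed at runtime (illustrative standalone snippet, not part of the test file): a definitely-unequal pair fixes the `==`/`!=` result regardless of the unknown elements, whereas ordering is decided by the first unequal pair.
```py
def int_instance() -> int:
    return 42

# Whatever the two unknown `int` values are, the definitely-unequal string pair
# forces the overall result:
print((int_instance(), "foo") == (int_instance(), "bar"))  # False
print((int_instance(), "foo") != (int_instance(), "bar"))  # True

# Ordering operators are decided by the first unequal pair instead:
print((1, "foo") < (2, "bar"))  # True -- "foo" and "bar" are never ordered
```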
#### Error Propagation
Errors occurring within a tuple comparison should propagate outward. However, if the comparison
is already decided before the erroring pair is reached, no error should be emitted.
```py
def int_instance() -> int:
return 42
def str_instance() -> str:
return "hello"
class A: ...
# error: [unsupported-operator] "Operator `<` is not supported for types `A` and `A`"
A() < A()
# error: [unsupported-operator] "Operator `<=` is not supported for types `A` and `A`"
A() <= A()
# error: [unsupported-operator] "Operator `>` is not supported for types `A` and `A`"
A() > A()
# error: [unsupported-operator] "Operator `>=` is not supported for types `A` and `A`"
A() >= A()
a = (0, int_instance(), A())
# error: [unsupported-operator] "Operator `<` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
reveal_type(a < a) # revealed: Unknown
# error: [unsupported-operator] "Operator `<=` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
reveal_type(a <= a) # revealed: Unknown
# error: [unsupported-operator] "Operator `>` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
reveal_type(a > a) # revealed: Unknown
# error: [unsupported-operator] "Operator `>=` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
reveal_type(a >= a) # revealed: Unknown
# Comparison between `a` and `b` should only involve the first elements, `Literal[0]` and `Literal[99999]`,
# and should terminate immediately.
b = (99999, int_instance(), A())
reveal_type(a < b) # revealed: Literal[True]
reveal_type(a <= b) # revealed: Literal[True]
reveal_type(a > b) # revealed: Literal[False]
reveal_type(a >= b) # revealed: Literal[False]
```
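The runtime behaviour this mirrors can be reproduced with a standalone snippet (illustrative only, not part of the test file): the `TypeError` surfaces only when the comparison actually has to order the unsupported pair.
```py
class A: ...

try:
    (0, A()) < (0, A())  # first pair is equal, so `A() < A()` must be attempted
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'A' and 'A'

print((0, A()) < (99999, A()))  # True -- decided by the first pair; the `A()` instances are never ordered
```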
### Membership Test Comparisons

View File

@@ -4,6 +4,8 @@
def bool_instance() -> bool:
return True
class A: ...
a = 1 in 7 # error: "Operator `in` is not supported for types `Literal[1]` and `Literal[7]`"
reveal_type(a) # revealed: bool
@@ -33,4 +35,8 @@ reveal_type(e) # revealed: bool
f = (1, 2) < (1, "hello")
# TODO: should be Unknown, once operand type check is implemented
reveal_type(f) # revealed: bool
# error: [unsupported-operator] "Operator `<` is not supported for types `A` and `A`, in comparing `tuple[bool, A]` with `tuple[bool, A]`"
g = (bool_instance(), A()) < (bool_instance(), A())
reveal_type(g) # revealed: Unknown
```

View File

@@ -50,11 +50,11 @@ def foo(
help()
except x as e:
# TODO: should be `AttributeError`
reveal_type(e) # revealed: @Todo
reveal_type(e) # revealed: @Todo(exception type)
except y as f:
# TODO: should be `OSError | RuntimeError`
reveal_type(f) # revealed: @Todo
reveal_type(f) # revealed: @Todo(exception type)
except z as g:
# TODO: should be `BaseException`
reveal_type(g) # revealed: @Todo
reveal_type(g) # revealed: @Todo(exception type)
```

View File

@@ -22,3 +22,22 @@ reveal_type(1 if None else 2) # revealed: Literal[2]
reveal_type(1 if "" else 2) # revealed: Literal[2]
reveal_type(1 if 0 else 2) # revealed: Literal[2]
```
## Leaked Narrowing Constraint
(issue #14588)
The test inside an `if` expression should not leak narrowing constraints to code outside of the expression.
```py
def bool_instance() -> bool:
return True
x: Literal[42, "hello"] = 42 if bool_instance() else "hello"
reveal_type(x) # revealed: Literal[42] | Literal["hello"]
_ = ... if isinstance(x, str) else ...
reveal_type(x) # revealed: Literal[42] | Literal["hello"]
```

View File

@@ -18,7 +18,7 @@ box: MyBox[int] = MyBox(5)
wrong_innards: MyBox[int] = MyBox("five")
# TODO reveal int
reveal_type(box.data) # revealed: @Todo
reveal_type(box.data) # revealed: @Todo(instance attributes)
reveal_type(MyBox.box_model_number) # revealed: Literal[695]
```
@@ -39,7 +39,7 @@ class MySecureBox[T](MyBox[T]): ...
secure_box: MySecureBox[int] = MySecureBox(5)
reveal_type(secure_box) # revealed: MySecureBox
# TODO reveal int
reveal_type(secure_box.data) # revealed: @Todo
reveal_type(secure_box.data) # revealed: @Todo(instance attributes)
```
## Cyclical class definition
@@ -60,52 +60,20 @@ reveal_type(S) # revealed: Literal[S]
## Type params
A PEP695 type variable defines a value of type `typing.TypeVar` with attributes `__name__`,
`__bounds__`, `__constraints__`, and `__default__` (the latter three all lazily evaluated):
A PEP695 type variable defines a value of type `typing.TypeVar`.
```py
def f[T, U: A, V: (A, B), W = A, X: A = A1]():
def f[T]():
reveal_type(T) # revealed: T
reveal_type(T.__name__) # revealed: Literal["T"]
reveal_type(T.__bound__) # revealed: None
reveal_type(T.__constraints__) # revealed: tuple[()]
reveal_type(T.__default__) # revealed: NoDefault
reveal_type(U) # revealed: U
reveal_type(U.__name__) # revealed: Literal["U"]
reveal_type(U.__bound__) # revealed: type[A]
reveal_type(U.__constraints__) # revealed: tuple[()]
reveal_type(U.__default__) # revealed: NoDefault
reveal_type(V) # revealed: V
reveal_type(V.__name__) # revealed: Literal["V"]
reveal_type(V.__bound__) # revealed: None
reveal_type(V.__constraints__) # revealed: tuple[type[A], type[B]]
reveal_type(V.__default__) # revealed: NoDefault
reveal_type(W) # revealed: W
reveal_type(W.__name__) # revealed: Literal["W"]
reveal_type(W.__bound__) # revealed: None
reveal_type(W.__constraints__) # revealed: tuple[()]
reveal_type(W.__default__) # revealed: type[A]
reveal_type(X) # revealed: X
reveal_type(X.__name__) # revealed: Literal["X"]
reveal_type(X.__bound__) # revealed: type[A]
reveal_type(X.__constraints__) # revealed: tuple[()]
reveal_type(X.__default__) # revealed: type[A1]
class A: ...
class B: ...
class A1(A): ...
```
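For reference, the remaining claim can be confirmed at runtime with a standalone snippet (illustrative only, requires Python 3.12+ and is not part of the test file):
```py
def f[T]():
    print(type(T).__name__)  # TypeVar
    print(T.__name__)        # T

f()
```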
## Minimum two constraints
A typevar with less than two constraints emits a diagnostic and is treated as unconstrained:
A typevar with less than two constraints emits a diagnostic:
```py
# error: [invalid-typevar-constraints] "TypeVar must have at least two constrained types"
def f[T: (int,)]():
reveal_type(T.__constraints__) # revealed: tuple[()]
pass
```

View File

@@ -75,10 +75,11 @@ Literal: _SpecialForm
```py
from other import Literal
# error: [annotation-with-invalid-expression]
a1: Literal[26]
def f():
reveal_type(a1) # revealed: @Todo
reveal_type(a1) # revealed: @Todo(generics)
```
## Detecting typing_extensions.Literal

View File

@@ -18,7 +18,7 @@ async def foo():
pass
# TODO: should reveal `Unknown` because `__aiter__` is not defined
# revealed: @Todo
# revealed: @Todo(async iterables/iterators)
# error: [possibly-unresolved-reference]
reveal_type(x)
```
@@ -40,6 +40,6 @@ async def foo():
pass
# error: [possibly-unresolved-reference]
# revealed: @Todo
# revealed: @Todo(async iterables/iterators)
reveal_type(x)
```

View File

@@ -52,3 +52,29 @@ else:
reveal_type(x) # revealed: Literal[2, 3]
reveal_type(y) # revealed: Literal[1, 2, 4]
```
## Nested while loops
```py
def flag() -> bool:
return True
x = 1
while flag():
x = 2
while flag():
x = 3
if flag():
break
else:
x = 4
if flag():
break
else:
x = 5
reveal_type(x) # revealed: Literal[3, 4, 5]
```

View File

@@ -171,7 +171,7 @@ def f(*args, **kwargs) -> int: ...
class A(metaclass=f): ...
# TODO should be `type[int]`
reveal_type(A.__class__) # revealed: @Todo
reveal_type(A.__class__) # revealed: @Todo(metaclass not a class)
```
## Cyclic

View File

@@ -17,8 +17,7 @@ reveal_type(__doc__) # revealed: str | None
# (needs support for `*` imports)
reveal_type(__spec__) # revealed: Unknown | None
# TODO: generics
reveal_type(__path__) # revealed: @Todo
reveal_type(__path__) # revealed: @Todo(generics)
class X:
reveal_type(__name__) # revealed: str
@@ -64,7 +63,7 @@ reveal_type(typing.__class__) # revealed: Literal[type]
# TODO: needs support for attribute access on instances, properties and generics;
# should be `dict[str, Any]`
reveal_type(typing.__dict__) # revealed: @Todo
reveal_type(typing.__dict__) # revealed: @Todo(instance attributes)
```
Typeshed includes a fake `__getattr__` method in the stub for `types.ModuleType` to help out with
@@ -96,8 +95,8 @@ from foo import __dict__ as foo_dict
# TODO: needs support for attribute access on instances, properties, and generics;
# should be `dict[str, Any]` for both of these:
reveal_type(foo.__dict__) # revealed: @Todo
reveal_type(foo_dict) # revealed: @Todo
reveal_type(foo.__dict__) # revealed: @Todo(instance attributes)
reveal_type(foo_dict) # revealed: @Todo(instance attributes)
```
## Conditionally global or `ModuleType` attribute

View File

@@ -27,7 +27,7 @@ def int_instance() -> int:
a = b"abcde"[int_instance()]
# TODO: Support overloads... Should be `bytes`
reveal_type(a) # revealed: @Todo
reveal_type(a) # revealed: @Todo(return type)
```
## Slices
@@ -47,11 +47,11 @@ def int_instance() -> int: ...
byte_slice1 = b[int_instance() : int_instance()]
# TODO: Support overloads... Should be `bytes`
reveal_type(byte_slice1) # revealed: @Todo
reveal_type(byte_slice1) # revealed: @Todo(return type)
def bytes_instance() -> bytes: ...
byte_slice2 = bytes_instance()[0:5]
# TODO: Support overloads... Should be `bytes`
reveal_type(byte_slice2) # revealed: @Todo
reveal_type(byte_slice2) # revealed: @Todo(return type)
```

View File

@@ -12,13 +12,13 @@ x = [1, 2, 3]
reveal_type(x) # revealed: list
# TODO reveal int
reveal_type(x[0]) # revealed: @Todo
reveal_type(x[0]) # revealed: @Todo(return type)
# TODO reveal list
reveal_type(x[0:1]) # revealed: @Todo
reveal_type(x[0:1]) # revealed: @Todo(return type)
# TODO error
reveal_type(x["a"]) # revealed: @Todo
reveal_type(x["a"]) # revealed: @Todo(return type)
```
## Assignments within list assignment

View File

@@ -23,7 +23,7 @@ def int_instance() -> int: ...
a = "abcde"[int_instance()]
# TODO: Support overloads... Should be `str`
reveal_type(a) # revealed: @Todo
reveal_type(a) # revealed: @Todo(return type)
```
## Slices
@@ -78,13 +78,13 @@ def int_instance() -> int: ...
substring1 = s[int_instance() : int_instance()]
# TODO: Support overloads... Should be `LiteralString`
reveal_type(substring1) # revealed: @Todo
reveal_type(substring1) # revealed: @Todo(return type)
def str_instance() -> str: ...
substring2 = str_instance()[0:5]
# TODO: Support overloads... Should be `str`
reveal_type(substring2) # revealed: @Todo
reveal_type(substring2) # revealed: @Todo(return type)
```
## Unsupported slice types

View File

@@ -71,5 +71,5 @@ def int_instance() -> int: ...
tuple_slice = t[int_instance() : int_instance()]
# TODO: Support overloads... Should be `tuple[Literal[1, 'a', b"b"] | None, ...]`
reveal_type(tuple_slice) # revealed: @Todo
reveal_type(tuple_slice) # revealed: @Todo(return type)
```

View File

@@ -58,8 +58,7 @@ reveal_type(sys.version_info >= (3, 9, 1, "final", 0)) # revealed: bool
# emitting a lint diagnostic of some kind warning them about the probable error?
reveal_type(sys.version_info >= (3, 9, 1, "final", 0, 5)) # revealed: bool
# TODO: this should be `Literal[False]`; see #14279
reveal_type(sys.version_info == (3, 9, 1, "finallllll", 0)) # revealed: bool
reveal_type(sys.version_info == (3, 8, 1, "finallllll", 0)) # revealed: Literal[False]
```
## Imports and aliases
@@ -113,9 +112,9 @@ properties on instance types:
```py path=b.py
import sys
reveal_type(sys.version_info.micro) # revealed: @Todo
reveal_type(sys.version_info.releaselevel) # revealed: @Todo
reveal_type(sys.version_info.serial) # revealed: @Todo
reveal_type(sys.version_info.micro) # revealed: @Todo(instance attributes)
reveal_type(sys.version_info.releaselevel) # revealed: @Todo(instance attributes)
reveal_type(sys.version_info.serial) # revealed: @Todo(instance attributes)
```
## Accessing fields by index/slice

View File

@@ -0,0 +1,71 @@
# Type aliases
## Basic
```py
type IntOrStr = int | str
reveal_type(IntOrStr) # revealed: typing.TypeAliasType
reveal_type(IntOrStr.__name__) # revealed: Literal["IntOrStr"]
x: IntOrStr = 1
reveal_type(x) # revealed: Literal[1]
def f() -> None:
reveal_type(x) # revealed: int | str
```
## `__value__` attribute
```py
type IntOrStr = int | str
# TODO: This should either fall back to the specified type from typeshed,
# which is `Any`, or be the actual type of the runtime value expression
# `int | str`, i.e. `types.UnionType`.
reveal_type(IntOrStr.__value__) # revealed: @Todo(instance attributes)
```
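For reference, the runtime objects mentioned in the TODO can be inspected with a standalone snippet (illustrative only, requires Python 3.12+ and is not part of the test file):
```py
type IntOrStr = int | str

print(type(IntOrStr).__name__)  # TypeAliasType
print(IntOrStr.__name__)        # IntOrStr
print(IntOrStr.__value__)       # int | str  (a `types.UnionType` at runtime)
```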
## Invalid assignment
```py
type OptionalInt = int | None
# error: [invalid-assignment]
x: OptionalInt = "1"
```
## Type aliases in type aliases
```py
type IntOrStr = int | str
type IntOrStrOrBytes = IntOrStr | bytes
x: IntOrStrOrBytes = 1
def f() -> None:
reveal_type(x) # revealed: int | str | bytes
```
## Aliased type aliases
```py
type IntOrStr = int | str
MyIntOrStr = IntOrStr
x: MyIntOrStr = 1
# error: [invalid-assignment]
y: MyIntOrStr = None
```
## Generic type aliases
```py
type ListOrSet[T] = list[T] | set[T]
# TODO: Should be `tuple[typing.TypeVar | typing.ParamSpec | typing.TypeVarTuple, ...]`,
# as specified in the `typeshed` stubs.
reveal_type(ListOrSet.__type_params__) # revealed: @Todo(instance attributes)
```

View File

@@ -84,7 +84,7 @@ reveal_type(b) # revealed: Literal[2]
[a, *b, c, d] = (1, 2)
reveal_type(a) # revealed: Literal[1]
# TODO: Should be list[Any] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: Literal[2]
reveal_type(d) # revealed: Unknown
```
@@ -95,7 +95,7 @@ reveal_type(d) # revealed: Unknown
[a, *b, c] = (1, 2)
reveal_type(a) # revealed: Literal[1]
# TODO: Should be list[Any] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: Literal[2]
```
@@ -105,7 +105,7 @@ reveal_type(c) # revealed: Literal[2]
[a, *b, c] = (1, 2, 3)
reveal_type(a) # revealed: Literal[1]
# TODO: Should be list[int] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: Literal[3]
```
@@ -115,7 +115,7 @@ reveal_type(c) # revealed: Literal[3]
[a, *b, c, d] = (1, 2, 3, 4, 5, 6)
reveal_type(a) # revealed: Literal[1]
# TODO: Should be list[int] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: Literal[5]
reveal_type(d) # revealed: Literal[6]
```
@@ -127,7 +127,7 @@ reveal_type(d) # revealed: Literal[6]
reveal_type(a) # revealed: Literal[1]
reveal_type(b) # revealed: Literal[2]
# TODO: Should be list[int] once support for assigning to starred expression is added
reveal_type(c) # revealed: @Todo
reveal_type(c) # revealed: @Todo(starred unpacking)
```
### Starred expression (6)
@@ -138,7 +138,7 @@ reveal_type(c) # revealed: @Todo
reveal_type(a) # revealed: Literal[1]
reveal_type(b) # revealed: Unknown
reveal_type(c) # revealed: Unknown
reveal_type(d) # revealed: @Todo
reveal_type(d) # revealed: @Todo(starred unpacking)
reveal_type(e) # revealed: Unknown
reveal_type(f) # revealed: Unknown
```
@@ -222,7 +222,7 @@ reveal_type(b) # revealed: LiteralString
(a, *b, c, d) = "ab"
reveal_type(a) # revealed: LiteralString
# TODO: Should be list[LiteralString] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: LiteralString
reveal_type(d) # revealed: Unknown
```
@@ -233,7 +233,7 @@ reveal_type(d) # revealed: Unknown
(a, *b, c) = "ab"
reveal_type(a) # revealed: LiteralString
# TODO: Should be list[Any] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: LiteralString
```
@@ -243,7 +243,7 @@ reveal_type(c) # revealed: LiteralString
(a, *b, c) = "abc"
reveal_type(a) # revealed: LiteralString
# TODO: Should be list[LiteralString] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: LiteralString
```
@@ -253,7 +253,7 @@ reveal_type(c) # revealed: LiteralString
(a, *b, c, d) = "abcdef"
reveal_type(a) # revealed: LiteralString
# TODO: Should be list[LiteralString] once support for assigning to starred expression is added
reveal_type(b) # revealed: @Todo
reveal_type(b) # revealed: @Todo(starred unpacking)
reveal_type(c) # revealed: LiteralString
reveal_type(d) # revealed: LiteralString
```
@@ -265,5 +265,5 @@ reveal_type(d) # revealed: LiteralString
reveal_type(a) # revealed: LiteralString
reveal_type(b) # revealed: LiteralString
# TODO: Should be list[int] once support for assigning to starred expression is added
reveal_type(c) # revealed: @Todo
reveal_type(c) # revealed: @Todo(starred unpacking)
```

View File

@@ -17,5 +17,5 @@ class Manager:
async def test():
async with Manager() as f:
reveal_type(f) # revealed: @Todo
reveal_type(f) # revealed: @Todo(async with statement)
```

View File

@@ -36,12 +36,25 @@ use super::definition::{
mod except_handlers;
/// Are we in a state where a `break` statement is allowed?
#[derive(Clone, Copy, Debug)]
enum LoopState {
InLoop,
NotInLoop,
}
impl LoopState {
fn is_inside(self) -> bool {
matches!(self, LoopState::InLoop)
}
}
pub(super) struct SemanticIndexBuilder<'db> {
// Builder state
db: &'db dyn Db,
file: File,
module: &'db ParsedModule,
scope_stack: Vec<FileScopeId>,
scope_stack: Vec<(FileScopeId, LoopState)>,
/// The assignments we're currently visiting, with
/// the most recent visit at the end of the Vec
current_assignments: Vec<CurrentAssignment<'db>>,
@@ -103,9 +116,24 @@ impl<'db> SemanticIndexBuilder<'db> {
*self
.scope_stack
.last()
.map(|(scope, _)| scope)
.expect("Always to have a root scope")
}
fn loop_state(&self) -> LoopState {
self.scope_stack
.last()
.expect("Always to have a root scope")
.1
}
fn set_inside_loop(&mut self, state: LoopState) {
self.scope_stack
.last_mut()
.expect("Always to have a root scope")
.1 = state;
}
fn push_scope(&mut self, node: NodeWithScopeRef) {
let parent = self.current_scope();
self.push_scope_with_parent(node, Some(parent));
@@ -131,15 +159,16 @@ impl<'db> SemanticIndexBuilder<'db> {
let scope_id = ScopeId::new(self.db, self.file, file_scope_id, countme::Count::default());
self.scope_ids_by_scope.push(scope_id);
self.scopes_by_node.insert(node.node_key(), file_scope_id);
let previous = self.scopes_by_node.insert(node.node_key(), file_scope_id);
debug_assert_eq!(previous, None);
debug_assert_eq!(ast_id_scope, file_scope_id);
self.scope_stack.push(file_scope_id);
self.scope_stack.push((file_scope_id, LoopState::NotInLoop));
}
fn pop_scope(&mut self) -> FileScopeId {
let id = self.scope_stack.pop().expect("Root scope to be present");
let (id, _) = self.scope_stack.pop().expect("Root scope to be present");
let children_end = self.scopes.next_index();
let scope = &mut self.scopes[id];
scope.descendents = scope.descendents.start..children_end;
@@ -589,6 +618,27 @@ where
},
);
}
ast::Stmt::TypeAlias(type_alias) => {
let symbol = self.add_symbol(
type_alias
.name
.as_name_expr()
.map(|name| name.id.clone())
.unwrap_or("<unknown>".into()),
);
self.add_definition(symbol, type_alias);
self.visit_expr(&type_alias.name);
self.with_type_params(
NodeWithScopeRef::TypeAliasTypeParameters(type_alias),
type_alias.type_params.as_ref(),
|builder| {
builder.push_scope(NodeWithScopeRef::TypeAlias(type_alias));
builder.visit_expr(&type_alias.value);
builder.pop_scope()
},
);
}
ast::Stmt::Import(node) => {
for alias in &node.names {
let symbol_name = if let Some(asname) = &alias.asname {
@@ -763,7 +813,10 @@ where
// TODO: definitions created inside the body should be fully visible
// to other statements/expressions inside the body --Alex/Carl
let outer_loop_state = self.loop_state();
self.set_inside_loop(LoopState::InLoop);
self.visit_body(body);
self.set_inside_loop(outer_loop_state);
// Get the break states from the body of this loop, and restore the saved outer
// ones.
@@ -802,7 +855,9 @@ where
self.visit_body(body);
}
ast::Stmt::Break(_) => {
self.loop_break_states.push(self.flow_snapshot());
if self.loop_state().is_inside() {
self.loop_break_states.push(self.flow_snapshot());
}
}
ast::Stmt::For(
@@ -829,7 +884,10 @@ where
// TODO: Definitions created by loop variables
// (and definitions created inside the body)
// are fully visible to other statements/expressions inside the body --Alex/Carl
let outer_loop_state = self.loop_state();
self.set_inside_loop(LoopState::InLoop);
self.visit_body(body);
self.set_inside_loop(outer_loop_state);
let break_states =
std::mem::replace(&mut self.loop_break_states, saved_break_states);
@@ -1131,8 +1189,8 @@ where
// AST inspection, so we can't simplify here, need to record test expression for
// later checking)
self.visit_expr(test);
let constraint = self.record_expression_constraint(test);
let pre_if = self.flow_snapshot();
let constraint = self.record_expression_constraint(test);
self.visit_expr(body);
let post_body = self.flow_snapshot();
self.flow_restore(pre_if);

View File

@@ -83,6 +83,7 @@ pub(crate) enum DefinitionNodeRef<'a> {
For(ForStmtDefinitionNodeRef<'a>),
Function(&'a ast::StmtFunctionDef),
Class(&'a ast::StmtClassDef),
TypeAlias(&'a ast::StmtTypeAlias),
NamedExpression(&'a ast::ExprNamed),
Assignment(AssignmentDefinitionNodeRef<'a>),
AnnotatedAssignment(&'a ast::StmtAnnAssign),
@@ -109,6 +110,12 @@ impl<'a> From<&'a ast::StmtClassDef> for DefinitionNodeRef<'a> {
}
}
impl<'a> From<&'a ast::StmtTypeAlias> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtTypeAlias) -> Self {
Self::TypeAlias(node)
}
}
impl<'a> From<&'a ast::ExprNamed> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::ExprNamed) -> Self {
Self::NamedExpression(node)
@@ -265,6 +272,9 @@ impl<'db> DefinitionNodeRef<'db> {
DefinitionNodeRef::Class(class) => {
DefinitionKind::Class(AstNodeRef::new(parsed, class))
}
DefinitionNodeRef::TypeAlias(type_alias) => {
DefinitionKind::TypeAlias(AstNodeRef::new(parsed, type_alias))
}
DefinitionNodeRef::NamedExpression(named) => {
DefinitionKind::NamedExpression(AstNodeRef::new(parsed, named))
}
@@ -358,6 +368,7 @@ impl<'db> DefinitionNodeRef<'db> {
}
Self::Function(node) => node.into(),
Self::Class(node) => node.into(),
Self::TypeAlias(node) => node.into(),
Self::NamedExpression(node) => node.into(),
Self::Assignment(AssignmentDefinitionNodeRef {
value: _,
@@ -434,6 +445,7 @@ pub enum DefinitionKind<'db> {
ImportFrom(ImportFromDefinitionKind),
Function(AstNodeRef<ast::StmtFunctionDef>),
Class(AstNodeRef<ast::StmtClassDef>),
TypeAlias(AstNodeRef<ast::StmtTypeAlias>),
NamedExpression(AstNodeRef<ast::ExprNamed>),
Assignment(AssignmentDefinitionKind<'db>),
AnnotatedAssignment(AstNodeRef<ast::StmtAnnAssign>),
@@ -456,6 +468,7 @@ impl DefinitionKind<'_> {
// functions, classes, and imports always bind, and we consider them declarations
DefinitionKind::Function(_)
| DefinitionKind::Class(_)
| DefinitionKind::TypeAlias(_)
| DefinitionKind::Import(_)
| DefinitionKind::ImportFrom(_)
| DefinitionKind::TypeVar(_)
@@ -682,6 +695,12 @@ impl From<&ast::StmtClassDef> for DefinitionNodeKey {
}
}
impl From<&ast::StmtTypeAlias> for DefinitionNodeKey {
fn from(node: &ast::StmtTypeAlias) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::ExprName> for DefinitionNodeKey {
fn from(node: &ast::ExprName) -> Self {
Self(NodeKey::from_node(node))

View File

@@ -116,14 +116,11 @@ impl<'db> ScopeId<'db> {
// Type parameter scopes behave like function scopes in terms of name resolution; CPython
// symbol table also uses the term "function-like" for these scopes.
matches!(
self.node(db),
NodeWithScopeKind::ClassTypeParameters(_)
| NodeWithScopeKind::FunctionTypeParameters(_)
| NodeWithScopeKind::Function(_)
| NodeWithScopeKind::ListComprehension(_)
| NodeWithScopeKind::SetComprehension(_)
| NodeWithScopeKind::DictComprehension(_)
| NodeWithScopeKind::GeneratorExpression(_)
self.node(db).scope_kind(),
ScopeKind::Annotation
| ScopeKind::Function
| ScopeKind::TypeAlias
| ScopeKind::Comprehension
)
}
@@ -144,6 +141,12 @@ impl<'db> ScopeId<'db> {
}
NodeWithScopeKind::Function(function)
| NodeWithScopeKind::FunctionTypeParameters(function) => function.name.as_str(),
NodeWithScopeKind::TypeAlias(type_alias)
| NodeWithScopeKind::TypeAliasTypeParameters(type_alias) => type_alias
.name
.as_name_expr()
.map(|name| name.id.as_str())
.unwrap_or("<type alias>"),
NodeWithScopeKind::Lambda(_) => "<lambda>",
NodeWithScopeKind::ListComprehension(_) => "<listcomp>",
NodeWithScopeKind::SetComprehension(_) => "<setcomp>",
@@ -201,6 +204,7 @@ pub enum ScopeKind {
Class,
Function,
Comprehension,
TypeAlias,
}
impl ScopeKind {
@@ -326,6 +330,8 @@ pub(crate) enum NodeWithScopeRef<'a> {
Lambda(&'a ast::ExprLambda),
FunctionTypeParameters(&'a ast::StmtFunctionDef),
ClassTypeParameters(&'a ast::StmtClassDef),
TypeAlias(&'a ast::StmtTypeAlias),
TypeAliasTypeParameters(&'a ast::StmtTypeAlias),
ListComprehension(&'a ast::ExprListComp),
SetComprehension(&'a ast::ExprSetComp),
DictComprehension(&'a ast::ExprDictComp),
@@ -347,6 +353,12 @@ impl NodeWithScopeRef<'_> {
NodeWithScopeRef::Function(function) => {
NodeWithScopeKind::Function(AstNodeRef::new(module, function))
}
NodeWithScopeRef::TypeAlias(type_alias) => {
NodeWithScopeKind::TypeAlias(AstNodeRef::new(module, type_alias))
}
NodeWithScopeRef::TypeAliasTypeParameters(type_alias) => {
NodeWithScopeKind::TypeAliasTypeParameters(AstNodeRef::new(module, type_alias))
}
NodeWithScopeRef::Lambda(lambda) => {
NodeWithScopeKind::Lambda(AstNodeRef::new(module, lambda))
}
@@ -387,6 +399,12 @@ impl NodeWithScopeRef<'_> {
NodeWithScopeRef::ClassTypeParameters(class) => {
NodeWithScopeKey::ClassTypeParameters(NodeKey::from_node(class))
}
NodeWithScopeRef::TypeAlias(type_alias) => {
NodeWithScopeKey::TypeAlias(NodeKey::from_node(type_alias))
}
NodeWithScopeRef::TypeAliasTypeParameters(type_alias) => {
NodeWithScopeKey::TypeAliasTypeParameters(NodeKey::from_node(type_alias))
}
NodeWithScopeRef::ListComprehension(comprehension) => {
NodeWithScopeKey::ListComprehension(NodeKey::from_node(comprehension))
}
@@ -411,6 +429,8 @@ pub enum NodeWithScopeKind {
ClassTypeParameters(AstNodeRef<ast::StmtClassDef>),
Function(AstNodeRef<ast::StmtFunctionDef>),
FunctionTypeParameters(AstNodeRef<ast::StmtFunctionDef>),
TypeAliasTypeParameters(AstNodeRef<ast::StmtTypeAlias>),
TypeAlias(AstNodeRef<ast::StmtTypeAlias>),
Lambda(AstNodeRef<ast::ExprLambda>),
ListComprehension(AstNodeRef<ast::ExprListComp>),
SetComprehension(AstNodeRef<ast::ExprSetComp>),
@@ -423,9 +443,11 @@ impl NodeWithScopeKind {
match self {
Self::Module => ScopeKind::Module,
Self::Class(_) => ScopeKind::Class,
Self::Function(_) => ScopeKind::Function,
Self::Lambda(_) => ScopeKind::Function,
Self::FunctionTypeParameters(_) | Self::ClassTypeParameters(_) => ScopeKind::Annotation,
Self::Function(_) | Self::Lambda(_) => ScopeKind::Function,
Self::FunctionTypeParameters(_)
| Self::ClassTypeParameters(_)
| Self::TypeAliasTypeParameters(_) => ScopeKind::Annotation,
Self::TypeAlias(_) => ScopeKind::TypeAlias,
Self::ListComprehension(_)
| Self::SetComprehension(_)
| Self::DictComprehension(_)
@@ -446,6 +468,13 @@ impl NodeWithScopeKind {
_ => panic!("expected function"),
}
}
pub fn expect_type_alias(&self) -> &ast::StmtTypeAlias {
match self {
Self::TypeAlias(type_alias) => type_alias.node(),
_ => panic!("expected type alias"),
}
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
@@ -455,6 +484,8 @@ pub(crate) enum NodeWithScopeKey {
ClassTypeParameters(NodeKey),
Function(NodeKey),
FunctionTypeParameters(NodeKey),
TypeAlias(NodeKey),
TypeAliasTypeParameters(NodeKey),
Lambda(NodeKey),
ListComprehension(NodeKey),
SetComprehension(NodeKey),

View File

@@ -324,6 +324,61 @@ fn declarations_ty<'db>(
}
}
/// Metadata for `Type::Todo`, which represents a known limitation in red-knot.
#[cfg(debug_assertions)]
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub enum TodoType {
FileAndLine(&'static str, u32),
Message(&'static str),
}
#[cfg(debug_assertions)]
impl std::fmt::Display for TodoType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
TodoType::FileAndLine(file, line) => write!(f, "[{file}:{line}]"),
TodoType::Message(msg) => write!(f, "({msg})"),
}
}
}
#[cfg(not(debug_assertions))]
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct TodoType;
#[cfg(not(debug_assertions))]
impl std::fmt::Display for TodoType {
fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
/// Create a `Type::Todo` variant to represent a known limitation in the type system.
///
/// It can be used with a custom message (preferred): `todo_type!("PEP 604 not supported")`,
/// or simply using `todo_type!()`, which will include information about the file and line.
#[cfg(debug_assertions)]
macro_rules! todo_type {
() => {
Type::Todo(crate::types::TodoType::FileAndLine(file!(), line!()))
};
($message:literal) => {
Type::Todo(crate::types::TodoType::Message($message))
};
}
#[cfg(not(debug_assertions))]
macro_rules! todo_type {
() => {
Type::Todo(crate::types::TodoType)
};
($message:literal) => {
Type::Todo(crate::types::TodoType)
};
}
pub(crate) use todo_type;
/// Representation of a type: a set of possible values at runtime.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, salsa::Update)]
pub enum Type<'db> {
@@ -340,7 +395,9 @@ pub enum Type<'db> {
/// General rule: `Todo` should only propagate when the presence of the input `Todo` caused the
/// output to be unknown. An output should only be `Todo` if fixing all `Todo` inputs to be not
/// `Todo` would change the output type.
Todo,
///
/// This variant should be created with the `todo_type!` macro.
Todo(TodoType),
/// The empty set of values
Never,
/// A specific function object
@@ -384,7 +441,7 @@ impl<'db> Type<'db> {
}
pub const fn is_todo(&self) -> bool {
matches!(self, Type::Todo)
matches!(self, Type::Todo(_))
}
pub const fn class_literal(class: Class<'db>) -> Self {
@@ -480,6 +537,19 @@ impl<'db> Type<'db> {
.expect("Expected a Type::IntLiteral variant")
}
pub const fn into_known_instance(self) -> Option<KnownInstanceType<'db>> {
match self {
Type::KnownInstance(known_instance) => Some(known_instance),
_ => None,
}
}
#[track_caller]
pub fn expect_known_instance(self) -> KnownInstanceType<'db> {
self.into_known_instance()
.expect("Expected a Type::KnownInstance variant")
}
pub const fn is_boolean_literal(&self) -> bool {
matches!(self, Type::BooleanLiteral(..))
}
@@ -530,8 +600,8 @@ impl<'db> Type<'db> {
return true;
}
match (self, target) {
(Type::Unknown | Type::Any | Type::Todo, _) => false,
(_, Type::Unknown | Type::Any | Type::Todo) => false,
(Type::Unknown | Type::Any | Type::Todo(_), _) => false,
(_, Type::Unknown | Type::Any | Type::Todo(_)) => false,
(Type::Never, _) => true,
(_, Type::Never) => false,
(_, Type::Instance(InstanceType { class }))
@@ -666,8 +736,8 @@ impl<'db> Type<'db> {
return true;
}
match (self, target) {
(Type::Unknown | Type::Any | Type::Todo, _) => true,
(_, Type::Unknown | Type::Any | Type::Todo) => true,
(Type::Unknown | Type::Any | Type::Todo(_), _) => true,
(_, Type::Unknown | Type::Any | Type::Todo(_)) => true,
(Type::Union(union), ty) => union
.elements(db)
.iter()
@@ -703,6 +773,7 @@ impl<'db> Type<'db> {
// of `NoneType` and `NoDefaultType` in typeshed. This should not be required anymore once
// we understand `sys.version_info` branches.
self == other
|| matches!((self, other), (Type::Todo(_), Type::Todo(_)))
|| matches!((self, other),
(
Type::Instance(InstanceType { class: self_class }),
@@ -726,7 +797,7 @@ impl<'db> Type<'db> {
(Type::Any, _) | (_, Type::Any) => false,
(Type::Unknown, _) | (_, Type::Unknown) => false,
(Type::Todo, _) | (_, Type::Todo) => false,
(Type::Todo(_), _) | (_, Type::Todo(_)) => false,
(Type::Union(union), other) | (other, Type::Union(union)) => union
.elements(db)
@@ -931,7 +1002,7 @@ impl<'db> Type<'db> {
Type::Any
| Type::Never
| Type::Unknown
| Type::Todo
| Type::Todo(_)
| Type::IntLiteral(..)
| Type::StringLiteral(..)
| Type::BytesLiteral(..)
@@ -1007,7 +1078,10 @@ impl<'db> Type<'db> {
Type::Instance(InstanceType { class }) => match class.known(db) {
Some(
KnownClass::NoneType | KnownClass::NoDefaultType | KnownClass::VersionInfo,
KnownClass::NoneType
| KnownClass::NoDefaultType
| KnownClass::VersionInfo
| KnownClass::TypeAliasType,
) => true,
Some(
KnownClass::Bool
@@ -1034,7 +1108,7 @@ impl<'db> Type<'db> {
Type::Any
| Type::Never
| Type::Unknown
| Type::Todo
| Type::Todo(_)
| Type::Union(..)
| Type::Intersection(..)
| Type::LiteralString => false,
@@ -1052,12 +1126,12 @@ impl<'db> Type<'db> {
Type::Any => Type::Any.into(),
Type::Never => {
// TODO: attribute lookup on Never type
Type::Todo.into()
todo_type!().into()
}
Type::Unknown => Type::Unknown.into(),
Type::FunctionLiteral(_) => {
// TODO: attribute lookup on function type
Type::Todo.into()
todo_type!().into()
}
Type::ModuleLiteral(file) => {
// `__dict__` is a very special member that is never overridden by module globals;
@@ -1107,7 +1181,7 @@ impl<'db> Type<'db> {
Type::IntLiteral(Program::get(db).target_version(db).minor.into())
}
// TODO MRO? get_own_instance_member, get_instance_member
_ => Type::Todo,
_ => todo_type!("instance attributes"),
};
ty.into()
}
@@ -1149,36 +1223,36 @@ impl<'db> Type<'db> {
Type::Intersection(_) => {
// TODO perform the get_member on each type in the intersection
// TODO return the intersection of those results
Type::Todo.into()
todo_type!().into()
}
Type::IntLiteral(_) => {
// TODO raise error
Type::Todo.into()
todo_type!().into()
}
Type::BooleanLiteral(_) => Type::Todo.into(),
Type::BooleanLiteral(_) => todo_type!().into(),
Type::StringLiteral(_) => {
// TODO defer to `typing.LiteralString`/`builtins.str` methods
// from typeshed's stubs
Type::Todo.into()
todo_type!().into()
}
Type::LiteralString => {
// TODO defer to `typing.LiteralString`/`builtins.str` methods
// from typeshed's stubs
Type::Todo.into()
todo_type!().into()
}
Type::BytesLiteral(_) => {
// TODO defer to Type::Instance(<bytes from typeshed>).member
Type::Todo.into()
todo_type!().into()
}
Type::SliceLiteral(_) => {
// TODO defer to `builtins.slice` methods
Type::Todo.into()
todo_type!().into()
}
Type::Tuple(_) => {
// TODO: implement tuple methods
Type::Todo.into()
todo_type!().into()
}
Type::Todo => Type::Todo.into(),
&todo @ Type::Todo(_) => todo.into(),
}
}
@@ -1188,7 +1262,7 @@ impl<'db> Type<'db> {
/// when `bool(x)` is called on an object `x`.
fn bool(&self, db: &'db dyn Db) -> Truthiness {
match self {
Type::Any | Type::Todo | Type::Never | Type::Unknown => Truthiness::Ambiguous,
Type::Any | Type::Todo(_) | Type::Never | Type::Unknown => Truthiness::Ambiguous,
Type::FunctionLiteral(_) => Truthiness::AlwaysTrue,
Type::ModuleLiteral(_) => Truthiness::AlwaysTrue,
Type::ClassLiteral(_) => {
@@ -1329,7 +1403,7 @@ impl<'db> Type<'db> {
// `Any` is callable, and its return type is also `Any`.
Type::Any => CallOutcome::callable(Type::Any),
Type::Todo => CallOutcome::callable(Type::Todo),
Type::Todo(_) => CallOutcome::callable(todo_type!()),
Type::Unknown => CallOutcome::callable(Type::Unknown),
@@ -1342,7 +1416,7 @@ impl<'db> Type<'db> {
),
// TODO: intersection types
Type::Intersection(_) => CallOutcome::callable(Type::Todo),
Type::Intersection(_) => CallOutcome::callable(todo_type!()),
_ => CallOutcome::not_callable(self),
}
@@ -1381,7 +1455,7 @@ impl<'db> Type<'db> {
};
}
if matches!(self, Type::Unknown | Type::Any | Type::Todo) {
if matches!(self, Type::Unknown | Type::Any | Type::Todo(_)) {
// Explicit handling of `Unknown` and `Any` necessary until `type[Unknown]` and
// `type[Any]` are not defined as `Todo` anymore.
return IterationOutcome::Iterable { element_ty: self };
@@ -1440,14 +1514,14 @@ impl<'db> Type<'db> {
pub fn to_instance(&self, db: &'db dyn Db) -> Type<'db> {
match self {
Type::Any => Type::Any,
Type::Todo => Type::Todo,
todo @ Type::Todo(_) => *todo,
Type::Unknown => Type::Unknown,
Type::Never => Type::Never,
Type::ClassLiteral(ClassLiteralType { class }) => Type::instance(*class),
Type::SubclassOf(SubclassOfType { class }) => Type::instance(*class),
Type::Union(union) => union.map(db, |element| element.to_instance(db)),
// TODO: we can probably do better here: --Alex
Type::Intersection(_) => Type::Todo,
Type::Intersection(_) => todo_type!(),
// TODO: calling `.to_instance()` on any of these should result in a diagnostic,
// since they already indicate that the object is an instance of some kind:
Type::BooleanLiteral(_)
@@ -1478,7 +1552,11 @@ impl<'db> Type<'db> {
Type::Unknown => Type::Unknown,
// TODO map this to a new `Type::TypeVar` variant
Type::KnownInstance(KnownInstanceType::TypeVar(_)) => *self,
_ => Type::Todo,
Type::KnownInstance(KnownInstanceType::TypeAliasType(alias)) => alias.value_ty(db),
Type::KnownInstance(KnownInstanceType::Never | KnownInstanceType::NoReturn) => {
Type::Never
}
_ => todo_type!(),
}
}
@@ -1553,8 +1631,8 @@ impl<'db> Type<'db> {
// TODO: `type[Unknown]`?
Type::Unknown => Type::Unknown,
// TODO intersections
Type::Intersection(_) => Type::Todo,
Type::Todo => Type::Todo,
Type::Intersection(_) => todo_type!(),
todo @ Type::Todo(_) => *todo,
}
}
@@ -1642,6 +1720,7 @@ pub enum KnownClass {
// Typing
SpecialForm,
TypeVar,
TypeAliasType,
NoDefaultType,
// sys
VersionInfo,
@@ -1668,6 +1747,7 @@ impl<'db> KnownClass {
Self::NoneType => "NoneType",
Self::SpecialForm => "_SpecialForm",
Self::TypeVar => "TypeVar",
Self::TypeAliasType => "TypeAliasType",
Self::NoDefaultType => "_NoDefaultType",
// This is the name the type of `sys.version_info` has in typeshed,
// which is different to what `type(sys.version_info).__name__` is at runtime.
@@ -1706,7 +1786,7 @@ impl<'db> KnownClass {
Self::VersionInfo => CoreStdlibModule::Sys,
Self::GenericAlias | Self::ModuleType | Self::FunctionType => CoreStdlibModule::Types,
Self::NoneType => CoreStdlibModule::Typeshed,
Self::SpecialForm | Self::TypeVar => CoreStdlibModule::Typing,
Self::SpecialForm | Self::TypeVar | Self::TypeAliasType => CoreStdlibModule::Typing,
// TODO when we understand sys.version_info, we will need an explicit fallback here,
// because typing_extensions has a 3.13+ re-export for the `typing.NoDefault`
// singleton, but not for `typing._NoDefaultType`
@@ -1720,7 +1800,7 @@ impl<'db> KnownClass {
const fn is_singleton(self) -> bool {
// TODO there are other singleton types (EllipsisType, NotImplementedType)
match self {
Self::NoneType | Self::NoDefaultType | Self::VersionInfo => true,
Self::NoneType | Self::NoDefaultType | Self::VersionInfo | Self::TypeAliasType => true,
Self::Bool
| Self::Object
| Self::Bytes
@@ -1762,6 +1842,7 @@ impl<'db> KnownClass {
"NoneType" => Self::NoneType,
"ModuleType" => Self::ModuleType,
"FunctionType" => Self::FunctionType,
"TypeAliasType" => Self::TypeAliasType,
"_SpecialForm" => Self::SpecialForm,
"_NoDefaultType" => Self::NoDefaultType,
"_version_info" => Self::VersionInfo,
@@ -1795,7 +1876,7 @@ impl<'db> KnownClass {
| Self::VersionInfo
| Self::FunctionType => module.name() == self.canonical_module().as_str(),
Self::NoneType => matches!(module.name().as_str(), "_typeshed" | "types"),
Self::SpecialForm | Self::TypeVar | Self::NoDefaultType => {
Self::SpecialForm | Self::TypeVar | Self::TypeAliasType | Self::NoDefaultType => {
matches!(module.name().as_str(), "typing" | "typing_extensions")
}
}
@@ -1809,24 +1890,42 @@ pub enum KnownInstanceType<'db> {
Literal,
/// The symbol `typing.Optional` (which can also be found as `typing_extensions.Optional`)
Optional,
/// The symbol `typing.Union` (which can also be found as `typing_extensions.Union`)
Union,
/// The symbol `typing.NoReturn` (which can also be found as `typing_extensions.NoReturn`)
NoReturn,
/// The symbol `typing.Never` available since 3.11 (which can also be found as `typing_extensions.Never`)
Never,
/// A single instance of `typing.TypeVar`
TypeVar(TypeVarInstance<'db>),
/// A single instance of `typing.TypeAliasType` (PEP 695 type alias)
TypeAliasType(TypeAliasType<'db>),
// TODO: fill this enum out with more special forms, etc.
}
impl<'db> KnownInstanceType<'db> {
pub const fn as_str(self) -> &'static str {
match self {
KnownInstanceType::Literal => "Literal",
KnownInstanceType::Optional => "Optional",
KnownInstanceType::TypeVar(_) => "TypeVar",
Self::Literal => "Literal",
Self::Optional => "Optional",
Self::Union => "Union",
Self::TypeVar(_) => "TypeVar",
Self::NoReturn => "NoReturn",
Self::Never => "Never",
Self::TypeAliasType(_) => "TypeAliasType",
}
}
/// Evaluate the known instance in boolean context
pub const fn bool(self) -> Truthiness {
match self {
Self::Literal | Self::Optional | Self::TypeVar(_) => Truthiness::AlwaysTrue,
Self::Literal
| Self::Optional
| Self::TypeVar(_)
| Self::Union
| Self::NoReturn
| Self::Never
| Self::TypeAliasType(_) => Truthiness::AlwaysTrue,
}
}
@@ -1835,7 +1934,11 @@ impl<'db> KnownInstanceType<'db> {
match self {
Self::Literal => "typing.Literal",
Self::Optional => "typing.Optional",
Self::Union => "typing.Union",
Self::NoReturn => "typing.NoReturn",
Self::Never => "typing.Never",
Self::TypeVar(typevar) => typevar.name(db),
Self::TypeAliasType(_) => "typing.TypeAliasType",
}
}
@@ -1844,7 +1947,11 @@ impl<'db> KnownInstanceType<'db> {
match self {
Self::Literal => KnownClass::SpecialForm,
Self::Optional => KnownClass::SpecialForm,
Self::Union => KnownClass::SpecialForm,
Self::NoReturn => KnownClass::SpecialForm,
Self::Never => KnownClass::SpecialForm,
Self::TypeVar(_) => KnownClass::TypeVar,
Self::TypeAliasType(_) => KnownClass::TypeAliasType,
}
}
@@ -1864,6 +1971,9 @@ impl<'db> KnownInstanceType<'db> {
match (module.name().as_str(), instance_name) {
("typing" | "typing_extensions", "Literal") => Some(Self::Literal),
("typing" | "typing_extensions", "Optional") => Some(Self::Optional),
("typing" | "typing_extensions", "Union") => Some(Self::Union),
("typing" | "typing_extensions", "NoReturn") => Some(Self::NoReturn),
("typing" | "typing_extensions", "Never") => Some(Self::Never),
_ => None,
}
}
@@ -1871,23 +1981,7 @@ impl<'db> KnownInstanceType<'db> {
fn member(self, db: &'db dyn Db, name: &str) -> Symbol<'db> {
let ty = match (self, name) {
(Self::TypeVar(typevar), "__name__") => Type::string_literal(db, typevar.name(db)),
(Self::TypeVar(typevar), "__bound__") => typevar
.upper_bound(db)
.map(|ty| ty.to_meta_type(db))
.unwrap_or_else(|| KnownClass::NoneType.to_instance(db)),
(Self::TypeVar(typevar), "__constraints__") => {
let tuple_elements: Vec<Type<'db>> = typevar
.constraints(db)
.unwrap_or_default()
.iter()
.map(|ty| ty.to_meta_type(db))
.collect();
Type::tuple(db, &tuple_elements)
}
(Self::TypeVar(typevar), "__default__") => typevar
.default_ty(db)
.map(|ty| ty.to_meta_type(db))
.unwrap_or_else(|| KnownClass::NoDefaultType.to_instance(db)),
(Self::TypeAliasType(alias), "__name__") => Type::string_literal(db, alias.name(db)),
_ => return self.instance_fallback(db).member(db, name),
};
ty.into()
@@ -1919,6 +2013,7 @@ pub struct TypeVarInstance<'db> {
}
impl<'db> TypeVarInstance<'db> {
#[allow(unused)]
pub(crate) fn upper_bound(self, db: &'db dyn Db) -> Option<Type<'db>> {
if let Some(TypeVarBoundOrConstraints::UpperBound(ty)) = self.bound_or_constraints(db) {
Some(ty)
@@ -1927,6 +2022,7 @@ impl<'db> TypeVarInstance<'db> {
}
}
#[allow(unused)]
pub(crate) fn constraints(self, db: &'db dyn Db) -> Option<&[Type<'db>]> {
if let Some(TypeVarBoundOrConstraints::Constraints(tuple)) = self.bound_or_constraints(db) {
Some(tuple.elements(db))
@@ -2599,7 +2695,7 @@ impl<'db> Class<'db> {
// TODO: If the metaclass is not a class, we should verify that it's a callable
// which accepts the same arguments as `type.__new__` (otherwise error), and return
// the meta-type of its return type. (And validate that it is a class type?)
return Ok(Type::Todo);
return Ok(todo_type!("metaclass not a class"));
};
// Reconcile all base classes' metaclasses with the candidate metaclass.
@@ -2713,6 +2809,27 @@ impl<'db> Class<'db> {
}
}
#[salsa::interned]
pub struct TypeAliasType<'db> {
#[return_ref]
pub name: ast::name::Name,
rhs_scope: ScopeId<'db>,
}
#[salsa::tracked]
impl<'db> TypeAliasType<'db> {
#[salsa::tracked]
pub fn value_ty(self, db: &'db dyn Db) -> Type<'db> {
let scope = self.rhs_scope(db);
let type_alias_stmt_node = scope.node(db).expect_type_alias();
let definition = semantic_index(db, scope.file(db)).definition(type_alias_stmt_node);
definition_expression_ty(db, definition, &type_alias_stmt_node.value)
}
}
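For context, `TypeAliasType` models PEP 695 `type` statements: the right-hand side is evaluated lazily in its own scope, which is what the stored `rhs_scope` is for. A minimal Python sketch of the runtime behavior being modeled (Python 3.12+; illustrative only):

```py
# PEP 695 type alias statement; the value is evaluated lazily in its own scope.
type Alias1 = int

print(type(Alias1).__name__)  # "TypeAliasType"
print(Alias1.__name__)        # "Alias1"
print(Alias1.__value__)       # <class 'int'> -- what `value_ty` infers
```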
/// Either the explicit `metaclass=` keyword of the class, or the inferred metaclass of one of its base classes.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(super) struct MetaclassCandidate<'db> {
@@ -2899,6 +3016,11 @@ impl<'db> TupleType<'db> {
}
}
// Make sure that the `Type` enum does not grow unexpectedly.
#[cfg(not(debug_assertions))]
#[cfg(target_pointer_width = "64")]
static_assertions::assert_eq_size!(Type, [u8; 16]);
#[cfg(test)]
pub(crate) mod tests {
use super::*;
@@ -2914,13 +3036,6 @@ pub(crate) mod tests {
use ruff_python_ast as ast;
use test_case::test_case;
#[cfg(target_pointer_width = "64")]
#[test]
fn no_bloat_enum_sizes() {
use std::mem::size_of;
assert_eq!(size_of::<Type>(), 16);
}
pub(crate) fn setup_db() -> TestDb {
let db = TestDb::new();
@@ -2974,7 +3089,7 @@ pub(crate) mod tests {
Ty::Unknown => Type::Unknown,
Ty::None => Type::none(db),
Ty::Any => Type::Any,
Ty::Todo => Type::Todo,
Ty::Todo => todo_type!("Ty::Todo"),
Ty::IntLiteral(n) => Type::IntLiteral(n),
Ty::StringLiteral(s) => Type::string_literal(db, s),
Ty::BooleanLiteral(b) => Type::BooleanLiteral(b),
@@ -3565,4 +3680,95 @@ pub(crate) mod tests {
Ok(())
}
#[test]
fn type_alias_types() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"src/mod.py",
r#"
type Alias1 = int
type Alias2 = int
"#,
)?;
let mod_py = system_path_to_file(&db, "src/mod.py")?;
let ty_alias1 = global_symbol(&db, mod_py, "Alias1").expect_type();
let ty_alias2 = global_symbol(&db, mod_py, "Alias2").expect_type();
let Type::KnownInstance(KnownInstanceType::TypeAliasType(alias1)) = ty_alias1 else {
panic!("Expected TypeAliasType, got {ty_alias1:?}");
};
assert_eq!(alias1.name(&db), "Alias1");
assert_eq!(alias1.value_ty(&db), KnownClass::Int.to_instance(&db));
// Two type aliases are distinct and disjoint, even if they refer to the same type
assert!(!ty_alias1.is_equivalent_to(&db, ty_alias2));
assert!(ty_alias1.is_disjoint_from(&db, ty_alias2));
Ok(())
}
/// All other tests also make sure that `Type::Todo` works as expected. This particular
/// test makes sure that we handle `Todo` types correctly, even if they originate from
/// different sources.
#[test]
fn todo_types() {
let db = setup_db();
let todo1 = todo_type!("1");
let todo2 = todo_type!("2");
let todo3 = todo_type!();
let todo4 = todo_type!();
assert!(todo1.is_equivalent_to(&db, todo2));
assert!(todo3.is_equivalent_to(&db, todo4));
assert!(todo1.is_equivalent_to(&db, todo3));
assert!(todo1.is_subtype_of(&db, todo2));
assert!(todo2.is_subtype_of(&db, todo1));
assert!(todo3.is_subtype_of(&db, todo4));
assert!(todo4.is_subtype_of(&db, todo3));
assert!(todo1.is_subtype_of(&db, todo3));
assert!(todo3.is_subtype_of(&db, todo1));
let int = KnownClass::Int.to_instance(&db);
assert!(int.is_assignable_to(&db, todo1));
assert!(int.is_assignable_to(&db, todo3));
assert!(todo1.is_assignable_to(&db, int));
assert!(todo3.is_assignable_to(&db, int));
// We lose information when combining several `Todo` types. This is an
// acknowledged limitation of the current implementation. We cannot
// easily store the meta information of several `Todo`s in a single
// variant, as `TodoType` needs to implement `Copy`, meaning it can't
// contain `Vec`/`Box`/etc., and can't be boxed itself.
//
// Lifting this restriction would require us to intern `TodoType` in
// salsa, but that would mean we would have to pass in `db` everywhere.
// A union of several `Todo` types collapses to a single `Todo` type:
assert!(UnionType::from_elements(&db, vec![todo1, todo2, todo3, todo4]).is_todo());
// And similarly for intersection types:
assert!(IntersectionBuilder::new(&db)
.add_positive(todo1)
.add_positive(todo2)
.add_positive(todo3)
.add_positive(todo4)
.build()
.is_todo());
assert!(IntersectionBuilder::new(&db)
.add_positive(todo1)
.add_negative(todo2)
.add_positive(todo3)
.add_negative(todo4)
.build()
.is_todo());
}
}

View File

@@ -309,7 +309,7 @@ impl<'db> InnerIntersectionBuilder<'db> {
self.add_positive(db, *neg);
}
}
ty @ (Type::Any | Type::Unknown | Type::Todo) => {
ty @ (Type::Any | Type::Unknown | Type::Todo(_)) => {
// Adding any of these types to the negative side of an intersection
// is equivalent to adding it to the positive side. We do this to
// simplify the representation.
@@ -379,7 +379,7 @@ mod tests {
use crate::program::{Program, SearchPathSettings};
use crate::python_version::PythonVersion;
use crate::stdlib::typing_symbol;
use crate::types::{global_symbol, KnownClass, UnionBuilder};
use crate::types::{global_symbol, todo_type, KnownClass, UnionBuilder};
use crate::ProgramSettings;
use ruff_db::files::system_path_to_file;
use ruff_db::system::{DbWithTestSystem, SystemPathBuf};
@@ -987,7 +987,7 @@ mod tests {
#[test_case(Type::Any)]
#[test_case(Type::Unknown)]
#[test_case(Type::Todo)]
#[test_case(todo_type!())]
fn build_intersection_t_and_negative_t_does_not_simplify(ty: Type) {
let db = setup_db();

View File

@@ -77,7 +77,7 @@ impl Display for DisplayRepresentation<'_> {
}
// `[Type::Todo]`'s display should make it explicit that it is not a valid display of
// any other type
Type::Todo => f.write_str("@Todo"),
Type::Todo(todo) => write!(f, "@Todo{todo}"),
Type::ModuleLiteral(file) => {
write!(f, "<module '{:?}'>", file.path(self.db))
}

View File

@@ -31,6 +31,7 @@ use std::num::NonZeroU32;
use itertools::Itertools;
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_python_ast::visitor::{self, Visitor};
use ruff_python_ast::{self as ast, AnyNodeRef, Expr, ExprContext, UnaryOp};
use rustc_hash::{FxHashMap, FxHashSet};
use salsa;
@@ -52,11 +53,12 @@ use crate::types::diagnostic::{TypeCheckDiagnostics, TypeCheckDiagnosticsBuilder
use crate::types::mro::MroErrorKind;
use crate::types::unpacker::{UnpackResult, Unpacker};
use crate::types::{
bindings_ty, builtins_symbol, declarations_ty, global_symbol, symbol, typing_extensions_symbol,
Boundness, Class, ClassLiteralType, FunctionType, InstanceType, IntersectionBuilder,
IntersectionType, IterationOutcome, KnownClass, KnownFunction, KnownInstanceType,
MetaclassCandidate, MetaclassErrorKind, SliceLiteralType, Symbol, Truthiness, TupleType, Type,
TypeArrayDisplay, TypeVarBoundOrConstraints, TypeVarInstance, UnionBuilder, UnionType,
bindings_ty, builtins_symbol, declarations_ty, global_symbol, symbol, todo_type,
typing_extensions_symbol, Boundness, Class, ClassLiteralType, FunctionType, InstanceType,
IntersectionBuilder, IntersectionType, IterationOutcome, KnownClass, KnownFunction,
KnownInstanceType, MetaclassCandidate, MetaclassErrorKind, SliceLiteralType, Symbol,
Truthiness, TupleType, Type, TypeAliasType, TypeArrayDisplay, TypeVarBoundOrConstraints,
TypeVarInstance, UnionBuilder, UnionType,
};
use crate::unpack::Unpack;
use crate::util::subscript::{PyIndex, PySlice};
@@ -438,6 +440,12 @@ impl<'db> TypeInferenceBuilder<'db> {
NodeWithScopeKind::FunctionTypeParameters(function) => {
self.infer_function_type_params(function.node());
}
NodeWithScopeKind::TypeAliasTypeParameters(type_alias) => {
self.infer_type_alias_type_params(type_alias.node());
}
NodeWithScopeKind::TypeAlias(type_alias) => {
self.infer_type_alias(type_alias.node());
}
NodeWithScopeKind::ListComprehension(comprehension) => {
self.infer_list_comprehension_expression_scope(comprehension.node());
}
@@ -605,6 +613,9 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_function_definition(function.node(), definition);
}
DefinitionKind::Class(class) => self.infer_class_definition(class.node(), definition),
DefinitionKind::TypeAlias(type_alias) => {
self.infer_type_alias_definition(type_alias.node(), definition);
}
DefinitionKind::Import(import) => {
self.infer_import_definition(import.node(), definition);
}
@@ -847,6 +858,19 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_parameters(&function.parameters);
}
fn infer_type_alias_type_params(&mut self, type_alias: &ast::StmtTypeAlias) {
let type_params = type_alias
.type_params
.as_ref()
.expect("type alias type params scope without type params");
self.infer_type_parameters(type_params);
}
fn infer_type_alias(&mut self, type_alias: &ast::StmtTypeAlias) {
self.infer_annotation_expression(&type_alias.value, DeferredExpressionState::Deferred);
}
fn infer_function_body(&mut self, function: &ast::StmtFunctionDef) {
self.infer_body(&function.body);
}
@@ -1027,7 +1051,7 @@ impl<'db> TypeInferenceBuilder<'db> {
) {
// TODO(dhruvmanila): Annotation expression is resolved at the enclosing scope, infer the
// parameter type from there
let annotated_ty = Type::Todo;
let annotated_ty = todo_type!("function parameter type");
if parameter.annotation.is_some() {
self.add_declaration_with_binding(
parameter.into(),
@@ -1107,6 +1131,33 @@ impl<'db> TypeInferenceBuilder<'db> {
}
}
fn infer_type_alias_definition(
&mut self,
type_alias: &ast::StmtTypeAlias,
definition: Definition<'db>,
) {
self.infer_expression(&type_alias.name);
let rhs_scope = self
.index
.node_scope(NodeWithScopeRef::TypeAlias(type_alias))
.to_scope_id(self.db, self.file);
let type_alias_ty =
Type::KnownInstance(KnownInstanceType::TypeAliasType(TypeAliasType::new(
self.db,
&type_alias.name.as_name_expr().unwrap().id,
rhs_scope,
)));
self.add_declaration_with_binding(
type_alias.into(),
definition,
type_alias_ty,
type_alias_ty,
);
}
fn infer_if_statement(&mut self, if_statement: &ast::StmtIf) {
let ast::StmtIf {
range: _,
@@ -1236,7 +1287,7 @@ impl<'db> TypeInferenceBuilder<'db> {
) -> Type<'db> {
// TODO: Handle async with statements (they use `aenter` and `aexit`)
if is_async {
return Type::Todo;
return todo_type!("async with statement");
}
let context_manager_ty = context_expression_ty.to_meta_type(self.db);
@@ -1386,12 +1437,12 @@ impl<'db> TypeInferenceBuilder<'db> {
self.db,
tuple.elements(self.db).iter().map(|ty| {
ty.into_class_literal()
.map_or(Type::Todo, |ClassLiteralType { class }| {
.map_or(todo_type!(), |ClassLiteralType { class }| {
Type::instance(class)
})
}),
),
_ => Type::Todo,
_ => todo_type!("exception type"),
}
};
@@ -1461,7 +1512,7 @@ impl<'db> TypeInferenceBuilder<'db> {
default,
} = node;
self.infer_optional_expression(default.as_deref());
self.add_declaration_with_binding(node.into(), definition, Type::Todo, Type::Todo);
self.add_declaration_with_binding(node.into(), definition, todo_type!(), todo_type!());
}
fn infer_typevartuple_definition(
@@ -1475,7 +1526,7 @@ impl<'db> TypeInferenceBuilder<'db> {
default,
} = node;
self.infer_optional_expression(default.as_deref());
self.add_declaration_with_binding(node.into(), definition, Type::Todo, Type::Todo);
self.add_declaration_with_binding(node.into(), definition, todo_type!(), todo_type!());
}
fn infer_match_statement(&mut self, match_statement: &ast::StmtMatch) {
@@ -1510,7 +1561,7 @@ impl<'db> TypeInferenceBuilder<'db> {
// against the subject expression type (which we can query via `infer_expression_types`)
// and extract the type at the `index` position if the pattern matches. This will be
// similar to the logic in `self.infer_assignment_definition`.
self.add_binding(pattern.into(), definition, Type::Todo);
self.add_binding(pattern.into(), definition, todo_type!());
}
fn infer_match_pattern(&mut self, pattern: &ast::Pattern) {
@@ -1829,17 +1880,8 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_augmented_op(assignment, target_type, value_type)
}
fn infer_type_alias_statement(&mut self, type_alias_statement: &ast::StmtTypeAlias) {
let ast::StmtTypeAlias {
range: _,
name,
type_params: _,
value,
} = type_alias_statement;
self.infer_expression(value);
self.infer_expression(name);
// TODO: properly handle generic type aliases, which need their own annotation scope
fn infer_type_alias_statement(&mut self, node: &ast::StmtTypeAlias) {
self.infer_definition(node);
}
fn infer_for_statement(&mut self, for_statement: &ast::StmtFor) {
@@ -1874,8 +1916,7 @@ impl<'db> TypeInferenceBuilder<'db> {
let iterable_ty = self.infer_standalone_expression(iterable);
let loop_var_value_ty = if is_async {
// TODO(Alex): async iterables/iterators!
Type::Todo
todo_type!("async iterables/iterators")
} else {
iterable_ty
.iterate(self.db)
@@ -2203,7 +2244,7 @@ impl<'db> TypeInferenceBuilder<'db> {
ast::Expr::Await(await_expression) => self.infer_await_expression(await_expression),
ast::Expr::IpyEscapeCommand(_) => {
// TODO Implement Ipy escape command support
Type::Todo
todo_type!()
}
};
@@ -2397,7 +2438,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_first_comprehension_iter(generators);
// TODO generator type
Type::Todo
todo_type!()
}
fn infer_list_comprehension_expression(&mut self, listcomp: &ast::ExprListComp) -> Type<'db> {
@@ -2410,7 +2451,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_first_comprehension_iter(generators);
// TODO list type
Type::Todo
todo_type!()
}
fn infer_dict_comprehension_expression(&mut self, dictcomp: &ast::ExprDictComp) -> Type<'db> {
@@ -2424,7 +2465,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_first_comprehension_iter(generators);
// TODO dict type
Type::Todo
todo_type!()
}
fn infer_set_comprehension_expression(&mut self, setcomp: &ast::ExprSetComp) -> Type<'db> {
@@ -2437,7 +2478,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_first_comprehension_iter(generators);
// TODO set type
Type::Todo
todo_type!()
}
fn infer_generator_expression_scope(&mut self, generator: &ast::ExprGenerator) {
@@ -2552,7 +2593,7 @@ impl<'db> TypeInferenceBuilder<'db> {
let target_ty = if is_async {
// TODO: async iterables/iterators! -- Alex
Type::Todo
todo_type!("async iterables/iterators")
} else {
iterable_ty
.iterate(self.db)
@@ -2642,7 +2683,7 @@ impl<'db> TypeInferenceBuilder<'db> {
}
// TODO function type
Type::Todo
todo_type!()
}
fn infer_call_expression(&mut self, call_expression: &ast::ExprCall) -> Type<'db> {
@@ -2673,7 +2714,7 @@ impl<'db> TypeInferenceBuilder<'db> {
.unwrap_with_diagnostic(value.as_ref().into(), &mut self.diagnostics);
// TODO
Type::Todo
todo_type!()
}
fn infer_yield_expression(&mut self, yield_expression: &ast::ExprYield) -> Type<'db> {
@@ -2682,7 +2723,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_optional_expression(value.as_deref());
// TODO awaitable type
Type::Todo
todo_type!()
}
fn infer_yield_from_expression(&mut self, yield_from: &ast::ExprYieldFrom) -> Type<'db> {
@@ -2694,7 +2735,7 @@ impl<'db> TypeInferenceBuilder<'db> {
.unwrap_with_diagnostic(value.as_ref().into(), &mut self.diagnostics);
// TODO get type from `ReturnType` of generator
Type::Todo
todo_type!()
}
fn infer_await_expression(&mut self, await_expression: &ast::ExprAwait) -> Type<'db> {
@@ -2703,7 +2744,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_expression(value);
// TODO awaitable type
Type::Todo
todo_type!()
}
/// Look up a name reference that isn't bound in the local scope.
@@ -2979,7 +3020,7 @@ impl<'db> TypeInferenceBuilder<'db> {
Type::Unknown
}
}
_ => Type::Todo, // TODO other unary op types
_ => todo_type!(), // TODO other unary op types
}
}
@@ -3227,7 +3268,7 @@ impl<'db> TypeInferenceBuilder<'db> {
(left, Type::BooleanLiteral(bool_value), op) => {
self.infer_binary_expression_type(left, Type::IntLiteral(i64::from(bool_value)), op)
}
_ => Some(Type::Todo), // TODO
_ => Some(todo_type!()), // TODO
}
}
@@ -3643,16 +3684,16 @@ impl<'db> TypeInferenceBuilder<'db> {
let lhs_elements = lhs.elements(self.db);
let rhs_elements = rhs.elements(self.db);
let mut lexicographic_type_comparison =
|op| self.infer_lexicographic_type_comparison(lhs_elements, op, rhs_elements);
let mut tuple_rich_comparison =
|op| self.infer_tuple_rich_comparison(lhs_elements, op, rhs_elements);
match op {
ast::CmpOp::Eq => lexicographic_type_comparison(RichCompareOperator::Eq),
ast::CmpOp::NotEq => lexicographic_type_comparison(RichCompareOperator::Ne),
ast::CmpOp::Lt => lexicographic_type_comparison(RichCompareOperator::Lt),
ast::CmpOp::LtE => lexicographic_type_comparison(RichCompareOperator::Le),
ast::CmpOp::Gt => lexicographic_type_comparison(RichCompareOperator::Gt),
ast::CmpOp::GtE => lexicographic_type_comparison(RichCompareOperator::Ge),
ast::CmpOp::Eq => tuple_rich_comparison(RichCompareOperator::Eq),
ast::CmpOp::NotEq => tuple_rich_comparison(RichCompareOperator::Ne),
ast::CmpOp::Lt => tuple_rich_comparison(RichCompareOperator::Lt),
ast::CmpOp::LtE => tuple_rich_comparison(RichCompareOperator::Le),
ast::CmpOp::Gt => tuple_rich_comparison(RichCompareOperator::Gt),
ast::CmpOp::GtE => tuple_rich_comparison(RichCompareOperator::Ge),
ast::CmpOp::In | ast::CmpOp::NotIn => {
let mut eq_count = 0usize;
let mut not_eq_count = 0usize;
@@ -3665,7 +3706,7 @@ impl<'db> TypeInferenceBuilder<'db> {
).expect("infer_binary_type_comparison should never return None for `CmpOp::Eq`");
match eq_result {
Type::Todo => return Ok(Type::Todo),
todo @ Type::Todo(_) => return Ok(todo),
ty => match ty.bool(self.db) {
Truthiness::AlwaysTrue => eq_count += 1,
Truthiness::AlwaysFalse => not_eq_count += 1,
@@ -3685,13 +3726,12 @@ impl<'db> TypeInferenceBuilder<'db> {
ast::CmpOp::Is | ast::CmpOp::IsNot => {
// - `[ast::CmpOp::Is]`: returns `false` if the elements are definitely unequal, otherwise `bool`
// - `[ast::CmpOp::IsNot]`: returns `true` if the elements are definitely unequal, otherwise `bool`
let eq_result = lexicographic_type_comparison(RichCompareOperator::Eq)
.expect(
let eq_result = tuple_rich_comparison(RichCompareOperator::Eq).expect(
"infer_binary_type_comparison should never return None for `CmpOp::Eq`",
);
Ok(match eq_result {
Type::Todo => Type::Todo,
todo @ Type::Todo(_) => todo,
ty => match ty.bool(self.db) {
Truthiness::AlwaysFalse => Type::BooleanLiteral(op.is_is_not()),
_ => KnownClass::Bool.to_instance(self.db),
@@ -3746,58 +3786,85 @@ impl<'db> TypeInferenceBuilder<'db> {
// TODO: handle more types
_ => match op {
ast::CmpOp::Is | ast::CmpOp::IsNot => Ok(KnownClass::Bool.to_instance(self.db)),
_ => Ok(Type::Todo),
_ => Ok(todo_type!()),
},
}
}
/// Performs lexicographic comparison between two slices of types.
/// Simulates rich comparison between tuples and returns the inferred result.
/// This performs a lexicographic comparison, returning a union of all possible return types that could result from the comparison.
///
/// For lexicographic comparison, elements from both slices are compared pairwise using
/// `infer_binary_type_comparison`. If a conclusive result cannot be determined as a `BooleanLiteral`,
/// it returns `bool`. Returns `None` if the comparison is not supported.
fn infer_lexicographic_type_comparison(
/// This is based on CPython's `tuple_richcompare`;
/// see `<https://github.com/python/cpython/blob/9d6366b60d01305fc5e45100e0cd13e358aa397d/Objects/tupleobject.c#L637>`
fn infer_tuple_rich_comparison(
&mut self,
left: &[Type<'db>],
op: RichCompareOperator,
right: &[Type<'db>],
) -> Result<Type<'db>, CompareUnsupportedError<'db>> {
// Compare paired elements from left and right slices
for (l_ty, r_ty) in left.iter().copied().zip(right.iter().copied()) {
let eq_result = self
let left_iter = left.iter().copied();
let right_iter = right.iter().copied();
let mut builder = UnionBuilder::new(self.db);
for (l_ty, r_ty) in left_iter.zip(right_iter) {
let pairwise_eq_result = self
.infer_binary_type_comparison(l_ty, ast::CmpOp::Eq, r_ty)
.expect("infer_binary_type_comparison should never return None for `CmpOp::Eq`");
match eq_result {
match pairwise_eq_result {
// If propagation is required, return the result as is
Type::Todo => return Ok(Type::Todo),
todo @ Type::Todo(_) => return Ok(todo),
ty => match ty.bool(self.db) {
// Types are equal, continue to the next pair
// - AlwaysTrue: Continue to the next pair for lexicographic comparison
Truthiness::AlwaysTrue => continue,
// Types are not equal, perform the specified comparison and return the result
Truthiness::AlwaysFalse => {
return self.infer_binary_type_comparison(l_ty, op.into(), r_ty)
// - AlwaysFalse:
// Lexicographic comparisons will always terminate with this pair.
// Complete the comparison and return the result.
// - Ambiguous:
// Lexicographic comparisons might continue to the next pair (if eq_result is true),
// or terminate here (if eq_result is false).
// To account for cases where the comparison terminates here, add the pairwise comparison result to the union builder.
eq_truthiness @ (Truthiness::AlwaysFalse | Truthiness::Ambiguous) => {
let pairwise_compare_result = match op {
RichCompareOperator::Lt
| RichCompareOperator::Le
| RichCompareOperator::Gt
| RichCompareOperator::Ge => {
self.infer_binary_type_comparison(l_ty, op.into(), r_ty)?
}
// For `==` and `!=`, we already figure out the result from `pairwise_eq_result`
// NOTE: The CPython implementation does not account for non-boolean return types
// or cases where `!=` is not the negation of `==`; we do not consider these cases either.
RichCompareOperator::Eq => Type::BooleanLiteral(false),
RichCompareOperator::Ne => Type::BooleanLiteral(true),
};
builder = builder.add(pairwise_compare_result);
if eq_truthiness.is_ambiguous() {
continue;
}
return Ok(builder.build());
}
// If the intermediate result is ambiguous, we cannot determine the final result as BooleanLiteral.
// In this case, we simply return a bool instance.
Truthiness::Ambiguous => return Ok(KnownClass::Bool.to_instance(self.db)),
},
}
}
// At this point, the lengths of the two slices may be different, but the prefix of
// left and right slices is entirely identical.
// We return a comparison of the slice lengths based on the operator.
// If there are no more items to compare, we just compare the sizes.
let (left_len, right_len) = (left.len(), right.len());
Ok(Type::BooleanLiteral(match op {
builder = builder.add(Type::BooleanLiteral(match op {
RichCompareOperator::Eq => left_len == right_len,
RichCompareOperator::Ne => left_len != right_len,
RichCompareOperator::Lt => left_len < right_len,
RichCompareOperator::Le => left_len <= right_len,
RichCompareOperator::Gt => left_len > right_len,
RichCompareOperator::Ge => left_len >= right_len,
}))
}));
Ok(builder.build())
}
fn infer_subscript_expression(&mut self, subscript: &ast::ExprSubscript) -> Type<'db> {
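To illustrate the semantics that `infer_tuple_rich_comparison` mirrors (element-wise `==` until the first mismatch, then the requested operator, then a length comparison for an equal prefix), here is a small Python sketch with illustrative values:

```py
# Lexicographic tuple comparison, as in CPython's `tuple_richcompare`.
assert (1, 2) < (1, 3)      # first pair equal, 2 < 3 decides
assert (1, 2) < (1, 2, 0)   # equal prefix, the shorter tuple compares as smaller
assert (1, 2) != (1, 3)

# When the checker cannot tell whether a pair is equal (e.g. two plain `int`s),
# several outcomes remain possible, which is why the code above collects the
# candidate results in a `UnionBuilder` instead of returning a single literal.
```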
@@ -4148,24 +4215,6 @@ impl<'db> TypeInferenceBuilder<'db> {
// Annotation expressions also get special handling for `*args` and `**kwargs`.
ast::Expr::Starred(starred) => self.infer_starred_expression(starred),
ast::Expr::BytesLiteral(bytes) => {
self.diagnostics.add(
bytes.into(),
"annotation-byte-string",
format_args!("Type expressions cannot use bytes literal"),
);
Type::Unknown
}
ast::Expr::FString(fstring) => {
self.diagnostics.add(
fstring.into(),
"annotation-f-string",
format_args!("Type expressions cannot use f-strings"),
);
Type::Unknown
}
// All other annotation expressions are (possibly) valid type expressions, so handle
// them there instead.
type_expr => self.infer_type_expression_no_store(type_expr),
@@ -4176,6 +4225,72 @@ impl<'db> TypeInferenceBuilder<'db> {
annotation_ty
}
/// Walk child expressions of the given AST node and store the given type for each of them.
/// Does not store a type for the root expression. Resets to normal type inference for all
/// expressions with a nested scope (lambda, named expression, comprehensions, generators).
fn store_type_for_sub_expressions_of(&mut self, expression: &ast::Expr, ty: Type<'db>) {
struct StoreTypeVisitor<'a, 'db> {
builder: &'a mut TypeInferenceBuilder<'db>,
ty: Type<'db>,
is_root_node: bool,
}
impl<'a, 'db> StoreTypeVisitor<'a, 'db> {
fn new(builder: &'a mut TypeInferenceBuilder<'db>, ty: Type<'db>) -> Self {
Self {
builder,
ty,
is_root_node: true,
}
}
fn store(&mut self, expr: &ast::Expr) {
if self.is_root_node {
self.is_root_node = false;
} else {
self.builder.store_expression_type(expr, self.ty);
}
}
}
impl<'a, 'db, 'ast> Visitor<'ast> for StoreTypeVisitor<'a, 'db> {
fn visit_expr(&mut self, expr: &'ast Expr) {
match expr {
ast::Expr::Lambda(lambda) => {
self.builder.infer_lambda_expression(lambda);
self.store(expr);
}
ast::Expr::Named(named) => {
self.builder.infer_named_expression(named);
self.store(expr);
}
ast::Expr::ListComp(list_comp) => {
self.builder.infer_list_comprehension_expression(list_comp);
self.store(expr);
}
ast::Expr::SetComp(set_comp) => {
self.builder.infer_set_comprehension_expression(set_comp);
self.store(expr);
}
ast::Expr::DictComp(dict_comp) => {
self.builder.infer_dict_comprehension_expression(dict_comp);
self.store(expr);
}
ast::Expr::Generator(generator) => {
self.builder.infer_generator_expression(generator);
self.store(expr);
}
_ => {
self.store(expr);
visitor::walk_expr(self, expr);
}
}
}
}
StoreTypeVisitor::new(self, ty).visit_expr(expression);
}
/// Infer the type of a string annotation expression.
fn infer_string_annotation_expression(&mut self, string: &ast::ExprStringLiteral) -> Type<'db> {
match parse_string_annotation(self.db, self.file, string) {
@@ -4237,7 +4352,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_name_expression(name).in_type_expression(self.db)
}
ast::ExprContext::Invalid => Type::Unknown,
ast::ExprContext::Store | ast::ExprContext::Del => Type::Todo,
ast::ExprContext::Store | ast::ExprContext::Del => todo_type!(),
},
ast::Expr::Attribute(attribute_expression) => match attribute_expression.ctx {
@@ -4245,7 +4360,7 @@ impl<'db> TypeInferenceBuilder<'db> {
.infer_attribute_expression(attribute_expression)
.in_type_expression(self.db),
ast::ExprContext::Invalid => Type::Unknown,
ast::ExprContext::Store | ast::ExprContext::Del => Type::Todo,
ast::ExprContext::Store | ast::ExprContext::Del => todo_type!(),
},
ast::Expr::NoneLiteral(_literal) => Type::none(self.db),
@@ -4255,14 +4370,7 @@ impl<'db> TypeInferenceBuilder<'db> {
// TODO: an Ellipsis literal *on its own* does not have any meaning in annotation
// expressions, but is meaningful in the context of a number of special forms.
ast::Expr::EllipsisLiteral(_literal) => Type::Todo,
// Other literals do not have meaningful values in the annotation expression context.
// However, we will want to handle these differently when working with special forms,
// since (e.g.) `123` is not valid in an annotation expression but `Literal[123]` is.
ast::Expr::BytesLiteral(_literal) => Type::Todo,
ast::Expr::NumberLiteral(_literal) => Type::Todo,
ast::Expr::BooleanLiteral(_literal) => Type::Todo,
ast::Expr::EllipsisLiteral(_literal) => todo_type!(),
ast::Expr::Subscript(subscript) => {
let ast::ExprSubscript {
@@ -4305,94 +4413,69 @@ impl<'db> TypeInferenceBuilder<'db> {
// TODO PEP 646
ast::Expr::Starred(starred) => {
self.infer_starred_expression(starred);
Type::Todo
todo_type!()
}
// Forms which are invalid in the context of annotation expressions: we infer their
// nested expressions as normal expressions, but the type of the top-level expression is
// always `Type::Unknown` in these cases.
ast::Expr::BoolOp(bool_op) => {
self.infer_boolean_expression(bool_op);
Type::Unknown
}
ast::Expr::Named(named) => {
self.infer_named_expression(named);
Type::Unknown
}
ast::Expr::UnaryOp(unary) => {
self.infer_unary_expression(unary);
Type::Unknown
}
ast::Expr::Lambda(lambda_expression) => {
self.infer_lambda_expression(lambda_expression);
Type::Unknown
}
ast::Expr::If(if_expression) => {
self.infer_if_expression(if_expression);
Type::Unknown
}
ast::Expr::Dict(dict) => {
self.infer_dict_expression(dict);
Type::Unknown
}
ast::Expr::Set(set) => {
self.infer_set_expression(set);
Type::Unknown
}
ast::Expr::ListComp(listcomp) => {
self.infer_list_comprehension_expression(listcomp);
Type::Unknown
}
ast::Expr::SetComp(setcomp) => {
self.infer_set_comprehension_expression(setcomp);
Type::Unknown
}
ast::Expr::DictComp(dictcomp) => {
self.infer_dict_comprehension_expression(dictcomp);
Type::Unknown
}
ast::Expr::Generator(generator) => {
self.infer_generator_expression(generator);
Type::Unknown
}
ast::Expr::Await(await_expression) => {
self.infer_await_expression(await_expression);
Type::Unknown
}
ast::Expr::Yield(yield_expression) => {
self.infer_yield_expression(yield_expression);
Type::Unknown
}
ast::Expr::YieldFrom(yield_from) => {
self.infer_yield_from_expression(yield_from);
Type::Unknown
}
ast::Expr::Compare(compare) => {
self.infer_compare_expression(compare);
Type::Unknown
}
ast::Expr::Call(call_expr) => {
self.infer_call_expression(call_expr);
Type::Unknown
}
ast::Expr::FString(fstring) => {
self.infer_fstring_expression(fstring);
Type::Unknown
}
ast::Expr::List(list) => {
self.infer_list_expression(list);
Type::Unknown
}
ast::Expr::Tuple(tuple) => {
self.infer_tuple_expression(tuple);
Type::Unknown
}
ast::Expr::Slice(slice) => {
self.infer_slice_expression(slice);
Type::Unknown
}
// Avoid inferring the types of invalid type expressions that have been parsed from a
// string annotation, as they are not present in the semantic index.
_ if self.deferred_state.in_string_annotation() => Type::Unknown,
ast::Expr::IpyEscapeCommand(_) => todo!("Implement Ipy escape command support"),
// Forms which are invalid in the context of annotation expressions: we store a type of
// `Type::Unknown` for these expressions (and their sub-expressions) to avoid problems
// with invalid-syntax examples like `x: f"{x}"` or `x: lambda y: x`, for which we
// cannot infer a meaningful type for the inner `x` expression. The top-level expression
// is also `Type::Unknown` in these cases.
ast::Expr::BoolOp(_)
| ast::Expr::Named(_)
| ast::Expr::UnaryOp(_)
| ast::Expr::Lambda(_)
| ast::Expr::If(_)
| ast::Expr::Dict(_)
| ast::Expr::Set(_)
| ast::Expr::ListComp(_)
| ast::Expr::SetComp(_)
| ast::Expr::DictComp(_)
| ast::Expr::Generator(_)
| ast::Expr::Await(_)
| ast::Expr::Yield(_)
| ast::Expr::YieldFrom(_)
| ast::Expr::Compare(_)
| ast::Expr::Call(_)
| ast::Expr::FString(_)
| ast::Expr::List(_)
| ast::Expr::Tuple(_)
| ast::Expr::Slice(_)
| ast::Expr::IpyEscapeCommand(_)
| ast::Expr::BytesLiteral(_)
| ast::Expr::NumberLiteral(_)
| ast::Expr::BooleanLiteral(_) => {
match expression {
ast::Expr::BytesLiteral(bytes) => {
self.diagnostics.add(
bytes.into(),
"annotation-byte-string",
format_args!("Type expressions cannot use bytes literal"),
);
}
ast::Expr::FString(fstring) => {
self.diagnostics.add(
fstring.into(),
"annotation-f-string",
format_args!("Type expressions cannot use f-strings"),
);
}
_ => {
self.diagnostics.add(
expression.into(),
"annotation-with-invalid-expression",
format_args!("Invalid expression in type expression"),
);
}
}
self.store_type_for_sub_expressions_of(expression, Type::Unknown);
Type::Unknown
}
}
}
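A short, illustrative sketch of the annotations the new diagnostics above cover; the diagnostic names come from the code, while the example statements themselves are assumptions for illustration:

```py
x: f"Literal[{1 + 2}]" = 3   # annotation-f-string
y: b"int" = b""              # annotation-byte-string
z: 123 = 3                   # annotation-with-invalid-expression (number literal)

# Sub-expressions of such annotations (including self-references like the `x`
# inside `x: f"{x}"`) also get `Unknown` stored via
# `store_type_for_sub_expressions_of`, as described in the comment above.
```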
@@ -4450,7 +4533,7 @@ impl<'db> TypeInferenceBuilder<'db> {
}
let ty = if return_todo {
Type::Todo
todo_type!("full tuple[...] support")
} else {
Type::tuple(self.db, &element_types)
};
@@ -4465,7 +4548,7 @@ impl<'db> TypeInferenceBuilder<'db> {
single_element => {
let single_element_ty = self.infer_type_expression(single_element);
if element_could_alter_type_of_whole_tuple(single_element, single_element_ty) {
Type::Todo
todo_type!()
} else {
Type::tuple(self.db, &[single_element_ty])
}
@@ -4481,13 +4564,13 @@ impl<'db> TypeInferenceBuilder<'db> {
if let Some(ClassLiteralType { class }) = name_ty.into_class_literal() {
Type::subclass_of(class)
} else {
Type::Todo
todo_type!()
}
}
// TODO: attributes, unions, subscripts, etc.
_ => {
self.infer_type_expression(slice);
Type::Todo
todo_type!()
}
}
}
@@ -4506,20 +4589,21 @@ impl<'db> TypeInferenceBuilder<'db> {
match value_ty {
Type::KnownInstance(known_instance) => {
self.infer_parameterized_known_instance_type_expression(known_instance, slice)
self.infer_parameterized_known_instance_type_expression(subscript, known_instance)
}
_ => {
self.infer_type_expression(slice);
Type::Todo // TODO: generics
todo_type!("generics")
}
}
}
fn infer_parameterized_known_instance_type_expression(
&mut self,
subscript: &ast::ExprSubscript,
known_instance: KnownInstanceType,
parameters: &ast::Expr,
) -> Type<'db> {
let parameters = &*subscript.slice;
match known_instance {
KnownInstanceType::Literal => match self.infer_literal_parameter_type(parameters) {
Ok(ty) => ty,
@@ -4541,7 +4625,36 @@ impl<'db> TypeInferenceBuilder<'db> {
let param_type = self.infer_type_expression(parameters);
UnionType::from_elements(self.db, [param_type, Type::none(self.db)])
}
KnownInstanceType::TypeVar(_) => Type::Todo,
KnownInstanceType::Union => match parameters {
ast::Expr::Tuple(t) => {
let union_ty = UnionType::from_elements(
self.db,
t.iter().map(|elt| self.infer_type_expression(elt)),
);
self.store_expression_type(parameters, union_ty);
union_ty
}
_ => self.infer_type_expression(parameters),
},
KnownInstanceType::TypeVar(_) => {
self.infer_type_expression(parameters);
todo_type!()
}
KnownInstanceType::TypeAliasType(_) => {
self.infer_type_expression(parameters);
todo_type!("generic type alias")
}
KnownInstanceType::NoReturn | KnownInstanceType::Never => {
self.diagnostics.add(
subscript.into(),
"invalid-type-parameter",
format_args!(
"Type `{}` expected no type parameter",
known_instance.repr(self.db)
),
);
Type::Unknown
}
}
}
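A hedged sketch of what the new `Union`, `Never`, and `NoReturn` handling above corresponds to at the annotation level (the variable names are illustrative):

```py
from typing import Union

a: Union[int, str] = 1   # elements are combined via `UnionType::from_elements`
b: Union[int] = 2        # a non-tuple parameter is inferred directly

# `Never` and `NoReturn` accept no type parameter; an annotation such as
#     c: Never[int]
# now reports `invalid-type-parameter` and falls back to `Unknown`.
```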
@@ -4909,8 +5022,8 @@ fn perform_membership_test_comparison<'db>(
compare_result_opt
.map(|ty| {
if matches!(ty, Type::Todo) {
return Type::Todo;
if matches!(ty, Type::Todo(_)) {
return ty;
}
match op {
@@ -5881,7 +5994,17 @@ mod tests {
// We currently return `Todo` for all async comprehensions,
// including comprehensions that have invalid syntax
assert_scope_ty(&db, "src/a.py", &["foo", "<listcomp>"], "x", "@Todo");
assert_scope_ty(
&db,
"src/a.py",
&["foo", "<listcomp>"],
"x",
if cfg!(debug_assertions) {
"@Todo(async iterables/iterators)"
} else {
"@Todo"
},
);
Ok(())
}
@@ -5905,7 +6028,17 @@ mod tests {
)?;
// TODO async iterables/iterators! --Alex
assert_scope_ty(&db, "src/a.py", &["foo", "<listcomp>"], "x", "@Todo");
assert_scope_ty(
&db,
"src/a.py",
&["foo", "<listcomp>"],
"x",
if cfg!(debug_assertions) {
"@Todo(async iterables/iterators)"
} else {
"@Todo"
},
);
Ok(())
}
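Both assertions above exercise async comprehensions; the exact test sources are not visible in this hunk, so the following is an assumed sketch of the construct they cover:

```py
# Async comprehensions are not supported yet: the comprehension variable `x`
# currently infers to the `@Todo(async iterables/iterators)` placeholder
# (just `@Todo` in release builds).
async def foo(things):
    return [x async for x in things]
```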
@@ -5939,6 +6072,72 @@ mod tests {
);
}
#[test]
fn pep695_type_params() {
let mut db = setup_db();
db.write_dedented(
"src/a.py",
"
def f[T, U: A, V: (A, B), W = A, X: A = A1, Y: (int,)]():
pass
class A: ...
class B: ...
class A1(A): ...
",
)
.unwrap();
let check_typevar = |var: &'static str,
upper_bound: Option<&'static str>,
constraints: Option<&[&'static str]>,
default: Option<&'static str>| {
let var_ty = get_symbol(&db, "src/a.py", &["f"], var).expect_type();
assert_eq!(var_ty.display(&db).to_string(), var);
let expected_name_ty = format!(r#"Literal["{var}"]"#);
let name_ty = var_ty.member(&db, "__name__").expect_type();
assert_eq!(name_ty.display(&db).to_string(), expected_name_ty);
let KnownInstanceType::TypeVar(typevar) = var_ty.expect_known_instance() else {
panic!("expected TypeVar");
};
assert_eq!(
typevar
.upper_bound(&db)
.map(|ty| ty.display(&db).to_string()),
upper_bound.map(std::borrow::ToOwned::to_owned)
);
assert_eq!(
typevar.constraints(&db).map(|tys| tys
.iter()
.map(|ty| ty.display(&db).to_string())
.collect::<Vec<_>>()),
constraints.map(|strings| strings
.iter()
.map(std::string::ToString::to_string)
.collect::<Vec<_>>())
);
assert_eq!(
typevar
.default_ty(&db)
.map(|ty| ty.display(&db).to_string()),
default.map(std::borrow::ToOwned::to_owned)
);
};
check_typevar("T", None, None, None);
check_typevar("U", Some("A"), None, None);
check_typevar("V", None, Some(&["A", "B"]), None);
check_typevar("W", None, None, Some("A"));
check_typevar("X", Some("A"), None, Some("A1"));
// a typevar with less than two constraints is treated as unconstrained
check_typevar("Y", None, None, None);
}
// Incremental inference tests
fn first_public_binding<'db>(db: &'db TestDb, file: File, name: &str) -> Definition<'db> {

View File

@@ -5,7 +5,7 @@ use itertools::Either;
use rustc_hash::FxHashSet;
use super::{Class, ClassLiteralType, KnownClass, KnownInstanceType, Type};
use crate::Db;
use crate::{types::todo_type, Db};
/// The inferred method resolution order of a given class.
///
@@ -354,7 +354,7 @@ impl<'db> ClassBase<'db> {
match ty {
Type::Any => Some(Self::Any),
Type::Unknown => Some(Self::Unknown),
Type::Todo => Some(Self::Todo),
Type::Todo(_) => Some(Self::Todo),
Type::ClassLiteral(ClassLiteralType { class }) => Some(Self::Class(class)),
Type::Union(_) => None, // TODO -- forces consideration of multiple possible MROs?
Type::Intersection(_) => None, // TODO -- probably incorrect?
@@ -372,7 +372,11 @@ impl<'db> ClassBase<'db> {
| Type::SubclassOf(_) => None,
Type::KnownInstance(known_instance) => match known_instance {
KnownInstanceType::TypeVar(_)
| KnownInstanceType::TypeAliasType(_)
| KnownInstanceType::Literal
| KnownInstanceType::Union
| KnownInstanceType::NoReturn
| KnownInstanceType::Never
| KnownInstanceType::Optional => None,
},
}
@@ -405,7 +409,7 @@ impl<'db> From<ClassBase<'db>> for Type<'db> {
fn from(value: ClassBase<'db>) -> Self {
match value {
ClassBase::Any => Type::Any,
ClassBase::Todo => Type::Todo,
ClassBase::Todo => todo_type!(),
ClassBase::Unknown => Type::Unknown,
ClassBase::Class(class) => Type::class_literal(class),
}

View File

@@ -1,7 +1,7 @@
#![allow(dead_code)]
use super::{definition_expression_ty, Type};
use crate::semantic_index::definition::Definition;
use crate::Db;
use crate::{semantic_index::definition::Definition, types::todo_type};
use ruff_python_ast::{self as ast, name::Name};
/// A typed callable signature.
@@ -18,7 +18,7 @@ impl<'db> Signature<'db> {
pub(crate) fn todo() -> Self {
Self {
parameters: Parameters::todo(),
return_ty: Type::Todo,
return_ty: todo_type!("return type"),
}
}
@@ -33,8 +33,7 @@ impl<'db> Signature<'db> {
.as_ref()
.map(|returns| {
if function_node.is_async {
// TODO: generic `types.CoroutineType`!
Type::Todo
todo_type!("generic types.CoroutineType")
} else {
definition_expression_ty(db, definition, returns.as_ref())
}
@@ -81,11 +80,11 @@ impl<'db> Parameters<'db> {
Self {
variadic: Some(Parameter {
name: Some(Name::new_static("args")),
annotated_ty: Type::Todo,
annotated_ty: todo_type!(),
}),
keywords: Some(Parameter {
name: Some(Name::new_static("kwargs")),
annotated_ty: Type::Todo,
annotated_ty: todo_type!(),
}),
..Default::default()
}

View File

@@ -6,7 +6,7 @@ use rustc_hash::FxHashMap;
use crate::semantic_index::ast_ids::{HasScopedExpressionId, ScopedExpressionId};
use crate::semantic_index::symbol::ScopeId;
use crate::types::{Type, TypeCheckDiagnostics, TypeCheckDiagnosticsBuilder};
use crate::types::{todo_type, Type, TypeCheckDiagnostics, TypeCheckDiagnosticsBuilder};
use crate::Db;
/// Unpacks the value expression type to their respective targets.
@@ -59,7 +59,7 @@ impl<'db> Unpacker<'db> {
// TODO: Combine the types into a list type. If the
// starred_element_types is empty, then it should be `List[Any]`.
// combine_types(starred_element_types);
element_types.push(Type::Todo);
element_types.push(todo_type!("starred unpacking"));
element_types.extend_from_slice(
// SAFETY: Safe because of the length check above.
@@ -72,7 +72,7 @@ impl<'db> Unpacker<'db> {
// index.
element_types.resize(elts.len() - 1, Type::Unknown);
// TODO: This should be `list[Unknown]`
element_types.insert(starred_index, Type::Todo);
element_types.insert(starred_index, todo_type!("starred unpacking"));
Cow::Owned(element_types)
}
} else {

View File

@@ -180,6 +180,16 @@ where
}
}
/// Discard `@Todo`-type metadata from expected types, which is not available
/// when running in release mode.
#[cfg(not(debug_assertions))]
fn discard_todo_metadata(ty: &str) -> std::borrow::Cow<'_, str> {
static TODO_METADATA_REGEX: std::sync::LazyLock<regex::Regex> =
std::sync::LazyLock::new(|| regex::Regex::new(r"@Todo\([^)]*\)").unwrap());
TODO_METADATA_REGEX.replace_all(ty, "@Todo")
}
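The regex above strips the parenthesized metadata so that mdtest expectations written against debug builds still match the bare `@Todo` produced in release builds. An equivalent transformation, sketched in Python for illustration:

```py
import re

TODO_METADATA = re.compile(r"@Todo\([^)]*\)")

message = "Revealed type is `@Todo(async iterables/iterators)`"
print(TODO_METADATA.sub("@Todo", message))
# Revealed type is `@Todo`
```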
struct Matcher {
line_index: LineIndex,
source: SourceText,
@@ -276,6 +286,9 @@ impl Matcher {
}
}
Assertion::Revealed(expected_type) => {
#[cfg(not(debug_assertions))]
let expected_type = discard_todo_metadata(&expected_type);
let mut matched_revealed_type = None;
let mut matched_undefined_reveal = None;
let expected_reveal_type_message = format!("Revealed type is `{expected_type}`");

View File

@@ -0,0 +1,6 @@
while True:
class A:
x: int
break

View File

@@ -0,0 +1,6 @@
while True:
def b():
x: int
break

View File

@@ -0,0 +1,6 @@
for _ in range(1):
class A:
x: int
break

View File

@@ -0,0 +1,6 @@
for _ in range(1):
def b():
x: int
break

View File

@@ -0,0 +1 @@
../../../../ruff_python_parser/resources/inline/err/type_param_invalid_bound_expr.py

View File

@@ -0,0 +1 @@
x: f"Literal[{1 + 2}]" = 3

View File

@@ -0,0 +1,3 @@
from typing import Union
x: Union[int, str] = 1

View File

@@ -264,26 +264,12 @@ impl SourceOrderVisitor<'_> for PullTypesVisitor<'_> {
}
/// Whether or not the .py/.pyi version of this file is expected to fail
#[rustfmt::skip]
const KNOWN_FAILURES: &[(&str, bool, bool)] = &[
// Probably related to missing support for type aliases / type params:
("crates/ruff_python_parser/resources/inline/err/type_param_invalid_bound_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_param_spec_invalid_default_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_type_var_invalid_default_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_type_var_missing_default.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_type_var_tuple_invalid_default_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/ok/type_param_param_spec.py", true, true),
("crates/ruff_python_parser/resources/inline/ok/type_param_type_var_tuple.py", true, true),
("crates/ruff_python_parser/resources/inline/ok/type_param_type_var.py", true, true),
("crates/ruff_python_parser/resources/valid/statement/type.py", true, true),
("crates/ruff_linter/resources/test/fixtures/flake8_type_checking/TCH004_15.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F401_19.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_14.py", false, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_15.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_17.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_20.py", true, true),
// related to circular references in class definitions
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_26.py", true, false),
// Fails for unknown reasons:
("crates/ruff_linter/resources/test/fixtures/pyflakes/F632.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F811_19.py", true, false),
("crates/ruff_linter/resources/test/fixtures/pyupgrade/UP039.py", true, false),
// related to circular references in type aliases (salsa cycle panic):
("crates/ruff_python_parser/resources/inline/err/type_alias_invalid_value_expr.py", true, true),
];

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.7.4"
version = "0.8.0"
publish = true
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -840,7 +840,7 @@ fn stdin_multiple_parse_error() {
#[test]
fn parse_error_not_included() {
// Select any rule except for `E999`; the syntax error should still be shown.
// Parse errors are always shown
let mut cmd = RuffCheck::default().args(["--select=I"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("foo =\n"), @r"
@@ -859,27 +859,6 @@ fn parse_error_not_included() {
");
}
#[test]
fn deprecated_parse_error_selection() {
let mut cmd = RuffCheck::default().args(["--select=E999"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("foo =\n"), @r"
success: false
exit_code: 1
----- stdout -----
-:1:6: SyntaxError: Expected an expression
|
1 | foo =
| ^
|
Found 1 error.
----- stderr -----
warning: Rule `E999` is deprecated and will be removed in a future release. Syntax errors will always be shown regardless of whether this rule is selected or not.
");
}
#[test]
fn full_output_preview() {
let mut cmd = RuffCheck::default().args(["--preview"]).build();
@@ -1250,6 +1229,68 @@ fn removed_indirect() {
");
}
#[test]
fn removed_ignore_direct() {
let mut cmd = RuffCheck::default().args(["--ignore", "UP027"]).build();
assert_cmd_snapshot!(cmd, @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
warning: The following rules have been removed and ignoring them has no effect:
- UP027
");
}
#[test]
fn removed_ignore_multiple_direct() {
let mut cmd = RuffCheck::default()
.args(["--ignore", "UP027", "--ignore", "PLR1706"])
.build();
assert_cmd_snapshot!(cmd, @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
warning: The following rules have been removed and ignoring them has no effect:
- PLR1706
- UP027
");
}
#[test]
fn removed_ignore_remapped_direct() {
let mut cmd = RuffCheck::default().args(["--ignore", "PGH001"]).build();
assert_cmd_snapshot!(cmd, @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
warning: `PGH001` has been remapped to `S307`.
");
}
#[test]
fn removed_ignore_indirect() {
// `PLR170` includes removed rules but should not select or warn
// since it is not a "direct" selection
let mut cmd = RuffCheck::default().args(["--ignore", "PLR170"]).build();
assert_cmd_snapshot!(cmd, @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
");
}
#[test]
fn redirect_direct() {
// Selection of a redirected rule directly should use the new rule and warn

View File

@@ -1966,3 +1966,75 @@ fn nested_implicit_namespace_package() -> Result<()> {
Ok(())
}
#[test]
fn flake8_import_convention_invalid_aliases_config_alias_name() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.flake8-import-conventions.aliases]
"module.name" = "invalid.alias"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
, @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Failed to parse [TMP]/ruff.toml
Cause: TOML parse error at line 2, column 2
|
2 | [lint.flake8-import-conventions.aliases]
| ^^^^
invalid value: string "invalid.alias", expected a Python identifier
"###);});
Ok(())
}
#[test]
fn flake8_import_convention_invalid_aliases_config_module_name() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.flake8-import-conventions.aliases]
"module..invalid" = "alias"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
, @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Failed to parse [TMP]/ruff.toml
Cause: TOML parse error at line 2, column 2
|
2 | [lint.flake8-import-conventions.aliases]
| ^^^^
invalid value: string "module..invalid", expected a sequence of Python identifiers delimited by periods
"###);});
Ok(())
}

View File

@@ -88,7 +88,6 @@ linter.rules.enabled = [
ambiguous-class-name (E742),
ambiguous-function-name (E743),
io-error (E902),
syntax-error (E999),
unused-import (F401),
import-shadowed-by-loop-var (F402),
undefined-local-with-import-star (F403),
@@ -150,7 +149,6 @@ linter.rules.should_fix = [
ambiguous-class-name (E742),
ambiguous-function-name (E743),
io-error (E902),
syntax-error (E999),
unused-import (F401),
import-shadowed-by-loop-var (F402),
undefined-local-with-import-star (F403),

View File

@@ -27,13 +27,17 @@ static EXPECTED_DIAGNOSTICS: &[&str] = &[
// We don't support `*` imports yet:
"error[unresolved-import] /src/tomllib/_parser.py:7:29 Module `collections.abc` has no member `Iterable`",
// We don't support terminal statements in control flow yet:
"error[annotation-with-invalid-expression] /src/tomllib/_parser.py:57:71 Invalid expression in type expression",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:66:18 Name `s` used when possibly not defined",
"error[annotation-with-invalid-expression] /src/tomllib/_parser.py:69:66 Invalid expression in type expression",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:98:12 Name `char` used when possibly not defined",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:101:12 Name `char` used when possibly not defined",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:104:14 Name `char` used when possibly not defined",
"error[conflicting-declarations] /src/tomllib/_parser.py:108:17 Conflicting declared types for `second_char`: Unknown, str | None",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:115:14 Name `char` used when possibly not defined",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:126:12 Name `char` used when possibly not defined",
"error[annotation-with-invalid-expression] /src/tomllib/_parser.py:145:27 Invalid expression in type expression",
"error[annotation-with-invalid-expression] /src/tomllib/_parser.py:196:25 Invalid expression in type expression",
"error[conflicting-declarations] /src/tomllib/_parser.py:267:9 Conflicting declared types for `char`: Unknown, str | None",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:348:20 Name `nest` used when possibly not defined",
"error[possibly-unresolved-reference] /src/tomllib/_parser.py:353:5 Name `nest` used when possibly not defined",

View File

@@ -1,10 +1,11 @@
use super::{write, Arguments, FormatElement};
use crate::format_element::Interned;
use crate::prelude::LineMode;
use crate::prelude::{LineMode, Tag};
use crate::{FormatResult, FormatState};
use rustc_hash::FxHashMap;
use std::any::{Any, TypeId};
use std::fmt::Debug;
use std::num::NonZeroUsize;
use std::ops::{Deref, DerefMut};
/// A trait for writing or formatting into [`FormatElement`]-accepting buffers or streams.
@@ -294,10 +295,11 @@ where
}
}
/// A Buffer that removes any soft line breaks.
/// A Buffer that removes any soft line breaks or [`if_group_breaks`](crate::builders::if_group_breaks) elements.
///
/// - Removes [`lines`](FormatElement::Line) with the mode [`Soft`](LineMode::Soft).
/// - Replaces [`lines`](FormatElement::Line) with the mode [`Soft`](LineMode::SoftOrSpace) with a [`Space`](FormatElement::Space)
/// - Removes [`if_group_breaks`](crate::builders::if_group_breaks) elements.
///
/// # Examples
///
@@ -350,6 +352,8 @@ pub struct RemoveSoftLinesBuffer<'a, Context> {
/// It's fine to not snapshot the cache. The worst that can happen is that it holds on to interned elements
/// that are now unused. But there's little harm in that and the cache is cleaned when dropping the buffer.
interned_cache: FxHashMap<Interned, Interned>,
state: RemoveSoftLineBreaksState,
}
impl<'a, Context> RemoveSoftLinesBuffer<'a, Context> {
@@ -357,6 +361,7 @@ impl<'a, Context> RemoveSoftLinesBuffer<'a, Context> {
pub fn new(inner: &'a mut dyn Buffer<Context = Context>) -> Self {
Self {
inner,
state: RemoveSoftLineBreaksState::default(),
interned_cache: FxHashMap::default(),
}
}
@@ -375,6 +380,8 @@ fn clean_interned(
if let Some(cleaned) = interned_cache.get(interned) {
cleaned.clone()
} else {
let mut state = RemoveSoftLineBreaksState::default();
// Find the first soft line break element or interned element that must be changed
let result = interned
.iter()
@@ -382,8 +389,9 @@ fn clean_interned(
.find_map(|(index, element)| match element {
FormatElement::Line(LineMode::Soft | LineMode::SoftOrSpace) => {
let mut cleaned = Vec::new();
cleaned.extend_from_slice(&interned[..index]);
Some((cleaned, &interned[index..]))
let (before, after) = interned.split_at(index);
cleaned.extend_from_slice(before);
Some((cleaned, &after[1..]))
}
FormatElement::Interned(inner) => {
let cleaned_inner = clean_interned(inner, interned_cache);
@@ -398,19 +406,33 @@ fn clean_interned(
}
}
_ => None,
element => {
if state.should_drop(element) {
let mut cleaned = Vec::new();
let (before, after) = interned.split_at(index);
cleaned.extend_from_slice(before);
Some((cleaned, &after[1..]))
} else {
None
}
}
});
let result = match result {
// Copy the whole interned buffer so that it becomes possible to change the necessary elements.
Some((mut cleaned, rest)) => {
for element in rest {
if state.should_drop(element) {
continue;
}
let element = match element {
FormatElement::Line(LineMode::Soft) => continue,
FormatElement::Line(LineMode::SoftOrSpace) => FormatElement::Space,
FormatElement::Interned(interned) => {
FormatElement::Interned(clean_interned(interned, interned_cache))
}
element => element.clone(),
};
cleaned.push(element);
@@ -431,12 +453,17 @@ impl<Context> Buffer for RemoveSoftLinesBuffer<'_, Context> {
type Context = Context;
fn write_element(&mut self, element: FormatElement) {
if self.state.should_drop(&element) {
return;
}
let element = match element {
FormatElement::Line(LineMode::Soft) => return,
FormatElement::Line(LineMode::SoftOrSpace) => FormatElement::Space,
FormatElement::Interned(interned) => {
FormatElement::Interned(self.clean_interned(&interned))
}
element => element,
};
@@ -456,14 +483,77 @@ impl<Context> Buffer for RemoveSoftLinesBuffer<'_, Context> {
}
fn snapshot(&self) -> BufferSnapshot {
self.inner.snapshot()
BufferSnapshot::Any(Box::new(RemoveSoftLinebreaksSnapshot {
inner: self.inner.snapshot(),
state: self.state,
}))
}
fn restore_snapshot(&mut self, snapshot: BufferSnapshot) {
self.inner.restore_snapshot(snapshot);
let RemoveSoftLinebreaksSnapshot { inner, state } = snapshot.unwrap_any();
self.inner.restore_snapshot(inner);
self.state = state;
}
}
#[derive(Copy, Clone, Debug, Default)]
enum RemoveSoftLineBreaksState {
#[default]
Default,
InIfGroupBreaks {
conditional_content_level: NonZeroUsize,
},
}
impl RemoveSoftLineBreaksState {
fn should_drop(&mut self, element: &FormatElement) -> bool {
match self {
Self::Default => {
// Entered the start of an `if_group_breaks`
if let FormatElement::Tag(Tag::StartConditionalContent(condition)) = element {
if condition.mode.is_expanded() {
*self = Self::InIfGroupBreaks {
conditional_content_level: NonZeroUsize::new(1).unwrap(),
};
return true;
}
}
false
}
Self::InIfGroupBreaks {
conditional_content_level,
} => {
match element {
// A nested `if_group_breaks` or `if_group_fits`
FormatElement::Tag(Tag::StartConditionalContent(_)) => {
*conditional_content_level = conditional_content_level.saturating_add(1);
}
// The end of an `if_group_breaks` or `if_group_fits`.
FormatElement::Tag(Tag::EndConditionalContent) => {
if let Some(level) = NonZeroUsize::new(conditional_content_level.get() - 1)
{
*conditional_content_level = level;
} else {
// Found the end tag of the initial `if_group_breaks`. Skip this element but retain
// the elements coming after
*self = RemoveSoftLineBreaksState::Default;
}
}
_ => {}
}
true
}
}
}
}
struct RemoveSoftLinebreaksSnapshot {
inner: BufferSnapshot,
state: RemoveSoftLineBreaksState,
}
pub trait BufferExtensions: Buffer + Sized {
/// Returns a new buffer that calls the passed inspector for every element that gets written to the output
#[must_use]

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.7.4"
version = "0.8.0"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -0,0 +1,15 @@
from airflow import DAG, dag
DAG(dag_id="class_default_schedule")
DAG(dag_id="class_schedule", schedule="@hourly")
@dag()
def decorator_default_schedule():
pass
@dag(schedule="0 * * * *")
def decorator_schedule():
pass
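The new AIR301 fixture above pairs `DAG(...)` and `@dag(...)` calls that rely on the implicit default schedule with calls that pass `schedule=` explicitly; given the rule name ("Avoid implicit DAG schedule"), presumably only the calls that omit `schedule` are reported. A minimal sketch of one possible remediation, under the assumption that the rule simply wants the schedule spelled out (including `schedule=None` when the DAG is not meant to run on a schedule):

```py
from airflow import DAG, dag

# Hypothetical names, for illustration only.
DAG(dag_id="explicit_none", schedule=None)  # explicit, even though unscheduled

@dag(schedule=None)
def decorator_explicit_none():
    pass
```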

View File

@@ -71,7 +71,7 @@ class Foo:
def foo(self: "Foo", a: int, b: int) -> int:
pass
# ANN101
# OK
def foo(self, a: int, b: int) -> int:
pass
@@ -125,12 +125,12 @@ class Foo:
def foo(cls: Type["Foo"], a: int, b: int) -> int:
pass
# ANN102
# OK
@classmethod
def foo(cls, a: int, b: int) -> int:
pass
# ANN101
# OK
def foo(self, /, a: int, b: int) -> int:
pass

View File

@@ -14,6 +14,8 @@ ContextVar("cv", default=frozenset())
ContextVar("cv", default=MappingProxyType({}))
ContextVar("cv", default=re.compile("foo"))
ContextVar("cv", default=float(1))
ContextVar("cv", default=frozenset[str]())
ContextVar[frozenset[str]]("cv", default=frozenset[str]())
# Bad
ContextVar("cv", default=[])
@@ -25,6 +27,8 @@ ContextVar("cv", default=[char for char in "foo"])
ContextVar("cv", default={char for char in "foo"})
ContextVar("cv", default={char: idx for idx, char in enumerate("foo")})
ContextVar("cv", default=collections.deque())
ContextVar("cv", default=set[str]())
ContextVar[set[str]]("cv", default=set[str]())
def bar() -> list[int]:
return [1, 2, 3]
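The `ContextVar` fixture above is extended with subscripted constructors: `frozenset[str]()` stays in the "OK" group while `set[str]()` joins the mutable defaults under `# Bad`, so the check presumably has to look through the subscript to the underlying constructor. A minimal sketch of the distinction these cases exercise; the remediation line is an assumption, not something taken from the diff:

```py
from contextvars import ContextVar

cv_ok = ContextVar("cv_ok", default=frozenset[str]())  # immutable default: fine
cv_bad = ContextVar("cv_bad", default=set[str]())      # mutable default: reported

# One common remediation (an assumption): drop the mutable default and
# construct the set where the variable is actually used.
cv_fixed: ContextVar[set[str]] = ContextVar("cv_fixed")
```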

View File

@@ -84,3 +84,27 @@ field24: typing.Union[int, typing.Union[int, int]] # PYI016: Duplicate union me
# duplicates of the outer `int`), but not three times (which would indicate that
# we incorrectly re-checked the nested union).
field25: typing.Union[int, int | int] # PYI016: Duplicate union member `int`
# Should emit in cases with nested `typing.Union`
field26: typing.Union[typing.Union[int, int]] # PYI016: Duplicate union member `int`
# Should emit in cases with nested `typing.Union`
field27: typing.Union[typing.Union[typing.Union[int, int]]] # PYI016: Duplicate union member `int`
# Should emit in cases with mixed `typing.Union` and `|`
field28: typing.Union[int | int] # Error
# Should emit twice in cases with multiple nested `typing.Union`
field29: typing.Union[int, typing.Union[typing.Union[int, int]]] # Error
# Should emit once in cases with multiple nested `typing.Union`
field30: typing.Union[int, typing.Union[typing.Union[int, str]]] # Error
# Should emit once, and fix to `typing.Union[float, int]`
field31: typing.Union[float, typing.Union[int | int]] # Error
# Should emit once, and fix to `typing.Union[float, int]`
field32: typing.Union[float, typing.Union[int | int | int]] # Error
# Test case for mixed union type fix
field33: typing.Union[typing.Union[int | int] | typing.Union[int | int]] # Error

View File

@@ -84,3 +84,27 @@ field24: typing.Union[int, typing.Union[int, int]] # PYI016: Duplicate union me
# duplicates of the outer `int`), but not three times (which would indicate that
# we incorrectly re-checked the nested union).
field25: typing.Union[int, int | int] # PYI016: Duplicate union member `int`
# Should emit in cases with nested `typing.Union`
field26: typing.Union[typing.Union[int, int]] # PYI016: Duplicate union member `int`
# Should emit in cases with nested `typing.Union`
field27: typing.Union[typing.Union[typing.Union[int, int]]] # PYI016: Duplicate union member `int`
# Should emit in cases with mixed `typing.Union` and `|`
field28: typing.Union[int | int] # Error
# Should emit twice in cases with multiple nested `typing.Union`
field29: typing.Union[int, typing.Union[typing.Union[int, int]]] # Error
# Should emit once in cases with multiple nested `typing.Union`
field30: typing.Union[int, typing.Union[typing.Union[int, str]]] # Error
# Should emit once, and fix to `typing.Union[float, int]`
field31: typing.Union[float, typing.Union[int | int]] # Error
# Should emit once, and fix to `typing.Union[float, int]`
field32: typing.Union[float, typing.Union[int | int | int]] # Error
# Test case for mixed union type fix
field33: typing.Union[typing.Union[int | int] | typing.Union[int | int]] # Error
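Both PYI016 fixtures gain the same block of cases covering `typing.Union` nested inside `typing.Union`, including mixes with the `|` operator, and the comments spell out the expected behaviour (emit once or twice, collapse the nesting, deduplicate members). A small before/after sketch of the outcome those comments describe, using an illustrative `_fixed` name that is not part of the fixture:

```py
import typing

# Before: nested and mixed unions with duplicate members (reported by PYI016).
field31: typing.Union[float, typing.Union[int | int]]

# After the fix described in the fixture comments: nesting collapsed, duplicates removed.
field31_fixed: typing.Union[float, int]
```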

View File

@@ -39,14 +39,30 @@ async def f4(**kwargs: int | int | float) -> None:
...
def f5(
def f5(arg1: int, *args: Union[int, int, float]) -> None:
...
def f6(arg1: int, *args: Union[Union[int, int, float]]) -> None:
...
def f7(arg1: int, *args: Union[Union[Union[int, int, float]]]) -> None:
...
def f8(arg1: int, *args: Union[Union[Union[int | int | float]]]) -> None:
...
def f9(
arg: Union[ # comment
float, # another
complex, int]
) -> None:
...
def f6(
def f10(
arg: (
int | # comment
float | # another

View File

@@ -46,6 +46,18 @@ def f6(
)
) -> None: ... # PYI041
def f5(arg1: int, *args: Union[int, int, float]) -> None: ... # PYI041
def f6(arg1: int, *args: Union[Union[int, int, float]]) -> None: ... # PYI041
def f7(arg1: int, *args: Union[Union[Union[int, int, float]]]) -> None: ... # PYI041
def f8(arg1: int, *args: Union[Union[Union[int | int | float]]]) -> None: ... # PYI041
class Foo:
def good(self, arg: int) -> None: ...

View File

@@ -5,6 +5,9 @@ A: str | Literal["foo"]
B: TypeAlias = typing.Union[Literal[b"bar", b"foo"], bytes, str]
C: TypeAlias = typing.Union[Literal[5], int, typing.Union[Literal["foo"], str]]
D: TypeAlias = typing.Union[Literal[b"str_bytes", 42], bytes, int]
E: TypeAlias = typing.Union[typing.Union[typing.Union[typing.Union[Literal["foo"], str]]]]
F: TypeAlias = typing.Union[str, typing.Union[typing.Union[typing.Union[Literal["foo"], int]]]]
G: typing.Union[str, typing.Union[typing.Union[typing.Union[Literal["foo"], int]]]]
def func(x: complex | Literal[1J], y: Union[Literal[3.14], float]): ...

View File

@@ -5,6 +5,9 @@ A: str | Literal["foo"]
B: TypeAlias = typing.Union[Literal[b"bar", b"foo"], bytes, str]
C: TypeAlias = typing.Union[Literal[5], int, typing.Union[Literal["foo"], str]]
D: TypeAlias = typing.Union[Literal[b"str_bytes", 42], bytes, int]
E: TypeAlias = typing.Union[typing.Union[typing.Union[typing.Union[Literal["foo"], str]]]]
F: TypeAlias = typing.Union[str, typing.Union[typing.Union[typing.Union[Literal["foo"], int]]]]
G: typing.Union[str, typing.Union[typing.Union[typing.Union[Literal["foo"], int]]]]
def func(x: complex | Literal[1J], y: Union[Literal[3.14], float]): ...

View File

@@ -1,11 +1,14 @@
import builtins
from typing import Union
w: builtins.type[int] | builtins.type[str] | builtins.type[complex]
x: type[int] | type[str] | type[float]
y: builtins.type[int] | type[str] | builtins.type[complex]
z: Union[type[float], type[complex]]
z: Union[type[float, int], type[complex]]
s: builtins.type[int] | builtins.type[str] | builtins.type[complex]
t: type[int] | type[str] | type[float]
u: builtins.type[int] | type[str] | builtins.type[complex]
v: Union[type[float], type[complex]]
w: Union[type[float, int], type[complex]]
x: Union[Union[type[float, int], type[complex]]]
y: Union[Union[Union[type[float, int], type[complex]]]]
z: Union[type[complex], Union[Union[type[float, int]]]]
def func(arg: type[int] | str | type[float]) -> None:

View File

@@ -1,11 +1,14 @@
import builtins
from typing import Union
w: builtins.type[int] | builtins.type[str] | builtins.type[complex]
x: type[int] | type[str] | type[float]
y: builtins.type[int] | type[str] | builtins.type[complex]
z: Union[type[float], type[complex]]
z: Union[type[float, int], type[complex]]
s: builtins.type[int] | builtins.type[str] | builtins.type[complex]
t: type[int] | type[str] | type[float]
u: builtins.type[int] | type[str] | builtins.type[complex]
v: Union[type[float], type[complex]]
w: Union[type[float, int], type[complex]]
x: Union[Union[type[float, int], type[complex]]]
y: Union[Union[Union[type[float, int], type[complex]]]]
z: Union[type[complex], Union[Union[type[float, int]]]]
def func(arg: type[int] | str | type[float]) -> None: ...

View File

@@ -1,4 +1,4 @@
from typing import Literal
from typing import Literal, Union
def func1(arg1: Literal[None]):
@@ -17,7 +17,7 @@ def func4(arg1: Literal[int, None, float]):
...
def func5(arg1: Literal[None, None]):
def func5(arg1: Literal[None, None]):
...
@@ -25,13 +25,21 @@ def func6(arg1: Literal[
"hello",
None # Comment 1
, "world"
]):
]):
...
def func7(arg1: Literal[
None # Comment 1
]):
]):
...
def func8(arg1: Literal[None] | None):
...
def func9(arg1: Union[Literal[None], None]):
...
@@ -58,3 +66,16 @@ Literal[1, None, "foo", None] # Y061 None inside "Literal[]" expression. Replac
# and there are no None members in the Literal[] slice,
# only emit Y062:
Literal[None, True, None, True] # Y062 Duplicate "Literal[]" member "True"
# Regression tests for https://github.com/astral-sh/ruff/issues/14567
x: Literal[None] | None
y: None | Literal[None]
z: Union[Literal[None], None]
a: int | Literal[None] | None
b: None | Literal[None] | None
c: (None | Literal[None]) | None
d: None | (Literal[None] | None)
e: None | ((None | Literal[None]) | None) | None
f: Literal[None] | Literal[None]

View File

@@ -1,4 +1,4 @@
from typing import Literal
from typing import Literal, Union
def func1(arg1: Literal[None]): ...
@@ -28,6 +28,12 @@ def func7(arg1: Literal[
]): ...
def func8(arg1: Literal[None] | None):...
def func9(arg1: Union[Literal[None], None]): ...
# OK
def good_func(arg1: Literal[int] | None): ...
@@ -35,3 +41,16 @@ def good_func(arg1: Literal[int] | None): ...
# From flake8-pyi
Literal[None] # PYI061 None inside "Literal[]" expression. Replace with "None"
Literal[True, None] # PYI061 None inside "Literal[]" expression. Replace with "Literal[True] | None"
# Regression tests for https://github.com/astral-sh/ruff/issues/14567
x: Literal[None] | None
y: None | Literal[None]
z: Union[Literal[None], None]
a: int | Literal[None] | None
b: None | Literal[None] | None
c: (None | Literal[None]) | None
d: None | (Literal[None] | None)
e: None | ((None | Literal[None]) | None) | None
f: Literal[None] | Literal[None]
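The regression block added to both PYI061 fixtures (for issue 14567) combines `Literal[None]` with an already-present `None` in various nestings. The replacements the rule asks for are quoted in the fixture itself: `Literal[None]` becomes `None`, and `Literal[True, None]` becomes `Literal[True] | None`. A small before/after sketch of those quoted replacements, using illustrative `_fixed` names that do not appear in the fixture:

```py
from typing import Literal

# Before: None spelled through Literal (reported by PYI061).
a: Literal[None]
b: Literal[True, None]

# After applying the replacements quoted in the fixture messages.
a_fixed: None
b_fixed: Literal[True] | None
```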

View File

@@ -25,3 +25,9 @@ Literal[
MyType = Literal["foo", Literal[True, False, True], "bar"] # PYI062
n: Literal["No", "duplicates", "here", 1, "1"]
# nested literals, all equivalent to `Literal[1]`
Literal[Literal[1]] # no duplicate
Literal[Literal[Literal[1], Literal[1]]] # once
Literal[Literal[1], Literal[Literal[Literal[1]]]] # once

View File

@@ -25,3 +25,9 @@ Literal[
MyType = Literal["foo", Literal[True, False, True], "bar"] # PYI062
n: Literal["No", "duplicates", "here", 1, "1"]
# nested literals, all equivalent to `Literal[1]`
Literal[Literal[1]] # no duplicate
Literal[Literal[Literal[1], Literal[1]]] # once
Literal[Literal[1], Literal[Literal[Literal[1]]]] # once

View File

@@ -1,58 +0,0 @@
import abc
from abc import abstractmethod
import pytest
@pytest.fixture()
def _patch_something(mocker): # OK simple
mocker.patch("some.thing")
@pytest.fixture()
def _patch_something(mocker): # OK with return
if something:
return
mocker.patch("some.thing")
@pytest.fixture()
def _activate_context(): # OK with yield
with context:
yield
class BaseTest:
@pytest.fixture()
@abc.abstractmethod
def my_fixture(): # OK abstract with import abc
raise NotImplementedError
class BaseTest:
@pytest.fixture()
@abstractmethod
def my_fixture(): # OK abstract with from import
raise NotImplementedError
@pytest.fixture()
def my_fixture(): # OK ignoring yield from
yield from some_generator()
@pytest.fixture()
def my_fixture(): # OK ignoring yield value
yield 1
@pytest.fixture()
def patch_something(mocker): # Error simple
mocker.patch("some.thing")
@pytest.fixture()
def activate_context(): # Error with yield
with context:
yield

View File

@@ -1,57 +0,0 @@
import abc
from abc import abstractmethod
import pytest
@pytest.fixture()
def my_fixture(mocker): # OK with return
return 0
@pytest.fixture()
def activate_context(): # OK with yield
with get_context() as context:
yield context
@pytest.fixture()
def _any_fixture(mocker): # Ok nested function
def nested_function():
return 1
mocker.patch("...", nested_function)
class BaseTest:
@pytest.fixture()
@abc.abstractmethod
def _my_fixture(): # OK abstract with import abc
return NotImplemented
class BaseTest:
@pytest.fixture()
@abstractmethod
def _my_fixture(): # OK abstract with from import
return NotImplemented
@pytest.fixture()
def _my_fixture(mocker): # Error with return
return 0
@pytest.fixture()
def _activate_context(): # Error with yield
with get_context() as context:
yield context
@pytest.fixture()
def _activate_context(): # Error with conditional yield from
if some_condition:
with get_context() as context:
yield context
else:
yield from other_context()

View File

@@ -69,3 +69,15 @@ def test_implicit_str_concat_with_multi_parens(param1, param2, param3):
@pytest.mark.parametrize(("param1,param2"), [(1, 2), (3, 4)])
def test_csv_with_parens(param1, param2):
...
parametrize = pytest.mark.parametrize(("param1,param2"), [(1, 2), (3, 4)])
@parametrize
def test_not_decorator(param1, param2):
...
@pytest.mark.parametrize(argnames=("param1,param2"), argvalues=[(1, 2), (3, 4)])
def test_keyword_arguments(param1, param2):
...

View File

@@ -1,6 +1,6 @@
"""Tests to determine first-party import classification.
For typing-only import detection tests, see `TCH002.py`.
For typing-only import detection tests, see `TC002.py`.
"""

View File

@@ -2,49 +2,49 @@
def f():
import pandas as pd # TCH002
import pandas as pd # TC002
x: pd.DataFrame
def f():
from pandas import DataFrame # TCH002
from pandas import DataFrame # TC002
x: DataFrame
def f():
from pandas import DataFrame as df # TCH002
from pandas import DataFrame as df # TC002
x: df
def f():
import pandas as pd # TCH002
import pandas as pd # TC002
x: pd.DataFrame = 1
def f():
from pandas import DataFrame # TCH002
from pandas import DataFrame # TC002
x: DataFrame = 2
def f():
from pandas import DataFrame as df # TCH002
from pandas import DataFrame as df # TC002
x: df = 3
def f():
import pandas as pd # TCH002
import pandas as pd # TC002
x: "pd.DataFrame" = 1
def f():
import pandas as pd # TCH002
import pandas as pd # TC002
x = dict["pd.DataFrame", "pd.DataFrame"]
@@ -153,13 +153,13 @@ def f():
def f():
from pandas import DataFrame # noqa: TCH002
from pandas import DataFrame # noqa: TC002
x: DataFrame = 2
def f():
from pandas import ( # noqa: TCH002
from pandas import ( # noqa: TC002
DataFrame,
)

View File

@@ -1,6 +1,6 @@
"""Tests to determine standard library import classification.
For typing-only import detection tests, see `TCH002.py`.
For typing-only import detection tests, see `TC002.py`.
"""

View File

@@ -1,25 +1,25 @@
from typing import TYPE_CHECKING, List
if TYPE_CHECKING:
pass # TCH005
pass # TC005
if False:
pass # TCH005
pass # TC005
if 0:
pass # TCH005
pass # TC005
def example():
if TYPE_CHECKING:
pass # TCH005
pass # TC005
return
class Test:
if TYPE_CHECKING:
pass # TCH005
pass # TC005
x = 2
@@ -42,7 +42,7 @@ if 0:
from typing_extensions import TYPE_CHECKING
if TYPE_CHECKING:
pass # TCH005
pass # TC005
# https://github.com/astral-sh/ruff/issues/11368
if TYPE_CHECKING:

Some files were not shown because too many files have changed in this diff.