Compare commits

..

48 Commits

Author SHA1 Message Date
Charlie Marsh
cc850ec348 Add box to stmt 2025-12-11 11:24:31 -05:00
Micha Reiser
c9155d5e72 [ty] Reduce size of ty-ide snapshots (#21915) 2025-12-11 13:36:16 +00:00
Andrew Gallant
8647844572 [ty] Adjust scope completions to use all reachable symbols
Fixes astral-sh/ty#1294
2025-12-11 08:26:15 -05:00
Andrew Gallant
1dcb7f89f1 [ty] Rename all_members_of_scope to all_end_of_scope_members
This reflects more precisely its behavior based on how it uses the
use-def map.
2025-12-11 08:26:15 -05:00
Andrew Gallant
c1c45a6a13 [ty] Remove all_ prefix from some routines on UseDefMap
These routines don't return *all* symbols/members, but rather,
only *for* a particular scope. We do specifically want to add
some routines that return *all* symbols/members, and this naming
scheme made that confusing. It was also inconsistent with other
routines like `all_end_of_scope_symbol_declarations` which *do*
return *all* symbols.
2025-12-11 08:26:15 -05:00
Brent Westbrook
c51727708a Enable --document-private-items for ruff_python_formatter (#21903) 2025-12-11 08:23:10 -05:00
Denys Zhak
27912d46b1 Remove BackwardsTokenizer based parenthesized_range references in ruff_linter (#21836)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-11 13:04:57 +01:00
David Peter
71540c03b6 [ty] Revert "Do not infer types for invalid binary expressions in annotations" (#21914)
See discussion here:
https://github.com/astral-sh/ruff/pull/21911#discussion_r2610155157
2025-12-11 11:57:45 +00:00
Micha Reiser
aa27925e87 Skip over trivia tokens after re-lexing (#21895) 2025-12-11 10:45:18 +00:00
Charlie Marsh
5c320990f7 [ty] Avoid inferring types for invalid binary expressions in string annotations (#21911)
## Summary

Closes https://github.com/astral-sh/ty/issues/1847.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-12-11 09:40:19 +01:00
Dhruv Manilawala
24ed28e314 [ty] Improve overload call resolution tracing (#21913)
This PR improves the overload call resolution tracing messages as:
- Use `trace` level instead of `debug` level
- Add a `trace_span` which contains the call arguments and signature
- Remove the signature from individual tracing messages
2025-12-11 12:28:45 +05:30
Carl Meyer
2d0681da08 [ty] fix missing heap_size on Salsa query (#21912) 2025-12-10 18:34:00 -08:00
Ibraheem Ahmed
29bf2cd201 [ty] Support implicit type of cls in signatures (#21771)
## Summary

Extends https://github.com/astral-sh/ruff/pull/20517 to support the
implicit type of `cls` in `@classmethod` signatures. Part of
https://github.com/astral-sh/ty/issues/159.
2025-12-10 16:56:20 -05:00
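The change described above can be sketched at runtime (class names here are hypothetical; ty's inferred types appear only in comments):

```python
class Counter:
    def __init__(self, start: int = 0) -> None:
        self.value = start

    @classmethod
    def fresh(cls):
        # With this change, ty infers the unannotated `cls` as `type[Self]`,
        # so `cls()` is typed `Self` instead of plain `Counter`.
        return cls()


class BoundedCounter(Counter):
    pass


# At runtime the subclass is preserved; the inferred type now matches:
# `BoundedCounter.fresh()` is typed `BoundedCounter`, not `Counter`.
print(type(BoundedCounter.fresh()).__name__)  # BoundedCounter
```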
Jack O'Connor
1b44d7e2a7 [ty] add SyntheticTypedDictType and implement normalized and is_equivalent_to (#21784) 2025-12-10 20:36:36 +00:00
Ibraheem Ahmed
a2fb2ee06c [ty] Fix disjointness checks with type-of @final classes (#21770)
## Summary

We currently perform a subtyping check, similar to what we were doing
for `@final` instances before
https://github.com/astral-sh/ruff/pull/21167, which is incorrect, e.g.
we currently consider `type[X[Any]]` and `type[X[T]]` disjoint (where
`X` is `@final`).
2025-12-10 15:15:10 -05:00
Douglas Creager
3e00221a6c [ty] Fix negation upper bounds in constraint sets (#21897)
This fixes the logic error that @sharkdp
[found](https://github.com/astral-sh/ruff/pull/21871#discussion_r2605755588)
in the constraint set upper bound normalization logic I introduced in
#21871.

I had originally claimed that `(T ≤ α & ~β)` should simplify into `(T ≤
α) ∧ ¬(T ≤ β)`. But that also suggests that `T ≤ ~β` should simplify to
`¬(T ≤ β)` on its own, and that's not correct.

The correct simplification is that `~α` is an "atomic" type, not an
"intersection" for the purposes of our upper bound simplification. So `(T
≤ α & ~β)` should simplify to `(T ≤ α) ∧ (T ≤ ~β)`. That is, break apart
the elements of a (proper) intersection, regardless of whether each
element is negated or not.

This PR fixes the logic, adds a test case, and updates the comments to
be hopefully more clear and accurate.
2025-12-10 15:07:50 -05:00
Ibraheem Ahmed
5dc0079e78 [ty] Fix disjointness checks on @final class instances (#21769)
## Summary

This was left unfinished in
https://github.com/astral-sh/ruff/pull/21167. This is required to fix
our disjointness checks with type-of a final class, which is currently
broken, and blocking https://github.com/astral-sh/ty/issues/159.
2025-12-10 14:17:22 -05:00
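A minimal illustration of the disjointness property being fixed (hypothetical classes; the narrowing comment describes checker behavior, not runtime semantics):

```python
from typing import final


@final
class Token:
    pass


class Other:
    pass


def handle(x):
    # Because `Token` is `@final`, no class can subclass both `Token` and
    # `Other`, so their instance types are disjoint and a checker can fully
    # narrow `x` in each branch.
    if isinstance(x, Token):
        return "token"
    return "other"


print(handle(Token()), handle(Other()))  # token other
```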
Micha Reiser
f7528bd325 [ty] Checking files without extension (#21867) 2025-12-10 16:47:41 +00:00
Avasam
59b92b3522 Document *.pyw is included by default in preview (#21885)
Document `*.pyw` is included by default in preview mode.
Originally requested in https://github.com/astral-sh/ruff/issues/13246
and added in https://github.com/astral-sh/ruff/pull/20458

Co-authored-by: Amethyst Reese <amethyst@n7.gg>
2025-12-10 16:43:55 +00:00
Micha Reiser
9ceec359a0 [ty] Add mypy primer check comparing same revisions (#21864) 2025-12-10 16:37:17 +00:00
Micha Reiser
2dd412c89a Update README to remove production warning (#21899) 2025-12-10 17:25:41 +01:00
Carl Meyer
951766d1fb [ty] default-specialize class-literal types in assignment to generic-alias types (#21883)
Fixes https://github.com/astral-sh/ty/issues/1832, fixes
https://github.com/astral-sh/ty/issues/1513

## Summary

A class object `C` (for which we infer an unspecialized `ClassLiteral`
type) should always be assignable to the type `type[C]` (which is
default-specialized, if `C` is generic). We already implemented this for
most cases, but we missed the case of a generic final type, where we
simplify `type[C]` to the `GenericAlias` type for the default
specialization of `C`. So we also need to implement this assignability
of generic `ClassLiteral` types as-if default-specialized.

## Test Plan

Added mdtests that failed before this PR.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-12-10 17:18:08 +01:00
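A small runnable sketch of the assignability in question (hypothetical class; the annotated assignment is the shape that previously produced a false positive):

```python
from typing import Generic, TypeVar, final

T = TypeVar("T")


@final
class Box(Generic[T]):
    def __init__(self, item: T) -> None:
        self.item = item


# `Box` is an unspecialized class literal; because `Box` is generic and
# `@final`, `type[Box]` simplifies to a generic alias, and the class object
# must still be assignable to it as if default-specialized.
factory: type[Box] = Box
print(factory(42).item)  # 42
```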
David Peter
7bf50e70a7 [ty] Generics: Respect typevar bounds when matching against a union (#21893)
## Summary

Respect typevar bounds and constraints when matching against a union.
For example:

```py
def accepts_t_or_int[T_str: str](x: T_str | int) -> T_str:
    raise NotImplementedError

reveal_type(accepts_t_or_int("a"))  # ok, reveals `Literal["a"]`
reveal_type(accepts_t_or_int(1))  # ok, reveals `Unknown`

class Unrelated: ...

# error: [invalid-argument-type] "Argument type `Unrelated` does not
# satisfy upper bound `str` of type variable `T_str`"
accepts_t_or_int(Unrelated())
```

Previously, the last call succeeded without any errors. Worse than that,
we also incorrectly solved `T_str = Unrelated`, which often led to
downstream errors.

closes https://github.com/astral-sh/ty/issues/1837

## Ecosystem impact

Looks good!

* Lots of removed false positives, often because we previously selected
a wrong overload for a generic function (because we didn't respect the
typevar bound in an earlier overload).
* We now understand calls to functions accepting an argument of type
`GenericPath: TypeAlias = AnyStr | PathLike[AnyStr]`. Previously, we
would incorrectly match a `Path` argument against the `AnyStr` typevar
(violating its constraints), but now we match against `PathLike`.

## Performance

Another regression on `colour`. This package uses `numpy` heavily. And
`numpy` is the codebase that originally led me to this bug. The fix
here allows us to infer more precise `np.array` types in some cases, so
it's reasonable that we just need to perform more work.

The fix here also requires us to look at more union elements when we
would previously short-circuit incorrectly, so some more work needs to
be done in the solver.

## Test Plan

New Markdown tests
2025-12-10 14:58:57 +01:00
Ibraheem Ahmed
ff7086d9ad [ty] Infer type of implicit cls parameter in method bodies (#21685)
## Summary

Extends https://github.com/astral-sh/ruff/pull/20922 to infer
unannotated `cls` parameters as `type[Self]` in method bodies.

Part of https://github.com/astral-sh/ty/issues/159.
2025-12-10 10:31:28 +01:00
Charlie Marsh
d2aabeaaa2 [ty] Respect kw_only from parent class (#21820)
## Summary

Closes https://github.com/astral-sh/ty/issues/1769.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-10 10:12:18 +01:00
Dhruv Manilawala
8293afe2ae Remove hack about unknown options warning (#21887)
This hack was introduced to reduce the amount of warnings that users
would get while transitioning to the new settings format
(https://github.com/astral-sh/ruff/pull/19787) but now that we're near
the beta release, it would be good to remove this.
2025-12-10 07:09:31 +00:00
Jack O'Connor
aaadf16b1b [ty] bump dependencies to pull in Salsa support for ordermap (#21854) 2025-12-09 19:08:03 -08:00
Douglas Creager
c343e94ac5 [ty] Simplify union lower bounds and intersection upper bounds in constraint sets (#21871)
In a constraint set, it's not useful for an upper bound to be an
intersection type, or for a lower bound to be a union type. Both of
those can be rewritten as simpler BDDs:

```
T ≤ α & β  ⇒ (T ≤ α) ∧ (T ≤ β)
T ≤ α & ¬β ⇒ (T ≤ α) ∧ ¬(T ≤ β)
α | β ≤ T  ⇒ (α ≤ T) ∧ (β ≤ T)
```

We were seeing performance issues on #21551 when _not_ performing this
simplification. For instance, `pandas` was producing some constraint
sets involving intersections of 8-9 different types. Our sequent map
calculation was timing out calculating all of the different permutations
of those types:

```
t1 & t2 & t3 → t1
t1 & t2 & t3 → t2
t1 & t2 & t3 → t3
t1 & t2 & t3 → t1 & t2
t1 & t2 & t3 → t1 & t3
t1 & t2 & t3 → t2 & t3
```

(and then imagine what that looks like for 9 types instead of 3...)

With this change, all of those permutations are now encoded in the BDD
structure itself, which is very good at simplifying that kind of thing.

Pulling this out of #21551 for separate review.
2025-12-09 19:49:17 -05:00
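The element-wise splitting can be sketched with a toy representation (tuples stand in for ty's BDD nodes; per the negation fix described earlier in this log, negated elements stay atomic rather than becoming negated constraints):

```python
def split_upper_bound(typevar, bound):
    """Rewrite `typevar <= a & b & ...` into one constraint per intersection
    element; non-intersection bounds (including negated types) stay atomic."""
    if isinstance(bound, tuple) and bound[0] == "intersection":
        return [(typevar, "<=", elem) for elem in bound[1]]
    return [(typevar, "<=", bound)]


print(split_upper_bound("T", ("intersection", ["a", ("not", "b")])))
# [('T', '<=', 'a'), ('T', '<=', ('not', 'b'))]
```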
Douglas Creager
270b8d1d14 [ty] Collapse never paths in constraint set BDDs (#21880)
#21744 fixed some non-determinism in our constraint set implementation
by switching our BDD representation from being "fully reduced" to being
"quasi-reduced". We still deduplicate identical nodes (via salsa
interning), but we removed the logic to prune redundant nodes (one with
identical outgoing true and false edges). This ensures that the BDD
"remembers" all of the individual constraints that it was created with.

However, that comes at the cost of creating larger BDDs, and on #21551
that was causing performance issues. `scikit-learn` was producing a
function signature with dozens of overloads, and we were trying to
create a constraint set that would map a return type typevar to any of
those overload's return types. This created a combinatorial explosion in
the BDD, with by far most of the BDD paths leading to the `never`
terminal.

This change updates the quasi-reduction logic to prune nodes that are
redundant _because both edges lead to the `never` terminal_. In this
case, we don't need to "remember" that constraint, since no assignment
to it can lead to a valid specialization. So we keep the "memory" of our
quasi-reduced structure, while still pruning large unneeded portions of
the BDD structure.

Pulling this out of https://github.com/astral-sh/ruff/pull/21551 for
separate review.
2025-12-09 18:22:54 -05:00
Brent Westbrook
f3714fd3c1 Fix leading comment formatting for lambdas with multiple parameters (#21879)
## Summary

This is a follow-up to #21868. As soon as I started merging #21868 into
#21385, I realized that I had missed a test case with `**kwargs` after
the `*args` parameter. Such a case is supposed to be formatted on one
line like:

```py
# input
(
    lambda
    # comment
    *x,
    **y: x
)

# output
(
    lambda
    # comment
    *x, **y: x
)
```

which you can still see on the
[playground](https://play.ruff.rs/bd88d339-1358-40d2-819f-865bfcb23aef?secondary=Format),
but on `main` after #21868, this was formatted as:

```py
(
    lambda
    # comment
    *x,
    **y: x
)
```

because the leading comment on the first parameter caused the whole
group around the parameters to break.

Instead of making these comments leading comments on the first
parameter, this PR makes them leading comments on the parameters list as
a whole.

## Test Plan

New tests, and I will also try merging this into #21385 _before_ opening
it for review this time.

<hr>

(labeling `internal` since #21868 should not be released before some
kind of fix)
2025-12-09 18:15:12 -05:00
David Peter
a9be810c38 [ty] Type inference for @asynccontextmanager (#21876)
## Summary

This PR adds special handling for `asynccontextmanager` calls as a
temporary solution for https://github.com/astral-sh/ty/issues/1804. We
will be able to remove this soon once we have support for generic
protocols in the solver.

closes https://github.com/astral-sh/ty/issues/1804

## Ecosystem

```diff
+ tests/test_downloadermiddleware.py:305:56: error[invalid-argument-type] Argument to bound method `download` is incorrect: Expected `Spider`, found `Unknown | Spider | None`
+ tests/test_downloadermiddleware.py:305:56: warning[possibly-missing-attribute] Attribute `spider` may be missing on object of type `Crawler | None`
```
These look like true positives

```diff
+ pymongo/asynchronous/database.py:1021:35: error[invalid-assignment] Object of type `(AsyncClientSession & ~AlwaysTruthy & ~AlwaysFalsy) | (_ServerMode & ~AlwaysFalsy) | Unknown | Primary` is not assignable to `_ServerMode | None`
+ pymongo/asynchronous/database.py:1025:17: error[invalid-argument-type] Argument to bound method `_conn_for_reads` is incorrect: Expected `_ServerMode`, found `_ServerMode | None`
```

Known problems or true positives, just caused by the new type for
`session`

```diff
- src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[Unknown, None] | Connection | AsyncConnection`
+ src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[AsyncConnection, None] | Connection | AsyncConnection`
```

Just a more concrete type

```diff
- src/prefect/flow_engine.py:1277:24: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/server/api/server.py:696:49: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/task_engine.py:1426:24: error[missing-argument] No argument provided for required parameter `cls`
```

Good

## Test Plan

* Adapted and newly added Markdown tests
* Tested on internal codebase
2025-12-09 22:49:00 +01:00
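The special-cased pattern is the standard-library decorator; a runnable sketch of the shape being inferred:

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def session():
    # With the special handling, ty can relate the yielded type here to the
    # type bound by `async with ... as s` below, instead of `Unknown`.
    yield "connected"


async def main():
    async with session() as s:
        return s


print(asyncio.run(main()))  # connected
```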
Brent Westbrook
0bec5c0362 Fix comment placement in lambda parameters (#21868)
Summary
--

This PR makes two changes to comment placement in lambda parameters.
First, we
now insert a line break if the first parameter has a leading comment:

```py
# input
(
    lambda
    * # comment 2
    x:
    x
)

# main
(
    lambda # comment 2
    *x: x
)

# this PR
(
    lambda
    # comment 2
    *x: x
)
```

Note the missing space in the output from main. This case is currently
unstable
on main. Also note that the new formatting is more consistent with our
stable
formatting in cases where the lambda has its own dangling comment:

```py
# input
(
    lambda # comment 1
    * # comment 2
    x:
    x
)

# output
(
    lambda  # comment 1
    # comment 2
    *x: x
)
```

and when a parameter without a comment precedes the split `*x`:

```py
# input
(
    lambda y,
    * # comment 2
    x:
    x
)

# output
(
    lambda y,
    # comment 2
    *x: x
)
```

This does change the stable formatting, but I think such cases are rare
(expecting zero hits in the ecosystem report), this fixes an existing
instability, and it should not change any code we've previously
formatted.

Second, this PR modifies the comment placement such that `# comment 2`
in these
outputs is still a leading comment on the parameter. This is also not
the case
on main, where it becomes a [dangling lambda
comment](https://play.ruff.rs/3b29bb7e-70e4-4365-88e0-e60fe1857a35?secondary=Comments).
This doesn't cause any
instability that I'm aware of on main, but it does cause problems when
trying to
adjust the placement of dangling lambda comments in #21385. Changing the
placement in this way should not affect any formatting here.

Test Plan
--

New lambda tests, plus existing tests covering the cases above with
multiple
comments around the parameters (see lambda.py 122-143, and 122-205 or so
more
broadly)

I also checked manually that the comments are now leading on the
parameter:

```shell
❯ cargo run --bin ruff_python_formatter -- --emit stdout --target-version 3.10 --print-comments <<EOF
(
    lambda
        # comment 2
    *x: x
)
EOF
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
     Running `target/debug/ruff_python_formatter --emit stdout --target-version 3.10 --print-comments`
# Comment decoration: Range, Preceding, Following, Enclosing, Comment
21..32, None, Some((Parameters, 37..39)), (ExprLambda, 6..42), "# comment 2"
{
    Node {
        kind: Parameter,
        range: 37..39,
        source: `*x`,
    }: {
        "leading": [
            SourceComment {
                text: "# comment 2",
                position: OwnLine,
                formatted: true,
            },
        ],
        "dangling": [],
        "trailing": [],
    },
}
(
    lambda
    # comment 2
    *x: x
)
```

But I didn't see a great place to put a test like this. Is there
somewhere I can assert this comment placement since it doesn't affect
any formatting yet? Or is it okay to wait until we use this in #21385?
2025-12-09 14:07:48 -05:00
Loïc Riegel
9490fbf1e1 [pylint] Detect subclasses of builtin exceptions (PLW0133) (#21382)

## Summary

Closes #17347

Goal is to detect the useless exception statement not just for builtin
exceptions but also custom (user defined) ones.

## Test Plan

I added test cases in the rule fixture and updated the insta snapshot.
Note that I first moved up a test case which was at the bottom to
the correct "violation category".
I wasn't sure if I should create new test cases or just insert inside
those tests. I know that ideally each test case should test only one
thing, but here, duplicating 12 test cases twice seemed very verbose,
and actually less maintainable in the future. The drawback is that the
diff in the snapshot is hard to review, sorry. But you can see that the
snapshot gives 38 diagnostics, which is what we expect.

Alternatively, I also created this file for manual testing.
```py
# tmp/test_error.py

class MyException(Exception):
    ...
class MyBaseException(BaseException):
    ...
class MyValueError(ValueError):
    ...
class MyExceptionCustom(Exception):
    ...
class MyBaseExceptionCustom(BaseException):
    ...
class MyValueErrorCustom(ValueError):
    ...
class MyDeprecationWarning(DeprecationWarning):
    ...
class MyDeprecationWarningCustom(MyDeprecationWarning):
    ...
class MyExceptionGroup(ExceptionGroup):
    ...
class MyExceptionGroupCustom(MyExceptionGroup):
    ...
class MyBaseExceptionGroup(ExceptionGroup):
    ...
class MyBaseExceptionGroupCustom(MyBaseExceptionGroup):
    ...


def foo():
    Exception("...")
    BaseException("...")
    ValueError("...")
    RuntimeError("...")
    DeprecationWarning("...")
    GeneratorExit("...")
    SystemExit("...")
    ExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    BaseExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    MyException("...")
    MyBaseException("...")
    MyValueError("...")
    MyExceptionCustom("...")
    MyBaseExceptionCustom("...")
    MyValueErrorCustom("...")
    MyDeprecationWarning("...")
    MyDeprecationWarningCustom("...")
    MyExceptionGroup("...")
    MyExceptionGroupCustom("...")
    MyBaseExceptionGroup("...")
    MyBaseExceptionGroupCustom("...")

```

and you can run this to check the PR:
```sh
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py310
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py314
```
2025-12-09 13:49:55 -05:00
Carl Meyer
8727a7b179 Fix stack overflow with recursive generic protocols (depth limit) (#21858)
## Summary

This fixes https://github.com/astral-sh/ty/issues/1736 where recursive
generic protocols with growing specializations caused a stack overflow.

The issue occurred with protocols like:
```python
class C[T](Protocol):
    a: 'C[set[T]]'
```

When checking `C[set[int]]` against e.g. `C[Unknown]`, member `a`
requires checking `C[set[set[int]]]`, which requires
`C[set[set[set[int]]]]`, etc. Each level has different type
specializations, so the existing cycle detection (using full types as
cache keys) didn't catch the infinite recursion.

This fix adds a simple recursion depth limit (64) to the CycleDetector.
When the depth exceeds the limit, we return the fallback value (assume
compatible) to safely terminate the recursion.

This is a bit of a blunt hammer, but it should be broadly effective to
prevent stack overflow in any nested-relation case, and it's hard to
imagine that non-recursive nested relation comparisons of depth > 64
exist much in the wild.

## Test Plan

Added mdtest.
2025-12-09 09:05:18 -08:00
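The depth-limit strategy can be sketched in a few lines (names are hypothetical; ty's `CycleDetector` is Rust code):

```python
class DepthLimitedCheck:
    MAX_DEPTH = 64  # the limit described above

    def __init__(self):
        self.depth = 0

    def run(self, f, fallback=True):
        # Past the limit, return the fallback ("assume compatible") instead
        # of recursing further; this terminates otherwise-unbounded checks.
        if self.depth >= self.MAX_DEPTH:
            return fallback
        self.depth += 1
        try:
            return f()
        finally:
            self.depth -= 1


checker = DepthLimitedCheck()


def is_assignable():
    # Stands in for checking C[set[T]], which recurses into C[set[set[T]]], etc.
    return checker.run(is_assignable)


print(is_assignable())  # True: the fallback fires at depth 64
```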
Amethyst Reese
4e4d018344 New diagnostics for unused range suppressions (#21783)
Issue #3711
2025-12-09 08:30:27 -08:00
Andrew Gallant
a9899af98a [ty] Use default settings in completion tests
This makes it so the test and production environments match.

Ref https://github.com/astral-sh/ruff/pull/21851#discussion_r2601579316
2025-12-09 10:42:46 -05:00
David Peter
aea2bc2308 [ty] Infer type variables within generic unions (#21862)
## Summary

This PR allows our generics solver to find a solution for `T` in cases
like the following:
```py
def extract_t[T](x: P[T] | Q[T]) -> T:
    raise NotImplementedError

reveal_type(extract_t(P[int]()))  # revealed: int
reveal_type(extract_t(Q[str]()))  # revealed: str
```

closes https://github.com/astral-sh/ty/issues/1772
closes https://github.com/astral-sh/ty/issues/1314

## Ecosystem

The impact here looks very good!

It took me a long time to figure this out, but the new diagnostics on
bokeh are actually true positives. I should have tested with another
type-checker immediately, I guess. All other type checkers also emit
errors on these `__init__` calls. MRE
[here](https://play.ty.dev/5c19d260-65e2-4f70-a75e-1a25780843a2) (no
error on main, diagnostic on this branch)

A lot of false positives on home-assistant go away for calls to
functions like
[`async_listen`](180053fe98/homeassistant/core.py (L1581-L1587))
which take a `event_type: EventType[_DataT] | str` parameter. We can now
solve for `_DataT` here, which was previously falling back to its
default value, and then caused problems because it was used as an
argument to an invariant generic class.

## Test Plan

New Markdown tests
2025-12-09 16:22:59 +01:00
Dhruv Manilawala
c35bf8f441 [ty] Fix overload filtering to prefer more "precise" match (#21859)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1809

I took this chance to add some debug level tracing logs for overload
call evaluation similar to Doug's implementation in `constraints.rs`.

## Test Plan

- Add new mdtests
- Tested it against `sqlalchemy.select` in pyx which results in the
correct overload being matched
2025-12-09 20:29:34 +05:30
Andrew Gallant
426125f5c0 [ty] Stabilize auto-import
While still under development, it's far enough along now that we think
it's worth enabling it by default. This should also help give us
feedback for how it behaves.

This PR adds a "completion settings" grouping similar to inlay hints. We
only have an auto-import setting there now, but I expect we'll add more
options to configure completion behavior in the future.

Closes astral-sh/ty#1765
2025-12-09 09:40:38 -05:00
Micha Reiser
a0b18bc153 [ty] Fix reveal-type E2E test (#21865) 2025-12-09 14:08:22 +01:00
Micha Reiser
11901384b4 [ty] Use concise message for LSP clients not supporting related diagnostic information (#21850) 2025-12-09 13:18:30 +01:00
Micha Reiser
dc2f0a86fd Include more details in Tokens 'offset is inside token' panic message (#21860) 2025-12-09 11:12:35 +01:00
Amethyst Reese
4e67a219bb apply range suppressions to filter diagnostics (#21623)
Builds on range suppressions from
https://github.com/astral-sh/ruff/pull/21441

Filters diagnostics based on parsed valid range suppressions.

Issue: #3711
2025-12-08 16:11:59 -08:00
Aria Desires
8ea18966cf [ty] followup: add-import action for reveal_type too (#21668) 2025-12-08 22:44:17 +00:00
Rasmus Nygren
e548ce1ca9 [ty] Enrich function argument auto-complete suggestions with annotated types 2025-12-08 14:19:44 -05:00
Rasmus Nygren
eac8a90cc4 [ty] Add autocomplete suggestions for function arguments
This adds autocomplete suggestions for function arguments. For example,
`okay` in:

```python
def foo(okay=None):

foo(o<CURSOR>
```

This also ensures that we don't suggest a keyword argument if it has
already been used.

Closes astral-sh/issues#1550
2025-12-08 14:19:44 -05:00
Loïc Riegel
2d3466eccf [flake8-bugbear] Accept immutable slice default arguments (B008) (#21823)
Closes issue #21565

## Summary

As pointed out in the issue, slices are currently flagged by B008 but
this behavior is incorrect because slices are immutable.

## Test Plan

Added a test case in the "B006_B008.py" fixture. Sorry for the diff in
the snapshots, the only thing that changes in those files is the line
numbers, though.

You can also test this manually with this file:
```py
# test_slice.py
def c(d=slice(0, 3)): ...
```

```sh
> target/debug/ruff check tmp/test_slice.py --no-cache --select B008
All checks passed!
```
2025-12-08 14:00:43 -05:00
Phong Do
45fb3732a4 [pydocstyle] Suppress D417 for parameters with Unpack annotations (#21816)

## Summary

Fixes https://github.com/astral-sh/ruff/issues/8774

This PR fixes `pydocstyle` incorrectly flagging missing argument for
arguments with `Unpack` type annotation by extracting the `kwarg` `D417`
suppression logic into a helper function for future rules as needed.

## Problem Statement

The below example was incorrectly triggering `D417` error for missing
`**kwargs` doc.

```python
class User(TypedDict):
    id: int
    name: str

def do_something(some_arg: str, **kwargs: Unpack[User]):
    """Some doc
    
    Args:
        some_arg: Some argument
    """
```

<img width="1135" height="276" alt="image"
src="https://github.com/user-attachments/assets/42fa4bb9-61a5-4a70-a79c-0c8922a3ee66"
/>

`**kwargs: Unpack[User]` indicates the function expects keyword
arguments that will be unpacked. Ideally, the individual fields of the
User `TypedDict` should be documented, not in the `**kwargs` itself. The
`**kwargs` parameter acts as a semantic grouping rather than a parameter
requiring documentation.

## Solution

As discussed in the linked issue, it makes sense to suppress the `D417`
for parameters with `Unpack` annotation. I extract a helper function to
solely check `D417` should be suppressed with `**kwarg: Unpack[T]`
parameter, this function can also be unit tested independently and
reduce complexity of current `missing_args` check function. This also
makes it easier to add additional rules in the future.

_✏️ Note:_ This is my first PR in this repo, as I've learned a ton from
it, please call out anything that could be improved. Thanks for making
this excellent tool 👏

## Test Plan

Add 2 test cases in `D417.py` and update snapshots.

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-08 19:00:05 +00:00
350 changed files with 11981 additions and 8154 deletions


@@ -298,7 +298,7 @@ jobs:
 # sync, not just public items. Eventually we should do this for all
 # crates; for now add crates here as they are warning-clean to prevent
 # regression.
-- run: cargo doc --no-deps -p ty_python_semantic -p ty -p ty_test -p ruff_db --document-private-items
+- run: cargo doc --no-deps -p ty_python_semantic -p ty -p ty_test -p ruff_db -p ruff_python_formatter --document-private-items
 env:
 # Setting RUSTDOCFLAGS because `cargo doc --check` isn't yet implemented (https://github.com/rust-lang/cargo/issues/10025).
 RUSTDOCFLAGS: "-D warnings"


@@ -47,6 +47,7 @@ jobs:
 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
 with:
+shared-key: "mypy-primer"
 workspaces: "ruff"
 - name: Install Rust toolchain
@@ -86,6 +87,7 @@ jobs:
 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
 with:
 workspaces: "ruff"
+shared-key: "mypy-primer"
 - name: Install Rust toolchain
 run: rustup show
@@ -105,3 +107,54 @@ jobs:
 with:
 name: mypy_primer_memory_diff
 path: mypy_primer_memory.diff
+# Runs mypy twice against the same ty version to catch any non-deterministic behavior (ideally).
+# The job is disabled for now because there are some non-deterministic diagnostics.
+mypy_primer_same_revision:
+name: Run mypy_primer on same revision
+runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-22.04-32' || 'ubuntu-latest' }}
+timeout-minutes: 20
+# TODO: Enable once we fixed the non-deterministic diagnostics
+if: false
+steps:
+- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+with:
+path: ruff
+fetch-depth: 0
+persist-credentials: false
+- name: Install the latest version of uv
+uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
+- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
+with:
+workspaces: "ruff"
+shared-key: "mypy-primer"
+- name: Install Rust toolchain
+run: rustup show
+- name: Run determinism check
+env:
+BASE_REVISION: ${{ github.event.pull_request.head.sha }}
+PRIMER_SELECTOR: crates/ty_python_semantic/resources/primer/good.txt
+CLICOLOR_FORCE: "1"
+DIFF_FILE: mypy_primer_determinism.diff
+run: |
+cd ruff
+scripts/mypy_primer.sh
+- name: Check for non-determinism
+run: |
+# Remove ANSI color codes for checking
+sed -e 's/\x1b\[[0-9;]*m//g' mypy_primer_determinism.diff > mypy_primer_determinism_clean.diff
+# Check if there are any differences (non-determinism)
+if [ -s mypy_primer_determinism_clean.diff ]; then
+echo "ERROR: Non-deterministic output detected!"
+echo "The following differences were found when running ty twice on the same commit:"
+cat mypy_primer_determinism_clean.diff
+exit 1
+else
+echo "✓ Output is deterministic"
+fi

Cargo.lock (generated)

@@ -1016,7 +1016,7 @@ dependencies = [
 "libc",
 "option-ext",
 "redox_users",
-"windows-sys 0.59.0",
+"windows-sys 0.61.0",
 ]
 [[package]]
@@ -1108,7 +1108,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
 dependencies = [
 "libc",
-"windows-sys 0.59.0",
+"windows-sys 0.61.0",
 ]
 [[package]]
@@ -1238,9 +1238,9 @@ dependencies = [
 [[package]]
 name = "get-size-derive2"
-version = "0.7.2"
+version = "0.7.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ff47daa61505c85af126e9dd64af6a342a33dc0cccfe1be74ceadc7d352e6efd"
+checksum = "ab21d7bd2c625f2064f04ce54bcb88bc57c45724cde45cba326d784e22d3f71a"
 dependencies = [
 "attribute-derive",
 "quote",
@@ -1249,14 +1249,15 @@ dependencies = [
 [[package]]
 name = "get-size2"
-version = "0.7.2"
+version = "0.7.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ac7bb8710e1f09672102be7ddf39f764d8440ae74a9f4e30aaa4820dcdffa4af"
+checksum = "879272b0de109e2b67b39fcfe3d25fdbba96ac07e44a254f5a0b4d7ff55340cb"
 dependencies = [
 "compact_str",
 "get-size-derive2",
 "hashbrown 0.16.1",
 "indexmap",
+"ordermap",
 "smallvec",
 ]
@@ -1763,7 +1764,7 @@ dependencies = [
 "portable-atomic",
 "portable-atomic-util",
 "serde_core",
-"windows-sys 0.59.0",
+"windows-sys 0.61.0",
 ]
 [[package]]
@@ -2233,9 +2234,9 @@ checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d"
 [[package]]
 name = "ordermap"
-version = "0.5.12"
+version = "1.0.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b100f7dd605611822d30e182214d3c02fdefce2d801d23993f6b6ba6ca1392af"
+checksum = "ed637741ced8fb240855d22a2b4f208dab7a06bcce73380162e5253000c16758"
 dependencies = [
 "indexmap",
 "serde",
@@ -3348,6 +3349,7 @@ dependencies = [
 "compact_str",
 "get-size2",
 "insta",
+"itertools 0.14.0",
 "memchr",
 "ruff_annotate_snippets",
 "ruff_python_ast",
@@ -3571,7 +3573,7 @@ dependencies = [
 "errno",
 "libc",
 "linux-raw-sys",
-"windows-sys 0.59.0",
+"windows-sys 0.61.0",
 ]
 [[package]]
@@ -3589,7 +3591,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
 [[package]]
 name = "salsa"
 version = "0.24.0"
-source = "git+https://github.com/salsa-rs/salsa.git?rev=59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0#59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0"
+source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
dependencies = [
"boxcar",
"compact_str",
@@ -3600,6 +3602,7 @@ dependencies = [
"indexmap",
"intrusive-collections",
"inventory",
"ordermap",
"parking_lot",
"portable-atomic",
"rustc-hash",
@@ -3613,12 +3616,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0#59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
[[package]]
name = "salsa-macros"
version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0#59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
dependencies = [
"proc-macro2",
"quote",
@@ -3972,7 +3975,7 @@ dependencies = [
"getrandom 0.3.4",
"once_cell",
"rustix",
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]
@@ -5026,7 +5029,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]


@@ -88,7 +88,7 @@ etcetera = { version = "0.11.0" }
fern = { version = "0.7.0" }
filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" }
get-size2 = { version = "0.7.0", features = [
get-size2 = { version = "0.7.3", features = [
"derive",
"smallvec",
"hashbrown",
@@ -129,7 +129,7 @@ memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" }
notify = { version = "8.0.0" }
ordermap = { version = "0.5.0" }
ordermap = { version = "1.0.0" }
path-absolutize = { version = "3.1.1" }
path-slash = { version = "0.2.1" }
pathdiff = { version = "0.2.1" }
@@ -146,7 +146,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "55e5e7d32fa3fc189276f35bb04c9438f9aedbd1", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",


@@ -1440,6 +1440,78 @@ def function():
Ok(())
}
#[test]
fn ignore_noqa() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
import os # noqa: F401
# ruff: disable[F401]
import sys
"#,
)?;
// without --ignore-noqa
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py"),
@r"
success: false
exit_code: 1
----- stdout -----
noqa.py:5:8: F401 [*] `sys` imported but unused
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
");
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.args(["--preview"]),
@r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
");
// with --ignore-noqa --preview
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.args(["--ignore-noqa", "--preview"]),
@r"
success: false
exit_code: 1
----- stdout -----
noqa.py:2:8: F401 [*] `os` imported but unused
noqa.py:5:8: F401 [*] `sys` imported but unused
Found 2 errors.
[*] 2 fixable with the `--fix` option.
----- stderr -----
");
Ok(())
}
#[test]
fn add_noqa() -> Result<()> {
let fixture = CliTest::new()?;
@@ -1632,6 +1704,100 @@ def unused(x): # noqa: ANN001, ARG001, D103
Ok(())
}
#[test]
fn add_noqa_existing_file_level_noqa() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: noqa F401
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: noqa F401
import os
");
Ok(())
}
#[test]
fn add_noqa_existing_range_suppression() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: disable[F401]
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: disable[F401]
import os
");
Ok(())
}
#[test]
fn add_noqa_multiline_comment() -> Result<()> {
let fixture = CliTest::new()?;


@@ -194,7 +194,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
13000,
13030,
);
static TANJUN: Benchmark = Benchmark::new(


@@ -42,13 +42,14 @@ impl<'a> Collector<'a> {
impl<'ast> SourceOrderVisitor<'ast> for Collector<'_> {
fn visit_stmt(&mut self, stmt: &'ast Stmt) {
match stmt {
Stmt::ImportFrom(ast::StmtImportFrom {
names,
module,
level,
range: _,
node_index: _,
}) => {
Stmt::ImportFrom(import_from) => {
let ast::StmtImportFrom {
names,
module,
level,
range: _,
node_index: _,
} = &**import_from;
let module = module.as_deref();
let level = *level;
for alias in names {
@@ -87,24 +88,26 @@ impl<'ast> SourceOrderVisitor<'ast> for Collector<'_> {
}
}
}
Stmt::Import(ast::StmtImport {
names,
range: _,
node_index: _,
}) => {
Stmt::Import(import_stmt) => {
let ast::StmtImport {
names,
range: _,
node_index: _,
} = &**import_stmt;
for alias in names {
if let Some(module_name) = ModuleName::new(alias.name.as_str()) {
self.imports.push(CollectedImport::Import(module_name));
}
}
}
Stmt::If(ast::StmtIf {
test,
body,
elif_else_clauses,
range: _,
node_index: _,
}) => {
Stmt::If(if_stmt) => {
let ast::StmtIf {
test,
body,
elif_else_clauses,
range: _,
node_index: _,
} = &**if_stmt;
// Skip TYPE_CHECKING blocks if not requested
if self.type_checking_imports || !is_type_checking_condition(test) {
self.visit_body(body);


@@ -199,6 +199,9 @@ def bytes_okay(value=bytes(1)):
def int_okay(value=int("12")):
pass
# Allow immutable slice()
def slice_okay(value=slice(1,2)):
pass
# Allow immutable complex() value
def complex_okay(value=complex(1,2)):


@@ -218,3 +218,26 @@ def should_not_fail(payload, Args):
Args:
The other arguments.
"""
# Test cases for Unpack[TypedDict] kwargs
from typing import TypedDict
from typing_extensions import Unpack
class User(TypedDict):
id: int
name: str
def function_with_unpack_args_should_not_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs.
Args:
query: some arg
"""
def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs but missing query arg documentation.
Args:
**kwargs: keyword arguments
"""


@@ -2,15 +2,40 @@ from abc import ABC, abstractmethod
from contextlib import suppress
class MyError(Exception):
...
class MySubError(MyError):
...
class MyValueError(ValueError):
...
class MyUserWarning(UserWarning):
...
# Violation test cases with builtin errors: PLW0133
# Test case 1: Useless exception statement
def func():
AssertionError("This is an assertion error") # PLW0133
MyError("This is a custom error") # PLW0133
MySubError("This is a custom error") # PLW0133
MyValueError("This is a custom value error") # PLW0133
# Test case 2: Useless exception statement in try-except block
def func():
try:
Exception("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
except Exception as err:
pass
@@ -19,6 +44,9 @@ def func():
def func():
if True:
RuntimeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 4: Useless exception statement in class
@@ -26,12 +54,18 @@ def func():
class Class:
def __init__(self):
TypeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 5: Useless exception statement in function
def func():
def inner():
IndexError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
inner()
@@ -40,6 +74,9 @@ def func():
def func():
while True:
KeyError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 7: Useless exception statement in abstract class
@@ -48,27 +85,58 @@ def func():
@abstractmethod
def method(self):
NotImplementedError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 8: Useless exception statement inside context manager
def func():
with suppress(AttributeError):
with suppress(Exception):
AttributeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 9: Useless exception statement in parentheses
def func():
(RuntimeError("This is an exception")) # PLW0133
(MyError("This is an exception")) # PLW0133
(MySubError("This is an exception")) # PLW0133
(MyValueError("This is an exception")) # PLW0133
# Test case 10: Useless exception statement in continuation
def func():
x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
x = 1; (MyError("This is an exception")); y = 2 # PLW0133
x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
# Test case 11: Useless warning statement
def func():
UserWarning("This is an assertion error") # PLW0133
UserWarning("This is a user warning") # PLW0133
MyUserWarning("This is a custom user warning") # PLW0133
# Test case 12: Useless exception statement at module level
import builtins
builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
PythonFinalizationError("Added in Python 3.13") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
UserWarning("This is a user warning") # PLW0133
MyUserWarning("This is a custom user warning") # PLW0133
# Non-violation test cases: PLW0133
@@ -119,10 +187,3 @@ def func():
def func():
with suppress(AttributeError):
raise AttributeError("This is an exception") # OK
import builtins
builtins.TypeError("still an exception even though it's an Attribute")
PythonFinalizationError("Added in Python 3.13")


@@ -0,0 +1,88 @@
def f():
# These should both be ignored by the range suppression.
# ruff: disable[E741, F841]
I = 1
# ruff: enable[E741, F841]
def f():
# These should both be ignored by the implicit range suppression.
# Should also generate an "unmatched suppression" warning.
# ruff:disable[E741,F841]
I = 1
def f():
# Neither warning is ignored, and an "unmatched suppression"
# should be generated.
I = 1
# ruff: enable[E741, F841]
def f():
# One should be ignored by the range suppression, and
# the other logged to the user.
# ruff: disable[E741]
I = 1
# ruff: enable[E741]
def f():
# Test interleaved range suppressions. The first and last
# lines should each log a different warning, while the
# middle line should be completely silenced.
# ruff: disable[E741]
l = 0
# ruff: disable[F841]
O = 1
# ruff: enable[E741]
I = 2
# ruff: enable[F841]
def f():
# Neither of these are ignored and warnings are
# logged to user
# ruff: disable[E501]
I = 1
# ruff: enable[E501]
def f():
# These should both be ignored by the range suppression,
# and an unused noqa diagnostic should be logged.
# ruff:disable[E741,F841]
I = 1 # noqa: E741,F841
# ruff:enable[E741,F841]
def f():
# TODO: Duplicate codes should be counted as duplicate, not unused
# ruff: disable[F841, F841]
foo = 0
def f():
# Overlapping range suppressions, one should be marked as used,
# and the other should trigger an unused suppression diagnostic
# ruff: disable[F841]
# ruff: disable[F841]
foo = 0
def f():
# Multiple codes but only one is used
# ruff: disable[E741, F401, F841]
foo = 0
def f():
# Multiple codes but only two are used
# ruff: disable[E741, F401, F841]
I = 0
def f():
# Multiple codes but none are used
# ruff: disable[E741, F401, F841]
print("hello")


@@ -17,11 +17,12 @@ use ruff_python_ast::PythonVersion;
/// Run lint rules over a [`Stmt`] syntax node.
pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
match stmt {
Stmt::Global(ast::StmtGlobal {
names,
range: _,
node_index: _,
}) => {
Stmt::Global(global) => {
let ast::StmtGlobal {
names,
range: _,
node_index: _,
} = &**global;
if checker.is_rule_enabled(Rule::GlobalAtModuleLevel) {
pylint::rules::global_at_module_level(checker, stmt);
}
@@ -31,13 +32,12 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
}
Stmt::Nonlocal(
nonlocal @ ast::StmtNonlocal {
Stmt::Nonlocal(nonlocal) => {
let ast::StmtNonlocal {
names,
range: _,
node_index: _,
},
) => {
} = &**nonlocal;
if checker.is_rule_enabled(Rule::AmbiguousVariableName) {
for name in names {
pycodestyle::rules::ambiguous_variable_name(checker, name, name.range());
@@ -47,8 +47,8 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::nonlocal_and_global(checker, nonlocal);
}
}
Stmt::FunctionDef(
function_def @ ast::StmtFunctionDef {
Stmt::FunctionDef(function_def) => {
let ast::StmtFunctionDef {
is_async,
name,
decorator_list,
@@ -58,8 +58,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
type_params: _,
range: _,
node_index: _,
},
) => {
} = &**function_def;
if checker.is_rule_enabled(Rule::DjangoNonLeadingReceiverDecorator) {
flake8_django::rules::non_leading_receiver_decorator(checker, decorator_list);
}
@@ -321,7 +320,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::in_function(checker, name, body);
}
if checker.is_rule_enabled(Rule::ReimplementedOperator) {
refurb::rules::reimplemented_operator(checker, &function_def.into());
refurb::rules::reimplemented_operator(checker, &(&**function_def).into());
}
if checker.is_rule_enabled(Rule::SslWithBadDefaults) {
flake8_bandit::rules::ssl_with_bad_defaults(checker, function_def);
@@ -356,8 +355,8 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::return_in_init(checker, stmt);
}
}
Stmt::ClassDef(
class_def @ ast::StmtClassDef {
Stmt::ClassDef(class_def) => {
let ast::StmtClassDef {
name,
arguments,
type_params: _,
@@ -365,8 +364,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
body,
range: _,
node_index: _,
},
) => {
} = &**class_def;
if checker.is_rule_enabled(Rule::NoClassmethodDecorator) {
pylint::rules::no_classmethod_decorator(checker, stmt);
}
@@ -526,11 +524,12 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
ruff::rules::implicit_class_var_in_dataclass(checker, class_def);
}
}
Stmt::Import(ast::StmtImport {
names,
range: _,
node_index: _,
}) => {
Stmt::Import(import) => {
let ast::StmtImport {
names,
range: _,
node_index: _,
} = &**import;
if checker.is_rule_enabled(Rule::MultipleImportsOnOneLine) {
pycodestyle::rules::multiple_imports_on_one_line(checker, stmt, names);
}
@@ -578,7 +577,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_tidy_imports::rules::banned_module_level_imports(checker, stmt);
}
for alias in names {
for alias in &import.names {
if checker.is_rule_enabled(Rule::NonAsciiImportName) {
pylint::rules::non_ascii_module_import(checker, alias);
}
@@ -604,7 +603,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
if checker.is_rule_enabled(Rule::ManualFromImport) {
pylint::rules::manual_from_import(checker, stmt, alias, names);
pylint::rules::manual_from_import(checker, stmt, alias, &import.names);
}
if checker.is_rule_enabled(Rule::ImportSelf) {
pylint::rules::import_self(checker, alias, checker.module.qualified_name());
@@ -681,17 +680,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
}
Stmt::ImportFrom(
import_from @ ast::StmtImportFrom {
names,
module,
level,
range: _,
node_index: _,
},
) => {
let level = *level;
let module = module.as_deref();
Stmt::ImportFrom(import_from) => {
let level = import_from.level;
let module = import_from.module.as_deref();
if checker.is_rule_enabled(Rule::ModuleImportNotAtTopOfFile) {
pycodestyle::rules::module_import_not_at_top_of_file(checker, stmt);
}
@@ -699,7 +690,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::import_outside_top_level(checker, stmt);
}
if checker.is_rule_enabled(Rule::GlobalStatement) {
for name in names {
for name in &import_from.names {
if let Some(asname) = name.asname.as_ref() {
pylint::rules::global_statement(checker, asname);
} else {
@@ -708,7 +699,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
if checker.is_rule_enabled(Rule::NonAsciiImportName) {
for alias in names {
for alias in &import_from.names {
pylint::rules::non_ascii_module_import(checker, alias);
}
}
@@ -724,7 +715,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.is_rule_enabled(Rule::UnnecessaryBuiltinImport) {
if let Some(module) = module {
pyupgrade::rules::unnecessary_builtin_import(
checker, stmt, module, names, level,
checker, stmt, module, &import_from.names, level,
);
}
}
@@ -760,7 +751,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
&stmt,
);
for alias in names {
for alias in &import_from.names {
if &alias.name == "*" {
continue;
}
@@ -789,7 +780,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_pyi::rules::from_future_import(checker, import_from);
}
}
for alias in names {
for alias in &import_from.names {
if module != Some("__future__") && &alias.name == "*" {
// F403
checker.report_diagnostic_if_enabled(
@@ -890,7 +881,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
checker,
level,
module,
names,
&import_from.names,
checker.module.qualified_name(),
);
}
@@ -906,14 +897,14 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_pyi::rules::bytestring_import(checker, import_from);
}
}
Stmt::Raise(raise @ ast::StmtRaise { exc, .. }) => {
Stmt::Raise(raise) => {
if checker.is_rule_enabled(Rule::RaiseNotImplemented) {
if let Some(expr) = exc {
if let Some(expr) = &raise.exc {
pyflakes::rules::raise_not_implemented(checker, expr);
}
}
if checker.is_rule_enabled(Rule::RaiseLiteral) {
if let Some(exc) = exc {
if let Some(exc) = &raise.exc {
flake8_bugbear::rules::raise_literal(checker, exc);
}
}
@@ -922,34 +913,34 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
Rule::FStringInException,
Rule::DotFormatInException,
]) {
if let Some(exc) = exc {
if let Some(exc) = &raise.exc {
flake8_errmsg::rules::string_in_exception(checker, stmt, exc);
}
}
if checker.is_rule_enabled(Rule::OSErrorAlias) {
if let Some(item) = exc {
if let Some(item) = &raise.exc {
pyupgrade::rules::os_error_alias_raise(checker, item);
}
}
if checker.is_rule_enabled(Rule::TimeoutErrorAlias) {
if checker.target_version() >= PythonVersion::PY310 {
if let Some(item) = exc {
if let Some(item) = &raise.exc {
pyupgrade::rules::timeout_error_alias_raise(checker, item);
}
}
}
if checker.is_rule_enabled(Rule::RaiseVanillaClass) {
if let Some(expr) = exc {
if let Some(expr) = &raise.exc {
tryceratops::rules::raise_vanilla_class(checker, expr);
}
}
if checker.is_rule_enabled(Rule::RaiseVanillaArgs) {
if let Some(expr) = exc {
if let Some(expr) = &raise.exc {
tryceratops::rules::raise_vanilla_args(checker, expr);
}
}
if checker.is_rule_enabled(Rule::UnnecessaryParenOnRaiseException) {
if let Some(expr) = exc {
if let Some(expr) = &raise.exc {
flake8_raise::rules::unnecessary_paren_on_raise_exception(checker, expr);
}
}
@@ -957,9 +948,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::misplaced_bare_raise(checker, raise);
}
}
Stmt::AugAssign(aug_assign @ ast::StmtAugAssign { target, .. }) => {
Stmt::AugAssign(aug_assign) => {
if checker.is_rule_enabled(Rule::GlobalStatement) {
if let Expr::Name(ast::ExprName { id, .. }) = target.as_ref() {
if let Expr::Name(ast::ExprName { id, .. }) = aug_assign.target.as_ref() {
pylint::rules::global_statement(checker, id);
}
}
@@ -967,13 +958,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
ruff::rules::sort_dunder_all_aug_assign(checker, aug_assign);
}
}
Stmt::If(
if_ @ ast::StmtIf {
test,
elif_else_clauses,
..
},
) => {
Stmt::If(if_) => {
if checker.is_rule_enabled(Rule::TooManyNestedBlocks) {
pylint::rules::too_many_nested_blocks(checker, stmt);
}
@@ -1036,33 +1021,33 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
Rule::PatchVersionComparison,
Rule::WrongTupleLengthVersionComparison,
]) {
if let Expr::BoolOp(ast::ExprBoolOp { values, .. }) = test.as_ref() {
if let Expr::BoolOp(ast::ExprBoolOp { values, .. }) = if_.test.as_ref() {
for value in values {
flake8_pyi::rules::unrecognized_version_info(checker, value);
}
} else {
flake8_pyi::rules::unrecognized_version_info(checker, test);
flake8_pyi::rules::unrecognized_version_info(checker, &if_.test);
}
}
if checker.any_rule_enabled(&[
Rule::UnrecognizedPlatformCheck,
Rule::UnrecognizedPlatformName,
]) {
if let Expr::BoolOp(ast::ExprBoolOp { values, .. }) = test.as_ref() {
if let Expr::BoolOp(ast::ExprBoolOp { values, .. }) = if_.test.as_ref() {
for value in values {
flake8_pyi::rules::unrecognized_platform(checker, value);
}
} else {
flake8_pyi::rules::unrecognized_platform(checker, test);
flake8_pyi::rules::unrecognized_platform(checker, &if_.test);
}
}
if checker.is_rule_enabled(Rule::ComplexIfStatementInStub) {
if let Expr::BoolOp(ast::ExprBoolOp { values, .. }) = test.as_ref() {
if let Expr::BoolOp(ast::ExprBoolOp { values, .. }) = if_.test.as_ref() {
for value in values {
flake8_pyi::rules::complex_if_statement_in_stub(checker, value);
}
} else {
flake8_pyi::rules::complex_if_statement_in_stub(checker, test);
flake8_pyi::rules::complex_if_statement_in_stub(checker, &if_.test);
}
}
}
@@ -1091,10 +1076,10 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
let has_else_clause = elif_else_clauses.iter().any(|clause| clause.test.is_none());
let has_else_clause = if_.elif_else_clauses.iter().any(|clause| clause.test.is_none());
bad_version_info_comparison(checker, test.as_ref(), has_else_clause);
for clause in elif_else_clauses {
bad_version_info_comparison(checker, if_.test.as_ref(), has_else_clause);
for clause in &if_.elif_else_clauses {
if let Some(test) = clause.test.as_ref() {
bad_version_info_comparison(checker, test, has_else_clause);
}
@@ -1105,44 +1090,37 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
ruff::rules::if_key_in_dict_del(checker, if_);
}
if checker.is_rule_enabled(Rule::NeedlessElse) {
ruff::rules::needless_else(checker, if_.into());
ruff::rules::needless_else(checker, (&**if_).into());
}
}
Stmt::Assert(
assert_stmt @ ast::StmtAssert {
test,
msg,
range: _,
node_index: _,
},
) => {
Stmt::Assert(assert_stmt) => {
if !checker.semantic.in_type_checking_block() {
if checker.is_rule_enabled(Rule::Assert) {
flake8_bandit::rules::assert_used(checker, stmt);
}
}
if checker.is_rule_enabled(Rule::AssertTuple) {
pyflakes::rules::assert_tuple(checker, stmt, test);
pyflakes::rules::assert_tuple(checker, stmt, &assert_stmt.test);
}
if checker.is_rule_enabled(Rule::AssertFalse) {
flake8_bugbear::rules::assert_false(checker, stmt, test, msg.as_deref());
flake8_bugbear::rules::assert_false(checker, stmt, &assert_stmt.test, assert_stmt.msg.as_deref());
}
if checker.is_rule_enabled(Rule::PytestAssertAlwaysFalse) {
flake8_pytest_style::rules::assert_falsy(checker, stmt, test);
flake8_pytest_style::rules::assert_falsy(checker, stmt, &assert_stmt.test);
}
if checker.is_rule_enabled(Rule::PytestCompositeAssertion) {
flake8_pytest_style::rules::composite_condition(
checker,
stmt,
test,
msg.as_deref(),
&assert_stmt.test,
assert_stmt.msg.as_deref(),
);
}
if checker.is_rule_enabled(Rule::AssertOnStringLiteral) {
pylint::rules::assert_on_string_literal(checker, test);
pylint::rules::assert_on_string_literal(checker, &assert_stmt.test);
}
if checker.is_rule_enabled(Rule::InvalidMockAccess) {
pygrep_hooks::rules::non_existent_mock_method(checker, test);
pygrep_hooks::rules::non_existent_mock_method(checker, &assert_stmt.test);
}
if checker.is_rule_enabled(Rule::AssertWithPrintMessage) {
ruff::rules::assert_with_print_message(checker, assert_stmt);
@@ -1151,18 +1129,18 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
ruff::rules::invalid_assert_message_literal_argument(checker, assert_stmt);
}
}
Stmt::With(with_stmt @ ast::StmtWith { items, body, .. }) => {
Stmt::With(with_stmt) => {
if checker.is_rule_enabled(Rule::TooManyNestedBlocks) {
pylint::rules::too_many_nested_blocks(checker, stmt);
}
if checker.is_rule_enabled(Rule::AssertRaisesException) {
flake8_bugbear::rules::assert_raises_exception(checker, items);
flake8_bugbear::rules::assert_raises_exception(checker, &with_stmt.items);
}
if checker.is_rule_enabled(Rule::PytestRaisesWithMultipleStatements) {
flake8_pytest_style::rules::complex_raises(checker, stmt, items, body);
flake8_pytest_style::rules::complex_raises(checker, stmt, &with_stmt.items, &with_stmt.body);
}
if checker.is_rule_enabled(Rule::PytestWarnsWithMultipleStatements) {
flake8_pytest_style::rules::complex_warns(checker, stmt, items, body);
flake8_pytest_style::rules::complex_warns(checker, stmt, &with_stmt.items, &with_stmt.body);
}
if checker.is_rule_enabled(Rule::MultipleWithStatements) {
flake8_simplify::rules::multiple_with_statements(
@@ -1184,10 +1162,10 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::useless_with_lock(checker, with_stmt);
}
if checker.is_rule_enabled(Rule::CancelScopeNoCheckpoint) {
flake8_async::rules::cancel_scope_no_checkpoint(checker, with_stmt, items);
flake8_async::rules::cancel_scope_no_checkpoint(checker, with_stmt, &with_stmt.items);
}
}
Stmt::While(while_stmt @ ast::StmtWhile { body, orelse, .. }) => {
Stmt::While(while_stmt) => {
if checker.is_rule_enabled(Rule::TooManyNestedBlocks) {
pylint::rules::too_many_nested_blocks(checker, stmt);
}
@@ -1195,29 +1173,19 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_bugbear::rules::function_uses_loop_variable(checker, &Node::Stmt(stmt));
}
if checker.is_rule_enabled(Rule::UselessElseOnLoop) {
pylint::rules::useless_else_on_loop(checker, stmt, body, orelse);
pylint::rules::useless_else_on_loop(checker, stmt, &while_stmt.body, &while_stmt.orelse);
}
if checker.is_rule_enabled(Rule::TryExceptInLoop) {
perflint::rules::try_except_in_loop(checker, body);
perflint::rules::try_except_in_loop(checker, &while_stmt.body);
}
if checker.is_rule_enabled(Rule::AsyncBusyWait) {
flake8_async::rules::async_busy_wait(checker, while_stmt);
}
if checker.is_rule_enabled(Rule::NeedlessElse) {
ruff::rules::needless_else(checker, while_stmt.into());
ruff::rules::needless_else(checker, (&**while_stmt).into());
}
}
Stmt::For(
for_stmt @ ast::StmtFor {
target,
body,
iter,
orelse,
is_async,
range: _,
node_index: _,
},
) => {
Stmt::For(for_stmt) => {
if checker.is_rule_enabled(Rule::TooManyNestedBlocks) {
pylint::rules::too_many_nested_blocks(checker, stmt);
}
@@ -1235,25 +1203,25 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
checker.analyze.for_loops.push(checker.semantic.snapshot());
}
if checker.is_rule_enabled(Rule::LoopVariableOverridesIterator) {
flake8_bugbear::rules::loop_variable_overrides_iterator(checker, target, iter);
flake8_bugbear::rules::loop_variable_overrides_iterator(checker, &for_stmt.target, &for_stmt.iter);
}
if checker.is_rule_enabled(Rule::FunctionUsesLoopVariable) {
flake8_bugbear::rules::function_uses_loop_variable(checker, &Node::Stmt(stmt));
}
if checker.is_rule_enabled(Rule::ReuseOfGroupbyGenerator) {
flake8_bugbear::rules::reuse_of_groupby_generator(checker, target, body, iter);
flake8_bugbear::rules::reuse_of_groupby_generator(checker, &for_stmt.target, &for_stmt.body, &for_stmt.iter);
}
if checker.is_rule_enabled(Rule::UselessElseOnLoop) {
pylint::rules::useless_else_on_loop(checker, stmt, body, orelse);
pylint::rules::useless_else_on_loop(checker, stmt, &for_stmt.body, &for_stmt.orelse);
}
if checker.is_rule_enabled(Rule::RedefinedLoopName) {
pylint::rules::redefined_loop_name(checker, stmt);
}
if checker.is_rule_enabled(Rule::IterationOverSet) {
pylint::rules::iteration_over_set(checker, iter);
pylint::rules::iteration_over_set(checker, &for_stmt.iter);
}
if checker.is_rule_enabled(Rule::DictIterMissingItems) {
pylint::rules::dict_iter_missing_items(checker, target, iter);
pylint::rules::dict_iter_missing_items(checker, &for_stmt.target, &for_stmt.iter);
}
if checker.is_rule_enabled(Rule::ManualListCopy) {
perflint::rules::manual_list_copy(checker, for_stmt);
@@ -1263,7 +1231,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::modified_iterating_set(checker, for_stmt);
}
if checker.is_rule_enabled(Rule::UnnecessaryListCast) {
perflint::rules::unnecessary_list_cast(checker, iter, body);
perflint::rules::unnecessary_list_cast(checker, &for_stmt.iter, &for_stmt.body);
}
if checker.is_rule_enabled(Rule::UnnecessaryListIndexLookup) {
pylint::rules::unnecessary_list_index_lookup(checker, for_stmt);
@@ -1274,7 +1242,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.is_rule_enabled(Rule::ReadlinesInFor) {
refurb::rules::readlines_in_for(checker, for_stmt);
}
if !*is_async {
if !for_stmt.is_async {
if checker.is_rule_enabled(Rule::ReimplementedBuiltin) {
flake8_simplify::rules::convert_for_loop_to_any_all(checker, stmt);
}
@@ -1282,7 +1250,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_simplify::rules::key_in_dict_for(checker, for_stmt);
}
if checker.is_rule_enabled(Rule::TryExceptInLoop) {
perflint::rules::try_except_in_loop(checker, body);
perflint::rules::try_except_in_loop(checker, &for_stmt.body);
}
if checker.is_rule_enabled(Rule::ForLoopSetMutations) {
refurb::rules::for_loop_set_mutations(checker, for_stmt);
@@ -1292,141 +1260,133 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
if checker.is_rule_enabled(Rule::NeedlessElse) {
ruff::rules::needless_else(checker, for_stmt.into());
ruff::rules::needless_else(checker, (&**for_stmt).into());
}
}
Stmt::Try(
try_stmt @ ast::StmtTry {
body,
handlers,
orelse,
finalbody,
..
},
) => {
Stmt::Try(try_stmt) => {
if checker.is_rule_enabled(Rule::TooManyNestedBlocks) {
pylint::rules::too_many_nested_blocks(checker, stmt);
}
if checker.is_rule_enabled(Rule::JumpStatementInFinally) {
flake8_bugbear::rules::jump_statement_in_finally(checker, finalbody);
flake8_bugbear::rules::jump_statement_in_finally(checker, &try_stmt.finalbody);
}
if checker.is_rule_enabled(Rule::ContinueInFinally) {
if checker.target_version() <= PythonVersion::PY38 {
pylint::rules::continue_in_finally(checker, finalbody);
pylint::rules::continue_in_finally(checker, &try_stmt.finalbody);
}
}
if checker.is_rule_enabled(Rule::DefaultExceptNotLast) {
pyflakes::rules::default_except_not_last(checker, handlers, checker.locator);
pyflakes::rules::default_except_not_last(checker, &try_stmt.handlers, checker.locator);
}
if checker.any_rule_enabled(&[
Rule::DuplicateHandlerException,
Rule::DuplicateTryBlockException,
]) {
flake8_bugbear::rules::duplicate_exceptions(checker, handlers);
flake8_bugbear::rules::duplicate_exceptions(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::RedundantTupleInExceptionHandler) {
flake8_bugbear::rules::redundant_tuple_in_exception_handler(checker, handlers);
flake8_bugbear::rules::redundant_tuple_in_exception_handler(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::OSErrorAlias) {
pyupgrade::rules::os_error_alias_handlers(checker, handlers);
pyupgrade::rules::os_error_alias_handlers(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::TimeoutErrorAlias) {
if checker.target_version() >= PythonVersion::PY310 {
pyupgrade::rules::timeout_error_alias_handlers(checker, handlers);
pyupgrade::rules::timeout_error_alias_handlers(checker, &try_stmt.handlers);
}
}
if checker.is_rule_enabled(Rule::PytestAssertInExcept) {
flake8_pytest_style::rules::assert_in_exception_handler(checker, handlers);
flake8_pytest_style::rules::assert_in_exception_handler(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::SuppressibleException) {
flake8_simplify::rules::suppressible_exception(
checker, stmt, body, handlers, orelse, finalbody,
checker, stmt, &try_stmt.body, &try_stmt.handlers, &try_stmt.orelse, &try_stmt.finalbody,
);
}
if checker.is_rule_enabled(Rule::ReturnInTryExceptFinally) {
flake8_simplify::rules::return_in_try_except_finally(
checker, body, handlers, finalbody,
checker, &try_stmt.body, &try_stmt.handlers, &try_stmt.finalbody,
);
}
if checker.is_rule_enabled(Rule::TryConsiderElse) {
tryceratops::rules::try_consider_else(checker, body, orelse, handlers);
tryceratops::rules::try_consider_else(checker, &try_stmt.body, &try_stmt.orelse, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::VerboseRaise) {
tryceratops::rules::verbose_raise(checker, handlers);
tryceratops::rules::verbose_raise(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::VerboseLogMessage) {
tryceratops::rules::verbose_log_message(checker, handlers);
tryceratops::rules::verbose_log_message(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::RaiseWithinTry) {
tryceratops::rules::raise_within_try(checker, body, handlers);
tryceratops::rules::raise_within_try(checker, &try_stmt.body, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::UselessTryExcept) {
tryceratops::rules::useless_try_except(checker, handlers);
tryceratops::rules::useless_try_except(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::ErrorInsteadOfException) {
tryceratops::rules::error_instead_of_exception(checker, handlers);
tryceratops::rules::error_instead_of_exception(checker, &try_stmt.handlers);
}
if checker.is_rule_enabled(Rule::NeedlessElse) {
ruff::rules::needless_else(checker, try_stmt.into());
ruff::rules::needless_else(checker, (&**try_stmt).into());
}
}
Stmt::Assign(assign @ ast::StmtAssign { targets, value, .. }) => {
Stmt::Assign(assign) => {
if checker.is_rule_enabled(Rule::SelfOrClsAssignment) {
for target in targets {
for target in &assign.targets {
pylint::rules::self_or_cls_assignment(checker, target);
}
}
if checker.is_rule_enabled(Rule::RedeclaredAssignedName) {
pylint::rules::redeclared_assigned_name(checker, targets);
pylint::rules::redeclared_assigned_name(checker, &assign.targets);
}
if checker.is_rule_enabled(Rule::LambdaAssignment) {
if let [target] = &targets[..] {
pycodestyle::rules::lambda_assignment(checker, target, value, None, stmt);
if let [target] = &assign.targets[..] {
pycodestyle::rules::lambda_assignment(checker, target, &assign.value, None, stmt);
}
}
if checker.is_rule_enabled(Rule::AssignmentToOsEnviron) {
flake8_bugbear::rules::assignment_to_os_environ(checker, targets);
flake8_bugbear::rules::assignment_to_os_environ(checker, &assign.targets);
}
if checker.is_rule_enabled(Rule::HardcodedPasswordString) {
flake8_bandit::rules::assign_hardcoded_password_string(checker, value, targets);
flake8_bandit::rules::assign_hardcoded_password_string(checker, &assign.value, &assign.targets);
}
if checker.is_rule_enabled(Rule::GlobalStatement) {
for target in targets {
if let Expr::Name(ast::ExprName { id, .. }) = target {
pylint::rules::global_statement(checker, id);
for target in &assign.targets {
if let Expr::Name(name_expr) = target {
pylint::rules::global_statement(checker, &name_expr.id);
}
}
}
if checker.is_rule_enabled(Rule::UselessMetaclassType) {
pyupgrade::rules::useless_metaclass_type(checker, stmt, value, targets);
pyupgrade::rules::useless_metaclass_type(checker, stmt, &assign.value, &assign.targets);
}
if checker.is_rule_enabled(Rule::ConvertTypedDictFunctionalToClass) {
pyupgrade::rules::convert_typed_dict_functional_to_class(
checker, stmt, targets, value,
checker, stmt, &assign.targets, &assign.value,
);
}
if checker.is_rule_enabled(Rule::ConvertNamedTupleFunctionalToClass) {
pyupgrade::rules::convert_named_tuple_functional_to_class(
checker, stmt, targets, value,
checker, stmt, &assign.targets, &assign.value,
);
}
if checker.is_rule_enabled(Rule::PandasDfVariableName) {
pandas_vet::rules::assignment_to_df(checker, targets);
pandas_vet::rules::assignment_to_df(checker, &assign.targets);
}
if checker.is_rule_enabled(Rule::AirflowVariableNameTaskIdMismatch) {
airflow::rules::variable_name_task_id(checker, targets, value);
airflow::rules::variable_name_task_id(checker, &assign.targets, &assign.value);
}
if checker.is_rule_enabled(Rule::SelfAssigningVariable) {
pylint::rules::self_assignment(checker, assign);
}
if checker.is_rule_enabled(Rule::TypeParamNameMismatch) {
pylint::rules::type_param_name_mismatch(checker, value, targets);
pylint::rules::type_param_name_mismatch(checker, &assign.value, &assign.targets);
}
if checker.is_rule_enabled(Rule::TypeNameIncorrectVariance) {
pylint::rules::type_name_incorrect_variance(checker, value);
pylint::rules::type_name_incorrect_variance(checker, &assign.value);
}
if checker.is_rule_enabled(Rule::TypeBivariance) {
pylint::rules::type_bivariance(checker, value);
pylint::rules::type_bivariance(checker, &assign.value);
}
if checker.is_rule_enabled(Rule::NonAugmentedAssignment) {
pylint::rules::non_augmented_assignment(checker, assign);
@@ -1449,14 +1409,14 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
.any(|scope| scope.kind.is_function())
{
if checker.is_rule_enabled(Rule::UnprefixedTypeParam) {
flake8_pyi::rules::prefix_type_params(checker, value, targets);
flake8_pyi::rules::prefix_type_params(checker, &assign.value, &assign.targets);
}
if checker.is_rule_enabled(Rule::AssignmentDefaultInStub) {
flake8_pyi::rules::assignment_default_in_stub(checker, targets, value);
flake8_pyi::rules::assignment_default_in_stub(checker, &assign.targets, &assign.value);
}
if checker.is_rule_enabled(Rule::UnannotatedAssignmentInStub) {
flake8_pyi::rules::unannotated_assignment_in_stub(
checker, targets, value,
checker, &assign.targets, &assign.value,
);
}
if checker.is_rule_enabled(Rule::ComplexAssignmentInStub) {
@@ -1464,7 +1424,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
if checker.is_rule_enabled(Rule::TypeAliasWithoutAnnotation) {
flake8_pyi::rules::type_alias_without_annotation(
checker, value, targets,
checker, &assign.value, &assign.targets,
);
}
}
@@ -1477,15 +1437,10 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pyupgrade::rules::non_pep695_type_alias_type(checker, assign);
}
}
Stmt::AnnAssign(
assign_stmt @ ast::StmtAnnAssign {
target,
value,
annotation,
..
},
) => {
if let Some(value) = value {
Stmt::AnnAssign(assign_stmt) => {
let target = &assign_stmt.target;
let annotation = &assign_stmt.annotation;
if let Some(value) = &assign_stmt.value {
if checker.is_rule_enabled(Rule::LambdaAssignment) {
pycodestyle::rules::lambda_assignment(
checker,
@@ -1506,7 +1461,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_bugbear::rules::unintentional_type_annotation(
checker,
target,
value.as_deref(),
assign_stmt.value.as_deref(),
stmt,
);
}
@@ -1514,7 +1469,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pyupgrade::rules::non_pep695_type_alias(checker, assign_stmt);
}
if checker.is_rule_enabled(Rule::HardcodedPasswordString) {
if let Some(value) = value.as_deref() {
if let Some(value) = assign_stmt.value.as_deref() {
flake8_bandit::rules::assign_hardcoded_password_string(
checker,
value,
@@ -1526,7 +1481,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
ruff::rules::sort_dunder_all_ann_assign(checker, assign_stmt);
}
if checker.source_type.is_stub() {
if let Some(value) = value {
if let Some(value) = &assign_stmt.value {
if checker.is_rule_enabled(Rule::AssignmentDefaultInStub) {
// Ignore assignments in function bodies; those are covered by other rules.
if !checker
@@ -1563,7 +1518,8 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
}
Stmt::TypeAlias(ast::StmtTypeAlias { name, .. }) => {
Stmt::TypeAlias(type_alias) => {
let name = &type_alias.name;
if checker.is_rule_enabled(Rule::SnakeCaseTypeAlias) {
flake8_pyi::rules::snake_case_type_alias(checker, name);
}
@@ -1571,17 +1527,12 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_pyi::rules::t_suffixed_type_alias(checker, name);
}
}
Stmt::Delete(
delete @ ast::StmtDelete {
targets,
range: _,
node_index: _,
},
) => {
Stmt::Delete(delete) => {
let targets = &delete.targets;
if checker.is_rule_enabled(Rule::GlobalStatement) {
for target in targets {
if let Expr::Name(ast::ExprName { id, .. }) = target {
pylint::rules::global_statement(checker, id);
if let Expr::Name(name_expr) = target {
pylint::rules::global_statement(checker, &name_expr.id);
}
}
}
@@ -1618,12 +1569,13 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::useless_exception_statement(checker, expr);
}
}
Stmt::Match(ast::StmtMatch {
subject: _,
cases,
range: _,
node_index: _,
}) => {
Stmt::Match(match_stmt) => {
let ast::StmtMatch {
subject: _,
cases,
range: _,
node_index: _,
} = &**match_stmt;
if checker.is_rule_enabled(Rule::NanComparison) {
pylint::rules::nan_comparison_match(checker, cases);
}
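The arms in the hunks above all follow one mechanical refactor: instead of destructuring every field of a boxed AST node in the match pattern (and listing `range: _, node_index: _` just to ignore them), the arm binds the node itself and borrows fields on demand. A minimal standalone sketch of the resulting style, using hypothetical stand-in types rather than ruff's real AST:

```rust
// Hypothetical stand-in for a boxed AST node; not ruff's real types.
struct StmtFor {
    is_async: bool,
    body: Vec<String>,
    orelse: Vec<String>,
}

enum Stmt {
    For(Box<StmtFor>),
    Pass,
}

// Field-access style: bind the whole boxed node, then borrow fields as
// needed (`for_stmt.body`, `for_stmt.is_async`) instead of destructuring
// every field in the pattern. Field access auto-derefs through the Box.
fn loop_len(stmt: &Stmt) -> usize {
    match stmt {
        Stmt::For(for_stmt) => {
            if !for_stmt.is_async {
                for_stmt.body.len() + for_stmt.orelse.len()
            } else {
                0
            }
        }
        Stmt::Pass => 0,
    }
}

fn main() {
    let stmt = Stmt::For(Box::new(StmtFor {
        is_async: false,
        body: vec!["x += 1".to_string()],
        orelse: vec![],
    }));
    assert_eq!(loop_len(&stmt), 1);
    assert_eq!(loop_len(&Stmt::Pass), 0);
    println!("ok");
}
```

The upside is that adding a field to a node no longer touches every match arm; the cost is slightly more verbose field access inside the arm.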


@@ -437,6 +437,15 @@ impl<'a> Checker<'a> {
}
}
/// Returns the [`Tokens`] for the parsed source file.
///
/// Unlike [`Self::tokens`], this method always returns
/// the tokens for the current file, even when within a parsed type annotation.
pub(crate) fn source_tokens(&self) -> &'a Tokens {
self.parsed.tokens()
}
/// The [`Locator`] for the current file, which enables extraction of source code from byte
/// offsets.
pub(crate) const fn locator(&self) -> &'a Locator<'a> {
@@ -773,7 +782,10 @@ impl SemanticSyntaxContext for Checker<'_> {
for scope in self.semantic.current_scopes() {
match scope.kind {
ScopeKind::Class(_) | ScopeKind::Lambda(_) => return false,
ScopeKind::Function(ast::StmtFunctionDef { is_async, .. }) => return *is_async,
ScopeKind::Function(function_def) => {
let is_async = &function_def.is_async;
return *is_async;
}
ScopeKind::Generator { .. }
| ScopeKind::Module
| ScopeKind::Type
@@ -861,9 +873,13 @@ impl SemanticSyntaxContext for Checker<'_> {
for parent in self.semantic.current_statements().skip(1) {
match parent {
Stmt::For(ast::StmtFor { orelse, .. })
| Stmt::While(ast::StmtWhile { orelse, .. }) => {
if !orelse.contains(child) {
Stmt::For(node) => {
if !node.orelse.contains(child) {
return true;
}
}
Stmt::While(node) => {
if !node.orelse.contains(child) {
return true;
}
}
@@ -879,7 +895,8 @@ impl SemanticSyntaxContext for Checker<'_> {
fn is_bound_parameter(&self, name: &str) -> bool {
match self.semantic.current_scope().kind {
ScopeKind::Function(ast::StmtFunctionDef { parameters, .. }) => {
ScopeKind::Function(function_def) => {
let parameters = &function_def.parameters;
parameters.includes(name)
}
ScopeKind::Class(_)
@@ -923,12 +940,13 @@ impl<'a> Visitor<'a> for Checker<'a> {
{
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
}
Stmt::ImportFrom(ast::StmtImportFrom { module, names, .. }) => {
Stmt::ImportFrom(node) => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
// Allow __future__ imports until we see a non-__future__ import.
if let Some("__future__") = module.as_deref() {
if names
if let Some("__future__") = node.module.as_deref() {
if node
.names
.iter()
.any(|alias| alias.name.as_str() == "annotations")
{
@@ -972,20 +990,22 @@ impl<'a> Visitor<'a> for Checker<'a> {
// Step 1: Binding
match stmt {
Stmt::AugAssign(ast::StmtAugAssign {
target,
op: _,
value: _,
range: _,
node_index: _,
}) => {
Stmt::AugAssign(node) => {
let ast::StmtAugAssign {
target,
op: _,
value: _,
range: _,
node_index: _,
} = &**node;
self.handle_node_load(target);
}
Stmt::Import(ast::StmtImport {
names,
range: _,
node_index: _,
}) => {
Stmt::Import(node) => {
let ast::StmtImport {
names,
range: _,
node_index: _,
} = &**node;
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
@@ -1034,13 +1054,14 @@ impl<'a> Visitor<'a> for Checker<'a> {
}
}
}
Stmt::ImportFrom(ast::StmtImportFrom {
names,
module,
level,
range: _,
node_index: _,
}) => {
Stmt::ImportFrom(node) => {
let ast::StmtImportFrom {
names,
module,
level,
range: _,
node_index: _,
} = &**node;
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
@@ -1101,11 +1122,12 @@ impl<'a> Visitor<'a> for Checker<'a> {
}
}
}
Stmt::Global(ast::StmtGlobal {
names,
range: _,
node_index: _,
}) => {
Stmt::Global(node) => {
let ast::StmtGlobal {
names,
range: _,
node_index: _,
} = &**node;
if !self.semantic.scope_id.is_global() {
for name in names {
let binding_id = self.semantic.global_scope().get(name);
@@ -1127,11 +1149,12 @@ impl<'a> Visitor<'a> for Checker<'a> {
}
}
}
Stmt::Nonlocal(ast::StmtNonlocal {
names,
range: _,
node_index: _,
}) => {
Stmt::Nonlocal(node) => {
let ast::StmtNonlocal {
names,
range: _,
node_index: _,
} = &**node;
if !self.semantic.scope_id.is_global() {
for name in names {
if let Some((scope_id, binding_id)) = self.semantic.nonlocal(name) {
@@ -1165,17 +1188,13 @@ impl<'a> Visitor<'a> for Checker<'a> {
// Step 2: Traversal
match stmt {
Stmt::FunctionDef(
function_def @ ast::StmtFunctionDef {
name,
body,
parameters,
decorator_list,
returns,
type_params,
..
},
) => {
Stmt::FunctionDef(function_def) => {
let name = &function_def.name;
let body = &function_def.body;
let parameters = &function_def.parameters;
let decorator_list = &function_def.decorator_list;
let returns = &function_def.returns;
let type_params = &function_def.type_params;
// Visit the decorators and arguments, but avoid the body, which will be
// deferred.
for decorator in decorator_list {
@@ -1304,16 +1323,12 @@ impl<'a> Visitor<'a> for Checker<'a> {
BindingFlags::empty(),
);
}
Stmt::ClassDef(
class_def @ ast::StmtClassDef {
name,
body,
arguments,
decorator_list,
type_params,
..
},
) => {
Stmt::ClassDef(class_def) => {
let name = &class_def.name;
let body = &class_def.body;
let arguments = &class_def.arguments;
let decorator_list = &class_def.decorator_list;
let type_params = &class_def.type_params;
for decorator in decorator_list {
self.visit_decorator(decorator);
}
@@ -1360,30 +1375,20 @@ impl<'a> Visitor<'a> for Checker<'a> {
BindingFlags::empty(),
);
}
Stmt::TypeAlias(ast::StmtTypeAlias {
range: _,
node_index: _,
name,
type_params,
value,
}) => {
Stmt::TypeAlias(node) => {
self.semantic.push_scope(ScopeKind::Type);
if let Some(type_params) = type_params {
if let Some(type_params) = &node.type_params {
self.visit_type_params(type_params);
}
self.visit_deferred_type_alias_value(value);
self.visit_deferred_type_alias_value(&node.value);
self.semantic.pop_scope();
self.visit_expr(name);
self.visit_expr(&node.name);
}
Stmt::Try(
try_node @ ast::StmtTry {
body,
handlers,
orelse,
finalbody,
..
},
) => {
Stmt::Try(try_node) => {
let body = &try_node.body;
let handlers = &try_node.handlers;
let orelse = &try_node.orelse;
let finalbody = &try_node.finalbody;
// Iterate over the `body`, then the `handlers`, then the `orelse`, then the
// `finalbody`, but treat the body and the `orelse` as a single branch for
// flow analysis purposes.
@@ -1409,64 +1414,60 @@ impl<'a> Visitor<'a> for Checker<'a> {
self.visit_body(finalbody);
self.semantic.pop_branch();
}
Stmt::AnnAssign(ast::StmtAnnAssign {
target,
annotation,
value,
..
}) => {
Stmt::AnnAssign(node) => {
match AnnotationContext::from_model(
&self.semantic,
self.settings(),
self.target_version(),
) {
AnnotationContext::RuntimeRequired => {
self.visit_runtime_required_annotation(annotation);
self.visit_runtime_required_annotation(&node.annotation);
}
AnnotationContext::RuntimeEvaluated
if flake8_type_checking::helpers::is_dataclass_meta_annotation(
annotation,
&node.annotation,
self.semantic(),
) =>
{
self.visit_runtime_required_annotation(annotation);
self.visit_runtime_required_annotation(&node.annotation);
}
AnnotationContext::RuntimeEvaluated => {
self.visit_runtime_evaluated_annotation(annotation);
self.visit_runtime_evaluated_annotation(&node.annotation);
}
AnnotationContext::TypingOnly
if flake8_type_checking::helpers::is_dataclass_meta_annotation(
annotation,
&node.annotation,
self.semantic(),
) =>
{
if let Expr::Subscript(subscript) = &**annotation {
if let Expr::Subscript(subscript) = &*node.annotation {
// Ex) `InitVar[str]`
self.visit_runtime_required_annotation(&subscript.value);
self.visit_annotation(&subscript.slice);
} else {
// Ex) `InitVar`
self.visit_runtime_required_annotation(annotation);
self.visit_runtime_required_annotation(&node.annotation);
}
}
AnnotationContext::TypingOnly => self.visit_annotation(annotation),
AnnotationContext::TypingOnly => self.visit_annotation(&node.annotation),
}
if let Some(expr) = value {
if self.semantic.match_typing_expr(annotation, "TypeAlias") {
if let Some(expr) = &node.value {
if self.semantic.match_typing_expr(&node.annotation, "TypeAlias") {
self.visit_annotated_type_alias_value(expr);
} else {
self.visit_expr(expr);
}
}
self.visit_expr(target);
self.visit_expr(&node.target);
}
Stmt::Assert(ast::StmtAssert {
test,
msg,
range: _,
node_index: _,
}) => {
Stmt::Assert(node) => {
let ast::StmtAssert {
test,
msg,
range: _,
node_index: _,
} = &**node;
let snapshot = self.semantic.flags;
self.semantic.flags |= SemanticModelFlags::ASSERT_STATEMENT;
self.visit_boolean_test(test);
@@ -1475,13 +1476,14 @@ impl<'a> Visitor<'a> for Checker<'a> {
}
self.semantic.flags = snapshot;
}
Stmt::With(ast::StmtWith {
items,
body,
is_async: _,
range: _,
node_index: _,
}) => {
Stmt::With(node) => {
let ast::StmtWith {
items,
body,
is_async: _,
range: _,
node_index: _,
} = &**node;
for item in items {
self.visit_with_item(item);
}
@@ -1489,26 +1491,22 @@ impl<'a> Visitor<'a> for Checker<'a> {
self.visit_body(body);
self.semantic.pop_branch();
}
Stmt::While(ast::StmtWhile {
test,
body,
orelse,
range: _,
node_index: _,
}) => {
Stmt::While(node) => {
let ast::StmtWhile {
test,
body,
orelse,
range: _,
node_index: _,
} = &**node;
self.visit_boolean_test(test);
self.visit_body(body);
self.visit_body(orelse);
}
Stmt::If(
stmt_if @ ast::StmtIf {
test,
body,
elif_else_clauses,
range: _,
node_index: _,
},
) => {
Stmt::If(stmt_if) => {
let test = &stmt_if.test;
let body = &stmt_if.body;
let elif_else_clauses = &stmt_if.elif_else_clauses;
self.visit_boolean_test(test);
self.semantic.push_branch();
@@ -1533,14 +1531,14 @@ impl<'a> Visitor<'a> for Checker<'a> {
if self.semantic().at_top_level() || self.semantic().current_scope().kind.is_class() {
match stmt {
Stmt::Assign(ast::StmtAssign { targets, .. }) => {
if let [Expr::Name(_)] = targets.as_slice() {
Stmt::Assign(node) => {
if let [Expr::Name(_)] = node.targets.as_slice() {
self.docstring_state =
DocstringState::Expected(ExpectedDocstringKind::Attribute);
}
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, .. }) => {
if target.is_name_expr() {
Stmt::AnnAssign(node) => {
if node.target.is_name_expr() {
self.docstring_state =
DocstringState::Expected(ExpectedDocstringKind::Attribute);
}
@@ -2681,13 +2679,13 @@ impl<'a> Checker<'a> {
match parent {
Stmt::TypeAlias(_) => flags.insert(BindingFlags::DEFERRED_TYPE_ALIAS),
Stmt::AnnAssign(ast::StmtAnnAssign { annotation, .. }) => {
Stmt::AnnAssign(node) => {
// TODO: It is a bit unfortunate that we do this check twice
// maybe we should change how we visit this statement
// so the semantic flag for the type alias sticks around
// until after we've handled this store, so we can check
// the flag instead of duplicating this check
if self.semantic.match_typing_expr(annotation, "TypeAlias") {
if self.semantic.match_typing_expr(&node.annotation, "TypeAlias") {
flags.insert(BindingFlags::ANNOTATED_TYPE_ALIAS);
}
}
@@ -2698,22 +2696,22 @@ impl<'a> Checker<'a> {
if scope.kind.is_module()
&& match parent {
Stmt::Assign(ast::StmtAssign { targets, .. }) => {
if let Some(Expr::Name(ast::ExprName { id, .. })) = targets.first() {
Stmt::Assign(node) => {
if let Some(Expr::Name(ast::ExprName { id, .. })) = node.targets.first() {
id == "__all__"
} else {
false
}
}
Stmt::AugAssign(ast::StmtAugAssign { target, .. }) => {
if let Expr::Name(ast::ExprName { id, .. }) = target.as_ref() {
Stmt::AugAssign(node) => {
if let Expr::Name(ast::ExprName { id, .. }) = node.target.as_ref() {
id == "__all__"
} else {
false
}
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, .. }) => {
if let Expr::Name(ast::ExprName { id, .. }) = target.as_ref() {
Stmt::AnnAssign(node) => {
if let Expr::Name(ast::ExprName { id, .. }) = node.target.as_ref() {
id == "__all__"
} else {
false
@@ -2756,10 +2754,8 @@ impl<'a> Checker<'a> {
// Match the left-hand side of an annotated assignment without a value,
// like `x` in `x: int`. N.B. In stub files, these should be viewed
// as assignments on par with statements such as `x: int = 5`.
if matches!(
parent,
Stmt::AnnAssign(ast::StmtAnnAssign { value: None, .. })
) && !self.semantic.in_annotation()
if matches!(parent, Stmt::AnnAssign(node) if node.value.is_none())
&& !self.semantic.in_annotation()
{
self.add_binding(id, expr.range(), BindingKind::Annotation, flags);
return;
@@ -3031,19 +3027,16 @@ impl<'a> Checker<'a> {
let stmt = self.semantic.current_statement();
let Stmt::FunctionDef(ast::StmtFunctionDef {
body, parameters, ..
}) = stmt
else {
let Stmt::FunctionDef(node) = stmt else {
unreachable!("Expected Stmt::FunctionDef")
};
self.with_semantic_checker(|semantic, context| semantic.visit_stmt(stmt, context));
self.visit_parameters(parameters);
self.visit_parameters(&node.parameters);
// Set the docstring state before visiting the function body.
self.docstring_state = DocstringState::Expected(ExpectedDocstringKind::Function);
self.visit_body(body);
self.visit_body(&node.body);
}
}
self.semantic.restore(snapshot);
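Where an arm in the file above still needs several fields at once (`Stmt::Assert`, `Stmt::With`, `Stmt::While`), the new style binds the box and destructures through it with `&**node`. A sketch of that deref chain with hypothetical stand-in types:

```rust
// Hypothetical stand-in types; ruff's AST boxes its statement payloads
// the same way.
struct StmtAssert {
    test: String,
    msg: Option<String>,
}

enum Stmt {
    Assert(Box<StmtAssert>),
    Pass,
}

fn describe(stmt: &Stmt) -> String {
    match stmt {
        // `&**node` goes &Box<StmtAssert> -> StmtAssert -> &StmtAssert,
        // so the destructured bindings are ordinary references into the node
        // (match ergonomics makes `test: &String`, `msg: &Option<String>`).
        Stmt::Assert(node) => {
            let StmtAssert { test, msg } = &**node;
            match msg {
                Some(m) => format!("assert {test}, {m}"),
                None => format!("assert {test}"),
            }
        }
        Stmt::Pass => "pass".to_string(),
    }
}

fn main() {
    let stmt = Stmt::Assert(Box::new(StmtAssert {
        test: "x > 0".to_string(),
        msg: None,
    }));
    assert_eq!(describe(&stmt), "assert x > 0");
    println!("ok");
}
```

The same shape explains the `(&**while_stmt).into()` calls in the diff: `.into()` needs a plain `&StmtWhile`, not a `&Box<StmtWhile>`, so the box is dereferenced explicitly first.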


@@ -12,17 +12,20 @@ use crate::fix::edits::delete_comment;
use crate::noqa::{
Code, Directive, FileExemption, FileNoqaDirectives, NoqaDirectives, NoqaMapping,
};
use crate::preview::is_range_suppressions_enabled;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::rules::pygrep_hooks;
use crate::rules::ruff;
use crate::rules::ruff::rules::{UnusedCodes, UnusedNOQA};
use crate::settings::LinterSettings;
use crate::suppression::Suppressions;
use crate::{Edit, Fix, Locator};
use super::ast::LintContext;
/// RUF100
#[expect(clippy::too_many_arguments)]
pub(crate) fn check_noqa(
context: &mut LintContext,
path: &Path,
@@ -31,6 +34,7 @@ pub(crate) fn check_noqa(
noqa_line_for: &NoqaMapping,
analyze_directives: bool,
settings: &LinterSettings,
suppressions: &Suppressions,
) -> Vec<usize> {
// Identify any codes that are globally exempted (within the current file).
let file_noqa_directives =
@@ -40,7 +44,7 @@ pub(crate) fn check_noqa(
let mut noqa_directives =
NoqaDirectives::from_commented_ranges(comment_ranges, &settings.external, path, locator);
if file_noqa_directives.is_empty() && noqa_directives.is_empty() {
if file_noqa_directives.is_empty() && noqa_directives.is_empty() && suppressions.is_empty() {
return Vec::new();
}
@@ -60,11 +64,19 @@ pub(crate) fn check_noqa(
continue;
}
// Apply file-level suppressions first
if exemption.contains_secondary_code(code) {
ignored_diagnostics.push(index);
continue;
}
// Apply ranged suppressions next
if is_range_suppressions_enabled(settings) && suppressions.check_diagnostic(diagnostic) {
ignored_diagnostics.push(index);
continue;
}
// Apply end-of-line noqa suppressions last
let noqa_offsets = diagnostic
.parent()
.into_iter()
@@ -107,6 +119,9 @@ pub(crate) fn check_noqa(
}
}
// Diagnostics for unused/invalid range suppressions
suppressions.check_suppressions(context, locator);
// Enforce that the noqa directive was actually used (RUF100), unless RUF100 was itself
// suppressed.
if context.is_rule_enabled(Rule::UnusedNOQA)
@@ -128,8 +143,13 @@ pub(crate) fn check_noqa(
Directive::All(directive) => {
if matches.is_empty() {
let edit = delete_comment(directive.range(), locator);
let mut diagnostic = context
.report_diagnostic(UnusedNOQA { codes: None }, directive.range());
let mut diagnostic = context.report_diagnostic(
UnusedNOQA {
codes: None,
kind: ruff::rules::UnusedNOQAKind::Noqa,
},
directive.range(),
);
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
diagnostic.set_fix(Fix::safe_edit(edit));
}
@@ -224,6 +244,7 @@ pub(crate) fn check_noqa(
.map(|code| (*code).to_string())
.collect(),
}),
kind: ruff::rules::UnusedNOQAKind::Noqa,
},
directive.range(),
);


@@ -3,14 +3,13 @@
use anyhow::{Context, Result};
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{self, Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Arguments, ExceptHandler, Expr, ExprList, Parameters, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_trivia::textwrap::dedent_to;
use ruff_python_trivia::{
CommentRanges, PythonWhitespace, SimpleTokenKind, SimpleTokenizer, has_leading_content,
is_python_whitespace,
PythonWhitespace, SimpleTokenKind, SimpleTokenizer, has_leading_content, is_python_whitespace,
};
use ruff_source_file::{LineRanges, NewlineWithTrailingNewline, UniversalNewlines};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
@@ -128,8 +127,8 @@ pub(crate) fn make_redundant_alias<'a>(
stmt: &Stmt,
) -> Vec<Edit> {
let aliases = match stmt {
Stmt::Import(ast::StmtImport { names, .. }) => names,
Stmt::ImportFrom(ast::StmtImportFrom { names, .. }) => names,
Stmt::Import(node) => &node.names,
Stmt::ImportFrom(node) => &node.names,
_ => {
return Vec::new();
}
@@ -209,7 +208,7 @@ pub(crate) fn remove_argument<T: Ranged>(
arguments: &Arguments,
parentheses: Parentheses,
source: &str,
comment_ranges: &CommentRanges,
tokens: &Tokens,
) -> Result<Edit> {
// Partition into arguments before and after the argument to remove.
let (before, after): (Vec<_>, Vec<_>) = arguments
@@ -224,7 +223,7 @@ pub(crate) fn remove_argument<T: Ranged>(
.context("Unable to find argument")?;
let parenthesized_range =
parenthesized_range(arg.value().into(), arguments.into(), comment_ranges, source)
token::parenthesized_range(arg.value().into(), arguments.into(), tokens)
.unwrap_or(arg.range());
if !after.is_empty() {
@@ -270,25 +269,14 @@ pub(crate) fn remove_argument<T: Ranged>(
///
/// The new argument will be inserted before the first existing keyword argument in `arguments`, if
/// there are any present. Otherwise, the new argument is added to the end of the argument list.
pub(crate) fn add_argument(
argument: &str,
arguments: &Arguments,
comment_ranges: &CommentRanges,
source: &str,
) -> Edit {
pub(crate) fn add_argument(argument: &str, arguments: &Arguments, tokens: &Tokens) -> Edit {
if let Some(ast::Keyword { range, value, .. }) = arguments.keywords.first() {
let keyword = parenthesized_range(value.into(), arguments.into(), comment_ranges, source)
.unwrap_or(*range);
let keyword = parenthesized_range(value.into(), arguments.into(), tokens).unwrap_or(*range);
Edit::insertion(format!("{argument}, "), keyword.start())
} else if let Some(last) = arguments.arguments_source_order().last() {
// Case 1: existing arguments, so append after the last argument.
let last = parenthesized_range(
last.value().into(),
arguments.into(),
comment_ranges,
source,
)
.unwrap_or(last.range());
let last = parenthesized_range(last.value().into(), arguments.into(), tokens)
.unwrap_or(last.range());
Edit::insertion(format!(", {argument}"), last.end())
} else {
// Case 2: no arguments. Add argument, without any trailing comma.
@@ -416,43 +404,46 @@ fn is_only<T: PartialEq>(vec: &[T], value: &T) -> bool {
/// Determine if a child is the only statement in its body.
fn is_lone_child(child: &Stmt, parent: &Stmt) -> bool {
match parent {
Stmt::FunctionDef(ast::StmtFunctionDef { body, .. })
| Stmt::ClassDef(ast::StmtClassDef { body, .. })
| Stmt::With(ast::StmtWith { body, .. }) => {
if is_only(body, child) {
Stmt::FunctionDef(node) => {
if is_only(&node.body, child) {
return true;
}
}
Stmt::For(ast::StmtFor { body, orelse, .. })
| Stmt::While(ast::StmtWhile { body, orelse, .. }) => {
if is_only(body, child) || is_only(orelse, child) {
Stmt::ClassDef(node) => {
if is_only(&node.body, child) {
return true;
}
}
Stmt::If(ast::StmtIf {
body,
elif_else_clauses,
..
}) => {
if is_only(body, child)
|| elif_else_clauses
Stmt::With(node) => {
if is_only(&node.body, child) {
return true;
}
}
Stmt::For(node) => {
if is_only(&node.body, child) || is_only(&node.orelse, child) {
return true;
}
}
Stmt::While(node) => {
if is_only(&node.body, child) || is_only(&node.orelse, child) {
return true;
}
}
Stmt::If(node) => {
if is_only(&node.body, child)
|| node
.elif_else_clauses
.iter()
.any(|ast::ElifElseClause { body, .. }| is_only(body, child))
{
return true;
}
}
Stmt::Try(ast::StmtTry {
body,
handlers,
orelse,
finalbody,
..
}) => {
if is_only(body, child)
|| is_only(orelse, child)
|| is_only(finalbody, child)
|| handlers.iter().any(|handler| match handler {
Stmt::Try(node) => {
if is_only(&node.body, child)
|| is_only(&node.orelse, child)
|| is_only(&node.finalbody, child)
|| node.handlers.iter().any(|handler| match handler {
ExceptHandler::ExceptHandler(ast::ExceptHandlerExceptHandler {
body, ..
}) => is_only(body, child),
@@ -461,8 +452,8 @@ fn is_lone_child(child: &Stmt, parent: &Stmt) -> bool {
return true;
}
}
Stmt::Match(ast::StmtMatch { cases, .. }) => {
if cases.iter().any(|case| is_only(&case.body, child)) {
Stmt::Match(node) => {
if node.cases.iter().any(|case| is_only(&case.body, child)) {
return true;
}
}


@@ -236,9 +236,10 @@ impl<'a> Importer<'a> {
semantic: &SemanticModel<'a>,
type_checking_block: &Stmt,
) -> Option<&'a Stmt> {
let Stmt::If(ast::StmtIf { test, .. }) = type_checking_block else {
let Stmt::If(node) = type_checking_block else {
return None;
};
let test = &node.test;
let mut source = test;
while let Expr::Attribute(ast::ExprAttribute { value, .. }) = source.as_ref() {
@@ -453,17 +454,10 @@ impl<'a> Importer<'a> {
if stmt.start() >= at {
break;
}
if let Stmt::ImportFrom(ast::StmtImportFrom {
module: name,
names,
level,
range: _,
node_index: _,
}) = stmt
{
if *level == 0
&& name.as_ref().is_some_and(|name| name == module)
&& names.iter().all(|alias| alias.name.as_str() != "*")
if let Stmt::ImportFrom(node) = stmt {
if node.level == 0
&& node.module.as_ref().is_some_and(|name| name == module)
&& node.names.iter().all(|alias| alias.name.as_str() != "*")
{
import_from = Some(*stmt);
}


@@ -32,6 +32,7 @@ use crate::rules::ruff::rules::test_rules::{self, TEST_RULES, TestRule};
use crate::settings::types::UnsafeFixes;
use crate::settings::{LinterSettings, TargetVersion, flags};
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::{Locator, directives, fs};
pub(crate) mod float;
@@ -128,6 +129,7 @@ pub fn check_path(
source_type: PySourceType,
parsed: &Parsed<ModModule>,
target_version: TargetVersion,
suppressions: &Suppressions,
) -> Vec<Diagnostic> {
// Aggregate all diagnostics.
let mut context = LintContext::new(path, locator.contents(), settings);
@@ -339,6 +341,7 @@ pub fn check_path(
&directives.noqa_line_for,
parsed.has_valid_syntax(),
settings,
suppressions,
);
if noqa.is_enabled() {
for index in ignored.iter().rev() {
@@ -400,6 +403,9 @@ pub fn add_noqa_to_path(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics, ignoring any existing `noqa` directives.
let diagnostics = check_path(
path,
@@ -414,6 +420,7 @@ pub fn add_noqa_to_path(
source_type,
&parsed,
target_version,
&suppressions,
);
// Add any missing `# noqa` pragmas.
@@ -427,6 +434,7 @@ pub fn add_noqa_to_path(
&directives.noqa_line_for,
stylist.line_ending(),
reason,
&suppressions,
)
}
@@ -461,6 +469,9 @@ pub fn lint_only(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics.
let diagnostics = check_path(
path,
@@ -475,6 +486,7 @@ pub fn lint_only(
source_type,
&parsed,
target_version,
&suppressions,
);
LinterResult {
@@ -566,6 +578,9 @@ pub fn lint_fix<'a>(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics.
let diagnostics = check_path(
path,
@@ -580,6 +595,7 @@ pub fn lint_fix<'a>(
source_type,
&parsed,
target_version,
&suppressions,
);
if iterations == 0 {
@@ -769,6 +785,7 @@ mod tests {
use crate::registry::Rule;
use crate::settings::LinterSettings;
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::test::{TestedNotebook, assert_notebook_path, test_contents, test_snippet};
use crate::{Locator, assert_diagnostics, directives, settings};
@@ -944,6 +961,7 @@ mod tests {
&locator,
&indexer,
);
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
let mut diagnostics = check_path(
path,
None,
@@ -957,6 +975,7 @@ mod tests {
source_type,
&parsed,
target_version,
&suppressions,
);
diagnostics.sort_by(Diagnostic::ruff_start_ordering);
diagnostics


@@ -20,12 +20,14 @@ use crate::Locator;
use crate::fs::relativize_path;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::suppression::Suppressions;
/// Generates an array of edits that matches the length of `messages`.
/// Each potential edit in the array is paired, in order, with the associated diagnostic.
/// Each edit will add a `noqa` comment to the appropriate line in the source to hide
/// the diagnostic. These edits may conflict with each other and should not be applied
/// simultaneously.
#[expect(clippy::too_many_arguments)]
pub fn generate_noqa_edits(
path: &Path,
diagnostics: &[Diagnostic],
@@ -34,11 +36,19 @@ pub fn generate_noqa_edits(
external: &[String],
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
suppressions: &Suppressions,
) -> Vec<Option<Edit>> {
let file_directives = FileNoqaDirectives::extract(locator, comment_ranges, external, path);
let exemption = FileExemption::from(&file_directives);
let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
let comments = find_noqa_comments(
diagnostics,
locator,
&exemption,
&directives,
noqa_line_for,
suppressions,
);
build_noqa_edits_by_diagnostic(comments, locator, line_ending, None)
}
@@ -725,6 +735,7 @@ pub(crate) fn add_noqa(
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
suppressions: &Suppressions,
) -> Result<usize> {
let (count, output) = add_noqa_inner(
path,
@@ -735,6 +746,7 @@ pub(crate) fn add_noqa(
noqa_line_for,
line_ending,
reason,
suppressions,
);
fs::write(path, output)?;
@@ -751,6 +763,7 @@ fn add_noqa_inner(
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
suppressions: &Suppressions,
) -> (usize, String) {
let mut count = 0;
@@ -760,7 +773,14 @@ fn add_noqa_inner(
let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
let comments = find_noqa_comments(
diagnostics,
locator,
&exemption,
&directives,
noqa_line_for,
suppressions,
);
let edits = build_noqa_edits_by_line(comments, locator, line_ending, reason);
@@ -859,6 +879,7 @@ fn find_noqa_comments<'a>(
exemption: &'a FileExemption,
directives: &'a NoqaDirectives,
noqa_line_for: &NoqaMapping,
suppressions: &'a Suppressions,
) -> Vec<Option<NoqaComment<'a>>> {
// List of noqa comments, ordered to match up with `messages`
let mut comments_by_line: Vec<Option<NoqaComment<'a>>> = vec![];
@@ -875,6 +896,12 @@ fn find_noqa_comments<'a>(
continue;
}
// Apply ranged suppressions next
if suppressions.check_diagnostic(message) {
comments_by_line.push(None);
continue;
}
// Is the violation ignored by a `noqa` directive on the parent line?
if let Some(parent) = message.parent() {
if let Some(directive_line) =
@@ -1253,6 +1280,7 @@ mod tests {
use crate::rules::pycodestyle::rules::{AmbiguousVariableName, UselessSemicolon};
use crate::rules::pyflakes::rules::UnusedVariable;
use crate::rules::pyupgrade::rules::PrintfStringFormatting;
use crate::suppression::Suppressions;
use crate::{Edit, Violation};
use crate::{Locator, generate_noqa_edits};
@@ -2848,6 +2876,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 0);
assert_eq!(output, format!("{contents}"));
@@ -2872,6 +2901,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: F841\n");
@@ -2903,6 +2933,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: E741, F841\n");
@@ -2934,6 +2965,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 0);
assert_eq!(output, "x = 1 # noqa");
@@ -2956,6 +2988,7 @@ print(
let messages = [PrintfStringFormatting
.into_diagnostic(TextRange::new(12.into(), 79.into()), &source_file)];
let comment_ranges = CommentRanges::default();
let suppressions = Suppressions::default();
let edits = generate_noqa_edits(
path,
&messages,
@@ -2964,6 +2997,7 @@ print(
&[],
&noqa_line_for,
LineEnding::Lf,
&suppressions,
);
assert_eq!(
edits,
@@ -2987,6 +3021,7 @@ bar =
[UselessSemicolon.into_diagnostic(TextRange::new(4.into(), 5.into()), &source_file)];
let noqa_line_for = NoqaMapping::default();
let comment_ranges = CommentRanges::default();
let suppressions = Suppressions::default();
let edits = generate_noqa_edits(
path,
&messages,
@@ -2995,6 +3030,7 @@ bar =
&[],
&noqa_line_for,
LineEnding::Lf,
&suppressions,
);
assert_eq!(
edits,


@@ -9,6 +9,11 @@ use crate::settings::LinterSettings;
// Rule-specific behavior
// https://github.com/astral-sh/ruff/pull/21382
pub(crate) const fn is_custom_exception_checking_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/15541
pub(crate) const fn is_suspicious_function_reference_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
@@ -286,3 +291,8 @@ pub(crate) const fn is_s310_resolve_string_literal_bindings_enabled(
) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/21623
pub(crate) const fn is_range_suppressions_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}


@@ -281,12 +281,10 @@ impl Renamer {
) -> Option<Edit> {
let statement = binding.statement(semantic)?;
let (ast::Stmt::Assign(ast::StmtAssign { value, .. })
| ast::Stmt::AnnAssign(ast::StmtAnnAssign {
value: Some(value), ..
})) = statement
else {
return None;
let value = match statement {
ast::Stmt::Assign(node) => &node.value,
ast::Stmt::AnnAssign(node) => node.value.as_ref()?,
_ => return None,
};
let ast::ExprCall {


@@ -448,11 +448,10 @@ fn is_kwarg_parameter(semantic: &SemanticModel, name: &ExprName) -> bool {
return false;
};
let binding = semantic.binding(binding_id);
let Some(Stmt::FunctionDef(StmtFunctionDef { parameters, .. })) = binding.statement(semantic)
else {
let Some(Stmt::FunctionDef(node)) = binding.statement(semantic) else {
return false;
};
parameters
node.parameters
.kwarg
.as_deref()
.is_some_and(|kwarg| kwarg.name.as_str() == name.id.as_str())


@@ -91,8 +91,8 @@ pub(crate) fn fastapi_redundant_response_model(checker: &Checker, function_def:
response_model_arg,
&call.arguments,
Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::unsafe_edit)
});


@@ -2,7 +2,7 @@
//!
//! See: <https://bandit.readthedocs.io/en/latest/blacklists/blacklist_imports.html>
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::{self as ast, Stmt};
use ruff_python_ast::Stmt;
use ruff_text_size::Ranged;
use crate::Violation;
@@ -371,7 +371,8 @@ pub(crate) fn suspicious_imports(checker: &Checker, stmt: &Stmt) {
}
match stmt {
Stmt::Import(ast::StmtImport { names, .. }) => {
Stmt::Import(node) => {
let names = &node.names;
for name in names {
match name.name.as_str() {
"telnetlib" => {
@@ -421,8 +422,9 @@ pub(crate) fn suspicious_imports(checker: &Checker, stmt: &Stmt) {
}
}
}
Stmt::ImportFrom(ast::StmtImportFrom { module, names, .. }) => {
let Some(identifier) = module else { return };
Stmt::ImportFrom(node) => {
let Some(identifier) = &node.module else { return };
let names = &node.names;
match identifier.as_str() {
"telnetlib" => {
checker.report_diagnostic_if_enabled(


@@ -154,10 +154,12 @@ impl<'a> StatementVisitor<'a> for ReraiseVisitor<'a> {
return;
}
match stmt {
Stmt::Raise(ast::StmtRaise { exc, cause, .. }) => {
Stmt::Raise(node) => {
let exc = node.exc.as_deref();
let cause = node.cause.as_deref();
// except Exception [as <name>]:
// raise [<exc> [from <cause>]]
let reraised = match (self.name, exc.as_deref(), cause.as_deref()) {
let reraised = match (self.name, exc, cause) {
// `raise`
(_, None, None) => true,
// `raise SomeExc from <name>`


@@ -173,24 +173,21 @@ pub(crate) fn abstract_base_class(
// If an ABC declares an attribute by providing a type annotation
// but does not actually assign a value for that attribute,
// assume it is intended to be an "abstract attribute"
if matches!(
stmt,
Stmt::AnnAssign(ast::StmtAnnAssign { value: None, .. })
) {
has_abstract_method = true;
continue;
if let Stmt::AnnAssign(node) = stmt {
if node.value.is_none() {
has_abstract_method = true;
continue;
}
}
let Stmt::FunctionDef(ast::StmtFunctionDef {
decorator_list,
body,
name: method_name,
..
}) = stmt
else {
let Stmt::FunctionDef(node) = stmt else {
continue;
};
let decorator_list = &node.decorator_list;
let body = &node.body;
let method_name = &node.name;
let has_abstract_decorator = is_abstract(decorator_list, checker.semantic());
has_abstract_method |= has_abstract_decorator;


@@ -51,7 +51,7 @@ impl AlwaysFixableViolation for AssertFalse {
}
fn assertion_error(msg: Option<&Expr>) -> Stmt {
Stmt::Raise(ast::StmtRaise {
Stmt::Raise(Box::new(ast::StmtRaise {
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
exc: Some(Box::new(Expr::Call(ast::ExprCall {
@@ -75,7 +75,7 @@ fn assertion_error(msg: Option<&Expr>) -> Stmt {
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
}))),
cause: None,
})
}))
}
/// B011


@@ -114,14 +114,14 @@ pub(crate) fn class_as_data_structure(checker: &Checker, class_def: &ast::StmtCl
// assignment of a name to an attribute.
fn is_simple_assignment_to_attribute(stmt: &ast::Stmt) -> bool {
match stmt {
ast::Stmt::Assign(ast::StmtAssign { targets, value, .. }) => {
let [target] = targets.as_slice() else {
ast::Stmt::Assign(node) => {
let [target] = node.targets.as_slice() else {
return false;
};
target.is_attribute_expr() && value.is_name_expr()
target.is_attribute_expr() && node.value.is_name_expr()
}
ast::Stmt::AnnAssign(ast::StmtAnnAssign { target, value, .. }) => {
target.is_attribute_expr() && value.as_ref().is_some_and(|val| val.is_name_expr())
ast::Stmt::AnnAssign(node) => {
node.target.is_attribute_expr() && node.value.as_ref().is_some_and(|val| val.is_name_expr())
}
_ => false,
}


@@ -86,12 +86,10 @@ struct SuspiciousVariablesVisitor<'a> {
impl<'a> Visitor<'a> for SuspiciousVariablesVisitor<'a> {
fn visit_stmt(&mut self, stmt: &'a Stmt) {
match stmt {
Stmt::FunctionDef(ast::StmtFunctionDef {
parameters, body, ..
}) => {
Stmt::FunctionDef(node) => {
// Collect all loaded variable names.
let mut visitor = LoadedNamesVisitor::default();
visitor.visit_body(body);
visitor.visit_body(&node.body);
// Treat any non-arguments as "suspicious".
self.names
@@ -100,7 +98,7 @@ impl<'a> Visitor<'a> for SuspiciousVariablesVisitor<'a> {
return false;
}
if parameters.includes(&loaded.id) {
if node.parameters.includes(&loaded.id) {
return false;
}
@@ -242,18 +240,26 @@ impl<'a> Visitor<'a> for AssignedNamesVisitor<'a> {
}
match stmt {
Stmt::Assign(ast::StmtAssign { targets, .. }) => {
Stmt::Assign(node) => {
let mut visitor = NamesFromAssignmentsVisitor::default();
for expr in targets {
for expr in &node.targets {
visitor.visit_expr(expr);
}
self.names.extend(visitor.names);
}
Stmt::AugAssign(ast::StmtAugAssign { target, .. })
| Stmt::AnnAssign(ast::StmtAnnAssign { target, .. })
| Stmt::For(ast::StmtFor { target, .. }) => {
Stmt::AugAssign(node) => {
let mut visitor = NamesFromAssignmentsVisitor::default();
visitor.visit_expr(target);
visitor.visit_expr(&node.target);
self.names.extend(visitor.names);
}
Stmt::AnnAssign(node) => {
let mut visitor = NamesFromAssignmentsVisitor::default();
visitor.visit_expr(&node.target);
self.names.extend(visitor.names);
}
Stmt::For(node) => {
let mut visitor = NamesFromAssignmentsVisitor::default();
visitor.visit_expr(&node.target);
self.names.extend(visitor.names);
}
_ => {}


@@ -1,4 +1,4 @@
use ruff_python_ast::{self as ast, Stmt};
use ruff_python_ast::Stmt;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_text_size::Ranged;
@@ -71,15 +71,23 @@ fn walk_stmt(checker: &Checker, body: &[Stmt], f: fn(&Stmt) -> bool) {
);
}
match stmt {
Stmt::While(ast::StmtWhile { body, .. }) | Stmt::For(ast::StmtFor { body, .. }) => {
walk_stmt(checker, body, Stmt::is_return_stmt);
Stmt::While(node) => {
walk_stmt(checker, &node.body, Stmt::is_return_stmt);
}
Stmt::If(ast::StmtIf { body, .. })
| Stmt::Try(ast::StmtTry { body, .. })
| Stmt::With(ast::StmtWith { body, .. }) => {
walk_stmt(checker, body, f);
Stmt::For(node) => {
walk_stmt(checker, &node.body, Stmt::is_return_stmt);
}
Stmt::Match(ast::StmtMatch { cases, .. }) => {
Stmt::If(node) => {
walk_stmt(checker, &node.body, f);
}
Stmt::Try(node) => {
walk_stmt(checker, &node.body, f);
}
Stmt::With(node) => {
walk_stmt(checker, &node.body, f);
}
Stmt::Match(node) => {
let cases = &node.cases;
for case in cases {
walk_stmt(checker, &case.body, f);
}


@@ -5,8 +5,7 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::name::UnqualifiedName;
use ruff_python_ast::{
Expr, ExprAttribute, ExprCall, ExprSubscript, ExprTuple, Stmt, StmtAssign, StmtAugAssign,
StmtDelete, StmtFor, StmtIf,
self as ast, Expr, ExprAttribute, ExprCall, ExprSubscript, ExprTuple, Stmt, StmtFor,
visitor::{self, Visitor},
};
use ruff_text_size::TextRange;
@@ -242,43 +241,39 @@ impl<'a> Visitor<'a> for LoopMutationsVisitor<'a> {
fn visit_stmt(&mut self, stmt: &'a Stmt) {
match stmt {
// Ex) `del items[0]`
Stmt::Delete(StmtDelete {
range,
targets,
node_index: _,
}) => {
Stmt::Delete(node) => {
let ast::StmtDelete {
range,
targets,
node_index: _,
} = &**node;
self.handle_delete(*range, targets);
visitor::walk_stmt(self, stmt);
}
// Ex) `items[0] = 1`
Stmt::Assign(StmtAssign { range, targets, .. }) => {
self.handle_assign(*range, targets);
Stmt::Assign(node) => {
self.handle_assign(node.range, &node.targets);
visitor::walk_stmt(self, stmt);
}
// Ex) `items += [1]`
Stmt::AugAssign(StmtAugAssign { range, target, .. }) => {
self.handle_aug_assign(*range, target);
Stmt::AugAssign(node) => {
self.handle_aug_assign(node.range, &node.target);
visitor::walk_stmt(self, stmt);
}
// Ex) `if True: items.append(1)`
Stmt::If(StmtIf {
test,
body,
elif_else_clauses,
..
}) => {
Stmt::If(node) => {
// Handle the `if` branch.
self.branch += 1;
self.branches.push(self.branch);
self.visit_expr(test);
self.visit_body(body);
self.visit_expr(&node.test);
self.visit_body(&node.body);
self.branches.pop();
// Handle the `elif` and `else` branches.
for clause in elif_else_clauses {
for clause in &node.elif_else_clauses {
self.branch += 1;
self.branches.push(self.branch);
if let Some(test) = &clause.test {


@@ -74,12 +74,7 @@ pub(crate) fn map_without_explicit_strict(checker: &Checker, call: &ast::ExprCal
checker
.report_diagnostic(MapWithoutExplicitStrict, call.range())
.set_fix(Fix::applicable_edit(
add_argument(
"strict=False",
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
),
add_argument("strict=False", &call.arguments, checker.tokens()),
Applicability::Unsafe,
));
}


@@ -3,7 +3,7 @@ use std::fmt::Write;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::is_docstring_stmt;
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr, ParameterWithDefault};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::function_type::is_stub;
@@ -166,12 +166,7 @@ fn move_initialization(
return None;
}
let range = match parenthesized_range(
default.into(),
parameter.into(),
checker.comment_ranges(),
checker.source(),
) {
let range = match parenthesized_range(default.into(), parameter.into(), checker.tokens()) {
Some(range) => range,
None => default.range(),
};
@@ -194,13 +189,8 @@ fn move_initialization(
"{} = {}",
parameter.parameter.name(),
locator.slice(
parenthesized_range(
default.into(),
parameter.into(),
checker.comment_ranges(),
checker.source()
)
.unwrap_or(default.range())
parenthesized_range(default.into(), parameter.into(), checker.tokens())
.unwrap_or(default.range())
)
);
} else {


@@ -92,12 +92,7 @@ pub(crate) fn no_explicit_stacklevel(checker: &Checker, call: &ast::ExprCall) {
}
let mut diagnostic = checker.report_diagnostic(NoExplicitStacklevel, call.func.range());
let edit = add_argument(
"stacklevel=2",
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
);
let edit = add_argument("stacklevel=2", &call.arguments, checker.tokens());
diagnostic.set_fix(Fix::unsafe_edit(edit));
}


@@ -119,13 +119,11 @@ impl<'a> Visitor<'a> for GroupNameFinder<'a> {
return;
}
match stmt {
Stmt::For(ast::StmtFor {
target, iter, body, ..
}) => {
if self.name_matches(target) {
Stmt::For(node) => {
if self.name_matches(&node.target) {
self.overridden = true;
} else {
if self.name_matches(iter) {
if self.name_matches(&node.iter) {
self.increment_usage_count(1);
// This could happen when the group is being looped
// over multiple times:
@@ -136,36 +134,30 @@ impl<'a> Visitor<'a> for GroupNameFinder<'a> {
// for item in group:
// ...
if self.usage_count > 1 {
self.exprs.push(iter);
self.exprs.push(&node.iter);
}
}
self.nested = true;
visitor::walk_body(self, body);
visitor::walk_body(self, &node.body);
self.nested = false;
}
}
Stmt::While(ast::StmtWhile { body, .. }) => {
Stmt::While(node) => {
self.nested = true;
visitor::walk_body(self, body);
visitor::walk_body(self, &node.body);
self.nested = false;
}
Stmt::If(ast::StmtIf {
test,
body,
elif_else_clauses,
range: _,
node_index: _,
}) => {
Stmt::If(node) => {
// base if plus branches
let mut if_stack = Vec::with_capacity(1 + elif_else_clauses.len());
let mut if_stack = Vec::with_capacity(1 + node.elif_else_clauses.len());
// Initialize the vector with the count for the if branch.
if_stack.push(0);
self.counter_stack.push(if_stack);
self.visit_expr(test);
self.visit_body(body);
self.visit_expr(&node.test);
self.visit_body(&node.body);
for clause in elif_else_clauses {
for clause in &node.elif_else_clauses {
self.counter_stack.last_mut().unwrap().push(0);
self.visit_elif_else_clause(clause);
}
@@ -177,15 +169,10 @@ impl<'a> Visitor<'a> for GroupNameFinder<'a> {
self.increment_usage_count(max_count);
}
}
Stmt::Match(ast::StmtMatch {
subject,
cases,
range: _,
node_index: _,
}) => {
self.counter_stack.push(Vec::with_capacity(cases.len()));
self.visit_expr(subject);
for match_case in cases {
Stmt::Match(node) => {
self.counter_stack.push(Vec::with_capacity(node.cases.len()));
self.visit_expr(&node.subject);
for match_case in &node.cases {
self.counter_stack.last_mut().unwrap().push(0);
self.visit_match_case(match_case);
}
@@ -196,17 +183,17 @@ impl<'a> Visitor<'a> for GroupNameFinder<'a> {
self.increment_usage_count(max_count);
}
}
Stmt::Assign(ast::StmtAssign { targets, value, .. }) => {
if targets.iter().any(|target| self.name_matches(target)) {
Stmt::Assign(node) => {
if node.targets.iter().any(|target| self.name_matches(target)) {
self.overridden = true;
} else {
self.visit_expr(value);
self.visit_expr(&node.value);
}
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, value, .. }) => {
if self.name_matches(target) {
Stmt::AnnAssign(node) => {
if self.name_matches(&node.target) {
self.overridden = true;
} else if let Some(expr) = value {
} else if let Some(expr) = &node.value {
self.visit_expr(expr);
}
}


@@ -66,7 +66,7 @@ impl AlwaysFixableViolation for SetAttrWithConstant {
}
fn assignment(obj: &Expr, name: &str, value: &Expr, generator: Generator) -> String {
let stmt = Stmt::Assign(ast::StmtAssign {
let stmt = Stmt::Assign(Box::new(ast::StmtAssign {
targets: vec![Expr::Attribute(ast::ExprAttribute {
value: Box::new(obj.clone()),
attr: Identifier::new(name.to_string(), TextRange::default()),
@@ -77,7 +77,7 @@ fn assignment(obj: &Expr, name: &str, value: &Expr, generator: Generator) -> Str
value: Box::new(value.clone()),
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
});
}));
generator.stmt(&stmt)
}


@@ -70,12 +70,7 @@ pub(crate) fn zip_without_explicit_strict(checker: &Checker, call: &ast::ExprCal
checker
.report_diagnostic(ZipWithoutExplicitStrict, call.range())
.set_fix(Fix::applicable_edit(
add_argument(
"strict=False",
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
),
add_argument("strict=False", &call.arguments, checker.tokens()),
Applicability::Unsafe,
));
}


@@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20
--> B006_B008.py:242:20
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
help: Replace with `None`; initialize within function
236 |
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 |
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None):
240 | pass
241 |
242 |
242 + def nested_combo(a=None):
243 | pass
244 |
245 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27
--> B006_B008.py:279:27
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
278 | def mutable_annotations(
279 | a: list[int] | None = [],
| ^^
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
273 |
274 |
275 | def mutable_annotations(
276 |
277 |
278 | def mutable_annotations(
- a: list[int] | None = [],
276 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + a: list[int] | None = None,
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35
--> B006_B008.py:280:35
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
| ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
274 |
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 |
278 | def mutable_annotations(
279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
280 + b: Optional[Dict[int, int]] = None,
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62
--> B006_B008.py:281:62
|
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
|
help: Replace with `None`; initialize within function
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
281 | pass
281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
284 | pass
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80
--> B006_B008.py:282:80
|
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
280 | ):
281 | pass
283 | ):
284 | pass
|
help: Replace with `None`; initialize within function
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
280 | ):
281 | pass
282 |
282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
283 | ):
284 | pass
285 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:284:52
--> B006_B008.py:287:52
|
284 | def single_line_func_wrong(value: dict[str, str] = {}):
287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
285 | """Docstring"""
288 | """Docstring"""
|
help: Replace with `None`; initialize within function
281 | pass
282 |
283 |
- def single_line_func_wrong(value: dict[str, str] = {}):
284 + def single_line_func_wrong(value: dict[str, str] = None):
285 | """Docstring"""
284 | pass
285 |
286 |
287 |
- def single_line_func_wrong(value: dict[str, str] = {}):
287 + def single_line_func_wrong(value: dict[str, str] = None):
288 | """Docstring"""
289 |
290 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:288:52
--> B006_B008.py:291:52
|
288 | def single_line_func_wrong(value: dict[str, str] = {}):
291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
289 | """Docstring"""
290 | ...
292 | """Docstring"""
293 | ...
|
help: Replace with `None`; initialize within function
285 | """Docstring"""
286 |
287 |
288 | """Docstring"""
289 |
290 |
- def single_line_func_wrong(value: dict[str, str] = {}):
288 + def single_line_func_wrong(value: dict[str, str] = None):
289 | """Docstring"""
290 | ...
291 |
291 + def single_line_func_wrong(value: dict[str, str] = None):
292 | """Docstring"""
293 | ...
294 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:293:52
--> B006_B008.py:296:52
|
293 | def single_line_func_wrong(value: dict[str, str] = {}):
296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
294 | """Docstring"""; ...
297 | """Docstring"""; ...
|
help: Replace with `None`; initialize within function
290 | ...
291 |
292 |
- def single_line_func_wrong(value: dict[str, str] = {}):
293 + def single_line_func_wrong(value: dict[str, str] = None):
294 | """Docstring"""; ...
293 | ...
294 |
295 |
296 |
- def single_line_func_wrong(value: dict[str, str] = {}):
296 + def single_line_func_wrong(value: dict[str, str] = None):
297 | """Docstring"""; ...
298 |
299 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:297:52
--> B006_B008.py:300:52
|
297 | def single_line_func_wrong(value: dict[str, str] = {}):
300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
298 | """Docstring"""; \
299 | ...
301 | """Docstring"""; \
302 | ...
|
help: Replace with `None`; initialize within function
294 | """Docstring"""; ...
295 |
296 |
297 | """Docstring"""; ...
298 |
299 |
- def single_line_func_wrong(value: dict[str, str] = {}):
297 + def single_line_func_wrong(value: dict[str, str] = None):
298 | """Docstring"""; \
299 | ...
300 |
300 + def single_line_func_wrong(value: dict[str, str] = None):
301 | """Docstring"""; \
302 | ...
303 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:302:52
--> B006_B008.py:305:52
|
302 | def single_line_func_wrong(value: dict[str, str] = {
305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^
303 | | # This is a comment
304 | | }):
306 | | # This is a comment
307 | | }):
| |_^
305 | """Docstring"""
308 | """Docstring"""
|
help: Replace with `None`; initialize within function
299 | ...
300 |
301 |
302 | ...
303 |
304 |
- def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment
- }):
302 + def single_line_func_wrong(value: dict[str, str] = None):
303 | """Docstring"""
304 |
305 |
305 + def single_line_func_wrong(value: dict[str, str] = None):
306 | """Docstring"""
307 |
308 |
note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:308:52
--> B006_B008.py:311:52
|
308 | def single_line_func_wrong(value: dict[str, str] = {}) \
311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^
309 | : \
310 | """Docstring"""
312 | : \
313 | """Docstring"""
|
help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:313:52
--> B006_B008.py:316:52
|
313 | def single_line_func_wrong(value: dict[str, str] = {}):
316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
314 | """Docstring without newline"""
317 | """Docstring without newline"""
|
help: Replace with `None`; initialize within function
310 | """Docstring"""
311 |
312 |
313 | """Docstring"""
314 |
315 |
- def single_line_func_wrong(value: dict[str, str] = {}):
313 + def single_line_func_wrong(value: dict[str, str] = None):
314 | """Docstring without newline"""
316 + def single_line_func_wrong(value: dict[str, str] = None):
317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior
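
The B006 fix these snapshots verify ("Replace with `None`; initialize within function") addresses Python's one-time evaluation of default values. A minimal standalone sketch of the pitfall and the rewrite — the function names here are hypothetical, not taken from the fixture file:

```python
# Pitfall: the default dict is created once, when `def` executes,
# and is then shared by every call that omits the argument.
def append_wrong(key, value, cache={}):
    cache[key] = value
    return cache

# The rewrite B006 suggests: use None as a sentinel and build a fresh
# dict inside the function body on each call.
def append_right(key, value, cache=None):
    if cache is None:
        cache = {}
    cache[key] = value
    return cache

a = append_wrong("x", 1)
b = append_wrong("y", 2)
# a and b are the SAME dict: {'x': 1, 'y': 2}

c = append_right("x", 1)
d = append_right("y", 2)
# c == {'x': 1}, d == {'y': 2}
```

This also shows why the snapshots flag the fix as unsafe: existing callers may depend on the shared mutable default, so swapping it for `None` can change runtime behavior.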


@@ -53,39 +53,39 @@ B008 Do not perform function call in argument defaults; instead, perform the cal
|
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:239:31
--> B006_B008.py:242:31
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
B008 Do not perform function call `map` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:245:22
--> B006_B008.py:248:22
|
243 | # Don't flag nested B006 since we can't guarantee that
244 | # it isn't made mutable by the outer operation.
245 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])):
246 | # Don't flag nested B006 since we can't guarantee that
247 | # it isn't made mutable by the outer operation.
248 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
246 | pass
249 | pass
|
B008 Do not perform function call `random.randint` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:19
--> B006_B008.py:253:19
|
249 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
252 | # B008-ception.
253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
251 | pass
254 | pass
|
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:37
--> B006_B008.py:253:37
|
249 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
252 | # B008-ception.
253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^
251 | pass
254 | pass
|
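
The B008 diagnostics above ("perform the call within the function") target the same definition-time evaluation, but for call expressions such as `dt.datetime.now()`. A minimal illustration with hypothetical function names:

```python
import datetime as dt

# Pitfall: dt.datetime.now() runs exactly once, when `def` executes,
# so every call that omits `when` sees the same frozen timestamp.
def log_wrong(message, when=dt.datetime.now()):
    return (when, message)

# The B008-suggested rewrite: perform the call inside the function,
# so each call gets a fresh timestamp.
def log_right(message, when=None):
    if when is None:
        when = dt.datetime.now()
    return (when, message)

first = log_wrong("a")
second = log_wrong("b")
# first[0] is second[0]: the default timestamp never advances
```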


@@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20
--> B006_B008.py:242:20
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
help: Replace with `None`; initialize within function
236 |
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 |
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None):
240 | pass
241 |
242 |
242 + def nested_combo(a=None):
243 | pass
244 |
245 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27
--> B006_B008.py:279:27
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
278 | def mutable_annotations(
279 | a: list[int] | None = [],
| ^^
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
273 |
274 |
275 | def mutable_annotations(
276 |
277 |
278 | def mutable_annotations(
- a: list[int] | None = [],
276 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + a: list[int] | None = None,
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35
--> B006_B008.py:280:35
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
| ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
274 |
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 |
278 | def mutable_annotations(
279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
280 + b: Optional[Dict[int, int]] = None,
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62
--> B006_B008.py:281:62
|
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
|
help: Replace with `None`; initialize within function
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
281 | pass
281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
284 | pass
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80
--> B006_B008.py:282:80
|
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
280 | ):
281 | pass
283 | ):
284 | pass
|
help: Replace with `None`; initialize within function
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
280 | ):
281 | pass
282 |
282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
283 | ):
284 | pass
285 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:284:52
--> B006_B008.py:287:52
|
284 | def single_line_func_wrong(value: dict[str, str] = {}):
287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
285 | """Docstring"""
288 | """Docstring"""
|
help: Replace with `None`; initialize within function
281 | pass
282 |
283 |
- def single_line_func_wrong(value: dict[str, str] = {}):
284 + def single_line_func_wrong(value: dict[str, str] = None):
285 | """Docstring"""
284 | pass
285 |
286 |
287 |
- def single_line_func_wrong(value: dict[str, str] = {}):
287 + def single_line_func_wrong(value: dict[str, str] = None):
288 | """Docstring"""
289 |
290 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:288:52
--> B006_B008.py:291:52
|
288 | def single_line_func_wrong(value: dict[str, str] = {}):
291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
289 | """Docstring"""
290 | ...
292 | """Docstring"""
293 | ...
|
help: Replace with `None`; initialize within function
285 | """Docstring"""
286 |
287 |
288 | """Docstring"""
289 |
290 |
- def single_line_func_wrong(value: dict[str, str] = {}):
288 + def single_line_func_wrong(value: dict[str, str] = None):
289 | """Docstring"""
290 | ...
291 |
291 + def single_line_func_wrong(value: dict[str, str] = None):
292 | """Docstring"""
293 | ...
294 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:293:52
--> B006_B008.py:296:52
|
293 | def single_line_func_wrong(value: dict[str, str] = {}):
296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
294 | """Docstring"""; ...
297 | """Docstring"""; ...
|
help: Replace with `None`; initialize within function
290 | ...
291 |
292 |
- def single_line_func_wrong(value: dict[str, str] = {}):
293 + def single_line_func_wrong(value: dict[str, str] = None):
294 | """Docstring"""; ...
293 | ...
294 |
295 |
296 |
- def single_line_func_wrong(value: dict[str, str] = {}):
296 + def single_line_func_wrong(value: dict[str, str] = None):
297 | """Docstring"""; ...
298 |
299 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:297:52
--> B006_B008.py:300:52
|
297 | def single_line_func_wrong(value: dict[str, str] = {}):
300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
298 | """Docstring"""; \
299 | ...
301 | """Docstring"""; \
302 | ...
|
help: Replace with `None`; initialize within function
294 | """Docstring"""; ...
295 |
296 |
297 | """Docstring"""; ...
298 |
299 |
- def single_line_func_wrong(value: dict[str, str] = {}):
297 + def single_line_func_wrong(value: dict[str, str] = None):
298 | """Docstring"""; \
299 | ...
300 |
300 + def single_line_func_wrong(value: dict[str, str] = None):
301 | """Docstring"""; \
302 | ...
303 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:302:52
--> B006_B008.py:305:52
|
302 | def single_line_func_wrong(value: dict[str, str] = {
305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^
303 | | # This is a comment
304 | | }):
306 | | # This is a comment
307 | | }):
| |_^
305 | """Docstring"""
308 | """Docstring"""
|
help: Replace with `None`; initialize within function
299 | ...
300 |
301 |
302 | ...
303 |
304 |
- def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment
- }):
302 + def single_line_func_wrong(value: dict[str, str] = None):
303 | """Docstring"""
304 |
305 |
305 + def single_line_func_wrong(value: dict[str, str] = None):
306 | """Docstring"""
307 |
308 |
note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:308:52
--> B006_B008.py:311:52
|
308 | def single_line_func_wrong(value: dict[str, str] = {}) \
311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^
309 | : \
310 | """Docstring"""
312 | : \
313 | """Docstring"""
|
help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:313:52
--> B006_B008.py:316:52
|
313 | def single_line_func_wrong(value: dict[str, str] = {}):
316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
314 | """Docstring without newline"""
317 | """Docstring without newline"""
|
help: Replace with `None`; initialize within function
310 | """Docstring"""
311 |
312 |
313 | """Docstring"""
314 |
315 |
- def single_line_func_wrong(value: dict[str, str] = {}):
313 + def single_line_func_wrong(value: dict[str, str] = None):
314 | """Docstring without newline"""
316 + def single_line_func_wrong(value: dict[str, str] = None):
317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior


@@ -2,8 +2,8 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;
@@ -142,13 +142,9 @@ pub(crate) fn unnecessary_generator_list(checker: &Checker, call: &ast::ExprCall
if *parenthesized {
// The generator's range will include the innermost parentheses, but it could be
// surrounded by additional parentheses.
let range = parenthesized_range(
argument.into(),
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(argument.range());
let range =
parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
.unwrap_or(argument.range());
// The generator always parenthesizes the expression; trim the parentheses.
let generator = checker.generator().expr(argument);


@@ -2,8 +2,8 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;
@@ -147,13 +147,9 @@ pub(crate) fn unnecessary_generator_set(checker: &Checker, call: &ast::ExprCall)
if *parenthesized {
// The generator's range will include the innermost parentheses, but it could be
// surrounded by additional parentheses.
let range = parenthesized_range(
argument.into(),
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(argument.range());
let range =
parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
.unwrap_or(argument.range());
// The generator always parenthesizes the expression; trim the parentheses.
let generator = checker.generator().expr(argument);


@@ -1,7 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;
@@ -89,13 +89,9 @@ pub(crate) fn unnecessary_list_comprehension_set(checker: &Checker, call: &ast::
// If the list comprehension is parenthesized, remove the parentheses in addition to
// removing the brackets.
let replacement_range = parenthesized_range(
argument.into(),
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or_else(|| argument.range());
let replacement_range =
parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
.unwrap_or_else(|| argument.range());
let span = argument.range().add_start(one).sub_end(one);
let replacement =


@@ -59,16 +59,20 @@ pub(crate) fn all_with_model_form(checker: &Checker, class_def: &ast::StmtClassD
}
for element in &class_def.body {
let Stmt::ClassDef(ast::StmtClassDef { name, body, .. }) = element else {
let Stmt::ClassDef(class_def_inner) = element else {
continue;
};
let name = &class_def_inner.name;
let body = &class_def_inner.body;
if name != "Meta" {
continue;
}
for element in body {
let Stmt::Assign(ast::StmtAssign { targets, value, .. }) = element else {
let Stmt::Assign(assign) = element else {
continue;
};
let targets = &assign.targets;
let value = &assign.value;
for target in targets {
let Expr::Name(ast::ExprName { id, .. }) = target else {
continue;


@@ -57,16 +57,19 @@ pub(crate) fn exclude_with_model_form(checker: &Checker, class_def: &ast::StmtCl
}
for element in &class_def.body {
let Stmt::ClassDef(ast::StmtClassDef { name, body, .. }) = element else {
let Stmt::ClassDef(class_def_inner) = element else {
continue;
};
let name = &class_def_inner.name;
let body = &class_def_inner.body;
if name != "Meta" {
continue;
}
for element in body {
let Stmt::Assign(ast::StmtAssign { targets, .. }) = element else {
let Stmt::Assign(assign) = element else {
continue;
};
let targets = &assign.targets;
for target in targets {
let Expr::Name(ast::ExprName { id, .. }) = target else {
continue;


@@ -72,7 +72,7 @@ pub(crate) fn model_without_dunder_str(checker: &Checker, class_def: &ast::StmtC
fn has_dunder_method(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
analyze::class::any_super_class(class_def, semantic, &|class_def| {
class_def.body.iter().any(|val| match val {
Stmt::FunctionDef(ast::StmtFunctionDef { name, .. }) => name == "__str__",
Stmt::FunctionDef(node) => node.name.as_str() == "__str__",
_ => false,
})
})
@@ -90,24 +90,25 @@ fn is_non_abstract_model(class_def: &ast::StmtClassDef, semantic: &SemanticModel
/// Check if class is abstract, in terms of Django model inheritance.
fn is_model_abstract(class_def: &ast::StmtClassDef) -> bool {
for element in &class_def.body {
let Stmt::ClassDef(ast::StmtClassDef { name, body, .. }) = element else {
let Stmt::ClassDef(node) = element else {
continue;
};
if name != "Meta" {
if node.name.as_str() != "Meta" {
continue;
}
for element in body {
for element in &node.body {
match element {
Stmt::Assign(ast::StmtAssign { targets, value, .. }) => {
if targets
Stmt::Assign(assign) => {
if assign
.targets
.iter()
.any(|target| is_abstract_true_assignment(target, Some(value)))
.any(|target| is_abstract_true_assignment(target, Some(&assign.value)))
{
return true;
}
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, value, .. }) => {
if is_abstract_true_assignment(target, value.as_deref()) {
Stmt::AnnAssign(ann_assign) => {
if is_abstract_true_assignment(&ann_assign.target, ann_assign.value.as_deref()) {
return true;
}
}


@@ -1,4 +1,4 @@
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_ast::{Expr, Stmt};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::is_const_true;
@@ -62,10 +62,13 @@ pub(crate) fn nullable_model_string_field(checker: &Checker, body: &[Stmt]) {
for statement in body {
let value = match statement {
Stmt::Assign(ast::StmtAssign { value, .. }) => value,
Stmt::AnnAssign(ast::StmtAnnAssign {
value: Some(value), ..
}) => value,
Stmt::Assign(assign) => &assign.value,
Stmt::AnnAssign(ann_assign) => {
match &ann_assign.value {
Some(value) => value,
None => continue,
}
}
_ => continue,
};


@@ -153,13 +153,13 @@ impl fmt::Display for ContentType {
fn get_element_type(element: &Stmt, semantic: &SemanticModel) -> Option<ContentType> {
match element {
Stmt::Assign(ast::StmtAssign { targets, value, .. }) => {
if let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() {
Stmt::Assign(node) => {
if let Expr::Call(ast::ExprCall { func, .. }) = node.value.as_ref() {
if helpers::is_model_field(func, semantic) {
return Some(ContentType::FieldDeclaration);
}
}
let expr = targets.first()?;
let expr = node.targets.first()?;
let Expr::Name(ast::ExprName { id, .. }) = expr else {
return None;
};
@@ -169,14 +169,14 @@ fn get_element_type(element: &Stmt, semantic: &SemanticModel) -> Option<ContentT
None
}
}
Stmt::ClassDef(ast::StmtClassDef { name, .. }) => {
if name == "Meta" {
Stmt::ClassDef(node) => {
if node.name.as_str() == "Meta" {
Some(ContentType::MetaClass)
} else {
None
}
}
Stmt::FunctionDef(ast::StmtFunctionDef { name, .. }) => match name.as_str() {
Stmt::FunctionDef(node) => match node.name.as_str() {
name if is_dunder(name) => Some(ContentType::MagicMethod),
"save" => Some(ContentType::SaveMethod),
"get_absolute_url" => Some(ContentType::GetAbsoluteUrlMethod),


@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr, Operator};
use ruff_python_trivia::is_python_whitespace;
use ruff_source_file::LineRanges;
@@ -88,13 +88,7 @@ pub(crate) fn explicit(checker: &Checker, expr: &Expr) {
checker.report_diagnostic(ExplicitStringConcatenation, expr.range());
let is_parenthesized = |expr: &Expr| {
parenthesized_range(
expr.into(),
bin_op.into(),
checker.comment_ranges(),
checker.source(),
)
.is_some()
parenthesized_range(expr.into(), bin_op.into(), checker.tokens()).is_some()
};
// If either `left` or `right` is parenthesized, generating
// a fix would be too involved. Just report the diagnostic.


@@ -1,4 +1,4 @@
use ruff_python_ast::{Stmt, StmtTry};
use ruff_python_ast::Stmt;
use ruff_python_semantic::SemanticModel;
use ruff_text_size::{Ranged, TextSize};
@@ -8,9 +8,10 @@ pub(super) fn outside_handlers(offset: TextSize, semantic: &SemanticModel) -> bo
break;
}
let Stmt::Try(StmtTry { handlers, .. }) = stmt else {
let Stmt::Try(try_stmt) = stmt else {
continue;
};
let handlers = &try_stmt.handlers;
if handlers
.iter()


@@ -111,7 +111,6 @@ pub(crate) fn exc_info_outside_except_handler(checker: &Checker, call: &ExprCall
}
let arguments = &call.arguments;
let source = checker.source();
let mut diagnostic = checker.report_diagnostic(ExcInfoOutsideExceptHandler, exc_info.range);
@@ -120,8 +119,8 @@ pub(crate) fn exc_info_outside_except_handler(checker: &Checker, call: &ExprCall
exc_info,
arguments,
Parentheses::Preserve,
source,
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)?;
Ok(Fix::unsafe_edit(edit))
});


@@ -2,7 +2,7 @@ use rustc_hash::FxHashSet;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::any_over_expr;
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_ast::{Expr, Stmt};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -59,15 +59,15 @@ pub(crate) fn duplicate_class_field_definition(checker: &Checker, body: &[Stmt])
for stmt in body {
// Extract the property name from the assignment statement.
let target = match stmt {
Stmt::Assign(ast::StmtAssign { targets, .. }) => {
if let [Expr::Name(id)] = targets.as_slice() {
Stmt::Assign(assign_stmt) => {
if let [Expr::Name(id)] = assign_stmt.targets.as_slice() {
id
} else {
continue;
}
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, .. }) => {
if let Expr::Name(id) = target.as_ref() {
Stmt::AnnAssign(ann_assign_stmt) => {
if let Expr::Name(id) = ann_assign_stmt.target.as_ref() {
id
} else {
continue;
@@ -78,20 +78,20 @@ pub(crate) fn duplicate_class_field_definition(checker: &Checker, body: &[Stmt])
// If this is an unrolled augmented assignment (e.g., `x = x + 1`), skip it.
match stmt {
Stmt::Assign(ast::StmtAssign { value, .. }) => {
if any_over_expr(value.as_ref(), &|expr| {
Stmt::Assign(assign_stmt) => {
if any_over_expr(assign_stmt.value.as_ref(), &|expr| {
expr.as_name_expr().is_some_and(|name| name.id == target.id)
}) {
continue;
}
}
Stmt::AnnAssign(ast::StmtAnnAssign {
value: Some(value), ..
}) => {
if any_over_expr(value.as_ref(), &|expr| {
expr.as_name_expr().is_some_and(|name| name.id == target.id)
}) {
continue;
Stmt::AnnAssign(ann_assign_stmt) => {
if let Some(value) = &ann_assign_stmt.value {
if any_over_expr(value.as_ref(), &|expr| {
expr.as_name_expr().is_some_and(|name| name.id == target.id)
}) {
continue;
}
}
}
_ => continue,


@@ -58,11 +58,11 @@ impl Violation for NonUniqueEnums {
pub(crate) fn non_unique_enums(checker: &Checker, parent: &Stmt, body: &[Stmt]) {
let semantic = checker.semantic();
let Stmt::ClassDef(parent) = parent else {
let Stmt::ClassDef(class_def) = parent else {
return;
};
if !parent.bases().iter().any(|expr| {
if !class_def.bases().iter().any(|expr| {
semantic
.resolve_qualified_name(expr)
.is_some_and(|qualified_name| matches!(qualified_name.segments(), ["enum", "Enum"]))
@@ -72,9 +72,10 @@ pub(crate) fn non_unique_enums(checker: &Checker, parent: &Stmt, body: &[Stmt])
let mut seen_targets: FxHashSet<ComparableExpr> = FxHashSet::default();
for stmt in body {
let Stmt::Assign(ast::StmtAssign { value, .. }) = stmt else {
let Stmt::Assign(assign_stmt) = stmt else {
continue;
};
let value = &assign_stmt.value;
if is_call_to_enum_auto(semantic, value) {
continue;


@@ -2,7 +2,7 @@ use itertools::Itertools;
use rustc_hash::{FxBuildHasher, FxHashSet};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_stdlib::identifiers::is_identifier;
use ruff_text_size::Ranged;
@@ -129,8 +129,8 @@ pub(crate) fn unnecessary_dict_kwargs(checker: &Checker, call: &ast::ExprCall) {
keyword,
&call.arguments,
Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::safe_edit)
});
@@ -158,8 +158,7 @@ pub(crate) fn unnecessary_dict_kwargs(checker: &Checker, call: &ast::ExprCall) {
parenthesized_range(
value.into(),
dict.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens()
)
.unwrap_or(value.range())
)


@@ -73,11 +73,11 @@ pub(crate) fn unnecessary_range_start(checker: &Checker, call: &ast::ExprCall) {
let mut diagnostic = checker.report_diagnostic(UnnecessaryRangeStart, start.range());
diagnostic.try_set_fix(|| {
remove_argument(
&start,
start,
&call.arguments,
Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::safe_edit)
});


@@ -160,20 +160,16 @@ fn generate_fix(
) -> anyhow::Result<Fix> {
let locator = checker.locator();
let source = locator.contents();
let tokens = checker.tokens();
let deletion = remove_argument(
generic_base,
arguments,
Parentheses::Preserve,
source,
checker.comment_ranges(),
tokens,
)?;
let insertion = add_argument(
locator.slice(generic_base),
arguments,
checker.comment_ranges(),
source,
);
let insertion = add_argument(locator.slice(generic_base), arguments, tokens);
Ok(Fix::unsafe_edits(deletion, [insertion]))
}


@@ -5,7 +5,7 @@ use ruff_python_ast::{
helpers::{pep_604_union, typing_optional},
name::Name,
operator_precedence::OperatorPrecedence,
parenthesize::parenthesized_range,
token::{Tokens, parenthesized_range},
};
use ruff_python_semantic::analyze::typing::{traverse_literal, traverse_union};
use ruff_text_size::{Ranged, TextRange};
@@ -243,16 +243,12 @@ fn create_fix(
let union_expr = pep_604_union(&[new_literal_expr, none_expr]);
// Check if we need parentheses to preserve operator precedence
let content = if needs_parentheses_for_precedence(
semantic,
literal_expr,
checker.comment_ranges(),
checker.source(),
) {
format!("({})", checker.generator().expr(&union_expr))
} else {
checker.generator().expr(&union_expr)
};
let content =
if needs_parentheses_for_precedence(semantic, literal_expr, checker.tokens()) {
format!("({})", checker.generator().expr(&union_expr))
} else {
checker.generator().expr(&union_expr)
};
let union_edit = Edit::range_replacement(content, literal_expr.range());
Fix::applicable_edit(union_edit, applicability)
@@ -278,8 +274,7 @@ enum UnionKind {
fn needs_parentheses_for_precedence(
semantic: &ruff_python_semantic::SemanticModel,
literal_expr: &Expr,
comment_ranges: &ruff_python_trivia::CommentRanges,
source: &str,
tokens: &Tokens,
) -> bool {
// Get the parent expression to check if we're in a context that needs parentheses
let Some(parent_expr) = semantic.current_expression_parent() else {
@@ -287,14 +282,7 @@ fn needs_parentheses_for_precedence(
};
// Check if the literal expression is already parenthesized
if parenthesized_range(
literal_expr.into(),
parent_expr.into(),
comment_ranges,
source,
)
.is_some()
{
if parenthesized_range(literal_expr.into(), parent_expr.into(), tokens).is_some() {
return false; // Already parenthesized, don't add more
}


@@ -1,4 +1,3 @@
use ruff_python_ast as ast;
use ruff_python_ast::Stmt;
use ruff_macros::{ViolationMetadata, derive_message_formats};
@@ -44,17 +43,15 @@ impl AlwaysFixableViolation for StrOrReprDefinedInStub {
/// PYI029
pub(crate) fn str_or_repr_defined_in_stub(checker: &Checker, stmt: &Stmt) {
let Stmt::FunctionDef(ast::StmtFunctionDef {
name,
decorator_list,
returns,
parameters,
..
}) = stmt
else {
let Stmt::FunctionDef(func_def) = stmt else {
return;
};
let name = &func_def.name;
let decorator_list = &func_def.decorator_list;
let returns = &func_def.returns;
let parameters = &func_def.parameters;
let Some(returns) = returns else {
return;
};

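The rewrite above trades destructuring fields inside the `let … else` pattern for binding the whole node and reading fields afterwards, the shape a boxed variant payload (per the "Add box to stmt" commit in this branch) calls for. A self-contained sketch under hypothetical types, where `FuncDef` stands in for `StmtFunctionDef`:

```rust
// Hypothetical AST: the variant payload is boxed, so a match binds the
// Box and fields are reached through deref rather than destructured
// directly in the pattern.
struct FuncDef {
    name: String,
    returns: Option<String>,
}

enum Stmt {
    FunctionDef(Box<FuncDef>),
    Pass,
}

/// Returns the function's return annotation, if the statement is a
/// function definition that has one.
fn return_annotation(stmt: &Stmt) -> Option<&str> {
    let Stmt::FunctionDef(func_def) = stmt else {
        return None;
    };
    let _ = &func_def.name; // fields accessed via auto-deref of the Box
    func_def.returns.as_deref()
}

fn main() {
    let stmt = Stmt::FunctionDef(Box::new(FuncDef {
        name: "f".into(),
        returns: Some("int".into()),
    }));
    assert_eq!(return_annotation(&stmt), Some("int"));
    assert_eq!(return_annotation(&Stmt::Pass), None);
}
```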

@@ -196,15 +196,14 @@ pub(crate) fn unused_private_type_var(checker: &Checker, scope: &Scope) {
let Some(source) = binding.source else {
continue;
};
let stmt @ Stmt::Assign(ast::StmtAssign { targets, value, .. }) =
checker.semantic().statement(source)
else {
let stmt = checker.semantic().statement(source);
let Stmt::Assign(assign) = stmt else {
continue;
};
let [Expr::Name(ast::ExprName { id, .. })] = &targets[..] else {
let [Expr::Name(ast::ExprName { id, .. })] = &assign.targets[..] else {
continue;
};
let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() else {
let Expr::Call(ast::ExprCall { func, .. }) = assign.value.as_ref() else {
continue;
};
@@ -317,18 +316,16 @@ pub(crate) fn unused_private_type_alias(checker: &Checker, scope: &Scope) {
fn extract_type_alias_name<'a>(stmt: &'a ast::Stmt, semantic: &SemanticModel) -> Option<&'a str> {
match stmt {
ast::Stmt::AnnAssign(ast::StmtAnnAssign {
target, annotation, ..
}) => {
let ast::ExprName { id, .. } = target.as_name_expr()?;
if semantic.match_typing_expr(annotation, "TypeAlias") {
ast::Stmt::AnnAssign(ann_assign) => {
let ast::ExprName { id, .. } = ann_assign.target.as_name_expr()?;
if semantic.match_typing_expr(&ann_assign.annotation, "TypeAlias") {
Some(id)
} else {
None
}
}
ast::Stmt::TypeAlias(ast::StmtTypeAlias { name, .. }) => {
let ast::ExprName { id, .. } = name.as_name_expr()?;
ast::Stmt::TypeAlias(type_alias) => {
let ast::ExprName { id, .. } = type_alias.name.as_name_expr()?;
Some(id)
}
_ => None,
@@ -388,9 +385,9 @@ fn extract_typeddict_name<'a>(stmt: &'a Stmt, semantic: &SemanticModel) -> Optio
// class Bar(typing.TypedDict, typing.Generic[T]):
// y: T
// ```
Stmt::ClassDef(class_def @ ast::StmtClassDef { name, .. }) => {
Stmt::ClassDef(class_def) => {
if class_def.bases().iter().any(is_typeddict) {
Some(name)
Some(&class_def.name)
} else {
None
}
@@ -402,12 +399,12 @@ fn extract_typeddict_name<'a>(stmt: &'a Stmt, semantic: &SemanticModel) -> Optio
// import typing
// Baz = typing.TypedDict("Baz", {"z": bytes})
// ```
Stmt::Assign(ast::StmtAssign { targets, value, .. }) => {
let [target] = targets.as_slice() else {
Stmt::Assign(assign) => {
let [target] = assign.targets.as_slice() else {
return None;
};
let ast::ExprName { id, .. } = target.as_name_expr()?;
let ast::ExprCall { func, .. } = value.as_call_expr()?;
let ast::ExprCall { func, .. } = assign.value.as_call_expr()?;
if is_typeddict(func) { Some(id) } else { None }
}
_ => None,


@@ -10,7 +10,7 @@ use libcst_native::{
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::Truthiness;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{
self as ast, AnyNodeRef, Arguments, BoolOp, ExceptHandler, Expr, Keyword, Stmt, UnaryOp,
@@ -303,8 +303,7 @@ pub(crate) fn unittest_assertion(
parenthesized_range(
expr.into(),
checker.semantic().current_statement().into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.unwrap_or(expr.range()),
)));
@@ -370,10 +369,10 @@ impl Violation for PytestUnittestRaisesAssertion {
/// PT027
pub(crate) fn unittest_raises_assertion_call(checker: &Checker, call: &ast::ExprCall) {
// Bindings in `with` statements are handled by `unittest_raises_assertion_bindings`.
if let Stmt::With(ast::StmtWith { items, .. }) = checker.semantic().current_statement() {
if let Stmt::With(with_stmt) = checker.semantic().current_statement() {
let call_ref = AnyNodeRef::from(call);
if items.iter().any(|item| {
if with_stmt.items.iter().any(|item| {
AnyNodeRef::from(&item.context_expr).ptr_eq(call_ref) && item.optional_vars.is_some()
}) {
return;
@@ -391,7 +390,11 @@ pub(crate) fn unittest_raises_assertion_binding(checker: &Checker, binding: &Bin
let semantic = checker.semantic();
let Some(Stmt::With(with)) = binding.statement(semantic) else {
let Some(stmt) = binding.statement(semantic) else {
return;
};
let Stmt::With(with) = stmt else {
return;
};


@@ -768,8 +768,8 @@ fn check_fixture_decorator(checker: &Checker, func_name: &str, decorator: &Decor
keyword,
arguments,
edits::Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::unsafe_edit)
});


@@ -2,10 +2,9 @@ use rustc_hash::{FxBuildHasher, FxHashMap};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Expr, ExprCall, ExprContext, StringLiteralFlags};
use ruff_python_codegen::Generator;
use ruff_python_trivia::CommentRanges;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::{Ranged, TextRange, TextSize};
@@ -322,18 +321,8 @@ fn elts_to_csv(elts: &[Expr], generator: Generator, flags: StringLiteralFlags) -
/// ```
///
/// This method assumes that the first argument is a string.
fn get_parametrize_name_range(
call: &ExprCall,
expr: &Expr,
comment_ranges: &CommentRanges,
source: &str,
) -> Option<TextRange> {
parenthesized_range(
expr.into(),
(&call.arguments).into(),
comment_ranges,
source,
)
fn get_parametrize_name_range(call: &ExprCall, expr: &Expr, tokens: &Tokens) -> Option<TextRange> {
parenthesized_range(expr.into(), (&call.arguments).into(), tokens)
}
/// PT006
@@ -349,13 +338,8 @@ fn check_names(checker: &Checker, call: &ExprCall, expr: &Expr, argvalues: &Expr
if names.len() > 1 {
match names_type {
types::ParametrizeNameType::Tuple => {
let name_range = get_parametrize_name_range(
call,
expr,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(expr.range());
let name_range = get_parametrize_name_range(call, expr, checker.tokens())
.unwrap_or(expr.range());
let mut diagnostic = checker.report_diagnostic(
PytestParametrizeNamesWrongType {
single_argument: false,
@@ -386,13 +370,8 @@ fn check_names(checker: &Checker, call: &ExprCall, expr: &Expr, argvalues: &Expr
)));
}
types::ParametrizeNameType::List => {
let name_range = get_parametrize_name_range(
call,
expr,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(expr.range());
let name_range = get_parametrize_name_range(call, expr, checker.tokens())
.unwrap_or(expr.range());
let mut diagnostic = checker.report_diagnostic(
PytestParametrizeNamesWrongType {
single_argument: false,


@@ -220,11 +220,11 @@ pub(crate) fn complex_raises(checker: &Checker, stmt: &Stmt, items: &[WithItem],
if raises_called {
let is_too_complex = if let [stmt] = body {
match stmt {
Stmt::With(ast::StmtWith { body, .. }) => is_non_trivial_with_body(body),
Stmt::With(with_stmt) => is_non_trivial_with_body(&with_stmt.body),
// Allow function and class definitions to test decorators.
Stmt::ClassDef(_) | Stmt::FunctionDef(_) => false,
// Allow empty `for` loops to test iterators.
Stmt::For(ast::StmtFor { body, .. }) => match &body[..] {
Stmt::For(for_stmt) => match &for_stmt.body[..] {
[Stmt::Pass(_)] => false,
[Stmt::Expr(ast::StmtExpr { value, .. })] => !value.is_ellipsis_literal_expr(),
_ => true,


@@ -162,12 +162,12 @@ impl TryFrom<&str> for UnittestAssert {
}
fn assert(expr: &Expr, msg: Option<&Expr>) -> Stmt {
Stmt::Assert(ast::StmtAssert {
Stmt::Assert(Box::new(ast::StmtAssert {
test: Box::new(expr.clone()),
msg: msg.map(|msg| Box::new(msg.clone())),
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
})
}))
}
fn compare(left: &Expr, cmp_op: CmpOp, right: &Expr) -> Expr {

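The `Box::new(ast::StmtAssert { … })` wrapper above reflects the boxed `Stmt::Assert` variant. Why boxing a large variant pays off can be sketched with hypothetical types (not ruff's real `StmtAssert`): an enum is at least as large as its biggest variant, so moving a big payload behind a pointer shrinks every value of the enum.

```rust
// Hypothetical payloads illustrating enum layout: the unboxed enum
// must reserve space for its largest variant inline, while the boxed
// enum only stores a pointer.
#[allow(dead_code)]
enum Big {
    A([u8; 128]), // large inline payload inflates every `Big`
    B(u8),
}

#[allow(dead_code)]
enum Boxed {
    A(Box<[u8; 128]>), // payload lives behind an allocation
    B(u8),
}

fn main() {
    // The boxed enum is much smaller than the unboxed one.
    assert!(std::mem::size_of::<Boxed>() < std::mem::size_of::<Big>());
}
```

The trade-off is an extra allocation and pointer chase when constructing or reading the boxed variant, in exchange for cheaper moves and smaller `Vec<Stmt>` storage.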

@@ -206,11 +206,11 @@ pub(crate) fn complex_warns(checker: &Checker, stmt: &Stmt, items: &[WithItem],
if warns_called {
let is_too_complex = if let [stmt] = body {
match stmt {
Stmt::With(ast::StmtWith { body, .. }) => is_non_trivial_with_body(body),
Stmt::With(with_stmt) => is_non_trivial_with_body(&with_stmt.body),
// Allow function and class definitions to test decorators.
Stmt::ClassDef(_) | Stmt::FunctionDef(_) => false,
// Allow empty `for` loops to test iterators.
Stmt::For(ast::StmtFor { body, .. }) => match &body[..] {
Stmt::For(for_stmt) => match &for_stmt.body[..] {
[Stmt::Pass(_)] => false,
[Stmt::Expr(ast::StmtExpr { value, .. })] => !value.is_ellipsis_literal_expr(),
_ => true,


@@ -448,12 +448,12 @@ fn is_noreturn_func(func: &Expr, semantic: &SemanticModel) -> bool {
return false;
};
let Stmt::FunctionDef(ast::StmtFunctionDef { returns, .. }) = semantic.statement(node_id)
let Stmt::FunctionDef(node) = semantic.statement(node_id)
else {
return false;
};
let Some(returns) = returns.as_ref() else {
let Some(returns) = node.returns.as_ref() else {
return false;
};
@@ -481,19 +481,16 @@ fn add_return_none(checker: &Checker, stmt: &Stmt, range: TextRange) {
fn has_implicit_return(checker: &Checker, stmt: &Stmt) -> bool {
match stmt {
Stmt::If(ast::StmtIf {
body,
elif_else_clauses,
..
}) => {
if body
Stmt::If(node) => {
if node
.body
.last()
.is_some_and(|last| has_implicit_return(checker, last))
{
return true;
}
if elif_else_clauses.iter().any(|clause| {
if node.elif_else_clauses.iter().any(|clause| {
clause
.body
.last()
@@ -504,25 +501,33 @@ fn has_implicit_return(checker: &Checker, stmt: &Stmt) -> bool {
// Check if we don't have an else clause
matches!(
elif_else_clauses.last(),
node.elif_else_clauses.last(),
None | Some(ast::ElifElseClause { test: Some(_), .. })
)
}
Stmt::Assert(ast::StmtAssert { test, .. }) if is_const_false(test) => false,
Stmt::While(ast::StmtWhile { test, .. }) if is_const_true(test) => false,
Stmt::For(ast::StmtFor { orelse, .. }) | Stmt::While(ast::StmtWhile { orelse, .. }) => {
if let Some(last_stmt) = orelse.last() {
Stmt::Assert(node) if is_const_false(&node.test) => false,
Stmt::While(node) if is_const_true(&node.test) => false,
Stmt::For(node) => {
if let Some(last_stmt) = node.orelse.last() {
has_implicit_return(checker, last_stmt)
} else {
true
}
}
Stmt::Match(ast::StmtMatch { cases, .. }) => cases.iter().any(|case| {
Stmt::While(node) => {
if let Some(last_stmt) = node.orelse.last() {
has_implicit_return(checker, last_stmt)
} else {
true
}
}
Stmt::Match(node) => node.cases.iter().any(|case| {
case.body
.last()
.is_some_and(|last| has_implicit_return(checker, last))
}),
Stmt::With(ast::StmtWith { body, .. }) => body
Stmt::With(node) => node
.body
.last()
.is_some_and(|last_stmt| has_implicit_return(checker, last_stmt)),
Stmt::Return(_) | Stmt::Raise(_) | Stmt::Try(_) => false,


@@ -62,11 +62,11 @@ impl<'semantic, 'data> ReturnVisitor<'semantic, 'data> {
impl<'a> Visitor<'a> for ReturnVisitor<'_, 'a> {
fn visit_stmt(&mut self, stmt: &'a Stmt) {
match stmt {
Stmt::ClassDef(ast::StmtClassDef { decorator_list, .. }) => {
Stmt::ClassDef(node) => {
// Visit the decorators, etc.
self.sibling = Some(stmt);
self.parents.push(stmt);
for decorator in decorator_list {
for decorator in &node.decorator_list {
visitor::walk_decorator(self, decorator);
}
self.parents.pop();
@@ -74,12 +74,15 @@ impl<'a> Visitor<'a> for ReturnVisitor<'_, 'a> {
// But don't recurse into the body.
return;
}
Stmt::FunctionDef(ast::StmtFunctionDef {
parameters,
decorator_list,
returns,
..
}) => {
Stmt::FunctionDef(node) => {
let ast::StmtFunctionDef {
parameters,
decorator_list,
returns,
range: _,
node_index: _,
..
} = &**node;
// Visit the decorators, etc.
self.sibling = Some(stmt);
self.parents.push(stmt);
@@ -95,24 +98,30 @@ impl<'a> Visitor<'a> for ReturnVisitor<'_, 'a> {
// But don't recurse into the body.
return;
}
Stmt::Global(ast::StmtGlobal {
names,
range: _,
node_index: _,
})
| Stmt::Nonlocal(ast::StmtNonlocal {
names,
range: _,
node_index: _,
}) => {
Stmt::Global(node) => {
let ast::StmtGlobal {
names,
range: _,
node_index: _,
} = &**node;
self.stack
.non_locals
.extend(names.iter().map(Identifier::as_str));
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, value, .. }) => {
Stmt::Nonlocal(node) => {
let ast::StmtNonlocal {
names,
range: _,
node_index: _,
} = &**node;
self.stack
.non_locals
.extend(names.iter().map(Identifier::as_str));
}
Stmt::AnnAssign(node) => {
// Ex) `x: int`
if value.is_none() {
if let Expr::Name(name) = target.as_ref() {
if node.value.is_none() {
if let Expr::Name(name) = node.target.as_ref() {
self.stack.annotations.insert(name.id.as_str());
}
}
@@ -140,11 +149,11 @@ impl<'a> Visitor<'a> for ReturnVisitor<'_, 'a> {
// x = f.read()
// return x
// ```
Stmt::With(with) => {
Stmt::With(with_node) => {
if let Some(stmt_assign) =
with.body.last().and_then(Stmt::as_assign_stmt)
with_node.body.last().and_then(Stmt::as_assign_stmt)
{
if !has_conditional_body(with, self.semantic) {
if !has_conditional_body(with_node, self.semantic) {
self.stack.assignment_return.push((
stmt_assign,
stmt_return,
@@ -159,11 +168,14 @@ impl<'a> Visitor<'a> for ReturnVisitor<'_, 'a> {
self.stack.returns.push(stmt_return);
}
Stmt::If(ast::StmtIf {
body,
elif_else_clauses,
..
}) => {
Stmt::If(node) => {
let ast::StmtIf {
body,
elif_else_clauses,
range: _,
node_index: _,
..
} = &**node;
if let Some(first) = elif_else_clauses.first() {
self.stack.elifs_elses.push((body, first));
}


@@ -10,7 +10,7 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::helpers::{Truthiness, contains_effect};
use ruff_python_ast::name::Name;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_codegen::Generator;
use ruff_python_semantic::SemanticModel;
@@ -800,14 +800,9 @@ fn is_short_circuit(
edit = Some(get_short_circuit_edit(
value,
TextRange::new(
parenthesized_range(
furthest.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(furthest.range())
.start(),
parenthesized_range(furthest.into(), expr.into(), checker.tokens())
.unwrap_or(furthest.range())
.start(),
expr.end(),
),
short_circuit_truthiness,
@@ -828,14 +823,9 @@ fn is_short_circuit(
edit = Some(get_short_circuit_edit(
next_value,
TextRange::new(
parenthesized_range(
furthest.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(furthest.range())
.start(),
parenthesized_range(furthest.into(), expr.into(), checker.tokens())
.unwrap_or(furthest.range())
.start(),
expr.end(),
),
short_circuit_truthiness,


@@ -4,7 +4,7 @@ use ruff_text_size::{Ranged, TextRange};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::{is_const_false, is_const_true};
use ruff_python_ast::name::Name;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use crate::checkers::ast::Checker;
use crate::{AlwaysFixableViolation, Edit, Fix, FixAvailability, Violation};
@@ -171,13 +171,8 @@ pub(crate) fn if_expr_with_true_false(
checker
.locator()
.slice(
parenthesized_range(
test.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(test.range()),
parenthesized_range(test.into(), expr.into(), checker.tokens())
.unwrap_or(test.range()),
)
.to_string(),
expr.range(),


@@ -140,10 +140,10 @@ fn is_dunder_method(name: &str) -> bool {
}
fn is_exception_check(stmt: &Stmt) -> bool {
let Stmt::If(ast::StmtIf { body, .. }) = stmt else {
let Stmt::If(node) = stmt else {
return false;
};
matches!(body.as_slice(), [Stmt::Raise(_)])
matches!(node.body.as_slice(), [Stmt::Raise(_)])
}
/// SIM201


@@ -68,18 +68,10 @@ impl Violation for MultipleWithStatements {
/// Returns a boolean indicating whether it's an async with statement, the items
/// and body.
fn next_with(body: &[Stmt]) -> Option<(bool, &[WithItem], &[Stmt])> {
let [
Stmt::With(ast::StmtWith {
is_async,
items,
body,
..
}),
] = body
else {
let [Stmt::With(node)] = body else {
return None;
};
Some((*is_async, items, body))
Some((node.is_async, &node.items, &node.body))
}
/// Check if `with_items` contains a single item which should not necessarily be
@@ -139,8 +131,8 @@ pub(crate) fn multiple_with_statements(
// with B(), C():
// print("hello")
// ```
if let Some(Stmt::With(ast::StmtWith { body, .. })) = with_parent {
if body.len() == 1 {
if let Some(Stmt::With(node)) = with_parent {
if node.body.len() == 1 {
return;
}
}


@@ -230,21 +230,13 @@ fn nested_if_body(stmt_if: &ast::StmtIf) -> Option<NestedIf<'_>> {
/// ...
/// ```
fn find_last_nested_if(body: &[Stmt]) -> Option<&Expr> {
let [
Stmt::If(ast::StmtIf {
test,
body: inner_body,
elif_else_clauses,
..
}),
] = body
else {
let [Stmt::If(node)] = body else {
return None;
};
if !elif_else_clauses.is_empty() {
if !node.elif_else_clauses.is_empty() {
return None;
}
find_last_nested_if(inner_body).or(Some(test))
find_last_nested_if(&node.body).or(Some(&node.test))
}
/// Returns `true` if an expression is an `if __name__ == "__main__":` check.


@@ -165,20 +165,18 @@ pub(crate) fn enumerate_for_loop(checker: &Checker, for_stmt: &ast::StmtFor) {
/// If the statement is an index increment statement (e.g., `i += 1`), return
/// the name of the index variable.
fn match_index_increment(stmt: &Stmt) -> Option<&ast::ExprName> {
let Stmt::AugAssign(ast::StmtAugAssign {
target,
op: Operator::Add,
value,
..
}) = stmt
else {
let Stmt::AugAssign(node) = stmt else {
return None;
};
let name = target.as_name_expr()?;
if !matches!(node.op, Operator::Add) {
return None;
}
let name = node.target.as_name_expr()?;
if matches!(
value.as_ref(),
node.value.as_ref(),
Expr::NumberLiteral(ast::ExprNumberLiteral {
value: Number::Int(Int::ONE),
..


@@ -98,26 +98,16 @@ pub(crate) fn if_else_block_instead_of_dict_get(checker: &Checker, stmt_if: &ast
let [else_body_stmt] = else_body.as_slice() else {
return;
};
let Stmt::Assign(ast::StmtAssign {
targets: body_var,
value: body_value,
..
}) = &body_stmt
else {
let Stmt::Assign(body_node) = &body_stmt else {
return;
};
let [body_var] = body_var.as_slice() else {
let [body_var] = body_node.targets.as_slice() else {
return;
};
let Stmt::Assign(ast::StmtAssign {
targets: orelse_var,
value: orelse_value,
..
}) = &else_body_stmt
else {
let Stmt::Assign(orelse_node) = &else_body_stmt else {
return;
};
let [orelse_var] = orelse_var.as_slice() else {
let [orelse_var] = orelse_node.targets.as_slice() else {
return;
};
@@ -143,8 +133,8 @@ pub(crate) fn if_else_block_instead_of_dict_get(checker: &Checker, stmt_if: &ast
}
let (expected_var, expected_value, default_var, default_value) = match ops[..] {
[CmpOp::In] => (body_var, body_value, orelse_var, orelse_value.as_ref()),
[CmpOp::NotIn] => (orelse_var, orelse_value, body_var, body_value.as_ref()),
[CmpOp::In] => (body_var, &body_node.value, orelse_var, orelse_node.value.as_ref()),
[CmpOp::NotIn] => (orelse_var, &orelse_node.value, body_var, body_node.value.as_ref()),
_ => {
return;
}


@@ -112,27 +112,13 @@ pub(crate) fn if_else_block_instead_of_if_exp(checker: &Checker, stmt_if: &ast::
else {
return;
};
let [
Stmt::Assign(ast::StmtAssign {
targets: body_targets,
value: body_value,
..
}),
] = body.as_slice()
else {
let [Stmt::Assign(body_node)] = body.as_slice() else {
return;
};
let [
Stmt::Assign(ast::StmtAssign {
targets: else_targets,
value: else_value,
..
}),
] = else_body.as_slice()
else {
let [Stmt::Assign(else_node)] = else_body.as_slice() else {
return;
};
let ([body_target], [else_target]) = (body_targets.as_slice(), else_targets.as_slice()) else {
let ([body_target], [else_target]) = (body_node.targets.as_slice(), else_node.targets.as_slice()) else {
return;
};
let Expr::Name(ast::ExprName { id: body_id, .. }) = body_target else {
@@ -148,13 +134,13 @@ pub(crate) fn if_else_block_instead_of_if_exp(checker: &Checker, stmt_if: &ast::
// Avoid suggesting ternary for `if (yield ...)`-style checks.
// TODO(charlie): Fix precedence handling for yields in generator.
if matches!(
body_value.as_ref(),
body_node.value.as_ref(),
Expr::Yield(_) | Expr::YieldFrom(_) | Expr::Await(_)
) {
return;
}
if matches!(
else_value.as_ref(),
else_node.value.as_ref(),
Expr::Yield(_) | Expr::YieldFrom(_) | Expr::Await(_)
) {
return;
@@ -190,20 +176,20 @@ pub(crate) fn if_else_block_instead_of_if_exp(checker: &Checker, stmt_if: &ast::
// - If `test == not body_value`, replace with `target_var = body_value and else_value`
// - If `not test == body_value`, replace with `target_var = body_value and else_value`
// - Otherwise, replace with `target_var = body_value if test else else_value`
let (contents, assignment_kind) = match (test, body_value) {
(test_node, body_node)
if ComparableExpr::from(test_node) == ComparableExpr::from(body_node)
let (contents, assignment_kind) = match (test, &body_node.value) {
(test_node, body_val_node)
if ComparableExpr::from(test_node) == ComparableExpr::from(body_val_node.as_ref())
&& !contains_effect(test_node, |id| checker.semantic().has_builtin_binding(id)) =>
{
let target_var = &body_target;
let binary = assignment_binary_or(target_var, body_value, else_value);
let binary = assignment_binary_or(target_var, &body_node.value, &else_node.value);
(checker.generator().stmt(&binary), AssignmentKind::Binary)
}
(test_node, body_node)
(test_node, body_val_node)
if (test_node.as_unary_op_expr().is_some_and(|op_expr| {
op_expr.op.is_not()
&& ComparableExpr::from(&op_expr.operand) == ComparableExpr::from(body_node)
}) || body_node.as_unary_op_expr().is_some_and(|op_expr| {
&& ComparableExpr::from(&op_expr.operand) == ComparableExpr::from(body_val_node.as_ref())
}) || body_val_node.as_ref().as_unary_op_expr().is_some_and(|op_expr| {
op_expr.op.is_not()
&& ComparableExpr::from(&op_expr.operand) == ComparableExpr::from(test_node)
})) && !contains_effect(test_node, |id| {
@@ -211,12 +197,12 @@ pub(crate) fn if_else_block_instead_of_if_exp(checker: &Checker, stmt_if: &ast::
}) =>
{
let target_var = &body_target;
let binary = assignment_binary_and(target_var, body_value, else_value);
let binary = assignment_binary_and(target_var, &body_node.value, &else_node.value);
(checker.generator().stmt(&binary), AssignmentKind::Binary)
}
_ => {
let target_var = &body_target;
let ternary = assignment_ternary(target_var, body_value, test, else_value);
let ternary = assignment_ternary(target_var, &body_node.value, test, &else_node.value);
(checker.generator().stmt(&ternary), AssignmentKind::Ternary)
}
};


@@ -4,10 +4,10 @@ use anyhow::Result;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableStmt;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::stmt_if::{IfElifBranch, if_elif_branches};
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_trivia::{CommentRanges, SimpleTokenKind, SimpleTokenizer};
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange};
@@ -99,7 +99,7 @@ pub(crate) fn if_with_same_arms(checker: &Checker, stmt_if: &ast::StmtIf) {
&current_branch,
following_branch,
checker.locator(),
checker.comment_ranges(),
checker.tokens(),
)
});
}
@@ -111,7 +111,7 @@ fn merge_branches(
current_branch: &IfElifBranch,
following_branch: &IfElifBranch,
locator: &Locator,
comment_ranges: &CommentRanges,
tokens: &ruff_python_ast::token::Tokens,
) -> Result<Fix> {
// Identify the colon (`:`) at the end of the current branch's test.
let Some(current_branch_colon) =
@@ -127,12 +127,9 @@ fn merge_branches(
);
// If the following test isn't parenthesized, consider parenthesizing it.
let following_branch_test = if let Some(range) = parenthesized_range(
following_branch.test.into(),
stmt_if.into(),
comment_ranges,
locator.contents(),
) {
let following_branch_test = if let Some(range) =
parenthesized_range(following_branch.test.into(), stmt_if.into(), tokens)
{
Cow::Borrowed(locator.slice(range))
} else if matches!(
following_branch.test,
@@ -153,24 +150,19 @@ fn merge_branches(
//
// For example, if the current test is `x if x else y`, we should parenthesize it to
// `(x if x else y) or ...`.
let parenthesize_edit = if matches!(
current_branch.test,
Expr::Lambda(_) | Expr::Named(_) | Expr::If(_)
) && parenthesized_range(
current_branch.test.into(),
stmt_if.into(),
comment_ranges,
locator.contents(),
)
.is_none()
{
Some(Edit::range_replacement(
format!("({})", locator.slice(current_branch.test)),
current_branch.test.range(),
))
} else {
None
};
let parenthesize_edit =
if matches!(
current_branch.test,
Expr::Lambda(_) | Expr::Named(_) | Expr::If(_)
) && parenthesized_range(current_branch.test.into(), stmt_if.into(), tokens).is_none()
{
Some(Edit::range_replacement(
format!("({})", locator.slice(current_branch.test)),
current_branch.test.range(),
))
} else {
None
};
Ok(Fix::safe_edits(
deletion_edit,


@@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Arguments, CmpOp, Comprehension, Expr};
use ruff_python_semantic::analyze::typing;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
@@ -90,20 +90,10 @@ fn key_in_dict(checker: &Checker, left: &Expr, right: &Expr, operator: CmpOp, pa
}
// Extract the exact range of the left and right expressions.
let left_range = parenthesized_range(
left.into(),
parent,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(left.range());
let right_range = parenthesized_range(
right.into(),
parent,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(right.range());
let left_range =
parenthesized_range(left.into(), parent, checker.tokens()).unwrap_or(left.range());
let right_range =
parenthesized_range(right.into(), parent, checker.tokens()).unwrap_or(right.range());
let mut diagnostic = checker.report_diagnostic(
InDictKeys {


@@ -92,15 +92,9 @@ impl Violation for NeedlessBool {
/// SIM103
pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
let Stmt::If(stmt_if) = stmt else { return };
let ast::StmtIf {
test: if_test,
body: if_body,
elif_else_clauses,
..
} = stmt_if;
// Extract an `if` or `elif` (that returns) followed by an else (that returns the same value)
let (if_test, if_body, else_body, range) = match elif_else_clauses.as_slice() {
let (if_test, if_body, else_body, range) = match stmt_if.elif_else_clauses.as_slice() {
// if-else case:
// ```python
// if x > 0:
@@ -115,8 +109,8 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
..
},
] => (
if_test.as_ref(),
if_body,
stmt_if.test.as_ref(),
stmt_if.body.as_slice(),
else_body.as_slice(),
stmt_if.range(),
),
@@ -143,7 +137,7 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
},
] => (
elif_test,
elif_body,
elif_body.as_slice(),
else_body.as_slice(),
TextRange::new(elif_range.start(), else_range.end()),
),
@@ -155,7 +149,7 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
// ```
[] => {
// Fetching the next sibling is expensive, so do some validation early.
if is_one_line_return_bool(if_body).is_none() {
if is_one_line_return_bool(&stmt_if.body).is_none() {
return;
}
@@ -175,8 +169,8 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
}
(
if_test.as_ref(),
if_body,
stmt_if.test.as_ref(),
stmt_if.body.as_slice(),
std::slice::from_ref(next_stmt),
TextRange::new(stmt_if.start(), next_stmt.end()),
)
@@ -231,7 +225,7 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
op: ast::UnaryOp::Not,
operand,
..
}) => Some((**operand).clone()),
}) => Some(operand.clone()),
Expr::Compare(ast::ExprCompare {
ops,
@@ -252,26 +246,26 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
unreachable!("Single comparison with multiple comparators");
};
Some(Expr::Compare(ast::ExprCompare {
Some(Box::new(Expr::Compare(ast::ExprCompare {
ops: Box::new([op.negate()]),
left: left.clone(),
comparators: Box::new([right.clone()]),
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
}))
})))
}
_ => Some(Expr::UnaryOp(ast::ExprUnaryOp {
_ => Some(Box::new(Expr::UnaryOp(ast::ExprUnaryOp {
op: ast::UnaryOp::Not,
operand: Box::new(if_test.clone()),
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
})),
}))),
}
} else if if_test.is_compare_expr() {
// If the condition is a comparison, we can replace it with the condition, since we
// know it's a boolean.
Some(if_test.clone())
Some(Box::new(if_test.clone()))
} else if checker.semantic().has_builtin_binding("bool") {
// Otherwise, we need to wrap the condition in a call to `bool`.
let func_node = ast::ExprName {
@@ -291,7 +285,7 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
};
Some(Expr::Call(call_node))
Some(Box::new(Expr::Call(call_node)))
} else {
None
}
@@ -300,7 +294,7 @@ pub(crate) fn needless_bool(checker: &Checker, stmt: &Stmt) {
// Generate the replacement `return` statement.
let replacement = condition.as_ref().map(|expr| {
Stmt::Return(ast::StmtReturn {
value: Some(Box::new(expr.clone())),
value: Some(expr.clone()),
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
})


@@ -68,8 +68,8 @@ fn match_async_exit_stack(semantic: &SemanticModel) -> bool {
return false;
}
for parent in semantic.current_statements() {
if let Stmt::With(ast::StmtWith { items, .. }) = parent {
for item in items {
if let Stmt::With(node) = parent {
for item in &node.items {
if let Expr::Call(ast::ExprCall { func, .. }) = &item.context_expr {
if semantic
.resolve_qualified_name(func)
@@ -102,8 +102,8 @@ fn match_exit_stack(semantic: &SemanticModel) -> bool {
return false;
}
for parent in semantic.current_statements() {
if let Stmt::With(ast::StmtWith { items, .. }) = parent {
for item in items {
if let Stmt::With(node) = parent {
for item in &node.items {
if let Expr::Call(ast::ExprCall { func, .. }) = &item.context_expr {
if semantic
.resolve_qualified_name(func)

View File

@@ -269,27 +269,25 @@ struct Terminal<'a> {
}
fn match_loop(stmt: &Stmt) -> Option<Loop<'_>> {
let Stmt::For(ast::StmtFor {
body, target, iter, ..
}) = stmt
else {
let Stmt::For(for_stmt) = stmt else {
return None;
};
let ast::StmtFor {
body, target, iter, ..
} = &**for_stmt;
// The loop itself should contain a single `if` statement, with a single `return` statement in
// the body.
let [
Stmt::If(ast::StmtIf {
body: nested_body,
test: nested_test,
elif_else_clauses: nested_elif_else_clauses,
range: _,
node_index: _,
}),
] = body.as_slice()
else {
let [Stmt::If(if_stmt)] = body.as_slice() else {
return None;
};
let ast::StmtIf {
body: nested_body,
test: nested_test,
elif_else_clauses: nested_elif_else_clauses,
range: _,
node_index: _,
} = &**if_stmt;
if !nested_elif_else_clauses.is_empty() {
return None;
}
@@ -326,9 +324,10 @@ fn match_loop(stmt: &Stmt) -> Option<Loop<'_>> {
/// return False
/// ```
fn match_else_return(stmt: &Stmt) -> Option<Terminal<'_>> {
let Stmt::For(ast::StmtFor { orelse, .. }) = stmt else {
let Stmt::For(for_stmt) = stmt else {
return None;
};
let ast::StmtFor { orelse, .. } = &**for_stmt;
// The `else` block has to contain a single `return True` or `return False`.
let [
@@ -366,9 +365,10 @@ fn match_else_return(stmt: &Stmt) -> Option<Terminal<'_>> {
/// return False
/// ```
fn match_sibling_return<'a>(stmt: &'a Stmt, sibling: &'a Stmt) -> Option<Terminal<'a>> {
let Stmt::For(ast::StmtFor { orelse, .. }) = stmt else {
let Stmt::For(for_stmt) = stmt else {
return None;
};
let ast::StmtFor { orelse, .. } = &**for_stmt;
// The loop itself shouldn't have an `else` block.
if !orelse.is_empty() {

View File

@@ -4,7 +4,8 @@ use ruff_python_ast::{self as ast, Expr, Stmt};
pub(super) fn has_slots(body: &[Stmt]) -> bool {
for stmt in body {
match stmt {
Stmt::Assign(ast::StmtAssign { targets, .. }) => {
Stmt::Assign(assign) => {
let targets = &assign.targets;
for target in targets {
if let Expr::Name(ast::ExprName { id, .. }) = target {
if id.as_str() == "__slots__" {
@@ -13,7 +14,8 @@ pub(super) fn has_slots(body: &[Stmt]) -> bool {
}
}
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, .. }) => {
Stmt::AnnAssign(ann_assign) => {
let target = &ann_assign.target;
if let Expr::Name(ast::ExprName { id, .. }) = target.as_ref() {
if id.as_str() == "__slots__" {
return true;

View File

@@ -92,9 +92,11 @@ impl<'a> BannedModuleImportPolicies<'a> {
pub(crate) fn new(stmt: &'a Stmt, checker: &Checker) -> Self {
match stmt {
Stmt::Import(import) => Self::Import(import),
Stmt::ImportFrom(import @ StmtImportFrom { module, level, .. }) => {
Stmt::ImportFrom(import) => {
let module = &import.module;
let level = import.level;
let module = resolve_imported_module_path(
*level,
level,
module.as_deref(),
checker.module.qualified_name(),
);

View File

@@ -91,9 +91,10 @@ fn fix_banned_relative_import(
return None;
}
let Stmt::ImportFrom(ast::StmtImportFrom { names, .. }) = stmt else {
let Stmt::ImportFrom(import_from) = stmt else {
panic!("Expected Stmt::ImportFrom");
};
let names = &import_from.names;
let node = ast::StmtImportFrom {
module: Some(Identifier::new(
module_path.to_string(),

View File

@@ -11,7 +11,7 @@ use crate::registry::Rule;
use crate::rules::flake8_type_checking::helpers::quote_type_expression;
use crate::{AlwaysFixableViolation, Edit, Fix, FixAvailability, Violation};
use ruff_python_ast::PythonVersion;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
/// ## What it does
/// Checks if [PEP 613] explicit type aliases contain references to
@@ -158,10 +158,10 @@ pub(crate) fn unquoted_type_alias(checker: &Checker, binding: &Binding) {
return;
}
let Some(Stmt::AnnAssign(ast::StmtAnnAssign {
value: Some(expr), ..
})) = binding.statement(checker.semantic())
else {
let Some(Stmt::AnnAssign(node)) = binding.statement(checker.semantic()) else {
return;
};
let Some(expr) = &node.value else {
return;
};
@@ -295,21 +295,20 @@ pub(crate) fn quoted_type_alias(
let range = annotation_expr.range();
let mut diagnostic = checker.report_diagnostic(QuotedTypeAlias, range);
let fix_string = annotation_expr.value.to_string();
let fix_string = if (fix_string.contains('\n') || fix_string.contains('\r'))
&& parenthesized_range(
// Check for parenthesis outside string ("""...""")
// Check for parentheses outside the string ("""...""")
annotation_expr.into(),
checker.semantic().current_statement().into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.source_tokens(),
)
.is_none()
&& parenthesized_range(
// Check for parenthesis inside string """(...)"""
// Check for parentheses inside the string """(...)"""
expr.into(),
annotation_expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.is_none()
{

View File

@@ -1,5 +1,5 @@
use ruff_python_ast as ast;
use ruff_python_ast::{Parameter, Parameters, Stmt, StmtExpr, StmtFunctionDef, StmtRaise};
use ruff_python_ast::{Parameter, Parameters, Stmt, StmtExpr, StmtFunctionDef};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_semantic::analyze::{function_type, visibility};
@@ -389,14 +389,20 @@ pub(crate) fn is_not_implemented_stub_with_variable(
_ => &function_def.body,
};
let [
Stmt::Assign(ast::StmtAssign { targets, value, .. }),
Stmt::Raise(StmtRaise {
exc: Some(exception),
..
}),
] = statements
else {
let [stmt1, stmt2] = statements else {
return false;
};
let Stmt::Assign(assign_node) = stmt1 else {
return false;
};
let targets = &assign_node.targets;
let value = &assign_node.value;
let Stmt::Raise(raise_node) = stmt2 else {
return false;
};
let Some(exception) = &raise_node.exc else {
return false;
};

View File

@@ -1,10 +1,9 @@
use std::ops::Range;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{Expr, ExprBinOp, ExprCall, Operator};
use ruff_python_semantic::SemanticModel;
use ruff_python_trivia::CommentRanges;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
@@ -89,11 +88,7 @@ pub(crate) fn path_constructor_current_directory(
let mut diagnostic = checker.report_diagnostic(PathConstructorCurrentDirectory, arg.range());
match parent_and_next_path_fragment_range(
checker.semantic(),
checker.comment_ranges(),
checker.source(),
) {
match parent_and_next_path_fragment_range(checker.semantic(), checker.tokens()) {
Some((parent_range, next_fragment_range)) => {
let next_fragment_expr = checker.locator().slice(next_fragment_range);
let call_expr = checker.locator().slice(call.range());
@@ -116,7 +111,7 @@ pub(crate) fn path_constructor_current_directory(
arguments,
Parentheses::Preserve,
checker.source(),
checker.comment_ranges(),
checker.tokens(),
)?;
Ok(Fix::applicable_edit(edit, applicability(call.range())))
}),
@@ -125,8 +120,7 @@ pub(crate) fn path_constructor_current_directory(
fn parent_and_next_path_fragment_range(
semantic: &SemanticModel,
comment_ranges: &CommentRanges,
source: &str,
tokens: &ruff_python_ast::token::Tokens,
) -> Option<(TextRange, TextRange)> {
let parent = semantic.current_expression_parent()?;
@@ -142,6 +136,6 @@ fn parent_and_next_path_fragment_range(
Some((
parent.range(),
parenthesized_range(right.into(), parent.into(), comment_ranges, source).unwrap_or(range),
parenthesized_range(right.into(), parent.into(), tokens).unwrap_or(range),
))
}

View File

@@ -23,11 +23,13 @@ pub(crate) fn annotate_imports<'a>(
.iter()
.map(|import| {
match import {
Stmt::Import(ast::StmtImport {
names,
range,
node_index: _,
}) => {
Stmt::Import(import_stmt) => {
let ast::StmtImport {
names,
range,
node_index: _,
} = &**import_stmt;
// Find comments above.
let mut atop = vec![];
while let Some(comment) =
@@ -58,13 +60,15 @@ pub(crate) fn annotate_imports<'a>(
inline,
}
}
Stmt::ImportFrom(ast::StmtImportFrom {
module,
names,
level,
range: _,
node_index: _,
}) => {
Stmt::ImportFrom(import_from_stmt) => {
let ast::StmtImportFrom {
module,
names,
level,
range: _,
node_index: _,
} = &**import_from_stmt;
// Find comments above.
let mut atop = vec![];
while let Some(comment) =

View File

@@ -183,87 +183,77 @@ impl<'a> StatementVisitor<'a> for BlockBuilder<'a> {
let prev_nested = self.nested;
self.nested = true;
match stmt {
Stmt::FunctionDef(ast::StmtFunctionDef { body, .. }) => {
for stmt in body {
Stmt::FunctionDef(node) => {
for stmt in &node.body {
self.visit_stmt(stmt);
}
self.finalize(None);
}
Stmt::ClassDef(ast::StmtClassDef { body, .. }) => {
for stmt in body {
Stmt::ClassDef(node) => {
for stmt in &node.body {
self.visit_stmt(stmt);
}
self.finalize(None);
}
Stmt::For(ast::StmtFor { body, orelse, .. }) => {
for stmt in body {
Stmt::For(node) => {
for stmt in &node.body {
self.visit_stmt(stmt);
}
self.finalize(None);
for stmt in orelse {
for stmt in &node.orelse {
self.visit_stmt(stmt);
}
self.finalize(None);
}
Stmt::While(ast::StmtWhile { body, orelse, .. }) => {
for stmt in body {
Stmt::While(node) => {
for stmt in &node.body {
self.visit_stmt(stmt);
}
self.finalize(None);
for stmt in orelse {
for stmt in &node.orelse {
self.visit_stmt(stmt);
}
self.finalize(None);
}
Stmt::If(ast::StmtIf {
body,
elif_else_clauses,
..
}) => {
for stmt in body {
Stmt::If(node) => {
for stmt in &node.body {
self.visit_stmt(stmt);
}
self.finalize(None);
for clause in elif_else_clauses {
for clause in &node.elif_else_clauses {
self.visit_elif_else_clause(clause);
}
}
Stmt::With(ast::StmtWith { body, .. }) => {
for stmt in body {
Stmt::With(node) => {
for stmt in &node.body {
self.visit_stmt(stmt);
}
self.finalize(None);
}
Stmt::Match(ast::StmtMatch { cases, .. }) => {
for match_case in cases {
Stmt::Match(node) => {
for match_case in &node.cases {
self.visit_match_case(match_case);
}
}
Stmt::Try(ast::StmtTry {
body,
handlers,
orelse,
finalbody,
..
}) => {
for except_handler in handlers {
Stmt::Try(node) => {
for except_handler in &node.handlers {
self.visit_except_handler(except_handler);
}
for stmt in body {
for stmt in &node.body {
self.visit_stmt(stmt);
}
self.finalize(None);
for stmt in orelse {
for stmt in &node.orelse {
self.visit_stmt(stmt);
}
self.finalize(None);
for stmt in finalbody {
for stmt in &node.finalbody {
self.visit_stmt(stmt);
}
self.finalize(None);

View File

@@ -59,30 +59,30 @@ impl AlwaysFixableViolation for MissingRequiredImport {
fn includes_import(stmt: &Stmt, target: &NameImport) -> bool {
match target {
NameImport::Import(target) => {
let Stmt::Import(ast::StmtImport {
let Stmt::Import(import_stmt) = &stmt else {
return false;
};
let ast::StmtImport {
names,
range: _,
node_index: _,
}) = &stmt
else {
return false;
};
} = &**import_stmt;
names.iter().any(|alias| {
alias.name == target.name.name
&& alias.asname.as_deref() == target.name.as_name.as_deref()
})
}
NameImport::ImportFrom(target) => {
let Stmt::ImportFrom(ast::StmtImportFrom {
let Stmt::ImportFrom(import_from_stmt) = &stmt else {
return false;
};
let ast::StmtImportFrom {
module,
names,
level,
range: _,
node_index: _,
}) = &stmt
else {
return false;
};
} = &**import_from_stmt;
module.as_deref() == target.module.as_deref()
&& *level == target.level
&& names.iter().any(|alias| {

View File

@@ -71,39 +71,35 @@ fn get_complexity_number(stmts: &[Stmt]) -> usize {
let mut complexity = 0;
for stmt in stmts {
match stmt {
Stmt::If(ast::StmtIf {
body,
elif_else_clauses,
..
}) => {
Stmt::If(if_stmt) => {
complexity += 1;
complexity += get_complexity_number(body);
for clause in elif_else_clauses {
complexity += get_complexity_number(&if_stmt.body);
for clause in &if_stmt.elif_else_clauses {
if clause.test.is_some() {
complexity += 1;
}
complexity += get_complexity_number(&clause.body);
}
}
Stmt::For(ast::StmtFor { body, orelse, .. }) => {
Stmt::For(for_stmt) => {
complexity += 1;
complexity += get_complexity_number(body);
complexity += get_complexity_number(orelse);
complexity += get_complexity_number(&for_stmt.body);
complexity += get_complexity_number(&for_stmt.orelse);
}
Stmt::With(ast::StmtWith { body, .. }) => {
complexity += get_complexity_number(body);
Stmt::With(with_stmt) => {
complexity += get_complexity_number(&with_stmt.body);
}
Stmt::While(ast::StmtWhile { body, orelse, .. }) => {
Stmt::While(while_stmt) => {
complexity += 1;
complexity += get_complexity_number(body);
complexity += get_complexity_number(orelse);
complexity += get_complexity_number(&while_stmt.body);
complexity += get_complexity_number(&while_stmt.orelse);
}
Stmt::Match(ast::StmtMatch { cases, .. }) => {
for case in cases {
Stmt::Match(match_stmt) => {
for case in &match_stmt.cases {
complexity += 1;
complexity += get_complexity_number(&case.body);
}
if let Some(last_case) = cases.last() {
if let Some(last_case) = match_stmt.cases.last() {
// The complexity of an irrefutable pattern is similar to an `else` block of an `if` statement.
//
// For example:
@@ -121,20 +117,14 @@ fn get_complexity_number(stmts: &[Stmt]) -> usize {
}
}
}
Stmt::Try(ast::StmtTry {
body,
handlers,
orelse,
finalbody,
..
}) => {
complexity += get_complexity_number(body);
if !orelse.is_empty() {
Stmt::Try(try_stmt) => {
complexity += get_complexity_number(&try_stmt.body);
if !try_stmt.orelse.is_empty() {
complexity += 1;
}
complexity += get_complexity_number(orelse);
complexity += get_complexity_number(finalbody);
for handler in handlers {
complexity += get_complexity_number(&try_stmt.orelse);
complexity += get_complexity_number(&try_stmt.finalbody);
for handler in &try_stmt.handlers {
complexity += 1;
let ExceptHandler::ExceptHandler(ast::ExceptHandlerExceptHandler {
body, ..
@@ -142,12 +132,12 @@ fn get_complexity_number(stmts: &[Stmt]) -> usize {
complexity += get_complexity_number(body);
}
}
Stmt::FunctionDef(ast::StmtFunctionDef { body, .. }) => {
Stmt::FunctionDef(func_def) => {
complexity += 1;
complexity += get_complexity_number(body);
complexity += get_complexity_number(&func_def.body);
}
Stmt::ClassDef(ast::StmtClassDef { body, .. }) => {
complexity += get_complexity_number(body);
Stmt::ClassDef(class_def) => {
complexity += get_complexity_number(&class_def.body);
}
_ => {}
}

View File

@@ -3,7 +3,7 @@ use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::visitor::{walk_expr, walk_stmt};
use ruff_python_ast::{Alias, Stmt, StmtImportFrom, statement_visitor};
use ruff_python_ast::{Alias, Stmt, statement_visitor};
use ruff_python_semantic::SemanticModel;
/// AST visitor that searches an AST tree for [`ast::StmtImportFrom`] nodes
@@ -28,7 +28,9 @@ impl StatementVisitor<'_> for ImportSearcher<'_> {
if self.found_import {
return;
}
if let Stmt::ImportFrom(StmtImportFrom { module, names, .. }) = stmt {
if let Stmt::ImportFrom(import_from) = stmt {
let module = &import_from.module;
let names = &import_from.names;
if module.as_ref().is_some_and(|module| module == self.module)
&& names.iter().any(|Alias { name, .. }| name == self.name)
{

View File

@@ -1,8 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::is_const_true;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Keyword, Stmt};
use ruff_python_trivia::CommentRanges;
use ruff_text_size::Ranged;
use crate::Locator;
@@ -91,7 +90,7 @@ pub(crate) fn inplace_argument(checker: &Checker, call: &ast::ExprCall) {
call,
keyword,
statement,
checker.comment_ranges(),
checker.tokens(),
checker.locator(),
) {
diagnostic.set_fix(fix);
@@ -111,21 +110,16 @@ fn convert_inplace_argument_to_assignment(
call: &ast::ExprCall,
keyword: &Keyword,
statement: &Stmt,
comment_ranges: &CommentRanges,
tokens: &Tokens,
locator: &Locator,
) -> Option<Fix> {
// Add the assignment.
let attr = call.func.as_attribute_expr()?;
let insert_assignment = Edit::insertion(
format!("{name} = ", name = locator.slice(attr.value.range())),
parenthesized_range(
call.into(),
statement.into(),
comment_ranges,
locator.contents(),
)
.unwrap_or(call.range())
.start(),
parenthesized_range(call.into(), statement.into(), tokens)
.unwrap_or(call.range())
.start(),
);
// Remove the `inplace` argument.
@@ -134,7 +128,7 @@ fn convert_inplace_argument_to_assignment(
&call.arguments,
Parentheses::Preserve,
locator.contents(),
comment_ranges,
tokens,
)
.ok()?;

View File

@@ -25,10 +25,10 @@ pub(super) fn is_acronym(name: &str, asname: &str) -> bool {
/// Returns `true` if the statement is an assignment to a named tuple.
pub(super) fn is_named_tuple_assignment(stmt: &Stmt, semantic: &SemanticModel) -> bool {
let Stmt::Assign(ast::StmtAssign { value, .. }) = stmt else {
let Stmt::Assign(node) = stmt else {
return false;
};
let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() else {
let Expr::Call(ast::ExprCall { func, .. }) = node.value.as_ref() else {
return false;
};
semantic
@@ -45,10 +45,10 @@ pub(super) fn is_typed_dict_assignment(stmt: &Stmt, semantic: &SemanticModel) ->
return false;
}
let Stmt::Assign(ast::StmtAssign { value, .. }) = stmt else {
let Stmt::Assign(node) = stmt else {
return false;
};
let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() else {
let Expr::Call(ast::ExprCall { func, .. }) = node.value.as_ref() else {
return false;
};
semantic.match_typing_expr(func, "TypedDict")
@@ -60,10 +60,10 @@ pub(super) fn is_type_var_assignment(stmt: &Stmt, semantic: &SemanticModel) -> b
return false;
}
let Stmt::Assign(ast::StmtAssign { value, .. }) = stmt else {
let Stmt::Assign(node) = stmt else {
return false;
};
let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() else {
let Expr::Call(ast::ExprCall { func, .. }) = node.value.as_ref() else {
return false;
};
semantic
@@ -77,8 +77,8 @@ pub(super) fn is_type_var_assignment(stmt: &Stmt, semantic: &SemanticModel) -> b
/// Returns `true` if the statement is an assignment to a `TypeAlias`.
pub(super) fn is_type_alias_assignment(stmt: &Stmt, semantic: &SemanticModel) -> bool {
match stmt {
Stmt::AnnAssign(ast::StmtAnnAssign { annotation, .. }) => {
semantic.match_typing_expr(annotation, "TypeAlias")
Stmt::AnnAssign(node) => {
semantic.match_typing_expr(&node.annotation, "TypeAlias")
}
Stmt::TypeAlias(_) => true,
_ => false,
@@ -157,11 +157,15 @@ pub(super) fn is_django_model_import(name: &str, stmt: &Stmt, semantic: &Semanti
}
match stmt {
Stmt::AnnAssign(ast::StmtAnnAssign {
value: Some(value), ..
}) => match_model_import(name, value.as_ref(), semantic),
Stmt::Assign(ast::StmtAssign { value, .. }) => {
match_model_import(name, value.as_ref(), semantic)
Stmt::AnnAssign(node) => {
if let Some(value) = &node.value {
match_model_import(name, value.as_ref(), semantic)
} else {
false
}
}
Stmt::Assign(node) => {
match_model_import(name, node.value.as_ref(), semantic)
}
_ => false,
}

View File

@@ -92,23 +92,16 @@ pub(crate) fn manual_dict_comprehension(checker: &Checker, for_stmt: &ast::StmtF
// if idx % 2 == 0:
// result[name] = idx
// ```
[
Stmt::If(ast::StmtIf {
body,
elif_else_clauses,
test,
..
}),
] => {
[Stmt::If(node)] => {
// TODO(charlie): If there's an `else` clause, verify that the `else` has the
// same structure.
if !elif_else_clauses.is_empty() {
if !node.elif_else_clauses.is_empty() {
return;
}
let [stmt] = body.as_slice() else {
let [stmt] = node.body.as_slice() else {
return;
};
(stmt, Some(test))
(stmt, Some(&node.test))
}
// ```python
// for idx, name in enumerate(names):
@@ -118,15 +111,12 @@ pub(crate) fn manual_dict_comprehension(checker: &Checker, for_stmt: &ast::StmtF
_ => return,
};
let Stmt::Assign(ast::StmtAssign {
targets,
value,
range,
node_index: _,
}) = stmt
else {
let Stmt::Assign(node) = stmt else {
return;
};
let targets = &node.targets;
let value = &node.value;
let range = &node.range;
let [
Expr::Subscript(ast::ExprSubscript {
@@ -212,8 +202,8 @@ pub(crate) fn manual_dict_comprehension(checker: &Checker, for_stmt: &ast::StmtF
if is_fix_manual_dict_comprehension_enabled(checker.settings()) {
let binding_stmt = binding.statement(checker.semantic());
let binding_value = binding_stmt.and_then(|binding_stmt| match binding_stmt {
ast::Stmt::AnnAssign(assign) => assign.value.as_deref(),
ast::Stmt::Assign(assign) => Some(&assign.value),
ast::Stmt::AnnAssign(node) => node.value.as_deref(),
ast::Stmt::Assign(node) => Some(&node.value),
_ => None,
});
@@ -243,7 +233,7 @@ pub(crate) fn manual_dict_comprehension(checker: &Checker, for_stmt: &ast::StmtF
// but not necessarily, so this needs to be manually fixed. This does not apply when using an update.
let binding_has_one_target = binding_stmt.is_some_and(|binding_stmt| match binding_stmt {
ast::Stmt::AnnAssign(_) => true,
ast::Stmt::Assign(assign) => assign.targets.len() == 1,
ast::Stmt::Assign(node) => node.targets.len() == 1,
_ => false,
});
// If the binding gets used in between the assignment and the for loop, a comprehension is no longer safe

View File

@@ -109,21 +109,14 @@ pub(crate) fn manual_list_comprehension(checker: &Checker, for_stmt: &ast::StmtF
// if z:
// filtered.append(x)
// ```
[
ast::Stmt::If(ast::StmtIf {
body,
elif_else_clauses,
test,
..
}),
] => {
if !elif_else_clauses.is_empty() {
[ast::Stmt::If(node)] => {
if !node.elif_else_clauses.is_empty() {
return;
}
let [stmt] = body.as_slice() else {
let [stmt] = node.body.as_slice() else {
return;
};
(stmt, Some(test))
(stmt, Some(&node.test))
}
// ```python
// for x in y:
@@ -267,8 +260,8 @@ pub(crate) fn manual_list_comprehension(checker: &Checker, for_stmt: &ast::StmtF
let list_binding_stmt = list_binding.statement(checker.semantic());
let list_binding_value = list_binding_stmt.and_then(|binding_stmt| match binding_stmt {
ast::Stmt::AnnAssign(assign) => assign.value.as_deref(),
ast::Stmt::Assign(assign) => Some(&assign.value),
ast::Stmt::AnnAssign(node) => node.value.as_deref(),
ast::Stmt::Assign(node) => Some(&node.value),
_ => None,
});
@@ -304,7 +297,7 @@ pub(crate) fn manual_list_comprehension(checker: &Checker, for_stmt: &ast::StmtF
// but not necessarily, so this needs to be manually fixed. This does not apply when using an extend.
let binding_has_one_target = list_binding_stmt.is_some_and(|binding_stmt| match binding_stmt {
ast::Stmt::AnnAssign(_) => true,
ast::Stmt::Assign(assign) => assign.targets.len() == 1,
ast::Stmt::Assign(node) => node.targets.len() == 1,
_ => false,
});
@@ -464,8 +457,8 @@ fn convert_to_list_extend(
let binding_stmt = binding.statement(semantic);
let binding_stmt_range = binding_stmt
.and_then(|stmt| match stmt {
ast::Stmt::AnnAssign(assign) => Some(assign.range),
ast::Stmt::Assign(assign) => Some(assign.range),
ast::Stmt::AnnAssign(node) => Some(node.range),
ast::Stmt::Assign(node) => Some(node.range),
_ => None,
})
.ok_or(anyhow!(

View File

@@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::statement_visitor::{StatementVisitor, walk_stmt};
use ruff_python_ast::{self as ast, PythonVersion, Stmt};
use ruff_python_ast::{PythonVersion, Stmt};
use ruff_text_size::Ranged;
use crate::Violation;
@@ -93,9 +93,11 @@ pub(crate) fn try_except_in_loop(checker: &Checker, body: &[Stmt]) {
return;
}
let [Stmt::Try(ast::StmtTry { handlers, body, .. })] = body else {
let [Stmt::Try(try_stmt)] = body else {
return;
};
let handlers = &try_stmt.handlers;
let body = &try_stmt.body;
let Some(handler) = handlers.first() else {
return;

View File

@@ -207,7 +207,7 @@ fn match_mutation(stmt: &Stmt, id: &str) -> bool {
target_id == id
}
// Ex) `foo[0] = bar`
Stmt::Assign(ast::StmtAssign { targets, .. }) => targets.iter().any(|target| {
Stmt::Assign(node) => node.targets.iter().any(|target| {
if let Some(ast::ExprSubscript { value: target, .. }) = target.as_subscript_expr() {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
return target_id == id;
@@ -216,16 +216,16 @@ fn match_mutation(stmt: &Stmt, id: &str) -> bool {
false
}),
// Ex) `foo += bar`
Stmt::AugAssign(ast::StmtAugAssign { target, .. }) => {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
Stmt::AugAssign(node) => {
if let Some(ast::ExprName { id: target_id, .. }) = node.target.as_name_expr() {
target_id == id
} else {
false
}
}
// Ex) `foo[0]: int = bar`
Stmt::AnnAssign(ast::StmtAnnAssign { target, .. }) => {
if let Some(ast::ExprSubscript { value: target, .. }) = target.as_subscript_expr() {
Stmt::AnnAssign(node) => {
if let Some(ast::ExprSubscript { value: target, .. }) = node.target.as_subscript_expr() {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
return target_id == id;
}
@@ -233,7 +233,7 @@ fn match_mutation(stmt: &Stmt, id: &str) -> bool {
false
}
// Ex) `del foo[0]`
Stmt::Delete(ast::StmtDelete { targets, .. }) => targets.iter().any(|target| {
Stmt::Delete(node) => node.targets.iter().any(|target| {
if let Some(ast::ExprSubscript { value: target, .. }) = target.as_subscript_expr() {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
return target_id == id;

View File

@@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::identifier::except;
use ruff_python_ast::{self as ast, ExceptHandler, Expr, Stmt};
use ruff_python_ast::{ExceptHandler, Expr, Stmt};
use crate::Violation;
use crate::checkers::ast::Checker;
@@ -65,7 +65,7 @@ pub(crate) fn bare_except(
if type_.is_none()
&& !body
.iter()
.any(|stmt| matches!(stmt, Stmt::Raise(ast::StmtRaise { exc: None, .. })))
.any(|stmt| matches!(stmt, Stmt::Raise(raise_stmt) if raise_stmt.exc.is_none()))
{
checker.report_diagnostic(BareExcept, except(handler, checker.locator().contents()));
}

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{
self as ast, Expr, ExprEllipsisLiteral, ExprLambda, Identifier, Parameter,
ParameterWithDefault, Parameters, Stmt,
@@ -223,7 +223,7 @@ fn function(
..parameter.clone()
})
.collect::<Vec<_>>();
let func = Stmt::FunctionDef(ast::StmtFunctionDef {
let func = Stmt::FunctionDef(Box::new(ast::StmtFunctionDef {
is_async: false,
name: Identifier::new(name.to_string(), TextRange::default()),
parameters: Box::new(Parameters {
@@ -237,13 +237,13 @@ fn function(
type_params: None,
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
});
}));
let generated = checker.generator().stmt(&func);
return replace_trailing_ellipsis_with_original_expr(generated, lambda, stmt, checker);
}
}
let function = Stmt::FunctionDef(ast::StmtFunctionDef {
let function = Stmt::FunctionDef(Box::new(ast::StmtFunctionDef {
is_async: false,
name: Identifier::new(name.to_string(), TextRange::default()),
parameters: Box::new(parameters),
@@ -253,7 +253,7 @@ fn function(
type_params: None,
range: TextRange::default(),
node_index: ruff_python_ast::AtomicNodeIndex::NONE,
});
}));
let generated = checker.generator().stmt(&function);
replace_trailing_ellipsis_with_original_expr(generated, lambda, stmt, checker)
@@ -265,29 +265,19 @@ fn replace_trailing_ellipsis_with_original_expr(
stmt: &Stmt,
checker: &Checker,
) -> String {
let original_expr_range = parenthesized_range(
(&lambda.body).into(),
lambda.into(),
checker.comment_ranges(),
checker.source(),
)
.unwrap_or(lambda.body.range());
let original_expr_range =
parenthesized_range((&lambda.body).into(), lambda.into(), checker.tokens())
.unwrap_or(lambda.body.range());
// This prevents the autofix of introducing a syntax error if the lambda's body is an
// expression spanned across multiple lines. To avoid the syntax error we preserve
// the parenthesis around the body.
let original_expr_in_source = if parenthesized_range(
lambda.into(),
stmt.into(),
checker.comment_ranges(),
checker.source(),
)
.is_some()
{
format!("({})", checker.locator().slice(original_expr_range))
} else {
checker.locator().slice(original_expr_range).to_string()
};
let original_expr_in_source =
if parenthesized_range(lambda.into(), stmt.into(), checker.tokens()).is_some() {
format!("({})", checker.locator().slice(original_expr_range))
} else {
checker.locator().slice(original_expr_range).to_string()
};
let placeholder_ellipsis_start = generated.rfind("...").unwrap();
let placeholder_ellipsis_end = placeholder_ellipsis_start + "...".len();

View File

@@ -1,4 +1,4 @@
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{Tokens, parenthesized_range};
use rustc_hash::FxHashMap;
use ruff_macros::{ViolationMetadata, derive_message_formats};
@@ -179,15 +179,14 @@ fn is_redundant_boolean_comparison(op: CmpOp, comparator: &Expr) -> Option<bool>
fn generate_redundant_comparison(
compare: &ast::ExprCompare,
comment_ranges: &ruff_python_trivia::CommentRanges,
tokens: &Tokens,
source: &str,
comparator: &Expr,
kind: bool,
needs_wrap: bool,
) -> String {
let comparator_range =
parenthesized_range(comparator.into(), compare.into(), comment_ranges, source)
.unwrap_or(comparator.range());
let comparator_range = parenthesized_range(comparator.into(), compare.into(), tokens)
.unwrap_or(comparator.range());
let comparator_str = &source[comparator_range];
@@ -379,7 +378,7 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
.copied()
.collect::<Vec<_>>();
let comment_ranges = checker.comment_ranges();
let tokens = checker.tokens();
let source = checker.source();
let content = match (&*compare.ops, &*compare.comparators) {
@@ -387,18 +386,13 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
if let Some(kind) = is_redundant_boolean_comparison(*op, &compare.left) {
let needs_wrap = compare.left.range().start() != compare.range().start();
generate_redundant_comparison(
compare,
comment_ranges,
source,
comparator,
kind,
needs_wrap,
compare, tokens, source, comparator, kind, needs_wrap,
)
} else if let Some(kind) = is_redundant_boolean_comparison(*op, comparator) {
let needs_wrap = comparator.range().end() != compare.range().end();
generate_redundant_comparison(
compare,
comment_ranges,
tokens,
source,
&compare.left,
kind,
@@ -410,7 +404,7 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
&ops,
&compare.comparators,
compare.into(),
comment_ranges,
tokens,
source,
)
}
@@ -420,7 +414,7 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
&ops,
&compare.comparators,
compare.into(),
comment_ranges,
tokens,
source,
),
};

View File

@@ -107,7 +107,7 @@ pub(crate) fn not_tests(checker: &Checker, unary_op: &ast::ExprUnaryOp) {
&[CmpOp::NotIn],
comparators,
unary_op.into(),
checker.comment_ranges(),
checker.tokens(),
checker.source(),
),
unary_op.range(),
@@ -127,7 +127,7 @@ pub(crate) fn not_tests(checker: &Checker, unary_op: &ast::ExprUnaryOp) {
&[CmpOp::IsNot],
comparators,
unary_op.into(),
checker.comment_ranges(),
checker.tokens(),
checker.source(),
),
unary_op.range(),

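The recurring shape in the hunks above — binding the whole boxed variant (`Stmt::For(for_stmt)`) and then destructuring through `&**` in a separate `let`, instead of matching fields inline — can be sketched in isolation. This is a minimal illustration with hypothetical stand-in types (`Stmt`, `StmtFor` here are simplified mock-ups, not the real `ruff_python_ast` definitions), showing why the deref step appears once the enum payload is boxed:

```rust
// Simplified stand-ins for the AST types touched by this diff:
// after the "Add box to stmt" change, each Stmt variant holds a
// Box of its node struct rather than the struct inline.
#[derive(Debug)]
struct StmtFor {
    body: Vec<String>,
    orelse: Vec<String>,
}

#[derive(Debug)]
enum Stmt {
    For(Box<StmtFor>),
    Pass,
}

// Before boxing, this would read `let Stmt::For(StmtFor { body, .. }) = stmt`.
// With the boxed payload we bind the box itself, then destructure through
// `&**` (auto-deref through the Box) in a follow-up `let`.
fn body_len(stmt: &Stmt) -> usize {
    let Stmt::For(for_stmt) = stmt else {
        return 0;
    };
    let StmtFor { body, .. } = &**for_stmt;
    body.len()
}

fn main() {
    let stmt = Stmt::For(Box::new(StmtFor {
        body: vec!["print(x)".to_string()],
        orelse: vec![],
    }));
    assert_eq!(body_len(&stmt), 1);
    assert_eq!(body_len(&Stmt::Pass), 0);
    println!("ok");
}
```

The two-step `let` / `let … else` split keeps the refutable variant check separate from the irrefutable field destructuring, which is why so many hunks above replace one large pattern with a short match followed by `let ast::StmtFor { … } = &**for_stmt;`.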
Some files were not shown because too many files have changed in this diff.