Compare commits

...

88 Commits

Author SHA1 Message Date
Zanie Blue
6c2d87ed80 Invert case 2025-12-20 22:31:37 -06:00
Zanie Blue
7d4831597b Lint 2025-12-20 22:31:37 -06:00
Zanie Blue
81f39680d2 [ty] Cache Type::is_disjoint_from 2025-12-20 22:31:37 -06:00
Zanie Blue
3c2f534480 [ty] Cache Type::is_subtype_of 2025-12-20 22:31:37 -06:00
Zanie Blue
592a674447 [ty] Cache ClassType::nearest_disjoint_base 2025-12-20 22:31:37 -06:00
Ibraheem Ahmed
ad41728204 [ty] Avoid temporarily storing invalid multi-inference attempts (#22103)
## Summary

I missed this in https://github.com/astral-sh/ruff/pull/22062. This
avoids exponential runtime in the following snippet:

```py
class X1: ...
class X2: ...
class X3: ...
class X4: ...
class X5: ...
class X6: ...
...

def f(
    x:
        list[X1 | None]
      | list[X2 | None]
      | list[X3 | None]
      | list[X4 | None]
      | list[X5 | None]
      | list[X6 | None]
      ...
):
    ...

def g[T](x: T) -> list[T]:
    return [x]

def id[T](x: T) -> T:
    return x

f(id(id(id(id(g(X64()))))))
```

Eventually I want to refactor our multi-inference infrastructure (which
is currently very brittle) to handle this implicitly, but this is a
temporary performance fix until that happens.
2025-12-20 08:20:53 -08:00
Hugo
3ec63b964c [ty] Add support for dict(...) calls in typed dict contexts (#22113)
## Summary

fixes https://github.com/astral-sh/ty/issues/2127
- handle `dict(...)` calls in TypedDict contexts with bidirectional
inference (see the sketch below)
- validate keys/values using the existing TypedDict constructor implementation
- mdtest: add 1 positive test, 1 negative test for invalid coverage
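
For illustration, here's the kind of call this now checks (a minimal sketch with a hypothetical `Movie` type, not taken from the PR's tests):

```py
from typing import TypedDict

class Movie(TypedDict):
    title: str
    year: int

# `dict(...)` in a TypedDict context is now inferred bidirectionally,
# with keys and values validated like the TypedDict constructor:
m: Movie = dict(title="Blade Runner", year=1982)    # OK
bad: Movie = dict(title="Blade Runner", year="82")  # error: wrong value type
```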

## Test Plan

```sh
cargo clippy --workspace --all-targets --all-features -- -D warnings  # Rust linting
cargo test  # Rust testing
uvx pre-commit run --all-files --show-diff-on-failure  # Rust and Python formatting, Markdown and Python linting, etc.
```
fully green
2025-12-20 07:59:03 -08:00
Matthew Mckee
f9a0e1e3f6 [ty] Fix panic introduced in #22076 (#22112)

## Summary

Was looking over that PR and this looked wrong: it fixes a panic
introduced in #22076.

## Test Plan

Before running:

```bash
cargo run -p ty check test.py --force-exclude --no-progress 
```

would result in a panic

```text
thread 'main' (162713) panicked at crates/ty/src/args.rs:459:17:
internal error: entered unreachable code: Clap should make this impossible
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

Now it does not.
2025-12-20 07:53:45 -08:00
William Woodruff
ef4507be96 Remove in-workflow deployment to Cloudflare Pages (#22098) 2025-12-20 10:21:47 -05:00
Alex Waygood
3398ab23a9 Update comments in sync_typeshed.yaml workflow (#22109) 2025-12-20 12:19:04 +00:00
Micha Reiser
5b475b45aa [ty] Add --force-exclude option (#22076) 2025-12-20 10:03:41 +01:00
Ibraheem Ahmed
2a959ef3f2 [ty] Avoid narrowing on non-generic calls (#22102)
## Summary

Resolves https://github.com/astral-sh/ty/issues/2026.
2025-12-19 23:18:07 -05:00
Ibraheem Ahmed
674d3902c6 [ty] Only prefer declared types in non-covariant positions (#22068)
## Summary

The following snippet currently errors because we widen the inferred
type, even though `X` is covariant over `T`. If `T` was contravariant or
invariant, this would be fine, as it would lead to an assignability
error anyway.

```python
class X[T]:
    def __init__(self: X[None]): ...

    def pop(self) -> T:
        raise NotImplementedError

# error: Argument to bound method `__init__` is incorrect: Expected `X[None]`, found `X[int | None]`
x: X[int | None] = X()
```

There are some cases where it is still helpful to prefer covariant
declared types, but this error seems hard to fix otherwise, and makes
our heuristics more consistent overall.
2025-12-19 17:27:31 -05:00
Alex Waygood
1a18ada931 [ty] Run mypy_primer and typing-conformance workflows on fewer PRs (#22096)
## Summary

Fixes https://github.com/astral-sh/ty/issues/2090. Quoting my rationale
from that issue:

> A PR that only touches code in [one of these crates] should never have
any impact on memory usage or diagnostics produced. And the comments
from the bot just lead to additional notifications which is annoying.

I _think_ I've got the syntax right here. The
[docs](https://docs.github.com/en/actions/reference/workflows-and-actions/workflow-syntax)
say:

>  The order that you define paths patterns matters:
>
> - A matching negative pattern (prefixed with !) after a positive match
will exclude the path.
> - A matching positive pattern after a negative match will include the
path again.

## Test Plan

No idea? Merge it and see?
2025-12-19 19:31:20 +00:00
Alex Waygood
dde0d0af68 [ty] List rules in alphabetical order in the reference docs (#22097)
## Summary

Fixes https://github.com/astral-sh/ty/issues/1885.

It wasn't obvious to me that there was a deliberate order to the way
these rules were listed in our reference docs -- it looked like it was
_nearly_ alphabetical, but not quite. I think it's simpler if we just
list them in alphabetical order.

## Test Plan

The output from running `cargo dev generate-all` (committed as part of
this PR) looks correct!
2025-12-19 19:11:05 +00:00
Chris Bachhuber
b342f60b40 Update T201 suggestion to not use root logger to satisfy LOG015 (#22059)
## Summary

Currently, the proposed fix for https://docs.astral.sh/ruff/rules/print/
violates https://docs.astral.sh/ruff/rules/root-logger-call/. Thus,
let's change the proposal to make LOG015 happy as well.
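
For reference, the LOG015-friendly shape of the suggestion looks roughly like this (a sketch, assuming a module-level logger):

```python
import logging

logger = logging.getLogger(__name__)  # module logger, not the root logger

# Suggested in place of `print("Hello")`; calling `logging.info(...)`
# directly would use the root logger and trip LOG015.
logger.info("Hello")
```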

## Test Plan

Tested manually in a project that has both T201 and LOG015 enabled,
running both rules over the previous and the proposed code. (Is there
continuous testing of the code snippets from the docs?)
2025-12-19 11:08:12 -08:00
Alex Waygood
2151c3d351 [ty] Document that several rules are disabled by default because of the number of false positives they produce (#22095) 2025-12-19 18:45:18 +00:00
Aria Desires
cdb7a9fb33 [ty] Classify docstrings in semantic tokens (syntax highlighting) (#22031)
## Summary

* Related to, but does not handle
https://github.com/astral-sh/ty/issues/2021

## Test Plan

I also added some snapshot tests for future work on non-standard
attribute docstrings (didn't want to highlight them if we don't
recognize them elsewhere).
2025-12-19 13:36:01 -05:00
github-actions[bot]
df1552b9a4 [ty] Sync vendored typeshed stubs (#22091)
Co-authored-by: typeshedbot <>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-19 18:23:09 +00:00
Alex Waygood
77de3df150 [ty] Fix sync-typeshed workflow timeouts (#22090) 2025-12-19 18:09:08 +00:00
Charlie Marsh
0f18a08a0a [ty] Respect intersections in iterations (#21965)
## Summary

This PR implements the strategy described in
https://github.com/astral-sh/ty/issues/1871: we iterate over the
positive types, resolve them, then intersect the results.
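
Roughly, the intended semantics (hypothetical example, not from the PR):

```py
from typing import Iterable

def f(x: Iterable[int]):
    if isinstance(x, list):
        # `x` is an intersection of `list` and `Iterable[int]`: we iterate
        # over the positive members, resolve each element type, and
        # intersect the results, so `item` is (at least) `int`.
        for item in x:
            reveal_type(item)
```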
2025-12-19 12:36:37 -05:00
William Woodruff
b63b3c13fb Upload full ecosystem report as a GitHub Actions artifact (#22086) 2025-12-19 11:41:21 -05:00
Ibraheem Ahmed
270d755621 [ty] Avoid storing invalid multi-inference attempts (#22062)
## Summary

This should make revealed types a little nicer, as well as avoid
confusing the constraint solver in some cases (which were showing up in
https://github.com/astral-sh/ruff/pull/21930).
2025-12-19 15:38:47 +00:00
Micha Reiser
9809405b05 [ty] Only clear output between two successful checks (#22078) 2025-12-19 15:16:54 +01:00
Alex Waygood
28cdbb18c5 [ty] Minor followups to #22048 (#22082) 2025-12-19 13:58:59 +00:00
Alex Waygood
f9d1a282fb [ty] Collect mdtest failures as part of the assertion message rather than printing them to the terminal immediately (#22081) 2025-12-19 13:53:31 +00:00
David Peter
0bac023cd2 [ty] Lockfiles for mdtests with external dependencies (#22077)
## Summary

Add lockfiles for all mdtests which make use of external dependencies.
When running tests normally, we use this lockfile when creating the
temporary venv using `uv sync --locked`. A new
`MDTEST_UPGRADE_LOCKFILES` environment variable is used to switch to a
mode in which those lockfiles can be updated or regenerated. When using
the Python mdtest runner, this environment variable is automatically set
(because we use this command while developing, not to simulate exactly
what happens in CI). A command-line flag is provided to opt out of this.

## Test Plan

### Using the mdtest runner

#### Adding a new test (no lockfile yet)

* Removed `attrs.lock` to simulate this
* Ran `uv run crates/ty_python_semantic/mdtest.py -e external/`. The
lockfile is generated and the test succeeds.

#### Upgrading/downgrading a dependency

* Changed pydantic requirement from `pydantic==2.12.2` to
`pydantic==2.12.5` (also tested with `2.12.0`)
* Ran `uv run crates/ty_python_semantic/mdtest.py -e external/`. The
lockfile is updated and the test succeeds.

### Using cargo

#### Adding a new test (no lockfile yet)

* Removed `attrs.lock` to simulate this
* Ran `MDTEST_EXTERNAL=1 cargo test -p ty_python_semantic --test mdtest
mdtest__external` "naively", which outputs:
> Failed to setup in-memory virtual environment with dependencies:
Lockfile not found at
'/home/shark/ruff/crates/ty_python_semantic/resources/mdtest/external/attrs.lock'.
Run with `MDTEST_UPGRADE_LOCKFILES=1` to generate it.
* Ran `MDTEST_UPGRADE_LOCKFILES=1 MDTEST_EXTERNAL=1 cargo test -p
ty_python_semantic --test mdtest mdtest__external`. The lockfile is
updated and the test succeeds.

#### Upgrading/downgrading a dependency

* Changed pydantic requirement from `pydantic==2.12.2` to
`pydantic==2.12.5` (also tested with `2.12.0`)
* Ran `MDTEST_EXTERNAL=1 cargo test -p ty_python_semantic --test mdtest
mdtest__external` "naively", which outputs a similar error message as
above.
* Ran the command suggested in the error message (`MDTEST_EXTERNAL=1
MDTEST_UPGRADE_LOCKFILES=1 cargo test -p ty_python_semantic --test
mdtest mdtest__external`). The lockfile is updated and the test
succeeds.
2025-12-19 14:29:52 +01:00
Micha Reiser
30efab8138 [ty] Only print dashed line for failing tests (#22080) 2025-12-19 14:20:03 +01:00
RasmusNygren
58d25129aa [ty] Visit class arguments in source order for semantic tokens (#22063)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-19 13:19:49 +00:00
Micha Reiser
e177cc2a5a [ty] Improve union builder performance (#22048) 2025-12-19 08:29:16 +01:00
Micha Reiser
30ce679b9a [ty] Fix rules severity URL (#22069) 2025-12-19 07:25:02 +00:00
Charlie Marsh
76854fdb15 [ty] Unwrap enum.nonmember values (#22025)
## Summary

This PR unwraps the `enum.nonmember` type to match runtime behavior.

Closes https://github.com/astral-sh/ty/issues/1974.
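
For example (a minimal sketch of the runtime behavior being matched):

```py
import enum

class Example(enum.Enum):
    A = 1
    B = enum.nonmember(2)  # not converted into an enum member

# At runtime `Example.B` is the plain value `2`, so the unwrapped type
# (`int` here) is what ty now infers.
reveal_type(Example.B)
```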
2025-12-18 19:59:49 -05:00
Douglas Creager
5a2d3cda3d [ty] Remove some nondeterminism in constraint set tests (#22064)
We're seeing a lot of nondeterminism in the ecosystem tests at the
moment, which started (or at least got worse) once `Callable` inference
landed.

This PR attempts to remove this nondeterminism. We recently
(https://github.com/astral-sh/ruff/pull/21983) added a `source_order`
field to BDD nodes, which tracks when their constraint was added to the
BDD. Since we build up constraints based on the order that they appear
in the underlying source, that gives us a stable ordering even though we
use an arbitrary salsa-derived ordering for the BDD variables.

The issue (at least for some of the flakiness) is that we add "derived"
constraints when walking a BDD tree, and those derived constraints
inherit or borrow the `source_order` of the "real" constraint that
implied them. That means we can get multiple constraints in our
specialization that all have the same `source_order`. If we're not
careful, those "tied" constraints can be ordered arbitrarily.

The fix requires ~three~ ~four~ several steps:

- When starting to construct a sequent map (the data structure that
stores the derived constraints), we first sort all of the "real"
constraints by their `source_order`. That ensures that we insert things
into the sequent map in a stable order.
- During sequent map construction, derived facts are discovered by a
deterministic process applied to constraints in a (now) stable order. So
derived facts are now also inserted in a stable order.
- We update the fields of `SequentMap` to use `FxOrderSet` instead of
`FxHashSet`, so that we retain that stable insertion order.
- When walking BDD paths when constructing a specialization, we were
already sorting the constraints by their `source_order`. However, we
were not considering that we might get derived constraints, and
therefore constraints with "ties". Because of that, we need to make sure
to use a _stable_ sort, that retains the insertion order for those ties.

All together, this...should...fix the nondeterminism. (Unfortunately, I
haven't been able to effectively test this, since I haven't been able to
coerce local tests to flop into the other order that we sometimes see in
CI.)
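
The stable-sort point is easy to see outside the Rust internals; a tiny illustration with hypothetical data:

```py
# Two constraints "tied" at source_order 1: a stable sort preserves their
# insertion order, so the resulting specialization is deterministic.
constraints = [("T", 1), ("U", 1), ("S", 0)]
print(sorted(constraints, key=lambda c: c[1]))
# [('S', 0), ('T', 1), ('U', 1)] -- 'T' still precedes 'U'
```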
2025-12-18 19:00:20 -05:00
Jack O'Connor
fa57253980 [ty] Implement disjointness for TypedDicts (#22044)
This is a preliminary step towards tagged union narrowing for `TypedDict`:
https://github.com/astral-sh/ty/issues/1479
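
A sketch of the code this is building towards (hypothetical types; the narrowing itself is the follow-up work):

```py
from typing import Literal, TypedDict

class Circle(TypedDict):
    kind: Literal["circle"]
    radius: float

class Square(TypedDict):
    kind: Literal["square"]
    side: float

# `Circle` and `Square` are disjoint (their `kind` fields can never both
# match), the property that tagged-union narrowing relies on.
def area(shape: Circle | Square) -> float:
    if shape["kind"] == "circle":
        return 3.14159 * shape["radius"] ** 2
    return shape["side"] ** 2
```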
2025-12-18 13:20:22 -08:00
Amethyst Reese
b7fbd986bc [ruff] fix preview-since values for RUF103 and RUF104 (#22061)
Missed including this in the follow-up on #21908
2025-12-18 13:18:04 -08:00
Amethyst Reese
3d334a313e Report diagnostics for invalid/unmatched range suppression comments (#21908)
## Summary

- Adds new RUF103 and RUF104 diagnostics for invalid and unmatched
suppression comments
- Reports RUF100 for any unused range suppression
- Reports RUF102 for range suppression comment with invalid rule codes
- Reports RUF103 for range suppression comment with invalid suppression syntax
- Reports RUF104 diagnostics for any unmatched range suppression comment (disable w/o enable)


## Test Plan

Updated snapshots from test cases with unmatched suppression comments

Issue #3711
Fixes #21878
Fixes #21875
2025-12-18 12:58:58 -08:00
Aria Desires
2e44a861cb [ty] Disable possibly-missing-imports by default (#22041)
@carljm put forth a reasonably compelling argument that just disabling
this lint might be advisable. If we agree, here's the implementation.

* Fixes https://github.com/astral-sh/ty/issues/309

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-18 20:06:34 +00:00
Dylan
45bbb4cbff Bump 0.14.10 (#22058) 2025-12-18 13:08:17 -06:00
Micha Reiser
42b972753a [ty] Use datatest instead of dirtest (#21937) 2025-12-18 18:05:02 +00:00
Micha Reiser
f7ec178400 [ty] Gracefully handle client requests that can't be deserialized (#22051) 2025-12-18 18:01:01 +00:00
Rasmus Nygren
c315164732 [ty] Don't suggest keyword statements when only expressions are valid
There are cases where the Python grammar requires an expression
after certain statements. In such cases we want to suppress
irrelevant keywords from the auto-complete suggestions.

E.g. in `with a<CURSOR>`, suggesting `raise` never makes sense
because the grammar does not allow it there.
2025-12-18 12:27:57 -05:00
Andrew Gallant
bb1955e98c [ty] Use cursor context in a few more places...
... and also add a `ContextCursor::covering_node` helper, since it's
used so much.
2025-12-18 11:00:09 -05:00
Andrew Gallant
070e08a043 [ty] Move completion function to the top
This is the main entry point to this module. It should be at the top.
2025-12-18 11:00:09 -05:00
Andrew Gallant
bab3924833 [ty] Refactor completion generation
This refactor is intended to give more structure to how we generate
completions. There's now a `Context` for "how do we figure out what kind
of completions to offer" and also a `CollectionContext` for "how do we
figure out which completions are appropriate or not." We double down on
`Completions` as a collector and a single point of truth for this. It
now handles adding information to `Completion` (based on the context)
and also skipping completions that are inappropriate (instead of
filtering them after-the-fact).

We also bundle a bunch of state into a new `ContextCursor` type, and
then define a bunch of predicates/accessors on that type that were
previously free functions with loads of parameters.

Finally, we introduce more structure to ranking. Instead of an anonymous
tuple, we define an explicit type with some helper types to hopefully
make the influence on ranking from each constituent piece a bit clearer.

This does seem to fix one bug around detecting the target for non-import
completions, but otherwise should not have any changes in behavior.

This is meant to be a precursor to improving completion ranking.
2025-12-18 11:00:09 -05:00
mahiro
10748b2fdb [flake8-pytest-style] Allow match and check keyword arguments without an expected exception type (PT010) (#21964)
## Summary


Updates PT010(`pytest-raises-without-exception`) to recognize `match`
and `check` keyword arguments as valid alternatives to specifying an
exception class.

As of pytest 8.4.0, `pytest.raises()` can be called with only `match` or
`check` keyword arguments without an expected exception.

Fixes #18653
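
A sketch of what PT010 now accepts, per the pytest 8.4 behavior described above:

```python
import pytest

def test_divide() -> None:
    # No expected exception type, but `match=` is provided: not flagged.
    with pytest.raises(match="division by zero"):
        1 / 0
```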

## Test Plan


- Added test cases for `match`-only, `check`-only, and both arguments.
- `cargo test -p ruff_linter -- "pytestraiseswithoutexception"` passes
2025-12-18 10:42:06 -05:00
Aria Desires
56539db520 [ty] Fix some configuration panics in the LSP (#22040)
## Summary

This is a revival of https://github.com/astral-sh/ruff/pull/21047 now
that we have a reproducer again.

* Fixes https://github.com/astral-sh/ty/issues/2031
* Fixes https://github.com/astral-sh/ty/issues/859

## Test Plan

e2e test from @zanieb

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2025-12-18 09:47:02 -05:00
Aria Desires
8d32ad1cab [ty] Add support for attribute docstrings (#22036)
## Summary

I should have factored this better, but this includes a drive-by move of
`find_node` to `ruff_python_ast` so that `ty_python_semantic` can use it too.

* Fixes https://github.com/astral-sh/ty/issues/2017 
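
For context, an attribute docstring is a string literal placed directly after an assignment, e.g.:

```py
class Config:
    timeout: int = 30
    """Seconds to wait before giving up."""
```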

## Test Plan

Snapshots galore
2025-12-18 12:18:20 +00:00
Micha Reiser
b2a8c42b51 [ty] Correctly encode multiline tokens for clients not supporting multiline tokens (#22033) 2025-12-18 11:38:21 +00:00
Aria Desires
7bb5dd87ff [ty] Fix goto-declaration on the RHS of from module import submodule (#22042) 2025-12-18 08:28:05 +01:00
Charlie Marsh
06305f3c02 Make analyze_single_pattern_predicate a #[salsa::tracked] function (#22045)
## Summary

We had a report of a blowup with the following snippet:

```python
from enum import StrEnum

class BigEnum(StrEnum):
    VALUE_01 = "VALUE_01"
    VALUE_02 = "VALUE_02"
    VALUE_03 = "VALUE_03"
    VALUE_04 = "VALUE_04"
    VALUE_05 = "VALUE_05"
    VALUE_06 = "VALUE_06"
    VALUE_07 = "VALUE_07"
    VALUE_08 = "VALUE_08"
    VALUE_09 = "VALUE_09"
    VALUE_10 = "VALUE_10"
    VALUE_11 = "VALUE_11"
    VALUE_12 = "VALUE_12"
    VALUE_13 = "VALUE_13"
    VALUE_14 = "VALUE_14"
    VALUE_15 = "VALUE_15"
    VALUE_16 = "VALUE_16"
    VALUE_17 = "VALUE_17"
    VALUE_18 = "VALUE_18"
    VALUE_19 = "VALUE_19"
    VALUE_20 = "VALUE_20"
    VALUE_21 = "VALUE_21"
    VALUE_22 = "VALUE_22"
    VALUE_23 = "VALUE_23"
    VALUE_24 = "VALUE_24"
    VALUE_25 = "VALUE_25"
    VALUE_26 = "VALUE_26"
    VALUE_27 = "VALUE_27"
    VALUE_28 = "VALUE_28"
    VALUE_29 = "VALUE_29"
    VALUE_30 = "VALUE_30"
    VALUE_31 = "VALUE_31"
    VALUE_32 = "VALUE_32"
    VALUE_33 = "VALUE_33"
    VALUE_34 = "VALUE_34"
    VALUE_35 = "VALUE_35"
    VALUE_36 = "VALUE_36"
    VALUE_37 = "VALUE_37"
    VALUE_38 = "VALUE_38"
    VALUE_39 = "VALUE_39"
    VALUE_40 = "VALUE_40"
    VALUE_41 = "VALUE_41"
    VALUE_42 = "VALUE_42"
    VALUE_43 = "VALUE_43"
    VALUE_44 = "VALUE_44"
    VALUE_45 = "VALUE_45"
    VALUE_46 = "VALUE_46"
    VALUE_47 = "VALUE_47"
    VALUE_48 = "VALUE_48"
    VALUE_49 = "VALUE_49"
    VALUE_50 = "VALUE_50"
    VALUE_51 = "VALUE_51"
    VALUE_52 = "VALUE_52"
    VALUE_53 = "VALUE_53"
    VALUE_54 = "VALUE_54"
    VALUE_55 = "VALUE_55"
    VALUE_56 = "VALUE_56"
    VALUE_57 = "VALUE_57"
    VALUE_58 = "VALUE_58"
    VALUE_59 = "VALUE_59"
    VALUE_60 = "VALUE_60"
    VALUE_61 = "VALUE_61"
    VALUE_62 = "VALUE_62"
    VALUE_63 = "VALUE_63"
    VALUE_64 = "VALUE_64"
    VALUE_65 = "VALUE_65"
    VALUE_66 = "VALUE_66"
    VALUE_67 = "VALUE_67"
    VALUE_68 = "VALUE_68"
    VALUE_69 = "VALUE_69"
    VALUE_70 = "VALUE_70"
    VALUE_71 = "VALUE_71"
    VALUE_72 = "VALUE_72"
    VALUE_73 = "VALUE_73"
    VALUE_74 = "VALUE_74"
    VALUE_75 = "VALUE_75"
    VALUE_76 = "VALUE_76"
    VALUE_77 = "VALUE_77"
    VALUE_78 = "VALUE_78"
    VALUE_79 = "VALUE_79"
    VALUE_80 = "VALUE_80"
    VALUE_81 = "VALUE_81"
    VALUE_82 = "VALUE_82"
    VALUE_83 = "VALUE_83"
    VALUE_84 = "VALUE_84"
    VALUE_85 = "VALUE_85"
    VALUE_86 = "VALUE_86"
    VALUE_87 = "VALUE_87"
    VALUE_88 = "VALUE_88"
    VALUE_89 = "VALUE_89"
    VALUE_90 = "VALUE_90"
    VALUE_91 = "VALUE_91"
    VALUE_92 = "VALUE_92"
    VALUE_93 = "VALUE_93"
    VALUE_94 = "VALUE_94"
    VALUE_95 = "VALUE_95"
    VALUE_96 = "VALUE_96"
    VALUE_97 = "VALUE_97"
    VALUE_98 = "VALUE_98"
    VALUE_99 = "VALUE_99"

    def get_info(self) -> tuple[str, int]:
        match self:
            case BigEnum.VALUE_01:
                return self.value, 1
            case BigEnum.VALUE_02:
                return self.value, 2
            case BigEnum.VALUE_03:
                return self.value, 3
            case BigEnum.VALUE_04:
                return self.value, 4
            case BigEnum.VALUE_05:
                return self.value, 5
            case BigEnum.VALUE_06:
                return self.value, 6
            case BigEnum.VALUE_07:
                return self.value, 7
            case BigEnum.VALUE_08:
                return self.value, 8
            case BigEnum.VALUE_09:
                return self.value, 9
            case BigEnum.VALUE_10:
                return self.value, 10
            case BigEnum.VALUE_11:
                return self.value, 11
            case BigEnum.VALUE_12:
                return self.value, 12
            case BigEnum.VALUE_13:
                return self.value, 13
            case BigEnum.VALUE_14:
                return self.value, 14
            case BigEnum.VALUE_15:
                return self.value, 15
            case BigEnum.VALUE_16:
                return self.value, 16
            case BigEnum.VALUE_17:
                return self.value, 17
            case BigEnum.VALUE_18:
                return self.value, 18
            case BigEnum.VALUE_19:
                return self.value, 19
            case BigEnum.VALUE_20:
                return self.value, 20
            case BigEnum.VALUE_21:
                return self.value, 21
            case BigEnum.VALUE_22:
                return self.value, 22
            case BigEnum.VALUE_23:
                return self.value, 23
            case BigEnum.VALUE_24:
                return self.value, 24
            case BigEnum.VALUE_25:
                return self.value, 25
            case BigEnum.VALUE_26:
                return self.value, 26
            case BigEnum.VALUE_27:
                return self.value, 27
            case BigEnum.VALUE_28:
                return self.value, 28
            case BigEnum.VALUE_29:
                return self.value, 29
            case BigEnum.VALUE_30:
                return self.value, 30
            case BigEnum.VALUE_31:
                return self.value, 31
            case BigEnum.VALUE_32:
                return self.value, 32
            case BigEnum.VALUE_33:
                return self.value, 33
            case BigEnum.VALUE_34:
                return self.value, 34
            case BigEnum.VALUE_35:
                return self.value, 35
            case BigEnum.VALUE_36:
                return self.value, 36
            case BigEnum.VALUE_37:
                return self.value, 37
            case BigEnum.VALUE_38:
                return self.value, 38
            case BigEnum.VALUE_39:
                return self.value, 39
            case BigEnum.VALUE_40:
                return self.value, 40
            case BigEnum.VALUE_41:
                return self.value, 41
            case BigEnum.VALUE_42:
                return self.value, 42
            case BigEnum.VALUE_43:
                return self.value, 43
            case BigEnum.VALUE_44:
                return self.value, 44
            case BigEnum.VALUE_45:
                return self.value, 45
            case BigEnum.VALUE_46:
                return self.value, 46
            case BigEnum.VALUE_47:
                return self.value, 47
            case BigEnum.VALUE_48:
                return self.value, 48
            case BigEnum.VALUE_49:
                return self.value, 49
            case BigEnum.VALUE_50:
                return self.value, 50
            case BigEnum.VALUE_51:
                return self.value, 51
            case BigEnum.VALUE_52:
                return self.value, 52
            case BigEnum.VALUE_53:
                return self.value, 53
            case BigEnum.VALUE_54:
                return self.value, 54
            case BigEnum.VALUE_55:
                return self.value, 55
            case BigEnum.VALUE_56:
                return self.value, 56
            case BigEnum.VALUE_57:
                return self.value, 57
            case BigEnum.VALUE_58:
                return self.value, 58
            case BigEnum.VALUE_59:
                return self.value, 59
            case BigEnum.VALUE_60:
                return self.value, 60
            case BigEnum.VALUE_61:
                return self.value, 61
            case BigEnum.VALUE_62:
                return self.value, 62
            case BigEnum.VALUE_63:
                return self.value, 63
            case BigEnum.VALUE_64:
                return self.value, 64
            case BigEnum.VALUE_65:
                return self.value, 65
            case BigEnum.VALUE_66:
                return self.value, 66
            case BigEnum.VALUE_67:
                return self.value, 67
            case BigEnum.VALUE_68:
                return self.value, 68
            case BigEnum.VALUE_69:
                return self.value, 69
            case BigEnum.VALUE_70:
                return self.value, 70
            case BigEnum.VALUE_71:
                return self.value, 71
            case BigEnum.VALUE_72:
                return self.value, 72
            case BigEnum.VALUE_73:
                return self.value, 73
            case BigEnum.VALUE_74:
                return self.value, 74
            case BigEnum.VALUE_75:
                return self.value, 75
            case BigEnum.VALUE_76:
                return self.value, 76
            case BigEnum.VALUE_77:
                return self.value, 77
            case BigEnum.VALUE_78:
                return self.value, 78
            case BigEnum.VALUE_79:
                return self.value, 79
            case BigEnum.VALUE_80:
                return self.value, 80
            case BigEnum.VALUE_81:
                return self.value, 81
            case BigEnum.VALUE_82:
                return self.value, 82
            case BigEnum.VALUE_83:
                return self.value, 83
            case BigEnum.VALUE_84:
                return self.value, 84
            case BigEnum.VALUE_85:
                return self.value, 85
            case BigEnum.VALUE_86:
                return self.value, 86
            case BigEnum.VALUE_87:
                return self.value, 87
            case BigEnum.VALUE_88:
                return self.value, 88
            case BigEnum.VALUE_89:
                return self.value, 89
            case BigEnum.VALUE_90:
                return self.value, 90
            case BigEnum.VALUE_91:
                return self.value, 91
            case BigEnum.VALUE_92:
                return self.value, 92
            case BigEnum.VALUE_93:
                return self.value, 93
            case BigEnum.VALUE_94:
                return self.value, 94
            case BigEnum.VALUE_95:
                return self.value, 95
            case BigEnum.VALUE_96:
                return self.value, 96
            case BigEnum.VALUE_97:
                return self.value, 97
            case BigEnum.VALUE_98:
                return self.value, 98
            case BigEnum.VALUE_99:
                return self.value, 99
```

On my machine, memoizing the computation brings us from 70s to 0.6s.
2025-12-17 22:01:50 -05:00
Amethyst Reese
9cc132f098 [eradicate] ignore ruff:disable and ruff:enable comments in ERA001 (#22038)
## Summary

Don't flag `# ruff: disable` or `# ruff: enable` comments as
commented-out code.
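
For example, these are suppression directives, not commented-out code, and ERA001 now leaves them alone:

```py
# ruff: disable
print("temporarily unchecked")
# ruff: enable
```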

## Test Plan

New test cases.

Issue #3711
2025-12-17 17:04:49 -08:00
Shantanu
cf8d2e35a8 New rule to prevent implicit string concatenation in collections (#21972)
This is a common footgun; see the example in
https://github.com/astral-sh/ruff/issues/13014#issuecomment-3411496519

Fixes #13014 , fixes #13031
2025-12-17 17:37:01 -05:00
Brent Westbrook
0290f5dc3b [flake8-bandit] Fix broken link (S704) (#22039)
Summary
--

While going through all the rules I noticed a broken link in the S704
docs. (I then got really confused because it appeared to be correct in
the _other_ `unsafe_markup_use.rs` file for the removed RUF version of
the rule).

Test Plan
--

[Before](https://docs.astral.sh/ruff/rules/unsafe-markup-use/):

<img width="537" height="171" alt="image"
src="https://github.com/user-attachments/assets/01007cab-6673-48e5-b3a5-6006bc78a027"
/>


After:

<img width="451" height="189" alt="image"
src="https://github.com/user-attachments/assets/4e5d0e0d-76be-4f66-b747-e209f11ab11a"
/>

I also did at least a cursory grep for other cases with escaped link
brackets (`\[`) and only turned up this rule on main.
2025-12-17 17:35:04 -05:00
Charlie Marsh
5bb9ee2a9d [ty] Respect deferred values in keyword arguments et al for .pyi files (#22029)
## Summary

Closes https://github.com/astral-sh/ty/issues/2019.
2025-12-17 14:02:10 -08:00
Aria Desires
638f230910 [ty] improve rendering of signatures in hovers (#22007)
This is the return of #21438 because we never found anything better and
I think it would be good to have this for the beta.
2025-12-17 20:09:31 +00:00
Douglas Creager
b36ff75a24 [ty] Don't add identical lower/upper bounds multiple times when inferring specializations (#22030)
When inferring a specialization of a `Callable` type, we use the new
constraint set implementation. In the example in
https://github.com/astral-sh/ty/issues/1968, we end up with a constraint
set that includes all of the following clauses:

```
     U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M1 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M2 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M3 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M4 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M5 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M6 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M7 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
```

In general, we take the upper bounds of those constraints to get the
specialization. However, the upper bounds of those constraints are not
all guaranteed to be the same, and so first we need to intersect them
all together. In this case, the upper bounds are all identical, so their
intersection is trivial:

```
U_co = M1 | M2 | M3 | M4 | M5 | M6 | M7
```

But we were still doing the work of calculating that trivial
intersection 7 times. And each time we have to do 7^2 comparisons of the
`M*` classes, ending up with O(n^3) overall work.

This pattern is common enough that we can put in a quick heuristic to
prune identical copies of the same type before performing the
intersection.
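
Modeled in Python with sets standing in for types (a sketch of the heuristic, not the actual Rust):

```py
def intersect_all(bounds: list[frozenset]) -> frozenset:
    # Prune identical copies up front: n identical bounds cost one list
    # scan instead of n expensive pairwise intersections.
    unique: list[frozenset] = []
    for bound in bounds:
        if bound not in unique:
            unique.append(bound)
    result = unique[0]
    for bound in unique[1:]:
        result &= bound  # stand-in for the real type-intersection builder
    return result

# Seven identical upper bounds, as in the issue: one trivial "intersection".
bound = frozenset({"M1", "M2", "M3", "M4", "M5", "M6", "M7"})
print(intersect_all([bound] * 7))
```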

Fixes https://github.com/astral-sh/ty/issues/1968
2025-12-17 13:35:52 -05:00
Charlie Marsh
30c3f9aafe [ty] Apply narrowing to len calls based on argument size (#22026)
## Summary

Closes https://github.com/astral-sh/ty/issues/1983.
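
A sketch of the feature, assuming the usual fixed-size-tuple narrowing (hypothetical example):

```py
def f(x: tuple[int] | tuple[str, str]):
    if len(x) == 1:
        reveal_type(x)  # narrowed to `tuple[int]`
    else:
        reveal_type(x)  # narrowed to `tuple[str, str]`
```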
2025-12-17 13:15:58 -05:00
chiri
883701ae88 [flake8-use-pathlib] Make fixes unsafe when types change in compound statements (PTH104, PTH105, PTH109, PTH115) (#22009)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/21794

## Test Plan

`cargo nextest run flake8_use_pathlib`
2025-12-17 12:31:27 -05:00
Alex Waygood
0bd7a94c27 [ty] Improve unsupported-base and invalid-super-argument diagnostics to avoid extremely long lines when encountering verbose types (#22022) 2025-12-17 14:43:11 +00:00
Bhuminjay Soni
421f88bb32 [refurb] Extend support for Path.open (FURB101, FURB103) (#21080)
## Summary

This PR fixes https://github.com/astral-sh/ruff/issues/18409
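
A sketch of the newly covered pattern, assuming `Path.open` is now treated like the builtin `open`:

```py
from pathlib import Path

# Now recognized by FURB103 (write-whole-file); the suggested fix is
# `Path("out.txt").write_text("hello")`.
with Path("out.txt").open("w") as f:
    f.write("hello")
```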

## Test Plan

I have added tests in FURB103.

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-17 09:18:13 -05:00
David Peter
b0eb39d112 [ty] ecosystem-analyzer: Flush full stderr output in case of panics (#22023)
Pulls in
2e1816eac0
2025-12-17 15:02:59 +01:00
charliecloudberry
260f463edd Update setup.md (#22024) 2025-12-17 14:56:25 +01:00
Bhuminjay Soni
52849a5e68 [syntax-errors] Annotated name cannot be global (#20868)
## Summary

This PR implements a new semantic syntax error: an annotated name can't
be declared `global`. For example:
```
x: int = 1

def f():
    global x
    x: str = "foo"  # SyntaxError: annotated name 'x' can't be global
```

## Test Plan

I have written tests as directed in #17412

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-17 08:39:47 -05:00
David Peter
2a61fe2353 [ty] Handle field specifier functions that accept **kwargs and recognize metaclass-based transformers as instances of DataclassInstance (#22018)
## Summary

This contains two bug fixes:

- [Handle field specifier functions that accept
`**kwargs`](ad6918d505)
- [Recognize metaclass-based transformers as instances of
`DataclassInstance`](1a8e29b23c)

closes https://github.com/astral-sh/ty/issues/1987
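
A rough sketch of the two patterns (hypothetical names; see the linked commits for the real tests):

```py
from typing import Any, dataclass_transform

def field(*, default: Any = None, **kwargs: Any) -> Any: ...  # accepts **kwargs

@dataclass_transform(field_specifiers=(field,))
class ModelMeta(type): ...  # metaclass-based transformer

class Model(metaclass=ModelMeta):
    name: str = field(default="x")

# Instances of `Model` are now also recognized as `DataclassInstance`.
```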

## Test Plan

* New Markdown tests
* Made sure that the example in 1987 checks without errors
2025-12-17 14:22:16 +01:00
Alex Waygood
764ad8b29b [ty] Improve disambiguation of types in many cases (#22019) 2025-12-17 11:41:07 +00:00
mahiro
85af715880 Fix playground Share button showing "Copied!" before clipboard copy completes (#21942)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-17 12:16:01 +01:00
Phong Do
b0bc990cbf [pyupgrade] Fix parsing named Unicode escape sequences (UP032) (#21901)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/19771

Fixes incorrect parsing of Unicode named escape sequences like `Hey
\N{snowman}` in `FormatString`, which were being incorrectly split into
separate literal and field parts instead of being treated as a single
literal unit.

## Problem

The `FormatString` parser incorrectly handles Unicode named escape
sequences:
- **Current**: `Hey \N{snowman}` is parsed into 2 parts `Literal("Hey
\N")` & `Field("snowman")`
- **Expected**: `Hey \N{snowman}` should be parsed into 1 part
`Literal("Hey \N{snowman}")`

This affects f-string conversion rules when fixing `UP032` that rely on
proper format string parsing.

## Solution

I modified `parse_literal` to detect and handle Unicode named escape
sequences before parsing single characters:
- Introduced a flag to track when a backslash is "available" to escape
something.
- When the flag is `true`, and the text starts with `N{`, try to parse
the complete Unicode escape sequence as one unit, and set the flag to
`false` after parsing successfully.
- Set the flag to `false` when the backslash is already consumed.

## Manual Verification

`"\N{angle}AOB = {angle}°".format(angle=180)` 

**Result**

```bash
 def foo():
-    "\N{angle}AOB = {angle}°".format(angle=180)
+    f"\N{angle}AOB = {180}°"

Would fix 1 error.
```

`"\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\N{snowman} {snowman}".format(snowman=1)
+    f"\N{snowman} {1}"

Would fix 1 error.
```

`"\\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\\N{snowman} {snowman}".format(snowman=1)
+    f"\\N{1} {1}"

Would fix 1 error.
```

## Test Plan

- Added test cases (happy case, invalid case, edge case) for
`FormatString` when parsing Unicode escape sequence.
- Updated snapshots.
2025-12-16 16:33:39 -05:00
Aria Desires
ad3de4e488 [ty] Improve rendering of default values for function args (#22010)
## Summary

We're actually quite good at computing this but the main issue is just
that we compute it at the type-level and so wrap it in `Literal[...]`.
So just special-case the rendering of these to omit `Literal[...]` and
fallback to `...` in cases where the thing we'll show is probably
useless (i.e. `x: str = str`).

Fixes https://github.com/astral-sh/ty/issues/1882
2025-12-16 13:39:19 -05:00
Douglas Creager
2214a46139 [ty] Don't use implicit superclass annotation when converting a class constructor into a Callable (#22011)
This fixes a bug @zsol found running ty against pyx. His original repro
is:

```py
class Base:
    def __init__(self) -> None: pass

class A(Base):
    pass

def foo[T](callable: Callable[..., T]) -> T:
    return callable()

a: A = foo(A)
```

The call at the bottom would fail, since we would infer `() -> Base` as
the callable type of `A`, when it should be `() -> A`.

The issue was how we add implicit annotations to `self` parameters.
Typically, we turn it into `self: Self`. But in cases where we don't
need to introduce a full typevar, we turn it into `self: [the class
itself]` — in this case, `self: Base`. Then, when turning the class
constructor into a callable, we would see this non-`Self` annotation and
think that it was important and load-bearing.

The fix is that we skip all implicit annotations when determining
whether the `self` annotation should take precedence in the callable's
return type.
2025-12-16 13:37:11 -05:00
Douglas Creager
c02bd11b93 [ty] Infer typevar specializations for Callable types (#21551)
This is a first stab at solving
https://github.com/astral-sh/ty/issues/500, at least in part, with the
old solver. We add a new `TypeRelation` that lets us opt into using
constraint sets to describe when a typevar is assignability to some
type, and then use that to calculate a constraint set that describes
when two callable types are assignable. If the callable types contain
typevars, that constraint set will describe their valid specializations.
We can then walk through all of the ways the constraint set can be
satisfied, and record a type mapping in the old solver for each one.
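
A sketch of the kind of code this helps with (hypothetical example):

```py
from typing import Callable

def identity[T](x: T) -> T:
    return x

def apply(f: Callable[[int], int]) -> int:
    return f(1)

# Matching `identity` against `Callable[[int], int]` produces a constraint
# set whose solutions (here, `T = int`) specialize the generic callable.
apply(identity)
```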

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-16 09:16:49 -08:00
Carl Meyer
eeaaa8e9fe [ty] propagate classmethod-ness through decorators returning Callables (#21958)
Fixes https://github.com/astral-sh/ty/issues/1787

## Summary

Allow method decorators returning Callables to presumptively propagate
"classmethod-ness" in the same way that they already presumptively
propagate "function-like-ness". We can't actually be sure that this is
the case, based on the decorator's annotations, but (along with other
type checkers) we heuristically assume it to be the case for decorators
applied via decorator syntax.
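
A sketch of the pattern this enables (hypothetical decorator):

```py
from typing import Callable

def logged[**P, R](f: Callable[P, R]) -> Callable[P, R]:
    return f

class C:
    @logged
    @classmethod
    def make(cls) -> "C":
        return cls()

C.make()  # still treated as a classmethod through `logged`
```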

## Test Plan

Added mdtest.
2025-12-16 09:16:40 -08:00
Rob Hand
7f7485d608 [ty] Fixed benchmark markdown auto-linenumbers (#22008) 2025-12-16 16:38:11 +01:00
Aria Desires
d755f3b522 [ty] Improve syntax-highlighting of constants (#22006)
## Summary

BLAH1 getting different highlighting/classification from BLAH is very
distracting and undesirable.
2025-12-16 09:23:40 -05:00
Aria Desires
83168a1bb1 [ty] highlight special type syntax in hovers as xml (#22005)
## Summary

These types look better rendered as XML than as Python

## Test Plan

<img width="532" height="299" alt="Screenshot 2025-12-16 at 8 40 56 AM"
src="https://github.com/user-attachments/assets/42d9abfa-3f4a-44ba-b6b4-6700ab06832d"
/>
2025-12-16 14:20:35 +00:00
Andrew Gallant
0f373603eb [ty] Suppress keyword argument completions unless we're in the "arguments"
Otherwise, given a case like this:

```
(lambda foo: (<CURSOR> + 1))(2)
```

we'll offer _argument_ completions for `foo` at the cursor position.
While we do actually want to offer completions for `foo` in this
context, it is currently difficult to do so. But we definitely don't
want to offer completions for `foo` as an argument to a function here.
Which is what we were doing.

We also add an end-to-end test here to verify that the actual label we
offer in completion suggestions includes the `=` suffix.

Closes https://github.com/astral-sh/ruff/pull/21970
2025-12-16 08:00:04 -05:00
Andrew Gallant
cc23af944f [ty] Tweak how we show qualified auto-import completions
Specifically, we make two changes:

1. We only show `import ...` when there is an actual import edit.
2. We now show the text we will insert. This means that when we
insert a qualified symbol, the qualification will show in the
completions suggested.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant
0589700ca1 [ty] Prefer unqualified imports over qualified imports
It seems like this is perhaps a better default:
https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790

For me personally, I think I'd prefer the qualified
variant. But I can easily see this changing based on
the specific scenario. I think the thing that pushes
me toward prioritizing the unqualified variant is that
the user could have typed the qualified variant themselves,
but they didn't. So we should perhaps prioritize the
form they typed, which is unqualified.
2025-12-16 08:00:04 -05:00
Andrew Gallant
43d983ecae [ty] Add tests for how qualified auto-imports are shown
Specifically, we want to test that something like `import typing`
should only be shown when we are actually going to insert an import.
*And* that when we insert a qualified name, then we should show it
as such in the completion suggestions.
2025-12-16 08:00:04 -05:00
Andrew Gallant
5c69bb564c [ty] Add a test capturing sub-optimal auto-import heuristic
Specifically, here, we'd probably like to add `TypedDict` to
the existing `from typing import ...` statement instead of
using the fully qualified `typing.TypedDict` form.

To test this, we add another snapshot mode for including imports
that a completion will insert when selected.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant
89fed85a8d [ty] Switch completion tests to use "insertion" text in snapshots
I think this gives us better tests overall, since it
tells us what is actually going to be inserted. Plus,
we'll want this in the next commit.
2025-12-16 08:00:04 -05:00
Zanie Blue
051f6896ac [ty] Remove extra headings and split examples in the overrides configuration docs (#21994)
Having these as markdown headings ends up being weird in the reference
documentation, e.g., before:

<img width="1071" height="779" alt="Screenshot 2025-12-15 at 8 45 25 PM"
src="https://github.com/user-attachments/assets/2118d4f1-f557-46f3-a4b6-56c406cf9aca"
/>
2025-12-16 06:57:06 -06:00
David Peter
5b1d3ac9b9 [ty] Document TY_CONFIG_FILE (#22001) 2025-12-16 13:15:24 +01:00
Micha Reiser
b2b0ad38ea [ty] Cache KnownClass::to_class_literal (#22000) 2025-12-16 13:04:12 +01:00
Micha Reiser
01c0a3e960 [ty] Fix benchmark assertion (#22003) 2025-12-16 12:24:54 +01:00
Zanie Blue
5c942119f8 Add uv and ty to the Ruff README (#21996)
Matching the style of the uv README
2025-12-16 10:37:15 +01:00
David Peter
2acf1cc0fd [ty] Infer precise types for isinstance(…) calls involving typevars (#21999)
## Summary

Infer `Literal[True]` for `isinstance(x, C)` calls when `x: T` and `T`
has a bound `B` that satisfies the `isinstance` check against `C`.
Similar for constrained typevars.

closes https://github.com/astral-sh/ty/issues/1895
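
A sketch of the new inference (hypothetical classes):

```py
class Base: ...

def f[T: Base](x: T) -> None:
    # The bound on `T` guarantees this check succeeds, so ty now infers
    # `Literal[True]` for the call instead of plain `bool`.
    if isinstance(x, Base):
        ...
```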

## Test Plan

* New Markdown tests
* Verified that the example in the linked ticket checks without errors
2025-12-16 10:34:30 +01:00
Micha Reiser
4fdbe26445 [ty] Use FxHashMap in Signature::has_relation_to (#21997) 2025-12-16 10:10:45 +01:00
Charlie Marsh
682d29c256 [ty] Avoid enforcing standalone expression for tests in f-strings (#21967)
## Summary

Based on what we do elsewhere and my understanding of "standalone"
here...

Closes https://github.com/astral-sh/ty/issues/1865.
2025-12-15 22:31:04 -05:00
286 changed files with 12384 additions and 3580 deletions

.gitattributes vendored
View File

@@ -22,6 +22,7 @@ crates/ruff_linter/resources/test/fixtures/pyupgrade/UP018_CR.py text eol=cr
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP018_LF.py text eol=lf
crates/ruff_python_parser/resources/inline linguist-generated=true
crates/ty_python_semantic/resources/mdtest/external/*.lock linguist-generated=true
ruff.schema.json -diff linguist-generated=true text=auto eol=lf
ty.schema.json -diff linguist-generated=true text=auto eol=lf

View File

@@ -4,5 +4,6 @@
# Enable off-by-default rules.
[rules]
possibly-unresolved-reference = "warn"
possibly-missing-import = "warn"
unused-ignore-comment = "warn"
division-by-zero = "warn"

View File

@@ -62,7 +62,7 @@ jobs:
name: Create an issue if the daily fuzz surfaced any bugs
runs-on: ubuntu-latest
needs: fuzz
- if: ${{ github.repository == 'astral-sh/ruff' && always() && github.event_name == 'schedule' && needs.fuzz.result == 'failure' }}
+ if: ${{ github.repository == 'astral-sh/ruff' && always() && github.event_name == 'schedule' && needs.fuzz.result != 'success' }}
permissions:
issues: write
steps:

View File

@@ -6,6 +6,11 @@ on:
pull_request:
paths:
- "crates/ty*/**"
- "!crates/ty_ide/**"
- "!crates/ty_server/**"
- "!crates/ty_test/**"
- "!crates/ty_completion_eval/**"
- "!crates/ty_wasm/**"
- "crates/ruff_db"
- "crates/ruff_python_ast"
- "crates/ruff_python_parser"

View File

@@ -16,8 +16,7 @@ name: Sync typeshed
# 3. Once the Windows worker is done, a MacOS worker:
# a. Checks out the branch created by the Linux worker
# b. Syncs all docstrings available on MacOS that are not available on Linux or Windows
- # c. Attempts to update any snapshots that might have changed
- #    (this sub-step is allowed to fail)
+ # c. Formats the code again
# d. Commits the changes and pushes them to the same upstream branch
# e. Creates a PR against the `main` branch using the branch all three workers have pushed to
# 4. If any of steps 1-3 failed, an issue is created in the `astral-sh/ruff` repository
@@ -198,42 +197,6 @@ jobs:
run: |
rm "${VENDORED_TYPESHED}/pyproject.toml"
git commit -am "Remove pyproject.toml file"
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: "Install Rust toolchain"
if: ${{ success() }}
run: rustup show
- name: "Install mold"
if: ${{ success() }}
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
if: ${{ success() }}
uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with:
tool: cargo-nextest
- name: "Install cargo insta"
if: ${{ success() }}
uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with:
tool: cargo-insta
- name: Update snapshots
if: ${{ success() }}
run: |
cargo r \
--profile=profiling \
-p ty_completion_eval \
-- all --tasks ./crates/ty_completion_eval/completion-evaluation-tasks.csv
# The `cargo insta` docs indicate that `--unreferenced=delete` might be a good option,
# but from local testing it appears to just revert all changes made by `cargo insta test --accept`.
#
# If there were only snapshot-related failures, `cargo insta test --accept` will have exit code 0,
# but if there were also other mdtest failures (for example), it will return a nonzero exit code.
# We don't care about other tests failing here, we just want snapshots updated where possible,
# so we use `|| true` here to ignore the exit code.
cargo insta test --accept --color=always --all-features --test-runner=nextest || true
- name: Commit snapshot changes
if: ${{ success() }}
run: git commit -am "Update snapshots" || echo "No snapshot changes to commit"
- name: Push changes upstream and create a PR
if: ${{ success() }}
run: |
@@ -245,7 +208,7 @@ jobs:
name: Create an issue if the typeshed sync failed
runs-on: ubuntu-latest
needs: [sync, docstrings-windows, docstrings-macos-and-pr]
- if: ${{ github.repository == 'astral-sh/ruff' && always() && github.event_name == 'schedule' && (needs.sync.result == 'failure' || needs.docstrings-windows.result == 'failure' || needs.docstrings-macos-and-pr.result == 'failure') }}
+ if: ${{ github.repository == 'astral-sh/ruff' && always() && github.event_name == 'schedule' && (needs.sync.result != 'success' || needs.docstrings-windows.result != 'success' || needs.docstrings-macos-and-pr.result != 'success') }}
permissions:
issues: write
steps:

View File

@@ -17,7 +17,6 @@ env:
RUSTUP_MAX_RETRIES: 10
RUST_BACKTRACE: 1
REF_NAME: ${{ github.ref_name }}
CF_API_TOKEN_EXISTS: ${{ secrets.CF_API_TOKEN != '' }}
jobs:
ty-ecosystem-analyzer:
@@ -67,7 +66,7 @@ jobs:
cd ..
- uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@55df3c868f3fa9ab34cff0498dd6106722aac205"
+ uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@2e1816eac09c90140b1ba51d19afc5f59da460f5"
ecosystem-analyzer \
--repository ruff \
@@ -112,22 +111,13 @@ jobs:
cat diff-statistics.md >> "$GITHUB_STEP_SUMMARY"
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
id: deploy
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3.14.1
# NOTE: astral-sh-bot uses this artifact to post comments on PRs.
# Make sure to update the bot if you rename the artifact.
- name: "Upload full report"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}
command: pages deploy dist --project-name=ty-ecosystem --branch ${{ github.head_ref }} --commit-hash ${GITHUB_SHA}
- name: "Append deployment URL"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
env:
DEPLOYMENT_URL: ${{ steps.deploy.outputs.pages-deployment-alias-url }}
run: |
echo >> comment.md
echo "**[Full report with detailed diff]($DEPLOYMENT_URL/diff)** ([timing results]($DEPLOYMENT_URL/timing))" >> comment.md
name: full-report
path: dist/
# NOTE: astral-sh-bot uses this artifact to post comments on PRs.
# Make sure to update the bot if you rename the artifact.

View File

@@ -52,7 +52,7 @@ jobs:
cd ..
- uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@55df3c868f3fa9ab34cff0498dd6106722aac205"
+ uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@2e1816eac09c90140b1ba51d19afc5f59da460f5"
ecosystem-analyzer \
--verbose \

View File

@@ -6,6 +6,11 @@ on:
pull_request:
paths:
- "crates/ty*/**"
- "!crates/ty_ide/**"
- "!crates/ty_server/**"
- "!crates/ty_test/**"
- "!crates/ty_completion_eval/**"
- "!crates/ty_wasm/**"
- "crates/ruff_db"
- "crates/ruff_python_ast"
- "crates/ruff_python_parser"

View File

@@ -1,5 +1,54 @@
# Changelog
## 0.14.10
Released on 2025-12-18.
### Preview features
- [formatter] Fluent formatting of method chains ([#21369](https://github.com/astral-sh/ruff/pull/21369))
- [formatter] Keep lambda parameters on one line and parenthesize the body if it expands ([#21385](https://github.com/astral-sh/ruff/pull/21385))
- \[`flake8-implicit-str-concat`\] New rule to prevent implicit string concatenation in collections (`ISC004`) ([#21972](https://github.com/astral-sh/ruff/pull/21972))
- \[`flake8-use-pathlib`\] Make fixes unsafe when types change in compound statements (`PTH104`, `PTH105`, `PTH109`, `PTH115`) ([#22009](https://github.com/astral-sh/ruff/pull/22009))
- \[`refurb`\] Extend support for `Path.open` (`FURB101`, `FURB103`) ([#21080](https://github.com/astral-sh/ruff/pull/21080))
### Bug fixes
- \[`pyupgrade`\] Fix parsing named Unicode escape sequences (`UP032`) ([#21901](https://github.com/astral-sh/ruff/pull/21901))
### Rule changes
- \[`eradicate`\] Ignore `ruff:disable` and `ruff:enable` comments in `ERA001` ([#22038](https://github.com/astral-sh/ruff/pull/22038))
- \[`flake8-pytest-style`\] Allow `match` and `check` keyword arguments without an expected exception type (`PT010`) ([#21964](https://github.com/astral-sh/ruff/pull/21964))
- [syntax-errors] Annotated name cannot be global ([#20868](https://github.com/astral-sh/ruff/pull/20868))
### Documentation
- Add `uv` and `ty` to the Ruff README ([#21996](https://github.com/astral-sh/ruff/pull/21996))
- Document known lambda formatting deviations from Black ([#21954](https://github.com/astral-sh/ruff/pull/21954))
- Update `setup.md` ([#22024](https://github.com/astral-sh/ruff/pull/22024))
- \[`flake8-bandit`\] Fix broken link (`S704`) ([#22039](https://github.com/astral-sh/ruff/pull/22039))
### Other changes
- Fix playground Share button showing "Copied!" before clipboard copy completes ([#21942](https://github.com/astral-sh/ruff/pull/21942))
### Contributors
- [@dylwil3](https://github.com/dylwil3)
- [@charliecloudberry](https://github.com/charliecloudberry)
- [@charliermarsh](https://github.com/charliermarsh)
- [@chirizxc](https://github.com/chirizxc)
- [@ntBre](https://github.com/ntBre)
- [@zanieb](https://github.com/zanieb)
- [@amyreese](https://github.com/amyreese)
- [@hauntsaninja](https://github.com/hauntsaninja)
- [@11happy](https://github.com/11happy)
- [@mahiro72](https://github.com/mahiro72)
- [@MichaReiser](https://github.com/MichaReiser)
- [@phongddo](https://github.com/phongddo)
- [@PeterJCLaw](https://github.com/PeterJCLaw)
## 0.14.9
Released on 2025-12-11.

Cargo.lock generated
View File

@@ -1004,27 +1004,6 @@ dependencies = [
"crypto-common",
]
[[package]]
name = "dir-test"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62c013fe825864f3e4593f36426c1fa7a74f5603f13ca8d1af7a990c1cd94a79"
dependencies = [
"dir-test-macros",
]
[[package]]
name = "dir-test-macros"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d42f54d7b4a6bc2400fe5b338e35d1a335787585375322f49c5d5fe7b243da7e"
dependencies = [
"glob",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "dirs"
version = "6.0.0"
@@ -2908,7 +2887,7 @@ dependencies = [
[[package]]
name = "ruff"
- version = "0.14.9"
+ version = "0.14.10"
dependencies = [
"anyhow",
"argfile",
@@ -3166,7 +3145,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
- version = "0.14.9"
+ version = "0.14.10"
dependencies = [
"aho-corasick",
"anyhow",
@@ -3525,7 +3504,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
- version = "0.14.9"
+ version = "0.14.10"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -4513,7 +4492,7 @@ dependencies = [
"camino",
"colored 3.0.0",
"compact_str",
"dir-test",
"datatest-stable",
"drop_bomb",
"get-size2",
"glob",

View File

@@ -82,7 +82,6 @@ criterion = { version = "0.7.0", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
datatest-stable = { version = "0.3.3" }
dir-test = { version = "0.4.0" }
dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" }
etcetera = { version = "0.11.0" }

View File

@@ -57,8 +57,11 @@ Ruff is extremely actively developed and used in major open-source projects like
...and [many more](#whos-using-ruff).
Ruff is backed by [Astral](https://astral.sh). Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff),
or the original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
Ruff is backed by [Astral](https://astral.sh), the creators of
[uv](https://github.com/astral-sh/uv) and [ty](https://github.com/astral-sh/ty).
Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff), or the
original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
## Testimonials
@@ -147,8 +150,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
- curl -LsSf https://astral.sh/ruff/0.14.9/install.sh | sh
- powershell -c "irm https://astral.sh/ruff/0.14.9/install.ps1 | iex"
+ curl -LsSf https://astral.sh/ruff/0.14.10/install.sh | sh
+ powershell -c "irm https://astral.sh/ruff/0.14.10/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -181,7 +184,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.9
rev: v0.14.10
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -4,6 +4,7 @@ extend-exclude = [
"crates/ty_vendored/vendor/**/*",
"**/resources/**/*",
"**/snapshots/**/*",
"crates/ruff_linter/src/rules/flake8_implicit_str_concat/rules/collection_literal.rs",
# Completion tests tend to have a lot of incomplete
# words naturally. It's annoying to have to make all
# of them actually words. So just ignore typos here.

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.14.9"
version = "0.14.10"
publish = true
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -194,7 +194,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
13030,
13100,
);
static TANJUN: Benchmark = Benchmark::new(
@@ -223,7 +223,7 @@ static STATIC_FRAME: Benchmark = Benchmark::new(
max_dep_date: "2025-08-09",
python_version: PythonVersion::PY311,
},
950,
1100,
);
#[track_caller]

View File

@@ -1,5 +1,6 @@
use glob::PatternError;
use ruff_notebook::{Notebook, NotebookError};
use rustc_hash::FxHashMap;
use std::panic::RefUnwindSafe;
use std::sync::{Arc, Mutex};
@@ -20,18 +21,44 @@ use super::walk_directory::WalkDirectoryBuilder;
///
/// ## Warning
/// Don't use this system for production code. It's intended for testing only.
#[derive(Debug, Clone)]
#[derive(Debug)]
pub struct TestSystem {
inner: Arc<dyn WritableSystem + RefUnwindSafe + Send + Sync>,
/// Environment variable overrides. If a key is present here, it takes precedence
/// over the inner system's environment variables.
env_overrides: Arc<Mutex<FxHashMap<String, Option<String>>>>,
}
impl Clone for TestSystem {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
env_overrides: self.env_overrides.clone(),
}
}
}
impl TestSystem {
pub fn new(inner: impl WritableSystem + RefUnwindSafe + Send + Sync + 'static) -> Self {
Self {
inner: Arc::new(inner),
env_overrides: Arc::new(Mutex::new(FxHashMap::default())),
}
}
/// Sets an environment variable override. This takes precedence over the inner system.
pub fn set_env_var(&self, name: impl Into<String>, value: impl Into<String>) {
self.env_overrides
.lock()
.unwrap()
.insert(name.into(), Some(value.into()));
}
/// Removes an environment variable override, making it appear as not set.
pub fn remove_env_var(&self, name: impl Into<String>) {
self.env_overrides.lock().unwrap().insert(name.into(), None);
}
/// Returns the [`InMemorySystem`].
///
/// ## Panics
@@ -147,6 +174,18 @@ impl System for TestSystem {
self.system().case_sensitivity()
}
fn env_var(&self, name: &str) -> std::result::Result<String, std::env::VarError> {
// Check overrides first
if let Some(override_value) = self.env_overrides.lock().unwrap().get(name) {
return match override_value {
Some(value) => Ok(value.clone()),
None => Err(std::env::VarError::NotPresent),
};
}
// Fall back to inner system
self.system().env_var(name)
}
fn dyn_clone(&self) -> Box<dyn System> {
Box::new(self.clone())
}
@@ -156,6 +195,7 @@ impl Default for TestSystem {
fn default() -> Self {
Self {
inner: Arc::new(InMemorySystem::default()),
env_overrides: Arc::new(Mutex::new(FxHashMap::default())),
}
}
}

View File

@@ -63,12 +63,7 @@ fn generate_markdown() -> String {
let _ = writeln!(&mut output, "# Rules\n");
let mut lints: Vec<_> = registry.lints().iter().collect();
lints.sort_by(|a, b| {
a.default_level()
.cmp(&b.default_level())
.reverse()
.then_with(|| a.name().cmp(&b.name()))
});
lints.sort_by_key(|a| a.name());
for lint in lints {
let _ = writeln!(&mut output, "## `{rule_name}`\n", rule_name = lint.name());
@@ -119,7 +114,7 @@ fn generate_markdown() -> String {
let _ = writeln!(
&mut output,
r#"<small>
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of '{level}'."><code>{level}</code></a> ·
Default level: <a href="../../rules#rule-levels" title="This lint has a default level of '{level}'."><code>{level}</code></a> ·
{status_text} ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20{encoded_name}" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/{file}#L{line}" target="_blank">View source</a>

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.9"
version = "0.14.10"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -0,0 +1,66 @@
facts = (
"Lobsters have blue blood.",
"The liver is the only human organ that can fully regenerate itself.",
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon.",
)
facts = [
"Lobsters have blue blood.",
"The liver is the only human organ that can fully regenerate itself.",
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon.",
]
facts = {
"Lobsters have blue blood.",
"The liver is the only human organ that can fully regenerate itself.",
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon.",
}
facts = {
(
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon."
),
}
facts = (
"Octopuses have three hearts."
# Missing comma here.
"Honey never spoils.",
)
facts = [
"Octopuses have three hearts."
# Missing comma here.
"Honey never spoils.",
]
facts = {
"Octopuses have three hearts."
# Missing comma here.
"Honey never spoils.",
}
facts = (
(
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon."
),
)
facts = [
(
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon."
),
]
facts = (
"Lobsters have blue blood.\n"
"The liver is the only human organ that can fully regenerate itself.\n"
"Clarinets are made almost entirely out of wood from the mpingo tree.\n"
"In 1971, astronaut Alan Shepard played golf on the moon.\n"
)

View File

@@ -9,3 +9,15 @@ def test_ok():
def test_error():
with pytest.raises(UnicodeError):
pass
def test_match_only():
with pytest.raises(match="some error message"):
pass
def test_check_only():
with pytest.raises(check=lambda e: True):
pass
def test_match_and_check():
with pytest.raises(match="some error message", check=lambda e: True):
pass

View File

@@ -136,4 +136,38 @@ os.chmod("pth1_file", 0o700, None, True, 1, *[1], **{"x": 1}, foo=1)
os.rename("pth1_file", "pth1_file1", None, None, 1, *[1], **{"x": 1}, foo=1)
os.replace("pth1_file1", "pth1_file", None, None, 1, *[1], **{"x": 1}, foo=1)
os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
# See: https://github.com/astral-sh/ruff/issues/21794
import sys
if os.rename("pth1.py", "pth1.py.bak"):
print("rename: truthy")
else:
print("rename: falsey")
if os.replace("pth1.py.bak", "pth1.py"):
print("replace: truthy")
else:
print("replace: falsey")
try:
for _ in os.getcwd():
print("getcwd: iterable")
break
except TypeError as e:
print("getcwd: not iterable")
try:
for _ in os.getcwdb():
print("getcwdb: iterable")
break
except TypeError as e:
print("getcwdb: not iterable")
try:
for _ in os.readlink(sys.executable):
print("readlink: iterable")
break
except TypeError as e:
print("readlink: not iterable")

View File

@@ -132,7 +132,6 @@ async def c():
# Non-errors
###
# False-negative: RustPython doesn't parse the `\N{snowman}`.
"\N{snowman} {}".format(a)
"{".format(a)
@@ -276,3 +275,6 @@ if __name__ == "__main__":
number = 0
string = "{}".format(number := number + 1)
print(string)
# Unicode escape
"\N{angle}AOB = {angle}°".format(angle=180)

View File

@@ -138,5 +138,6 @@ with open("file.txt", encoding="utf-8") as f:
with open("file.txt", encoding="utf-8") as f:
contents = process_contents(f.read())
with open("file.txt", encoding="utf-8") as f:
with open("file1.txt", encoding="utf-8") as f:
contents: str = process_contents(f.read())

View File

@@ -0,0 +1,8 @@
from pathlib import Path
with Path("file.txt").open() as f:
contents = f.read()
with Path("file.txt").open("r") as f:
contents = f.read()
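These new fixture cases presumably correspond to the `Path(...).open()` support added in `helpers.rs` below. As a hedged sketch, the rewrite such a fix would suggest looks roughly like this (the exact fix output is an assumption, not shown in this diff):

```py
from pathlib import Path

Path("file.txt").write_text("data")  # set-up so the snippet runs

# Before: the handle is opened, read exactly once, and discarded.
with Path("file.txt").open() as f:
    contents = f.read()

# After: presumably collapsed into a single read_text() call.
contents = Path("file.txt").read_text()
```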

View File

@@ -0,0 +1,26 @@
from pathlib import Path
with Path("file.txt").open("w") as f:
f.write("test")
with Path("file.txt").open("wb") as f:
f.write(b"test")
with Path("file.txt").open(mode="w") as f:
f.write("test")
with Path("file.txt").open("w", encoding="utf8") as f:
f.write("test")
with Path("file.txt").open("w", errors="ignore") as f:
f.write("test")
with Path(foo()).open("w") as f:
f.write("test")
p = Path("file.txt")
with p.open("w") as f:
f.write("test")
with Path("foo", "bar", "baz").open("w") as f:
f.write("test")

View File

@@ -86,3 +86,26 @@ def f():
# Multiple codes but none are used
# ruff: disable[E741, F401, F841]
print("hello")
def f():
# Unknown rule codes
# ruff: disable[YF829]
# ruff: disable[F841, RQW320]
value = 0
# ruff: enable[F841, RQW320]
# ruff: enable[YF829]
def f():
# External rule codes should be ignored
# ruff: disable[TK421]
print("hello")
# ruff: enable[TK421]
def f():
# Empty or missing rule codes
# ruff: disable
# ruff: disable[]
print("hello")

View File

@@ -0,0 +1,38 @@
a: int = 1
def f1():
global a
a: str = "foo" # error
b: int = 1
def outer():
def inner():
global b
b: str = "nested" # error
c: int = 1
def f2():
global c
c: list[str] = [] # error
d: int = 1
def f3():
global d
d: str # error
e: int = 1
def f4():
e: str = "happy" # okay
global f
f: int = 1 # okay
g: int = 1
global g # error
class C:
x: str
global x # error
class D:
global x # error
x: str

View File

@@ -214,6 +214,13 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
range: _,
node_index: _,
}) => {
if checker.is_rule_enabled(Rule::ImplicitStringConcatenationInCollectionLiteral) {
flake8_implicit_str_concat::rules::implicit_string_concatenation_in_collection_literal(
checker,
expr,
elts,
);
}
if ctx.is_store() {
let check_too_many_expressions =
checker.is_rule_enabled(Rule::ExpressionsInStarAssignment);
@@ -1329,6 +1336,13 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
}
}
Expr::Set(set) => {
if checker.is_rule_enabled(Rule::ImplicitStringConcatenationInCollectionLiteral) {
flake8_implicit_str_concat::rules::implicit_string_concatenation_in_collection_literal(
checker,
expr,
&set.elts,
);
}
if checker.is_rule_enabled(Rule::DuplicateValue) {
flake8_bugbear::rules::duplicate_value(checker, set);
}

View File

@@ -454,6 +454,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8ImplicitStrConcat, "001") => rules::flake8_implicit_str_concat::rules::SingleLineImplicitStringConcatenation,
(Flake8ImplicitStrConcat, "002") => rules::flake8_implicit_str_concat::rules::MultiLineImplicitStringConcatenation,
(Flake8ImplicitStrConcat, "003") => rules::flake8_implicit_str_concat::rules::ExplicitStringConcatenation,
(Flake8ImplicitStrConcat, "004") => rules::flake8_implicit_str_concat::rules::ImplicitStringConcatenationInCollectionLiteral,
// flake8-print
(Flake8Print, "1") => rules::flake8_print::rules::Print,
@@ -1063,6 +1064,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "100") => rules::ruff::rules::UnusedNOQA,
(Ruff, "101") => rules::ruff::rules::RedirectedNOQA,
(Ruff, "102") => rules::ruff::rules::InvalidRuleCode,
(Ruff, "103") => rules::ruff::rules::InvalidSuppressionComment,
(Ruff, "104") => rules::ruff::rules::UnmatchedSuppressionComment,
(Ruff, "200") => rules::ruff::rules::InvalidPyprojectToml,
#[cfg(any(feature = "test-rules", test))]

View File

@@ -1001,6 +1001,7 @@ mod tests {
#[test_case(Path::new("write_to_debug.py"), PythonVersion::PY310)]
#[test_case(Path::new("invalid_expression.py"), PythonVersion::PY312)]
#[test_case(Path::new("global_parameter.py"), PythonVersion::PY310)]
#[test_case(Path::new("annotated_global.py"), PythonVersion::PY314)]
fn test_semantic_errors(path: &Path, python_version: PythonVersion) -> Result<()> {
let snapshot = format!(
"semantic_syntax_error_{}_{}",

View File

@@ -22,6 +22,7 @@ static ALLOWLIST_REGEX: LazyLock<Regex> = LazyLock::new(|| {
# Case-sensitive
pyright
| pyrefly
| ruff\s*:\s*(disable|enable)
| mypy:
| type:\s*ignore
| SPDX-License-Identifier:
@@ -148,6 +149,8 @@ mod tests {
assert!(!comment_contains_code("# 123", &[]));
assert!(!comment_contains_code("# 123.1", &[]));
assert!(!comment_contains_code("# 1, 2, 3", &[]));
assert!(!comment_contains_code("# ruff: disable[E501]", &[]));
assert!(!comment_contains_code("#ruff:enable[E501, F84]", &[]));
assert!(!comment_contains_code(
"# pylint: disable=redefined-outer-name",
&[]

View File

@@ -70,7 +70,7 @@ fn is_open_call(func: &Expr, semantic: &SemanticModel) -> bool {
}
/// Returns `true` if an expression resolves to a call to `pathlib.Path.open`.
fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
pub(crate) fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
let Expr::Attribute(ast::ExprAttribute { attr, value, .. }) = func else {
return false;
};

View File

@@ -18,7 +18,7 @@ mod async_zero_sleep;
mod blocking_http_call;
mod blocking_http_call_httpx;
mod blocking_input;
mod blocking_open_call;
pub(crate) mod blocking_open_call;
mod blocking_path_methods;
mod blocking_process_invocation;
mod blocking_sleep;

View File

@@ -12,7 +12,7 @@ use crate::{checkers::ast::Checker, settings::LinterSettings};
/// Checks for non-literal strings being passed to [`markupsafe.Markup`][markupsafe-markup].
///
/// ## Why is this bad?
/// [`markupsafe.Markup`] does not perform any escaping, so passing dynamic
/// [`markupsafe.Markup`][markupsafe-markup] does not perform any escaping, so passing dynamic
/// content, like f-strings, variables, or interpolated strings, will potentially
/// lead to XSS vulnerabilities.
///

View File

@@ -32,6 +32,10 @@ mod tests {
Path::new("ISC_syntax_error_2.py")
)]
#[test_case(Rule::ExplicitStringConcatenation, Path::new("ISC.py"))]
#[test_case(
Rule::ImplicitStringConcatenationInCollectionLiteral,
Path::new("ISC004.py")
)]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -0,0 +1,103 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{Expr, StringLike};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::{Edit, Fix, FixAvailability, Violation};
/// ## What it does
/// Checks for implicitly concatenated strings inside list, tuple, and set literals.
///
/// ## Why is this bad?
/// In collection literals, implicit string concatenation is often the result of
/// a missing comma between elements, which can silently merge items together.
///
/// ## Example
/// ```python
/// facts = (
/// "Lobsters have blue blood.",
/// "The liver is the only human organ that can fully regenerate itself.",
/// "Clarinets are made almost entirely out of wood from the mpingo tree."
/// "In 1971, astronaut Alan Shepard played golf on the moon.",
/// )
/// ```
///
/// Instead, you likely intended:
/// ```python
/// facts = (
/// "Lobsters have blue blood.",
/// "The liver is the only human organ that can fully regenerate itself.",
/// "Clarinets are made almost entirely out of wood from the mpingo tree.",
/// "In 1971, astronaut Alan Shepard played golf on the moon.",
/// )
/// ```
///
/// If the concatenation is intentional, wrap it in parentheses to make it
/// explicit:
/// ```python
/// facts = (
/// "Lobsters have blue blood.",
/// "The liver is the only human organ that can fully regenerate itself.",
/// (
/// "Clarinets are made almost entirely out of wood from the mpingo tree."
/// "In 1971, astronaut Alan Shepard played golf on the moon."
/// ),
/// )
/// ```
///
/// ## Fix safety
/// The fix itself is semantics-preserving: wrapping the concatenated strings
/// in parentheses does not change runtime behavior. It is still marked unsafe
/// because the diagnostic usually points to a missing comma, and adding that
/// comma (the likely intended fix) does change semantics.
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.10")]
pub(crate) struct ImplicitStringConcatenationInCollectionLiteral;
impl Violation for ImplicitStringConcatenationInCollectionLiteral {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Always;
#[derive_message_formats]
fn message(&self) -> String {
"Unparenthesized implicit string concatenation in collection".to_string()
}
fn fix_title(&self) -> Option<String> {
Some("Wrap implicitly concatenated strings in parentheses".to_string())
}
}
/// ISC004
pub(crate) fn implicit_string_concatenation_in_collection_literal(
checker: &Checker,
expr: &Expr,
elements: &[Expr],
) {
for element in elements {
let Ok(string_like) = StringLike::try_from(element) else {
continue;
};
if !string_like.is_implicit_concatenated() {
continue;
}
if parenthesized_range(
string_like.as_expression_ref(),
expr.into(),
checker.tokens(),
)
.is_some()
{
continue;
}
let mut diagnostic = checker.report_diagnostic(
ImplicitStringConcatenationInCollectionLiteral,
string_like.range(),
);
diagnostic.help("Did you forget a comma?");
diagnostic.set_fix(Fix::unsafe_edits(
Edit::insertion("(".to_string(), string_like.range().start()),
[Edit::insertion(")".to_string(), string_like.range().end())],
));
}
}
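To make the loop above concrete, here is a small Python sketch of what it flags versus skips, mirroring the `parenthesized_range` check:

```py
# Flagged: an element made of adjacent string literals with no parentheses
# of its own; most likely a missing comma.
flagged = [
    "Octopuses have three hearts."
    "Honey never spoils.",
]

# Skipped: the element is already parenthesized, so parenthesized_range()
# returns Some(..) and the loop `continue`s past it.
skipped = [
    (
        "Octopuses have three hearts. "
        "Honey never spoils."
    ),
]
```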

View File

@@ -1,5 +1,7 @@
pub(crate) use collection_literal::*;
pub(crate) use explicit::*;
pub(crate) use implicit::*;
mod collection_literal;
mod explicit;
mod implicit;

View File

@@ -0,0 +1,149 @@
---
source: crates/ruff_linter/src/rules/flake8_implicit_str_concat/mod.rs
---
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:4:5
|
2 | "Lobsters have blue blood.",
3 | "The liver is the only human organ that can fully regenerate itself.",
4 | / "Clarinets are made almost entirely out of wood from the mpingo tree."
5 | | "In 1971, astronaut Alan Shepard played golf on the moon.",
| |______________________________________________________________^
6 | )
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
1 | facts = (
2 | "Lobsters have blue blood.",
3 | "The liver is the only human organ that can fully regenerate itself.",
- "Clarinets are made almost entirely out of wood from the mpingo tree."
- "In 1971, astronaut Alan Shepard played golf on the moon.",
4 + ("Clarinets are made almost entirely out of wood from the mpingo tree."
5 + "In 1971, astronaut Alan Shepard played golf on the moon."),
6 | )
7 |
8 | facts = [
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:11:5
|
9 | "Lobsters have blue blood.",
10 | "The liver is the only human organ that can fully regenerate itself.",
11 | / "Clarinets are made almost entirely out of wood from the mpingo tree."
12 | | "In 1971, astronaut Alan Shepard played golf on the moon.",
| |______________________________________________________________^
13 | ]
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
8 | facts = [
9 | "Lobsters have blue blood.",
10 | "The liver is the only human organ that can fully regenerate itself.",
- "Clarinets are made almost entirely out of wood from the mpingo tree."
- "In 1971, astronaut Alan Shepard played golf on the moon.",
11 + ("Clarinets are made almost entirely out of wood from the mpingo tree."
12 + "In 1971, astronaut Alan Shepard played golf on the moon."),
13 | ]
14 |
15 | facts = {
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:18:5
|
16 | "Lobsters have blue blood.",
17 | "The liver is the only human organ that can fully regenerate itself.",
18 | / "Clarinets are made almost entirely out of wood from the mpingo tree."
19 | | "In 1971, astronaut Alan Shepard played golf on the moon.",
| |______________________________________________________________^
20 | }
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
15 | facts = {
16 | "Lobsters have blue blood.",
17 | "The liver is the only human organ that can fully regenerate itself.",
- "Clarinets are made almost entirely out of wood from the mpingo tree."
- "In 1971, astronaut Alan Shepard played golf on the moon.",
18 + ("Clarinets are made almost entirely out of wood from the mpingo tree."
19 + "In 1971, astronaut Alan Shepard played golf on the moon."),
20 | }
21 |
22 | facts = {
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:30:5
|
29 | facts = (
30 | / "Octopuses have three hearts."
31 | | # Missing comma here.
32 | | "Honey never spoils.",
| |_________________________^
33 | )
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
27 | }
28 |
29 | facts = (
- "Octopuses have three hearts."
30 + ("Octopuses have three hearts."
31 | # Missing comma here.
- "Honey never spoils.",
32 + "Honey never spoils."),
33 | )
34 |
35 | facts = [
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:36:5
|
35 | facts = [
36 | / "Octopuses have three hearts."
37 | | # Missing comma here.
38 | | "Honey never spoils.",
| |_________________________^
39 | ]
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
33 | )
34 |
35 | facts = [
- "Octopuses have three hearts."
36 + ("Octopuses have three hearts."
37 | # Missing comma here.
- "Honey never spoils.",
38 + "Honey never spoils."),
39 | ]
40 |
41 | facts = {
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:42:5
|
41 | facts = {
42 | / "Octopuses have three hearts."
43 | | # Missing comma here.
44 | | "Honey never spoils.",
| |_________________________^
45 | }
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
39 | ]
40 |
41 | facts = {
- "Octopuses have three hearts."
42 + ("Octopuses have three hearts."
43 | # Missing comma here.
- "Honey never spoils.",
44 + "Honey never spoils."),
45 | }
46 |
47 | facts = (
note: This is an unsafe fix and may change runtime behavior

View File

@@ -37,10 +37,11 @@ use crate::{Fix, FixAvailability, Violation};
/// import logging
///
/// logging.basicConfig(level=logging.INFO)
/// logger = logging.getLogger(__name__)
///
///
/// def sum_less_than_four(a, b):
/// logging.debug("Calling sum_less_than_four")
/// logger.debug("Calling sum_less_than_four")
/// return a + b < 4
/// ```
///

View File

@@ -125,6 +125,9 @@ impl Violation for PytestRaisesTooBroad {
/// ## Why is this bad?
/// `pytest.raises` expects to receive an expected exception as its first
/// argument. If omitted, the `pytest.raises` call will fail at runtime.
/// However, the rule accepts calls that omit the expected exception but pass
/// `match` and/or `check` keyword arguments, since such calls are valid as of
/// pytest 8.4.0.
///
/// ## Example
/// ```python
@@ -181,6 +184,8 @@ pub(crate) fn raises_call(checker: &Checker, call: &ast::ExprCall) {
.arguments
.find_argument("expected_exception", 0)
.is_none()
&& call.arguments.find_keyword("match").is_none()
&& call.arguments.find_keyword("check").is_none()
{
checker.report_diagnostic(PytestRaisesWithoutException, call.func.range());
}
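A hedged sketch of what the relaxed check accepts, assuming pytest 8.4.0+ where `expected_exception` may be omitted when `match` and/or `check` is supplied:

```py
import pytest

def test_match_only():
    # Now accepted by the rule: `match` stands in for the exception type.
    with pytest.raises(match="division by zero"):
        1 / 0

def test_check_only():
    # Also accepted: `check` receives the raised exception for validation.
    with pytest.raises(check=lambda e: isinstance(e, ZeroDivisionError)):
        1 / 0

def test_still_flagged():
    # Still reported (and a runtime error): no exception, match, or check.
    with pytest.raises():
        1 / 0
```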

View File

@@ -210,6 +210,7 @@ pub(crate) fn is_argument_non_default(arguments: &Arguments, name: &str, positio
/// Returns `true` if the given call is a top-level expression in its statement.
/// This means the call's return value is not used, so return type changes don't matter.
pub(crate) fn is_top_level_expression_call(checker: &Checker) -> bool {
pub(crate) fn is_top_level_expression_in_statement(checker: &Checker) -> bool {
checker.semantic().current_expression_parent().is_none()
&& checker.semantic().current_statement().is_expr_stmt()
}
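The renamed helper's contract, seen from the Python side (`os.getcwd` is just a convenient stand-in):

```py
import os

# A top-level expression statement: the return value is discarded, so a fix
# that changes the return type (str -> Path) cannot affect behavior.
os.getcwd()

# Not a top-level expression: the value is bound and used, so the same fix
# would change `cwd`'s type, hence Applicability::Unsafe in the callers.
cwd = os.getcwd()
print(cwd)
```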

View File

@@ -6,7 +6,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::importer::ImportRequest;
use crate::preview::is_fix_os_getcwd_enabled;
use crate::rules::flake8_use_pathlib::helpers::is_top_level_expression_call;
use crate::rules::flake8_use_pathlib::helpers::is_top_level_expression_in_statement;
use crate::{FixAvailability, Violation};
/// ## What it does
@@ -89,7 +89,7 @@ pub(crate) fn os_getcwd(checker: &Checker, call: &ExprCall, segments: &[&str]) {
// Unsafe when the fix would delete comments or change a used return value
let applicability = if checker.comment_ranges().intersects(range)
|| !is_top_level_expression_call(checker)
|| !is_top_level_expression_in_statement(checker)
{
Applicability::Unsafe
} else {

View File

@@ -6,7 +6,7 @@ use crate::checkers::ast::Checker;
use crate::preview::is_fix_os_readlink_enabled;
use crate::rules::flake8_use_pathlib::helpers::{
check_os_pathlib_single_arg_calls, is_keyword_only_argument_non_default,
is_top_level_expression_call,
is_top_level_expression_in_statement,
};
use crate::{FixAvailability, Violation};
@@ -86,7 +86,7 @@ pub(crate) fn os_readlink(checker: &Checker, call: &ExprCall, segments: &[&str])
return;
}
let applicability = if !is_top_level_expression_call(checker) {
let applicability = if !is_top_level_expression_in_statement(checker) {
// Unsafe because the return type changes (str/bytes -> Path)
Applicability::Unsafe
} else {

View File

@@ -6,7 +6,7 @@ use crate::checkers::ast::Checker;
use crate::preview::is_fix_os_rename_enabled;
use crate::rules::flake8_use_pathlib::helpers::{
check_os_pathlib_two_arg_calls, has_unknown_keywords_or_starred_expr,
is_keyword_only_argument_non_default, is_top_level_expression_call,
is_keyword_only_argument_non_default, is_top_level_expression_in_statement,
};
use crate::{FixAvailability, Violation};
@@ -92,7 +92,7 @@ pub(crate) fn os_rename(checker: &Checker, call: &ExprCall, segments: &[&str]) {
);
// Unsafe when the fix would delete comments or change a used return value
let applicability = if !is_top_level_expression_call(checker) {
let applicability = if !is_top_level_expression_in_statement(checker) {
// Unsafe because the return type changes (None -> Path)
Applicability::Unsafe
} else {

View File

@@ -6,7 +6,7 @@ use crate::checkers::ast::Checker;
use crate::preview::is_fix_os_replace_enabled;
use crate::rules::flake8_use_pathlib::helpers::{
check_os_pathlib_two_arg_calls, has_unknown_keywords_or_starred_expr,
is_keyword_only_argument_non_default, is_top_level_expression_call,
is_keyword_only_argument_non_default, is_top_level_expression_in_statement,
};
use crate::{FixAvailability, Violation};
@@ -95,7 +95,7 @@ pub(crate) fn os_replace(checker: &Checker, call: &ExprCall, segments: &[&str])
);
// Unsafe when the fix would delete comments or change a used return value
let applicability = if !is_top_level_expression_call(checker) {
let applicability = if !is_top_level_expression_in_statement(checker) {
// Unsafe because the return type changes (None -> Path)
Applicability::Unsafe
} else {

View File

@@ -567,5 +567,64 @@ PTH121 `os.path.samefile()` should be replaced by `Path.samefile()`
138 |
139 | os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
| ^^^^^^^^^^^^^^^^
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
|
help: Replace with `Path(...).samefile()`
PTH104 `os.rename()` should be replaced by `Path.rename()`
--> full_name.py:144:4
|
142 | import sys
143 |
144 | if os.rename("pth1.py", "pth1.py.bak"):
| ^^^^^^^^^
145 | print("rename: truthy")
146 | else:
|
help: Replace with `Path(...).rename(...)`
PTH105 `os.replace()` should be replaced by `Path.replace()`
--> full_name.py:149:4
|
147 | print("rename: falsey")
148 |
149 | if os.replace("pth1.py.bak", "pth1.py"):
| ^^^^^^^^^^
150 | print("replace: truthy")
151 | else:
|
help: Replace with `Path(...).replace(...)`
PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:155:14
|
154 | try:
155 | for _ in os.getcwd():
| ^^^^^^^^^
156 | print("getcwd: iterable")
157 | break
|
help: Replace with `Path.cwd()`
PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:162:14
|
161 | try:
162 | for _ in os.getcwdb():
| ^^^^^^^^^^
163 | print("getcwdb: iterable")
164 | break
|
help: Replace with `Path.cwd()`
PTH115 `os.readlink()` should be replaced by `Path.readlink()`
--> full_name.py:169:14
|
168 | try:
169 | for _ in os.readlink(sys.executable):
| ^^^^^^^^^^^
170 | print("readlink: iterable")
171 | break
|
help: Replace with `Path(...).readlink()`

View File

@@ -1037,5 +1037,142 @@ PTH121 `os.path.samefile()` should be replaced by `Path.samefile()`
138 |
139 | os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
| ^^^^^^^^^^^^^^^^
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
|
help: Replace with `Path(...).samefile()`
PTH104 [*] `os.rename()` should be replaced by `Path.rename()`
--> full_name.py:144:4
|
142 | import sys
143 |
144 | if os.rename("pth1.py", "pth1.py.bak"):
| ^^^^^^^^^
145 | print("rename: truthy")
146 | else:
|
help: Replace with `Path(...).rename(...)`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
- if os.rename("pth1.py", "pth1.py.bak"):
145 + if pathlib.Path("pth1.py").rename("pth1.py.bak"):
146 | print("rename: truthy")
147 | else:
148 | print("rename: falsey")
note: This is an unsafe fix and may change runtime behavior
PTH105 [*] `os.replace()` should be replaced by `Path.replace()`
--> full_name.py:149:4
|
147 | print("rename: falsey")
148 |
149 | if os.replace("pth1.py.bak", "pth1.py"):
| ^^^^^^^^^^
150 | print("replace: truthy")
151 | else:
|
help: Replace with `Path(...).replace(...)`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
147 | else:
148 | print("rename: falsey")
149 |
- if os.replace("pth1.py.bak", "pth1.py"):
150 + if pathlib.Path("pth1.py.bak").replace("pth1.py"):
151 | print("replace: truthy")
152 | else:
153 | print("replace: falsey")
note: This is an unsafe fix and may change runtime behavior
PTH109 [*] `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:155:14
|
154 | try:
155 | for _ in os.getcwd():
| ^^^^^^^^^
156 | print("getcwd: iterable")
157 | break
|
help: Replace with `Path.cwd()`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
--------------------------------------------------------------------------------
153 | print("replace: falsey")
154 |
155 | try:
- for _ in os.getcwd():
156 + for _ in pathlib.Path.cwd():
157 | print("getcwd: iterable")
158 | break
159 | except TypeError as e:
note: This is an unsafe fix and may change runtime behavior
PTH109 [*] `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:162:14
|
161 | try:
162 | for _ in os.getcwdb():
| ^^^^^^^^^^
163 | print("getcwdb: iterable")
164 | break
|
help: Replace with `Path.cwd()`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
--------------------------------------------------------------------------------
160 | print("getcwd: not iterable")
161 |
162 | try:
- for _ in os.getcwdb():
163 + for _ in pathlib.Path.cwd():
164 | print("getcwdb: iterable")
165 | break
166 | except TypeError as e:
note: This is an unsafe fix and may change runtime behavior
PTH115 [*] `os.readlink()` should be replaced by `Path.readlink()`
--> full_name.py:169:14
|
168 | try:
169 | for _ in os.readlink(sys.executable):
| ^^^^^^^^^^^
170 | print("readlink: iterable")
171 | break
|
help: Replace with `Path(...).readlink()`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
--------------------------------------------------------------------------------
167 | print("getcwdb: not iterable")
168 |
169 | try:
- for _ in os.readlink(sys.executable):
170 + for _ in pathlib.Path(sys.executable).readlink():
171 | print("readlink: iterable")
172 | break
173 | except TypeError as e:
note: This is an unsafe fix and may change runtime behavior

View File

@@ -902,56 +902,76 @@ help: Convert to f-string
132 | # Non-errors
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:160:1
--> UP032_0.py:135:1
|
158 | r'"\N{snowman} {}".format(a)'
159 |
160 | / "123456789 {}".format(
161 | | 11111111111111111111111111111111111111111111111111111111111111111111111111,
162 | | )
| |_^
163 |
164 | """
133 | ###
134 |
135 | "\N{snowman} {}".format(a)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
136 |
137 | "{".format(a)
|
help: Convert to f-string
157 |
158 | r'"\N{snowman} {}".format(a)'
159 |
132 | # Non-errors
133 | ###
134 |
- "\N{snowman} {}".format(a)
135 + f"\N{snowman} {a}"
136 |
137 | "{".format(a)
138 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:159:1
|
157 | r'"\N{snowman} {}".format(a)'
158 |
159 | / "123456789 {}".format(
160 | | 11111111111111111111111111111111111111111111111111111111111111111111111111,
161 | | )
| |_^
162 |
163 | """
|
help: Convert to f-string
156 |
157 | r'"\N{snowman} {}".format(a)'
158 |
- "123456789 {}".format(
- 11111111111111111111111111111111111111111111111111111111111111111111111111,
- )
160 + f"123456789 {11111111111111111111111111111111111111111111111111111111111111111111111111}"
161 |
162 | """
163 | {}
159 + f"123456789 {11111111111111111111111111111111111111111111111111111111111111111111111111}"
160 |
161 | """
162 | {}
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:164:1
--> UP032_0.py:163:1
|
162 | )
163 |
164 | / """
161 | )
162 |
163 | / """
164 | | {}
165 | | {}
166 | | {}
167 | | {}
168 | | """.format(
169 | | 1,
170 | | 2,
171 | | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
172 | | )
167 | | """.format(
168 | | 1,
169 | | 2,
170 | | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
171 | | )
| |_^
173 |
174 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
172 |
173 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
|
help: Convert to f-string
161 | 11111111111111111111111111111111111111111111111111111111111111111111111111,
162 | )
163 |
164 + f"""
165 + {1}
166 + {2}
167 + {111111111111111111111111111111111111111111111111111111111111111111111111111111111111111}
168 | """
160 | 11111111111111111111111111111111111111111111111111111111111111111111111111,
161 | )
162 |
163 + f"""
164 + {1}
165 + {2}
166 + {111111111111111111111111111111111111111111111111111111111111111111111111111111111111111}
167 | """
- {}
- {}
- {}
@@ -960,392 +980,408 @@ help: Convert to f-string
- 2,
- 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
- )
169 |
170 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
171 | """.format(
168 |
169 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
170 | """.format(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:174:84
--> UP032_0.py:173:84
|
172 | )
173 |
174 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
171 | )
172 |
173 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
| ____________________________________________________________________________________^
175 | | """.format(
176 | | 111111
177 | | )
174 | | """.format(
175 | | 111111
176 | | )
| |_^
178 |
179 | "{}".format(
177 |
178 | "{}".format(
|
help: Convert to f-string
171 | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
172 | )
173 |
170 | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
171 | )
172 |
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
- """.format(
- 111111
- )
174 + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = f"""{111111}
175 + """
176 |
177 | "{}".format(
178 | [
173 + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = f"""{111111}
174 + """
175 |
176 | "{}".format(
177 | [
UP032 Use f-string instead of `format` call
--> UP032_0.py:202:1
--> UP032_0.py:201:1
|
200 | "{}".format(**c)
201 |
202 | / "{}".format(
203 | | 1 # comment
204 | | )
199 | "{}".format(**c)
200 |
201 | / "{}".format(
202 | | 1 # comment
203 | | )
| |_^
|
help: Convert to f-string
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:209:1
--> UP032_0.py:208:1
|
207 | # The fixed string will exceed the line length, but it's still smaller than the
208 | # existing line length, so it's fine.
209 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
206 | # The fixed string will exceed the line length, but it's still smaller than the
207 | # existing line length, so it's fine.
208 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
210 |
211 | # When fixing, trim the trailing empty string.
209 |
210 | # When fixing, trim the trailing empty string.
|
help: Convert to f-string
206 |
207 | # The fixed string will exceed the line length, but it's still smaller than the
208 | # existing line length, so it's fine.
205 |
206 | # The fixed string will exceed the line length, but it's still smaller than the
207 | # existing line length, so it's fine.
- "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
209 + f"<Customer: {self.internal_ids}, {self.external_ids}, {self.properties}, {self.tags}, {self.others}>"
210 |
211 | # When fixing, trim the trailing empty string.
212 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
208 + f"<Customer: {self.internal_ids}, {self.external_ids}, {self.properties}, {self.tags}, {self.others}>"
209 |
210 | # When fixing, trim the trailing empty string.
211 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:212:18
--> UP032_0.py:211:18
|
211 | # When fixing, trim the trailing empty string.
212 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
210 | # When fixing, trim the trailing empty string.
211 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
213 | | "".format(new_dict, d))
212 | | "".format(new_dict, d))
| |_______________________________________^
214 |
215 | # When fixing, trim the trailing empty string.
213 |
214 | # When fixing, trim the trailing empty string.
|
help: Convert to f-string
209 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
210 |
211 | # When fixing, trim the trailing empty string.
208 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
209 |
210 | # When fixing, trim the trailing empty string.
- raise ValueError("Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d))
212 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
211 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
212 |
213 | # When fixing, trim the trailing empty string.
214 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:215:18
|
214 | # When fixing, trim the trailing empty string.
215 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
216 | | .format(new_dict, d))
| |_____________________________________^
217 |
218 | raise ValueError(
|
help: Convert to f-string
212 | "".format(new_dict, d))
213 |
214 | # When fixing, trim the trailing empty string.
215 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:216:18
|
215 | # When fixing, trim the trailing empty string.
216 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
217 | | .format(new_dict, d))
| |_____________________________________^
218 |
219 | raise ValueError(
|
help: Convert to f-string
213 | "".format(new_dict, d))
214 |
215 | # When fixing, trim the trailing empty string.
- raise ValueError("Conflicting configuration dicts: {!r} {!r}"
- .format(new_dict, d))
216 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
217 + )
218 |
219 | raise ValueError(
220 | "Conflicting configuration dicts: {!r} {!r}"
215 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
216 + )
217 |
218 | raise ValueError(
219 | "Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:220:5
--> UP032_0.py:219:5
|
219 | raise ValueError(
220 | / "Conflicting configuration dicts: {!r} {!r}"
221 | | "".format(new_dict, d)
218 | raise ValueError(
219 | / "Conflicting configuration dicts: {!r} {!r}"
220 | | "".format(new_dict, d)
| |__________________________^
222 | )
221 | )
|
help: Convert to f-string
217 | .format(new_dict, d))
218 |
219 | raise ValueError(
216 | .format(new_dict, d))
217 |
218 | raise ValueError(
- "Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d)
220 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
219 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
220 | )
221 |
222 | raise ValueError(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:224:5
|
223 | raise ValueError(
224 | / "Conflicting configuration dicts: {!r} {!r}"
225 | | "".format(new_dict, d)
| |__________________________^
226 |
227 | )
|
help: Convert to f-string
221 | )
222 |
223 | raise ValueError(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:225:5
|
224 | raise ValueError(
225 | / "Conflicting configuration dicts: {!r} {!r}"
226 | | "".format(new_dict, d)
| |__________________________^
227 |
228 | )
|
help: Convert to f-string
222 | )
223 |
224 | raise ValueError(
- "Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d)
225 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
226 |
227 | )
228 |
224 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
225 |
226 | )
227 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:231:1
--> UP032_0.py:230:1
|
230 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
231 | / (
232 | | "{}"
233 | | "{{}}"
234 | | ).format(a)
229 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
230 | / (
231 | | "{}"
232 | | "{{}}"
233 | | ).format(a)
| |___________^
235 |
236 | ("{}" "{{}}").format(a)
234 |
235 | ("{}" "{{}}").format(a)
|
help: Convert to f-string
229 |
230 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
231 | (
232 + f"{a}"
233 | "{}"
228 |
229 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
230 | (
231 + f"{a}"
232 | "{}"
- "{{}}"
- ).format(a)
234 + )
235 |
236 | ("{}" "{{}}").format(a)
237 |
233 + )
234 |
235 | ("{}" "{{}}").format(a)
236 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:236:1
--> UP032_0.py:235:1
|
234 | ).format(a)
235 |
236 | ("{}" "{{}}").format(a)
233 | ).format(a)
234 |
235 | ("{}" "{{}}").format(a)
| ^^^^^^^^^^^^^^^^^^^^^^^
|
help: Convert to f-string
233 | "{{}}"
234 | ).format(a)
235 |
232 | "{{}}"
233 | ).format(a)
234 |
- ("{}" "{{}}").format(a)
236 + (f"{a}" "{}")
235 + (f"{a}" "{}")
236 |
237 |
238 |
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:240:1
--> UP032_0.py:239:1
|
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
240 | / (
241 | | "{}"
242 | | "{{{}}}"
243 | | ).format(a, b)
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
239 | / (
240 | | "{}"
241 | | "{{{}}}"
242 | | ).format(a, b)
| |______________^
244 |
245 | ("{}" "{{{}}}").format(a, b)
243 |
244 | ("{}" "{{{}}}").format(a, b)
|
help: Convert to f-string
238 |
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
240 | (
237 |
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
239 | (
- "{}"
- "{{{}}}"
- ).format(a, b)
241 + f"{a}"
242 + f"{{{b}}}"
243 + )
244 |
245 | ("{}" "{{{}}}").format(a, b)
246 |
240 + f"{a}"
241 + f"{{{b}}}"
242 + )
243 |
244 | ("{}" "{{{}}}").format(a, b)
245 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:245:1
--> UP032_0.py:244:1
|
243 | ).format(a, b)
244 |
245 | ("{}" "{{{}}}").format(a, b)
242 | ).format(a, b)
243 |
244 | ("{}" "{{{}}}").format(a, b)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
246 |
247 | # The dictionary should be parenthesized.
245 |
246 | # The dictionary should be parenthesized.
|
help: Convert to f-string
242 | "{{{}}}"
243 | ).format(a, b)
244 |
241 | "{{{}}}"
242 | ).format(a, b)
243 |
- ("{}" "{{{}}}").format(a, b)
245 + (f"{a}" f"{{{b}}}")
246 |
247 | # The dictionary should be parenthesized.
248 | "{}".format({0: 1}[0])
244 + (f"{a}" f"{{{b}}}")
245 |
246 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:248:1
--> UP032_0.py:247:1
|
247 | # The dictionary should be parenthesized.
248 | "{}".format({0: 1}[0])
246 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
| ^^^^^^^^^^^^^^^^^^^^^^
249 |
250 | # The dictionary should be parenthesized.
248 |
249 | # The dictionary should be parenthesized.
|
help: Convert to f-string
245 | ("{}" "{{{}}}").format(a, b)
246 |
247 | # The dictionary should be parenthesized.
244 | ("{}" "{{{}}}").format(a, b)
245 |
246 | # The dictionary should be parenthesized.
- "{}".format({0: 1}[0])
248 + f"{({0: 1}[0])}"
249 |
250 | # The dictionary should be parenthesized.
251 | "{}".format({0: 1}.bar)
247 + f"{({0: 1}[0])}"
248 |
249 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:251:1
--> UP032_0.py:250:1
|
250 | # The dictionary should be parenthesized.
251 | "{}".format({0: 1}.bar)
249 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
| ^^^^^^^^^^^^^^^^^^^^^^^
252 |
253 | # The dictionary should be parenthesized.
251 |
252 | # The dictionary should be parenthesized.
|
help: Convert to f-string
248 | "{}".format({0: 1}[0])
249 |
250 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
248 |
249 | # The dictionary should be parenthesized.
- "{}".format({0: 1}.bar)
251 + f"{({0: 1}.bar)}"
252 |
253 | # The dictionary should be parenthesized.
254 | "{}".format({0: 1}())
250 + f"{({0: 1}.bar)}"
251 |
252 | # The dictionary should be parenthesized.
253 | "{}".format({0: 1}())
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:254:1
--> UP032_0.py:253:1
|
253 | # The dictionary should be parenthesized.
254 | "{}".format({0: 1}())
252 | # The dictionary should be parenthesized.
253 | "{}".format({0: 1}())
| ^^^^^^^^^^^^^^^^^^^^^
255 |
256 | # The string shouldn't be converted, since it would require repeating the function call.
254 |
255 | # The string shouldn't be converted, since it would require repeating the function call.
|
help: Convert to f-string
251 | "{}".format({0: 1}.bar)
252 |
253 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
251 |
252 | # The dictionary should be parenthesized.
- "{}".format({0: 1}())
254 + f"{({0: 1}())}"
255 |
256 | # The string shouldn't be converted, since it would require repeating the function call.
257 | "{x} {x}".format(x=foo())
253 + f"{({0: 1}())}"
254 |
255 | # The string shouldn't be converted, since it would require repeating the function call.
256 | "{x} {x}".format(x=foo())
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:261:1
--> UP032_0.py:260:1
|
260 | # The string _should_ be converted, since the function call is repeated in the arguments.
261 | "{0} {1}".format(foo(), foo())
259 | # The string _should_ be converted, since the function call is repeated in the arguments.
260 | "{0} {1}".format(foo(), foo())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
262 |
263 | # The call should be removed, but the string itself should remain.
261 |
262 | # The call should be removed, but the string itself should remain.
|
help: Convert to f-string
258 | "{0} {0}".format(foo())
259 |
260 | # The string _should_ be converted, since the function call is repeated in the arguments.
257 | "{0} {0}".format(foo())
258 |
259 | # The string _should_ be converted, since the function call is repeated in the arguments.
- "{0} {1}".format(foo(), foo())
261 + f"{foo()} {foo()}"
262 |
263 | # The call should be removed, but the string itself should remain.
264 | ''.format(self.project)
260 + f"{foo()} {foo()}"
261 |
262 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:264:1
--> UP032_0.py:263:1
|
263 | # The call should be removed, but the string itself should remain.
264 | ''.format(self.project)
262 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
| ^^^^^^^^^^^^^^^^^^^^^^^
265 |
266 | # The call should be removed, but the string itself should remain.
264 |
265 | # The call should be removed, but the string itself should remain.
|
help: Convert to f-string
261 | "{0} {1}".format(foo(), foo())
262 |
263 | # The call should be removed, but the string itself should remain.
260 | "{0} {1}".format(foo(), foo())
261 |
262 | # The call should be removed, but the string itself should remain.
- ''.format(self.project)
264 + ''
265 |
266 | # The call should be removed, but the string itself should remain.
267 | "".format(self.project)
263 + ''
264 |
265 | # The call should be removed, but the string itself should remain.
266 | "".format(self.project)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:267:1
--> UP032_0.py:266:1
|
266 | # The call should be removed, but the string itself should remain.
267 | "".format(self.project)
265 | # The call should be removed, but the string itself should remain.
266 | "".format(self.project)
| ^^^^^^^^^^^^^^^^^^^^^^^
268 |
269 | # Not a valid type annotation but this test shouldn't result in a panic.
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
|
help: Convert to f-string
264 | ''.format(self.project)
265 |
266 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
264 |
265 | # The call should be removed, but the string itself should remain.
- "".format(self.project)
267 + ""
268 |
269 | # Not a valid type annotation but this test shouldn't result in a panic.
270 | # Refer: https://github.com/astral-sh/ruff/issues/11736
266 + ""
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:271:5
--> UP032_0.py:270:5
|
269 | # Not a valid type annotation but this test shouldn't result in a panic.
270 | # Refer: https://github.com/astral-sh/ruff/issues/11736
271 | x: "'{} + {}'.format(x, y)"
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
270 | x: "'{} + {}'.format(x, y)"
| ^^^^^^^^^^^^^^^^^^^^^^
272 |
273 | # Regression https://github.com/astral-sh/ruff/issues/21000
271 |
272 | # Regression https://github.com/astral-sh/ruff/issues/21000
|
help: Convert to f-string
268 |
269 | # Not a valid type annotation but this test shouldn't result in a panic.
270 | # Refer: https://github.com/astral-sh/ruff/issues/11736
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
- x: "'{} + {}'.format(x, y)"
271 + x: "f'{x} + {y}'"
272 |
273 | # Regression https://github.com/astral-sh/ruff/issues/21000
274 | # Fix should parenthesize walrus
270 + x: "f'{x} + {y}'"
271 |
272 | # Regression https://github.com/astral-sh/ruff/issues/21000
273 | # Fix should parenthesize walrus
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:277:14
--> UP032_0.py:276:14
|
275 | if __name__ == "__main__":
276 | number = 0
277 | string = "{}".format(number := number + 1)
274 | if __name__ == "__main__":
275 | number = 0
276 | string = "{}".format(number := number + 1)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
278 | print(string)
277 | print(string)
|
help: Convert to f-string
274 | # Fix should parenthesize walrus
275 | if __name__ == "__main__":
276 | number = 0
273 | # Fix should parenthesize walrus
274 | if __name__ == "__main__":
275 | number = 0
- string = "{}".format(number := number + 1)
277 + string = f"{(number := number + 1)}"
278 | print(string)
276 + string = f"{(number := number + 1)}"
277 | print(string)
278 |
279 | # Unicode escape
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:280:1
|
279 | # Unicode escape
280 | "\N{angle}AOB = {angle}°".format(angle=180)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Convert to f-string
277 | print(string)
278 |
279 | # Unicode escape
- "\N{angle}AOB = {angle}°".format(angle=180)
280 + f"\N{angle}AOB = {180}°"

View File

@@ -3,10 +3,11 @@ use std::borrow::Cow;
use ruff_python_ast::PythonVersion;
use ruff_python_ast::{self as ast, Expr, name::Name, token::parenthesized_range};
use ruff_python_codegen::Generator;
use ruff_python_semantic::{BindingId, ResolvedReference, SemanticModel};
use ruff_python_semantic::{ResolvedReference, SemanticModel};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::rules::flake8_async::rules::blocking_open_call::is_open_call_from_pathlib;
use crate::{Applicability, Edit, Fix};
/// Format a code snippet to call `name.method()`.
@@ -119,14 +120,13 @@ impl OpenMode {
pub(super) struct FileOpen<'a> {
/// With item where the open happens, we use it for the reporting range.
pub(super) item: &'a ast::WithItem,
/// Filename expression used as the first argument in `open`, we use it in the diagnostic message.
pub(super) filename: &'a Expr,
/// The file open mode.
pub(super) mode: OpenMode,
/// The file open keywords.
pub(super) keywords: Vec<&'a ast::Keyword>,
/// We only check `open` operations whose file handles are used exactly once.
pub(super) reference: &'a ResolvedReference,
pub(super) argument: OpenArgument<'a>,
}
impl FileOpen<'_> {
@@ -137,6 +137,45 @@ impl FileOpen<'_> {
}
}
#[derive(Debug, Clone, Copy)]
pub(super) enum OpenArgument<'a> {
/// The filename argument to `open`, e.g. "foo.txt" in:
///
/// ```py
/// f = open("foo.txt")
/// ```
Builtin { filename: &'a Expr },
/// The `Path` receiver of a `pathlib.Path.open` call, e.g. the `p` in the
/// context manager in:
///
/// ```py
/// p = Path("foo.txt")
/// with p.open() as f: ...
/// ```
///
/// or `Path("foo.txt")` in
///
/// ```py
/// with Path("foo.txt").open() as f: ...
/// ```
Pathlib { path: &'a Expr },
}
impl OpenArgument<'_> {
pub(super) fn display<'src>(&self, source: &'src str) -> &'src str {
&source[self.range()]
}
}
impl Ranged for OpenArgument<'_> {
fn range(&self) -> TextRange {
match self {
OpenArgument::Builtin { filename } => filename.range(),
OpenArgument::Pathlib { path } => path.range(),
}
}
}
/// Find and return all `open` operations in the given `with` statement.
pub(super) fn find_file_opens<'a>(
with: &'a ast::StmtWith,
@@ -146,10 +185,65 @@ pub(super) fn find_file_opens<'a>(
) -> Vec<FileOpen<'a>> {
with.items
.iter()
.filter_map(|item| find_file_open(item, with, semantic, read_mode, python_version))
.filter_map(|item| {
find_file_open(item, with, semantic, read_mode, python_version)
.or_else(|| find_path_open(item, with, semantic, read_mode, python_version))
})
.collect()
}
fn resolve_file_open<'a>(
item: &'a ast::WithItem,
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
read_mode: bool,
mode: OpenMode,
keywords: Vec<&'a ast::Keyword>,
argument: OpenArgument<'a>,
) -> Option<FileOpen<'a>> {
match mode {
OpenMode::ReadText | OpenMode::ReadBytes => {
if !read_mode {
return None;
}
}
OpenMode::WriteText | OpenMode::WriteBytes => {
if read_mode {
return None;
}
}
}
if matches!(mode, OpenMode::ReadBytes | OpenMode::WriteBytes) && !keywords.is_empty() {
return None;
}
let var = item.optional_vars.as_deref()?.as_name_expr()?;
let scope = semantic.current_scope();
let binding = scope.get_all(var.id.as_str()).find_map(|id| {
let b = semantic.binding(id);
(b.range() == var.range()).then_some(b)
})?;
let references: Vec<&ResolvedReference> = binding
.references
.iter()
.map(|id| semantic.reference(*id))
.filter(|reference| with.range().contains_range(reference.range()))
.collect();
let [reference] = references.as_slice() else {
return None;
};
Some(FileOpen {
item,
mode,
keywords,
reference,
argument,
})
}
/// Find `open` operation in the given `with` item.
fn find_file_open<'a>(
item: &'a ast::WithItem,
@@ -165,8 +259,6 @@ fn find_file_open<'a>(
..
} = item.context_expr.as_call_expr()?;
let var = item.optional_vars.as_deref()?.as_name_expr()?;
// Ignore calls with `*args` and `**kwargs`. In the exact case of `open(*filename, mode="w")`,
// it could be a match; but in all other cases, the call _could_ contain unsupported keyword
// arguments, like `buffering`.
@@ -187,58 +279,57 @@ fn find_file_open<'a>(
let (keywords, kw_mode) = match_open_keywords(keywords, read_mode, python_version)?;
let mode = kw_mode.unwrap_or(pos_mode);
match mode {
OpenMode::ReadText | OpenMode::ReadBytes => {
if !read_mode {
return None;
}
}
OpenMode::WriteText | OpenMode::WriteBytes => {
if read_mode {
return None;
}
}
}
// Path.read_bytes and Path.write_bytes do not support any kwargs.
if matches!(mode, OpenMode::ReadBytes | OpenMode::WriteBytes) && !keywords.is_empty() {
return None;
}
// Now we need to find what is this variable bound to...
let scope = semantic.current_scope();
let bindings: Vec<BindingId> = scope.get_all(var.id.as_str()).collect();
let binding = bindings
.iter()
.map(|id| semantic.binding(*id))
// We might have many bindings with the same name, but we only care
// for the one we are looking at right now.
.find(|binding| binding.range() == var.range())?;
// Since many references can share the same binding, we can limit our attention span
// exclusively to the body of the current `with` statement.
let references: Vec<&ResolvedReference> = binding
.references
.iter()
.map(|id| semantic.reference(*id))
.filter(|reference| with.range().contains_range(reference.range()))
.collect();
// And even with all these restrictions, if the file handle gets used not exactly once,
// it doesn't fit the bill.
let [reference] = references.as_slice() else {
return None;
};
Some(FileOpen {
resolve_file_open(
item,
filename,
with,
semantic,
read_mode,
mode,
keywords,
reference,
})
OpenArgument::Builtin { filename },
)
}
fn find_path_open<'a>(
item: &'a ast::WithItem,
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
read_mode: bool,
python_version: PythonVersion,
) -> Option<FileOpen<'a>> {
let ast::ExprCall {
func,
arguments: ast::Arguments { args, keywords, .. },
..
} = item.context_expr.as_call_expr()?;
if args.iter().any(Expr::is_starred_expr)
|| keywords.iter().any(|keyword| keyword.arg.is_none())
{
return None;
}
if !is_open_call_from_pathlib(func, semantic) {
return None;
}
let attr = func.as_attribute_expr()?;
let mode = if args.is_empty() {
OpenMode::ReadText
} else {
match_open_mode(args.first()?)?
};
let (keywords, kw_mode) = match_open_keywords(keywords, read_mode, python_version)?;
let mode = kw_mode.unwrap_or(mode);
resolve_file_open(
item,
with,
semantic,
read_mode,
mode,
keywords,
OpenArgument::Pathlib {
path: attr.value.as_ref(),
},
)
}
/// Match positional arguments. Return expression for the file name and open mode.
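For orientation, a sketch of the `with` shapes the new `find_path_open` pass is written to recognize (file names and contents are illustrative): the `.open()` receiver must be a pathlib value, the handle must be bound with `as`, and it must be used exactly once inside the `with` body.

```python
from pathlib import Path

p = Path("data.txt")

# Matched: `.open()` on a pathlib value, the handle bound with `as`
# and used exactly once inside the `with` body.
with p.open("w") as f:
    f.write("hello")

with Path("data.txt").open() as f:
    contents = f.read()

# Not matched: the handle is used twice, so no diagnostic is emitted.
with Path("data.txt").open() as f:
    head = f.read(5)
    rest = f.read()
```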

View File

@@ -15,7 +15,8 @@ mod tests {
use crate::test::test_path;
use crate::{assert_diagnostics, settings};
#[test_case(Rule::ReadWholeFile, Path::new("FURB101.py"))]
#[test_case(Rule::ReadWholeFile, Path::new("FURB101_0.py"))]
#[test_case(Rule::ReadWholeFile, Path::new("FURB101_1.py"))]
#[test_case(Rule::RepeatedAppend, Path::new("FURB113.py"))]
#[test_case(Rule::IfExpInsteadOfOrOperator, Path::new("FURB110.py"))]
#[test_case(Rule::ReimplementedOperator, Path::new("FURB118.py"))]
@@ -46,7 +47,8 @@ mod tests {
#[test_case(Rule::MetaClassABCMeta, Path::new("FURB180.py"))]
#[test_case(Rule::HashlibDigestHex, Path::new("FURB181.py"))]
#[test_case(Rule::ListReverseCopy, Path::new("FURB187.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103_0.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103_1.py"))]
#[test_case(Rule::FStringNumberFormat, Path::new("FURB116.py"))]
#[test_case(Rule::SortedMinMax, Path::new("FURB192.py"))]
#[test_case(Rule::SliceToRemovePrefixOrSuffix, Path::new("FURB188.py"))]
@@ -65,7 +67,7 @@ mod tests {
#[test]
fn write_whole_file_python_39() -> Result<()> {
let diagnostics = test_path(
Path::new("refurb/FURB103.py"),
Path::new("refurb/FURB103_0.py"),
&settings::LinterSettings::for_rule(Rule::WriteWholeFile)
.with_target_version(PythonVersion::PY39),
)?;

View File

@@ -10,7 +10,7 @@ use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
use crate::importer::ImportRequest;
use crate::rules::refurb::helpers::{FileOpen, find_file_opens};
use crate::rules::refurb::helpers::{FileOpen, OpenArgument, find_file_opens};
use crate::{FixAvailability, Violation};
/// ## What it does
@@ -42,27 +42,41 @@ use crate::{FixAvailability, Violation};
/// - [Python documentation: `Path.read_text`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.read_text)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "v0.1.2")]
pub(crate) struct ReadWholeFile {
pub(crate) struct ReadWholeFile<'a> {
filename: SourceCodeSnippet,
suggestion: SourceCodeSnippet,
argument: OpenArgument<'a>,
}
impl Violation for ReadWholeFile {
impl Violation for ReadWholeFile<'_> {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
format!("`open` and `read` should be replaced by `Path({filename}).{suggestion}`")
match self.argument {
OpenArgument::Pathlib { .. } => {
format!(
"`Path.open()` followed by `read()` can be replaced by `{filename}.{suggestion}`"
)
}
OpenArgument::Builtin { .. } => {
format!("`open` and `read` should be replaced by `Path({filename}).{suggestion}`")
}
}
}
fn fix_title(&self) -> Option<String> {
Some(format!(
"Replace with `Path({}).{}`",
self.filename.truncated_display(),
self.suggestion.truncated_display(),
))
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
match self.argument {
OpenArgument::Pathlib { .. } => Some(format!("Replace with `{filename}.{suggestion}`")),
OpenArgument::Builtin { .. } => {
Some(format!("Replace with `Path({filename}).{suggestion}`"))
}
}
}
}
@@ -114,13 +128,13 @@ impl<'a> Visitor<'a> for ReadMatcher<'a, '_> {
.position(|open| open.is_ref(read_from))
{
let open = self.candidates.remove(open);
let filename_display = open.argument.display(self.checker.source());
let suggestion = make_suggestion(&open, self.checker.generator());
let mut diagnostic = self.checker.report_diagnostic(
ReadWholeFile {
filename: SourceCodeSnippet::from_str(
&self.checker.generator().expr(open.filename),
),
filename: SourceCodeSnippet::from_str(filename_display),
suggestion: SourceCodeSnippet::from_str(&suggestion),
argument: open.argument,
},
open.item.range(),
);
@@ -188,8 +202,6 @@ fn generate_fix(
let locator = checker.locator();
let filename_code = locator.slice(open.filename.range());
let (import_edit, binding) = checker
.importer()
.get_or_import_symbol(
@@ -206,10 +218,15 @@ fn generate_fix(
[Stmt::Assign(ast::StmtAssign { targets, value, .. })] if value.range() == expr.range() => {
match targets.as_slice() {
[Expr::Name(name)] => {
format!(
"{name} = {binding}({filename_code}).{suggestion}",
name = name.id
)
let target = match open.argument {
OpenArgument::Builtin { filename } => {
let filename_code = locator.slice(filename.range());
format!("{binding}({filename_code})")
}
OpenArgument::Pathlib { path } => locator.slice(path.range()).to_string(),
};
format!("{name} = {target}.{suggestion}", name = name.id)
}
_ => return None,
}
@@ -223,8 +240,16 @@ fn generate_fix(
}),
] if value.range() == expr.range() => match target.as_ref() {
Expr::Name(name) => {
let target = match open.argument {
OpenArgument::Builtin { filename } => {
let filename_code = locator.slice(filename.range());
format!("{binding}({filename_code})")
}
OpenArgument::Pathlib { path } => locator.slice(path.range()).to_string(),
};
format!(
"{var}: {ann} = {binding}({filename_code}).{suggestion}",
"{var}: {ann} = {target}.{suggestion}",
var = name.id,
ann = locator.slice(annotation.range())
)
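The `target` computed above differs by argument kind; sketched as before/after (names and files are illustrative):

```python
from pathlib import Path

Path("f.txt").write_text("hi")  # fixture so the snippet actually runs

# Builtin `open`: the fix wraps the filename in the imported `Path` binding.
with open("f.txt") as f:
    contents = f.read()
# becomes:
contents = Path("f.txt").read_text()

# `pathlib` receiver: the existing path expression is reused verbatim,
# including for annotated assignment targets:
p = Path("f.txt")
with p.open() as f:
    data: str = f.read()
# becomes:
data: str = p.read_text()
```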

View File

@@ -176,7 +176,7 @@ fn match_consecutive_appends<'a>(
let suite = if semantic.at_top_level() {
// If the statement is at the top level, we should go to the parent module.
// Module is available in the definitions list.
EnclosingSuite::new(semantic.definitions.python_ast()?, stmt)?
EnclosingSuite::new(semantic.definitions.python_ast()?, stmt.into())?
} else {
// Otherwise, go to the parent, and take its body as a sequence of siblings.
semantic
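For context, FURB113's target pattern is unchanged; this hunk only adapts `match_consecutive_appends` to the new `EnclosingSuite::new` signature, which now takes an `AnyNodeRef`. A sketch of what the rule matches:

```python
# FURB113: consecutive `append` calls on the same list...
items = []
items.append(1)
items.append(2)
# ...are collapsed by the suggested fix into a single call:
items.extend((1, 2))
```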

View File

@@ -9,7 +9,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
use crate::importer::ImportRequest;
use crate::rules::refurb::helpers::{FileOpen, find_file_opens};
use crate::rules::refurb::helpers::{FileOpen, OpenArgument, find_file_opens};
use crate::{FixAvailability, Locator, Violation};
/// ## What it does
@@ -42,26 +42,40 @@ use crate::{FixAvailability, Locator, Violation};
/// - [Python documentation: `Path.write_text`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.write_text)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "v0.3.6")]
pub(crate) struct WriteWholeFile {
pub(crate) struct WriteWholeFile<'a> {
filename: SourceCodeSnippet,
suggestion: SourceCodeSnippet,
argument: OpenArgument<'a>,
}
impl Violation for WriteWholeFile {
impl Violation for WriteWholeFile<'_> {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
format!("`open` and `write` should be replaced by `Path({filename}).{suggestion}`")
match self.argument {
OpenArgument::Pathlib { .. } => {
format!(
"`Path.open()` followed by `write()` can be replaced by `{filename}.{suggestion}`"
)
}
OpenArgument::Builtin { .. } => {
format!("`open` and `write` should be replaced by `Path({filename}).{suggestion}`")
}
}
}
fn fix_title(&self) -> Option<String> {
Some(format!(
"Replace with `Path({}).{}`",
self.filename.truncated_display(),
self.suggestion.truncated_display(),
))
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
match self.argument {
OpenArgument::Pathlib { .. } => Some(format!("Replace with `{filename}.{suggestion}`")),
OpenArgument::Builtin { .. } => {
Some(format!("Replace with `Path({filename}).{suggestion}`"))
}
}
}
}
@@ -125,16 +139,15 @@ impl<'a> Visitor<'a> for WriteMatcher<'a, '_> {
.position(|open| open.is_ref(write_to))
{
let open = self.candidates.remove(open);
if self.loop_counter == 0 {
let filename_display = open.argument.display(self.checker.source());
let suggestion = make_suggestion(&open, content, self.checker.locator());
let mut diagnostic = self.checker.report_diagnostic(
WriteWholeFile {
filename: SourceCodeSnippet::from_str(
&self.checker.generator().expr(open.filename),
),
filename: SourceCodeSnippet::from_str(filename_display),
suggestion: SourceCodeSnippet::from_str(&suggestion),
argument: open.argument,
},
open.item.range(),
);
@@ -198,7 +211,6 @@ fn generate_fix(
}
let locator = checker.locator();
let filename_code = locator.slice(open.filename.range());
let (import_edit, binding) = checker
.importer()
@@ -209,7 +221,15 @@ fn generate_fix(
)
.ok()?;
let replacement = format!("{binding}({filename_code}).{suggestion}");
let target = match open.argument {
OpenArgument::Builtin { filename } => {
let filename_code = locator.slice(filename.range());
format!("{binding}({filename_code})")
}
OpenArgument::Pathlib { path } => locator.slice(path.range()).to_string(),
};
let replacement = format!("{target}.{suggestion}");
let applicability = if checker.comment_ranges().intersects(with_stmt.range()) {
Applicability::Unsafe
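Why the applicability check matters here; a sketch (the comment placement is the deciding factor):

```python
from pathlib import Path

data = "payload\n"
p = Path("log.txt")
with p.open("w") as f:
    # NOTE: keep the trailing newline for the downstream parser
    f.write(data)

# The suggested rewrite, `p.write_text(data)`, has nowhere to carry that
# comment, so the fix is downgraded to unsafe whenever comments
# intersect the `with` statement.
```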

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:12:6
--> FURB101_0.py:12:6
|
11 | # FURB101
12 | with open("file.txt") as f:
@@ -26,7 +26,7 @@ help: Replace with `Path("file.txt").read_text()`
16 | with open("file.txt", "rb") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:16:6
--> FURB101_0.py:16:6
|
15 | # FURB101
16 | with open("file.txt", "rb") as f:
@@ -50,7 +50,7 @@ help: Replace with `Path("file.txt").read_bytes()`
20 | with open("file.txt", mode="rb") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:20:6
--> FURB101_0.py:20:6
|
19 | # FURB101
20 | with open("file.txt", mode="rb") as f:
@@ -74,7 +74,7 @@ help: Replace with `Path("file.txt").read_bytes()`
24 | with open("file.txt", encoding="utf8") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf8")`
--> FURB101.py:24:6
--> FURB101_0.py:24:6
|
23 | # FURB101
24 | with open("file.txt", encoding="utf8") as f:
@@ -98,7 +98,7 @@ help: Replace with `Path("file.txt").read_text(encoding="utf8")`
28 | with open("file.txt", errors="ignore") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(errors="ignore")`
--> FURB101.py:28:6
--> FURB101_0.py:28:6
|
27 | # FURB101
28 | with open("file.txt", errors="ignore") as f:
@@ -122,7 +122,7 @@ help: Replace with `Path("file.txt").read_text(errors="ignore")`
32 | with open("file.txt", mode="r") as f: # noqa: FURB120
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:32:6
--> FURB101_0.py:32:6
|
31 | # FURB101
32 | with open("file.txt", mode="r") as f: # noqa: FURB120
@@ -147,7 +147,7 @@ help: Replace with `Path("file.txt").read_text()`
note: This is an unsafe fix and may change runtime behavior
FURB101 `open` and `read` should be replaced by `Path(foo()).read_bytes()`
--> FURB101.py:36:6
--> FURB101_0.py:36:6
|
35 | # FURB101
36 | with open(foo(), "rb") as f:
@@ -158,7 +158,7 @@ FURB101 `open` and `read` should be replaced by `Path(foo()).read_bytes()`
help: Replace with `Path(foo()).read_bytes()`
FURB101 `open` and `read` should be replaced by `Path("a.txt").read_text()`
--> FURB101.py:44:6
--> FURB101_0.py:44:6
|
43 | # FURB101
44 | with open("a.txt") as a, open("b.txt", "rb") as b:
@@ -169,7 +169,7 @@ FURB101 `open` and `read` should be replaced by `Path("a.txt").read_text()`
help: Replace with `Path("a.txt").read_text()`
FURB101 `open` and `read` should be replaced by `Path("b.txt").read_bytes()`
--> FURB101.py:44:26
--> FURB101_0.py:44:26
|
43 | # FURB101
44 | with open("a.txt") as a, open("b.txt", "rb") as b:
@@ -180,7 +180,7 @@ FURB101 `open` and `read` should be replaced by `Path("b.txt").read_bytes()`
help: Replace with `Path("b.txt").read_bytes()`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:49:18
--> FURB101_0.py:49:18
|
48 | # FURB101
49 | with foo() as a, open("file.txt") as b, foo() as c:
@@ -191,7 +191,7 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
help: Replace with `Path("file.txt").read_text()`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:130:6
--> FURB101_0.py:130:6
|
129 | # FURB101
130 | with open("file.txt", encoding="utf-8") as f:
@@ -215,7 +215,7 @@ help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
134 | with open("file.txt", encoding="utf-8") as f:
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:134:6
--> FURB101_0.py:134:6
|
133 | # FURB101 but no fix because it would remove the assignment to `x`
134 | with open("file.txt", encoding="utf-8") as f:
@@ -225,7 +225,7 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(enco
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:138:6
--> FURB101_0.py:138:6
|
137 | # FURB101 but no fix because it would remove the `process_contents` call
138 | with open("file.txt", encoding="utf-8") as f:
@@ -234,13 +234,13 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(enco
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:141:6
FURB101 `open` and `read` should be replaced by `Path("file1.txt").read_text(encoding="utf-8")`
--> FURB101_0.py:141:6
|
139 | contents = process_contents(f.read())
140 |
141 | with open("file.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
141 | with open("file1.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
142 | contents: str = process_contents(f.read())
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
help: Replace with `Path("file1.txt").read_text(encoding="utf-8")`

View File

@@ -0,0 +1,39 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB101 [*] `Path.open()` followed by `read()` can be replaced by `Path("file.txt").read_text()`
--> FURB101_1.py:4:6
|
2 | from pathlib import Path
3 |
4 | with Path("file.txt").open() as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5 | contents = f.read()
|
help: Replace with `Path("file.txt").read_text()`
1 |
2 | from pathlib import Path
3 |
- with Path("file.txt").open() as f:
- contents = f.read()
4 + contents = Path("file.txt").read_text()
5 |
6 | with Path("file.txt").open("r") as f:
7 | contents = f.read()
FURB101 [*] `Path.open()` followed by `read()` can be replaced by `Path("file.txt").read_text()`
--> FURB101_1.py:7:6
|
5 | contents = f.read()
6 |
7 | with Path("file.txt").open("r") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
8 | contents = f.read()
|
help: Replace with `Path("file.txt").read_text()`
4 | with Path("file.txt").open() as f:
5 | contents = f.read()
6 |
- with Path("file.txt").open("r") as f:
- contents = f.read()
7 + contents = Path("file.txt").read_text()

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
--> FURB103.py:12:6
--> FURB103_0.py:12:6
|
11 | # FURB103
12 | with open("file.txt", "w") as f:
@@ -26,7 +26,7 @@ help: Replace with `Path("file.txt").write_text("test")`
16 | with open("file.txt", "wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
--> FURB103.py:16:6
--> FURB103_0.py:16:6
|
15 | # FURB103
16 | with open("file.txt", "wb") as f:
@@ -50,7 +50,7 @@ help: Replace with `Path("file.txt").write_bytes(foobar)`
20 | with open("file.txt", mode="wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
--> FURB103.py:20:6
--> FURB103_0.py:20:6
|
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
@@ -74,7 +74,7 @@ help: Replace with `Path("file.txt").write_bytes(b"abc")`
24 | with open("file.txt", "w", encoding="utf8") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
--> FURB103.py:24:6
--> FURB103_0.py:24:6
|
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
@@ -98,7 +98,7 @@ help: Replace with `Path("file.txt").write_text(foobar, encoding="utf8")`
28 | with open("file.txt", "w", errors="ignore") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
--> FURB103.py:28:6
--> FURB103_0.py:28:6
|
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
@@ -122,7 +122,7 @@ help: Replace with `Path("file.txt").write_text(foobar, errors="ignore")`
32 | with open("file.txt", mode="w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
--> FURB103.py:32:6
--> FURB103_0.py:32:6
|
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
@@ -146,7 +146,7 @@ help: Replace with `Path("file.txt").write_text(foobar)`
36 | with open(foo(), "wb") as f:
FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())`
--> FURB103.py:36:6
--> FURB103_0.py:36:6
|
35 | # FURB103
36 | with open(foo(), "wb") as f:
@@ -157,7 +157,7 @@ FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())
help: Replace with `Path(foo()).write_bytes(bar())`
FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
--> FURB103.py:44:6
--> FURB103_0.py:44:6
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -168,7 +168,7 @@ FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
help: Replace with `Path("a.txt").write_text(x)`
FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
--> FURB103.py:44:31
--> FURB103_0.py:44:31
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -179,7 +179,7 @@ FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
help: Replace with `Path("b.txt").write_bytes(y)`
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(bar(bar(a + x)))`
--> FURB103.py:49:18
--> FURB103_0.py:49:18
|
48 | # FURB103
49 | with foo() as a, open("file.txt", "w") as b, foo() as c:
@@ -190,7 +190,7 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(ba
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:58:6
--> FURB103_0.py:58:6
|
57 | # FURB103
58 | with open("file.txt", "w", newline="\r\n") as f:
@@ -214,7 +214,7 @@ help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
62 | import builtins
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:66:6
--> FURB103_0.py:66:6
|
65 | # FURB103
66 | with builtins.open("file.txt", "w", newline="\r\n") as f:
@@ -237,7 +237,7 @@ help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
70 | from builtins import open as o
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:74:6
--> FURB103_0.py:74:6
|
73 | # FURB103
74 | with o("file.txt", "w", newline="\r\n") as f:
@@ -260,7 +260,7 @@ help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
78 |
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
--> FURB103_0.py:154:6
|
152 | data = {"price": 100}
153 |
@@ -284,7 +284,7 @@ help: Replace with `Path("test.json")....`
158 | with open("tmp_path/pyproject.toml", "w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("tmp_path/pyproject.toml")....`
--> FURB103.py:158:6
--> FURB103_0.py:158:6
|
157 | # See: https://github.com/astral-sh/ruff/issues/21381
158 | with open("tmp_path/pyproject.toml", "w") as f:

View File

@@ -0,0 +1,157 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test")`
--> FURB103_1.py:3:6
|
1 | from pathlib import Path
2 |
3 | with Path("file.txt").open("w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test")`
1 | from pathlib import Path
2 |
- with Path("file.txt").open("w") as f:
- f.write("test")
3 + Path("file.txt").write_text("test")
4 |
5 | with Path("file.txt").open("wb") as f:
6 | f.write(b"test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_bytes(b"test")`
--> FURB103_1.py:6:6
|
4 | f.write("test")
5 |
6 | with Path("file.txt").open("wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
7 | f.write(b"test")
|
help: Replace with `Path("file.txt").write_bytes(b"test")`
3 | with Path("file.txt").open("w") as f:
4 | f.write("test")
5 |
- with Path("file.txt").open("wb") as f:
- f.write(b"test")
6 + Path("file.txt").write_bytes(b"test")
7 |
8 | with Path("file.txt").open(mode="w") as f:
9 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test")`
--> FURB103_1.py:9:6
|
7 | f.write(b"test")
8 |
9 | with Path("file.txt").open(mode="w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
10 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test")`
6 | with Path("file.txt").open("wb") as f:
7 | f.write(b"test")
8 |
- with Path("file.txt").open(mode="w") as f:
- f.write("test")
9 + Path("file.txt").write_text("test")
10 |
11 | with Path("file.txt").open("w", encoding="utf8") as f:
12 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test", encoding="utf8")`
--> FURB103_1.py:12:6
|
10 | f.write("test")
11 |
12 | with Path("file.txt").open("w", encoding="utf8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test", encoding="utf8")`
9 | with Path("file.txt").open(mode="w") as f:
10 | f.write("test")
11 |
- with Path("file.txt").open("w", encoding="utf8") as f:
- f.write("test")
12 + Path("file.txt").write_text("test", encoding="utf8")
13 |
14 | with Path("file.txt").open("w", errors="ignore") as f:
15 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test", errors="ignore")`
--> FURB103_1.py:15:6
|
13 | f.write("test")
14 |
15 | with Path("file.txt").open("w", errors="ignore") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
16 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test", errors="ignore")`
12 | with Path("file.txt").open("w", encoding="utf8") as f:
13 | f.write("test")
14 |
- with Path("file.txt").open("w", errors="ignore") as f:
- f.write("test")
15 + Path("file.txt").write_text("test", errors="ignore")
16 |
17 | with Path(foo()).open("w") as f:
18 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path(foo()).write_text("test")`
--> FURB103_1.py:18:6
|
16 | f.write("test")
17 |
18 | with Path(foo()).open("w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
19 | f.write("test")
|
help: Replace with `Path(foo()).write_text("test")`
15 | with Path("file.txt").open("w", errors="ignore") as f:
16 | f.write("test")
17 |
- with Path(foo()).open("w") as f:
- f.write("test")
18 + Path(foo()).write_text("test")
19 |
20 | p = Path("file.txt")
21 | with p.open("w") as f:
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `p.write_text("test")`
--> FURB103_1.py:22:6
|
21 | p = Path("file.txt")
22 | with p.open("w") as f:
| ^^^^^^^^^^^^^^^^
23 | f.write("test")
|
help: Replace with `p.write_text("test")`
19 | f.write("test")
20 |
21 | p = Path("file.txt")
- with p.open("w") as f:
- f.write("test")
22 + p.write_text("test")
23 |
24 | with Path("foo", "bar", "baz").open("w") as f:
25 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("foo", "bar", "baz").write_text("test")`
--> FURB103_1.py:25:6
|
23 | f.write("test")
24 |
25 | with Path("foo", "bar", "baz").open("w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
26 | f.write("test")
|
help: Replace with `Path("foo", "bar", "baz").write_text("test")`
22 | with p.open("w") as f:
23 | f.write("test")
24 |
- with Path("foo", "bar", "baz").open("w") as f:
- f.write("test")
25 + Path("foo", "bar", "baz").write_text("test")

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
--> FURB103.py:12:6
--> FURB103_0.py:12:6
|
11 | # FURB103
12 | with open("file.txt", "w") as f:
@@ -26,7 +26,7 @@ help: Replace with `Path("file.txt").write_text("test")`
16 | with open("file.txt", "wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
--> FURB103.py:16:6
--> FURB103_0.py:16:6
|
15 | # FURB103
16 | with open("file.txt", "wb") as f:
@@ -50,7 +50,7 @@ help: Replace with `Path("file.txt").write_bytes(foobar)`
20 | with open("file.txt", mode="wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
--> FURB103.py:20:6
--> FURB103_0.py:20:6
|
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
@@ -74,7 +74,7 @@ help: Replace with `Path("file.txt").write_bytes(b"abc")`
24 | with open("file.txt", "w", encoding="utf8") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
--> FURB103.py:24:6
--> FURB103_0.py:24:6
|
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
@@ -98,7 +98,7 @@ help: Replace with `Path("file.txt").write_text(foobar, encoding="utf8")`
28 | with open("file.txt", "w", errors="ignore") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
--> FURB103.py:28:6
--> FURB103_0.py:28:6
|
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
@@ -122,7 +122,7 @@ help: Replace with `Path("file.txt").write_text(foobar, errors="ignore")`
32 | with open("file.txt", mode="w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
--> FURB103.py:32:6
--> FURB103_0.py:32:6
|
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
@@ -146,7 +146,7 @@ help: Replace with `Path("file.txt").write_text(foobar)`
36 | with open(foo(), "wb") as f:
FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())`
--> FURB103.py:36:6
--> FURB103_0.py:36:6
|
35 | # FURB103
36 | with open(foo(), "wb") as f:
@@ -157,7 +157,7 @@ FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())
help: Replace with `Path(foo()).write_bytes(bar())`
FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
--> FURB103.py:44:6
--> FURB103_0.py:44:6
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -168,7 +168,7 @@ FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
help: Replace with `Path("a.txt").write_text(x)`
FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
--> FURB103.py:44:31
--> FURB103_0.py:44:31
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -179,7 +179,7 @@ FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
help: Replace with `Path("b.txt").write_bytes(y)`
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(bar(bar(a + x)))`
--> FURB103.py:49:18
--> FURB103_0.py:49:18
|
48 | # FURB103
49 | with foo() as a, open("file.txt", "w") as b, foo() as c:
@@ -190,7 +190,7 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(ba
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
--> FURB103_0.py:154:6
|
152 | data = {"price": 100}
153 |
@@ -214,7 +214,7 @@ help: Replace with `Path("test.json")....`
158 | with open("tmp_path/pyproject.toml", "w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("tmp_path/pyproject.toml")....`
--> FURB103.py:158:6
--> FURB103_0.py:158:6
|
157 | # See: https://github.com/astral-sh/ruff/issues/21381
158 | with open("tmp_path/pyproject.toml", "w") as f:

View File

@@ -313,12 +313,20 @@ mod tests {
Rule::UnusedVariable,
Rule::AmbiguousVariableName,
Rule::UnusedNOQA,
]),
Rule::InvalidRuleCode,
Rule::InvalidSuppressionComment,
Rule::UnmatchedSuppressionComment,
])
.with_external_rules(&["TK421"]),
&settings::LinterSettings::for_rules(vec![
Rule::UnusedVariable,
Rule::AmbiguousVariableName,
Rule::UnusedNOQA,
Rule::InvalidRuleCode,
Rule::InvalidSuppressionComment,
Rule::UnmatchedSuppressionComment,
])
.with_external_rules(&["TK421"])
.with_preview_mode(),
);
Ok(())

View File

@@ -9,6 +9,21 @@ use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::{AlwaysFixableViolation, Edit, Fix};
#[derive(Debug, PartialEq, Eq)]
pub(crate) enum InvalidRuleCodeKind {
Noqa,
Suppression,
}
impl InvalidRuleCodeKind {
fn as_str(&self) -> &str {
match self {
InvalidRuleCodeKind::Noqa => "`# noqa`",
InvalidRuleCodeKind::Suppression => "suppression",
}
}
}
/// ## What it does
/// Checks for `noqa` codes that are invalid.
///
@@ -36,12 +51,17 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
#[violation_metadata(preview_since = "0.11.4")]
pub(crate) struct InvalidRuleCode {
pub(crate) rule_code: String,
pub(crate) kind: InvalidRuleCodeKind,
}
impl AlwaysFixableViolation for InvalidRuleCode {
#[derive_message_formats]
fn message(&self) -> String {
format!("Invalid rule code in `# noqa`: {}", self.rule_code)
format!(
"Invalid rule code in {}: {}",
self.kind.as_str(),
self.rule_code
)
}
fn fix_title(&self) -> String {
@@ -61,7 +81,9 @@ pub(crate) fn invalid_noqa_code(
continue;
};
let all_valid = directive.iter().all(|code| code_is_valid(code, external));
let all_valid = directive
.iter()
.all(|code| code_is_valid(code.as_str(), external));
if all_valid {
continue;
@@ -69,7 +91,7 @@ pub(crate) fn invalid_noqa_code(
let (valid_codes, invalid_codes): (Vec<_>, Vec<_>) = directive
.iter()
.partition(|&code| code_is_valid(code, external));
.partition(|&code| code_is_valid(code.as_str(), external));
if valid_codes.is_empty() {
all_codes_invalid_diagnostic(directive, invalid_codes, context);
@@ -81,10 +103,9 @@ pub(crate) fn invalid_noqa_code(
}
}
fn code_is_valid(code: &Code, external: &[String]) -> bool {
let code_str = code.as_str();
Rule::from_code(get_redirect_target(code_str).unwrap_or(code_str)).is_ok()
|| external.iter().any(|ext| code_str.starts_with(ext))
pub(crate) fn code_is_valid(code: &str, external: &[String]) -> bool {
Rule::from_code(get_redirect_target(code).unwrap_or(code)).is_ok()
|| external.iter().any(|ext| code.starts_with(ext))
}
fn all_codes_invalid_diagnostic(
@@ -100,6 +121,7 @@ fn all_codes_invalid_diagnostic(
.map(Code::as_str)
.collect::<Vec<_>>()
.join(", "),
kind: InvalidRuleCodeKind::Noqa,
},
directive.range(),
)
@@ -116,6 +138,7 @@ fn some_codes_are_invalid_diagnostic(
.report_diagnostic(
InvalidRuleCode {
rule_code: invalid_code.to_string(),
kind: InvalidRuleCodeKind::Noqa,
},
invalid_code.range(),
)
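With the new `kind`, the message names the comment style it came from; roughly (`XYZ999` is a deliberately invalid code):

```python
x = 1  # noqa: XYZ999
# -> RUF102: Invalid rule code in `# noqa`: XYZ999

# ruff: disable[XYZ999]
y = 2
# ruff: enable[XYZ999]
# -> RUF102: Invalid rule code in suppression: XYZ999
```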

View File

@@ -0,0 +1,59 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use crate::AlwaysFixableViolation;
use crate::suppression::{InvalidSuppressionKind, ParseErrorKind};
/// ## What it does
/// Checks for invalid suppression comments.
///
/// ## Why is this bad?
/// Invalid suppression comments are ignored by Ruff, and should either
/// be fixed or removed to avoid confusion.
///
/// ## Example
/// ```python
/// # ruff: disable  # missing codes
/// ```
///
/// Use instead:
/// ```python
/// # ruff: disable[E501]
/// ```
///
/// Or delete the invalid suppression comment.
///
/// ## References
/// - [Ruff error suppression](https://docs.astral.sh/ruff/linter/#error-suppression)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.11")]
pub(crate) struct InvalidSuppressionComment {
pub(crate) kind: InvalidSuppressionCommentKind,
}
impl AlwaysFixableViolation for InvalidSuppressionComment {
#[derive_message_formats]
fn message(&self) -> String {
let msg = match self.kind {
InvalidSuppressionCommentKind::Invalid(InvalidSuppressionKind::Indentation) => {
"unexpected indentation".to_string()
}
InvalidSuppressionCommentKind::Invalid(InvalidSuppressionKind::Trailing) => {
"trailing comments are not supported".to_string()
}
InvalidSuppressionCommentKind::Invalid(InvalidSuppressionKind::Unmatched) => {
"no matching 'disable' comment".to_string()
}
InvalidSuppressionCommentKind::Error(error) => format!("{error}"),
};
format!("Invalid suppression comment: {msg}")
}
fn fix_title(&self) -> String {
"Remove suppression comment".to_string()
}
}
pub(crate) enum InvalidSuppressionCommentKind {
Invalid(InvalidSuppressionKind),
Error(ParseErrorKind),
}
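Two of the comment shapes this violation reports, matching the snapshot output further down (a sketch):

```python
# ruff: disable
# -> RUF103 Invalid suppression comment: missing suppression codes like `[E501, ...]`

# ruff: enable[E501]
# -> RUF103 Invalid suppression comment: no matching 'disable' comment
```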

View File

@@ -22,6 +22,7 @@ pub(crate) use invalid_formatter_suppression_comment::*;
pub(crate) use invalid_index_type::*;
pub(crate) use invalid_pyproject_toml::*;
pub(crate) use invalid_rule_code::*;
pub(crate) use invalid_suppression_comment::*;
pub(crate) use legacy_form_pytest_raises::*;
pub(crate) use logging_eager_conversion::*;
pub(crate) use map_int_version_parsing::*;
@@ -46,6 +47,7 @@ pub(crate) use starmap_zip::*;
pub(crate) use static_key_dict_comprehension::*;
#[cfg(any(feature = "test-rules", test))]
pub(crate) use test_rules::*;
pub(crate) use unmatched_suppression_comment::*;
pub(crate) use unnecessary_cast_to_int::*;
pub(crate) use unnecessary_iterable_allocation_for_first_element::*;
pub(crate) use unnecessary_key_check::*;
@@ -87,6 +89,7 @@ mod invalid_formatter_suppression_comment;
mod invalid_index_type;
mod invalid_pyproject_toml;
mod invalid_rule_code;
mod invalid_suppression_comment;
mod legacy_form_pytest_raises;
mod logging_eager_conversion;
mod map_int_version_parsing;
@@ -113,6 +116,7 @@ mod static_key_dict_comprehension;
mod suppression_comment_visitor;
#[cfg(any(feature = "test-rules", test))]
pub(crate) mod test_rules;
mod unmatched_suppression_comment;
mod unnecessary_cast_to_int;
mod unnecessary_iterable_allocation_for_first_element;
mod unnecessary_key_check;

View File

@@ -0,0 +1,42 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use crate::Violation;
/// ## What it does
/// Checks for unmatched range suppression comments.
///
/// ## Why is this bad?
/// Unmatched range suppression comments can inadvertently suppress violations
/// over larger sections of code than intended, particularly at module scope.
///
/// ## Example
/// ```python
/// def foo():
/// # ruff: disable[E501] # unmatched
/// REALLY_LONG_VALUES = [...]
///
/// print(REALLY_LONG_VALUES)
/// ```
///
/// Use instead:
/// ```python
/// def foo():
/// # ruff: disable[E501]
/// REALLY_LONG_VALUES = [...]
/// # ruff: enable[E501]
///
/// print(REALLY_LONG_VALUES)
/// ```
///
/// ## References
/// - [Ruff error suppression](https://docs.astral.sh/ruff/linter/#error-suppression)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.11")]
pub(crate) struct UnmatchedSuppressionComment;
impl Violation for UnmatchedSuppressionComment {
#[derive_message_formats]
fn message(&self) -> String {
"Suppression comment without matching `#ruff:enable` comment".to_string()
}
}

View File

@@ -6,8 +6,8 @@ source: crates/ruff_linter/src/rules/ruff/mod.rs
+linter.preview = enabled
--- Summary ---
Removed: 14
Added: 11
Removed: 15
Added: 23
--- Removed ---
E741 Ambiguous variable name: `I`
@@ -238,8 +238,60 @@ help: Remove assignment to unused variable `I`
note: This is an unsafe fix and may change runtime behavior
F841 [*] Local variable `value` is assigned to but never used
--> suppressions.py:95:5
|
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
95 | value = 0
| ^^^^^
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
|
help: Remove assignment to unused variable `value`
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
- value = 0
95 + pass
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
98 |
note: This is an unsafe fix and may change runtime behavior
--- Added ---
RUF104 Suppression comment without matching `# ruff: enable` comment
--> suppressions.py:11:5
|
9 | # These should both be ignored by the implicit range suppression.
10 | # Should also generate an "unmatched suppression" warning.
11 | # ruff:disable[E741,F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^
12 | I = 1
|
RUF103 [*] Invalid suppression comment: no matching 'disable' comment
--> suppressions.py:19:5
|
17 | # should be generated.
18 | I = 1
19 | # ruff: enable[E741, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Remove suppression comment
16 | # Neither warning is ignored, and an "unmatched suppression"
17 | # should be generated.
18 | I = 1
- # ruff: enable[E741, F841]
19 |
20 |
21 | def f():
note: This is an unsafe fix and may change runtime behavior
RUF100 [*] Unused suppression (non-enabled: `E501`)
--> suppressions.py:46:5
|
@@ -298,6 +350,17 @@ help: Remove unused `noqa` directive
58 |
RUF104 Suppression comment without matching `# ruff: enable` comment
--> suppressions.py:61:5
|
59 | def f():
60 | # TODO: Duplicate codes should be counted as duplicate, not unused
61 | # ruff: disable[F841, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
62 | foo = 0
|
RUF100 [*] Unused suppression (unused: `F841`)
--> suppressions.py:61:21
|
@@ -318,6 +381,18 @@ help: Remove unused suppression
64 |
RUF104 Suppression comment without matching `# ruff: enable` comment
--> suppressions.py:68:5
|
66 | # Overlapping range suppressions, one should be marked as used,
67 | # and the other should trigger an unused suppression diagnostic
68 | # ruff: disable[F841]
| ^^^^^^^^^^^^^^^^^^^^^
69 | # ruff: disable[F841]
70 | foo = 0
|
RUF100 [*] Unused suppression (unused: `F841`)
--> suppressions.py:69:5
|
@@ -337,6 +412,17 @@ help: Remove unused suppression
71 |
RUF104 Suppression comment without matching `# ruff: enable` comment
--> suppressions.py:75:5
|
73 | def f():
74 | # Multiple codes but only one is used
75 | # ruff: disable[E741, F401, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
76 | foo = 0
|
RUF100 [*] Unused suppression (unused: `E741`)
--> suppressions.py:75:21
|
@@ -377,6 +463,17 @@ help: Remove unused suppression
78 |
RUF104 Suppression comment without matching `# ruff: enable` comment
--> suppressions.py:81:5
|
79 | def f():
80 | # Multiple codes but only two are used
81 | # ruff: disable[E741, F401, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
82 | I = 0
|
RUF100 [*] Unused suppression (non-enabled: `F401`)
--> suppressions.py:81:27
|
@@ -413,6 +510,8 @@ help: Remove unused suppression
- # ruff: disable[E741, F401, F841]
87 + # ruff: disable[F401, F841]
88 | print("hello")
89 |
90 |
RUF100 [*] Unused suppression (non-enabled: `F401`)
@@ -431,6 +530,8 @@ help: Remove unused suppression
- # ruff: disable[E741, F401, F841]
87 + # ruff: disable[E741, F841]
88 | print("hello")
89 |
90 |
RUF100 [*] Unused suppression (unused: `F841`)
@@ -449,3 +550,122 @@ help: Remove unused suppression
- # ruff: disable[E741, F401, F841]
87 + # ruff: disable[E741, F401]
88 | print("hello")
89 |
90 |
RUF102 [*] Invalid rule code in suppression: YF829
--> suppressions.py:93:21
|
91 | def f():
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
| ^^^^^
94 | # ruff: disable[F841, RQW320]
95 | value = 0
|
help: Remove the rule code
90 |
91 | def f():
92 | # Unknown rule codes
- # ruff: disable[YF829]
93 | # ruff: disable[F841, RQW320]
94 | value = 0
95 | # ruff: enable[F841, RQW320]
RUF102 [*] Invalid rule code in suppression: RQW320
--> suppressions.py:94:27
|
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
| ^^^^^^
95 | value = 0
96 | # ruff: enable[F841, RQW320]
|
help: Remove the rule code
91 | def f():
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
- # ruff: disable[F841, RQW320]
94 + # ruff: disable[F841]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
RUF102 [*] Invalid rule code in suppression: RQW320
--> suppressions.py:96:26
|
94 | # ruff: disable[F841, RQW320]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
| ^^^^^^
97 | # ruff: enable[YF829]
|
help: Remove the rule code
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
95 | value = 0
- # ruff: enable[F841, RQW320]
96 + # ruff: enable[F841]
97 | # ruff: enable[YF829]
98 |
99 |
RUF102 [*] Invalid rule code in suppression: YF829
--> suppressions.py:97:20
|
95 | value = 0
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
| ^^^^^
|
help: Remove the rule code
94 | # ruff: disable[F841, RQW320]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
- # ruff: enable[YF829]
97 |
98 |
99 | def f():
RUF103 [*] Invalid suppression comment: missing suppression codes like `[E501, ...]`
--> suppressions.py:109:5
|
107 | def f():
108 | # Empty or missing rule codes
109 | # ruff: disable
| ^^^^^^^^^^^^^^^
110 | # ruff: disable[]
111 | print("hello")
|
help: Remove suppression comment
106 |
107 | def f():
108 | # Empty or missing rule codes
- # ruff: disable
109 | # ruff: disable[]
110 | print("hello")
note: This is an unsafe fix and may change runtime behavior
RUF103 [*] Invalid suppression comment: missing suppression codes like `[E501, ...]`
--> suppressions.py:110:5
|
108 | # Empty or missing rule codes
109 | # ruff: disable
110 | # ruff: disable[]
| ^^^^^^^^^^^^^^^^^
111 | print("hello")
|
help: Remove suppression comment
107 | def f():
108 | # Empty or missing rule codes
109 | # ruff: disable
- # ruff: disable[]
110 | print("hello")
note: This is an unsafe fix and may change runtime behavior

View File

@@ -471,6 +471,13 @@ impl LinterSettings {
self
}
#[must_use]
pub fn with_external_rules(mut self, rules: &[&str]) -> Self {
self.external
.extend(rules.iter().map(std::string::ToString::to_string));
self
}
/// Resolve the [`TargetVersion`] to use for linting.
///
/// This method respects the per-file version overrides in

View File

@@ -0,0 +1,74 @@
---
source: crates/ruff_linter/src/linter.rs
---
invalid-syntax: annotated name `a` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:4:5
|
2 | def f1():
3 | global a
4 | a: str = "foo" # error
| ^
5 |
6 | b: int = 1
|
invalid-syntax: annotated name `b` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:10:9
|
8 | def inner():
9 | global b
10 | b: str = "nested" # error
| ^
11 |
12 | c: int = 1
|
invalid-syntax: annotated name `c` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:15:5
|
13 | def f2():
14 | global c
15 | c: list[str] = [] # error
| ^
16 |
17 | d: int = 1
|
invalid-syntax: annotated name `d` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:20:5
|
18 | def f3():
19 | global d
20 | d: str # error
| ^
21 |
22 | e: int = 1
|
invalid-syntax: annotated name `g` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:29:1
|
27 | f: int = 1 # okay
28 |
29 | g: int = 1
| ^
30 | global g # error
|
invalid-syntax: annotated name `x` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:33:5
|
32 | class C:
33 | x: str
| ^
34 | global x # error
|
invalid-syntax: annotated name `x` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:38:5
|
36 | class D:
37 | global x # error
38 | x: str
| ^
|

View File

@@ -4,6 +4,7 @@ use ruff_db::diagnostic::Diagnostic;
use ruff_diagnostics::{Edit, Fix};
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::whitespace::indentation;
use rustc_hash::FxHashSet;
use std::cell::Cell;
use std::{error::Error, fmt::Formatter};
use thiserror::Error;
@@ -17,7 +18,11 @@ use crate::checkers::ast::LintContext;
use crate::codes::Rule;
use crate::fix::edits::delete_comment;
use crate::preview::is_range_suppressions_enabled;
use crate::rules::ruff::rules::{UnusedCodes, UnusedNOQA, UnusedNOQAKind};
use crate::rule_redirects::get_redirect_target;
use crate::rules::ruff::rules::{
InvalidRuleCode, InvalidRuleCodeKind, InvalidSuppressionComment, InvalidSuppressionCommentKind,
UnmatchedSuppressionComment, UnusedCodes, UnusedNOQA, UnusedNOQAKind, code_is_valid,
};
use crate::settings::LinterSettings;
#[derive(Clone, Debug, Eq, PartialEq)]
@@ -130,7 +135,7 @@ impl Suppressions {
}
pub(crate) fn is_empty(&self) -> bool {
self.valid.is_empty()
self.valid.is_empty() && self.invalid.is_empty() && self.errors.is_empty()
}
/// Check if a diagnostic is suppressed by any known range suppressions
@@ -150,7 +155,9 @@ impl Suppressions {
};
for suppression in &self.valid {
if *code == suppression.code.as_str() && suppression.range.contains_range(range) {
let suppression_code =
get_redirect_target(suppression.code.as_str()).unwrap_or(suppression.code.as_str());
if *code == suppression_code && suppression.range.contains_range(range) {
suppression.used.set(true);
return true;
}
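The net effect of the redirect lookup, sketched in Python (assumption: `TRY200` still redirects to `B904` in the rule-redirect table; any redirected code behaves the same way):

```python
try:
    int("x")
except ValueError:
    # Diagnostics fire under the redirect target (B904), but a range
    # suppression written with the old spelling now matches it too:
    # ruff: disable[TRY200]
    raise RuntimeError("bad input")  # would be B904; now suppressed
    # ruff: enable[TRY200]
```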
@@ -159,81 +166,140 @@ impl Suppressions {
}
pub(crate) fn check_suppressions(&self, context: &LintContext, locator: &Locator) {
if !context.any_rule_enabled(&[Rule::UnusedNOQA, Rule::InvalidRuleCode]) {
return;
}
let unused = self
.valid
.iter()
.filter(|suppression| !suppression.used.get());
for suppression in unused {
let Ok(rule) = Rule::from_code(&suppression.code) else {
continue; // TODO: invalid code
};
for comment in &suppression.comments {
let mut range = comment.range;
let edit = if comment.codes.len() == 1 {
delete_comment(comment.range, locator)
} else {
let code_index = comment
.codes
.iter()
.position(|range| locator.slice(range) == suppression.code)
.unwrap();
range = comment.codes[code_index];
let code_range = if code_index < (comment.codes.len() - 1) {
TextRange::new(
comment.codes[code_index].start(),
comment.codes[code_index + 1].start(),
)
} else {
TextRange::new(
comment.codes[code_index - 1].end(),
comment.codes[code_index].end(),
)
let mut unmatched_ranges = FxHashSet::default();
for suppression in &self.valid {
if !code_is_valid(&suppression.code, &context.settings().external) {
// InvalidRuleCode
if context.is_rule_enabled(Rule::InvalidRuleCode) {
for comment in &suppression.comments {
let (range, edit) = Suppressions::delete_code_or_comment(
locator,
suppression,
comment,
true,
);
context
.report_diagnostic(
InvalidRuleCode {
rule_code: suppression.code.to_string(),
kind: InvalidRuleCodeKind::Suppression,
},
range,
)
.set_fix(Fix::safe_edit(edit));
}
}
} else if !suppression.used.get() {
// UnusedNOQA
if context.is_rule_enabled(Rule::UnusedNOQA) {
let Ok(rule) = Rule::from_code(
get_redirect_target(&suppression.code).unwrap_or(&suppression.code),
) else {
continue; // "external" lint code, don't treat it as unused
};
Edit::range_deletion(code_range)
};
for comment in &suppression.comments {
let (range, edit) = Suppressions::delete_code_or_comment(
locator,
suppression,
comment,
false,
);
let codes = if context.is_rule_enabled(rule) {
UnusedCodes {
unmatched: vec![suppression.code.to_string()],
..Default::default()
}
} else {
UnusedCodes {
disabled: vec![suppression.code.to_string()],
..Default::default()
}
};
let codes = if context.is_rule_enabled(rule) {
UnusedCodes {
unmatched: vec![suppression.code.to_string()],
..Default::default()
}
} else {
UnusedCodes {
disabled: vec![suppression.code.to_string()],
..Default::default()
}
};
let mut diagnostic = context.report_diagnostic(
UnusedNOQA {
codes: Some(codes),
kind: UnusedNOQAKind::Suppression,
},
range,
);
diagnostic.set_fix(Fix::safe_edit(edit));
context
.report_diagnostic(
UnusedNOQA {
codes: Some(codes),
kind: UnusedNOQAKind::Suppression,
},
range,
)
.set_fix(Fix::safe_edit(edit));
}
}
} else if suppression.comments.len() == 1 {
// UnmatchedSuppressionComment
let range = suppression.comments[0].range;
if unmatched_ranges.insert(range) {
context.report_diagnostic_if_enabled(UnmatchedSuppressionComment {}, range);
}
}
}
for error in self
.errors
.iter()
.filter(|error| error.kind == ParseErrorKind::MissingCodes)
{
let mut diagnostic = context.report_diagnostic(
UnusedNOQA {
codes: Some(UnusedCodes::default()),
kind: UnusedNOQAKind::Suppression,
},
error.range,
);
diagnostic.set_fix(Fix::safe_edit(delete_comment(error.range, locator)));
if context.is_rule_enabled(Rule::InvalidSuppressionComment) {
for error in &self.errors {
context
.report_diagnostic(
InvalidSuppressionComment {
kind: InvalidSuppressionCommentKind::Error(error.kind),
},
error.range,
)
.set_fix(Fix::unsafe_edit(delete_comment(error.range, locator)));
}
}
if context.is_rule_enabled(Rule::InvalidSuppressionComment) {
for invalid in &self.invalid {
context
.report_diagnostic(
InvalidSuppressionComment {
kind: InvalidSuppressionCommentKind::Invalid(invalid.kind),
},
invalid.comment.range,
)
.set_fix(Fix::unsafe_edit(delete_comment(
invalid.comment.range,
locator,
)));
}
}
}
fn delete_code_or_comment(
locator: &Locator<'_>,
suppression: &Suppression,
comment: &SuppressionComment,
highlight_only_code: bool,
) -> (TextRange, Edit) {
let mut range = comment.range;
let edit = if comment.codes.len() == 1 {
if highlight_only_code {
range = comment.codes[0];
}
delete_comment(comment.range, locator)
} else {
let code_index = comment
.codes
.iter()
.position(|range| locator.slice(range) == suppression.code)
.unwrap();
range = comment.codes[code_index];
let code_range = if code_index < (comment.codes.len() - 1) {
TextRange::new(
comment.codes[code_index].start(),
comment.codes[code_index + 1].start(),
)
} else {
TextRange::new(
comment.codes[code_index - 1].end(),
comment.codes[code_index].end(),
)
};
Edit::range_deletion(code_range)
};
(range, edit)
}
}
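Concretely, `delete_code_or_comment` produces edits like these (arrows show before -> after; codes as in the test fixture):

```python
# Three cases, mirroring the snapshot fixes above:

# 1. Last code in the list: delete from the previous code's end.
# ruff: disable[F841, RQW320]   ->   # ruff: disable[F841]

# 2. Any earlier code: delete up to the next code's start.
# ruff: disable[F841, RQW320]   ->   # ruff: disable[RQW320]

# 3. Only code: the whole comment is deleted.
# ruff: disable[YF829]          ->   (comment removed)
```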
@@ -391,7 +457,7 @@ impl<'a> SuppressionsBuilder<'a> {
}
#[derive(Copy, Clone, Debug, Eq, Error, PartialEq)]
enum ParseErrorKind {
pub(crate) enum ParseErrorKind {
#[error("not a suppression comment")]
NotASuppression,
@@ -401,7 +467,7 @@ enum ParseErrorKind {
#[error("unknown ruff directive")]
UnknownAction,
#[error("missing suppression codes")]
#[error("missing suppression codes like `[E501, ...]`")]
MissingCodes,
#[error("missing closing bracket")]

View File

@@ -1,5 +1,5 @@
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::visitor::source_order::{SourceOrderVisitor, TraversalSignal};
use crate::AnyNodeRef;
use crate::visitor::source_order::{SourceOrderVisitor, TraversalSignal, walk_node};
use ruff_text_size::{Ranged, TextRange};
use std::fmt;
use std::fmt::Formatter;
@@ -11,7 +11,7 @@ use std::fmt::Formatter;
///
/// ## Panics
/// Panics if `range` is not contained within `root`.
pub(crate) fn covering_node(root: AnyNodeRef, range: TextRange) -> CoveringNode {
pub fn covering_node(root: AnyNodeRef, range: TextRange) -> CoveringNode {
struct Visitor<'a> {
range: TextRange,
found: bool,
@@ -48,15 +48,12 @@ pub(crate) fn covering_node(root: AnyNodeRef, range: TextRange) -> CoveringNode
ancestors: Vec::new(),
};
root.visit_source_order(&mut visitor);
if visitor.ancestors.is_empty() {
visitor.ancestors.push(root);
}
walk_node(&mut visitor, root);
CoveringNode::from_ancestors(visitor.ancestors)
}
/// The node with a minimal range that fully contains the search range.
pub(crate) struct CoveringNode<'a> {
pub struct CoveringNode<'a> {
/// The covering node, along with all of its ancestors up to the
/// root. The root is always the first element and the covering
/// node found is always the last node. This sequence is guaranteed
@@ -67,12 +64,12 @@ pub(crate) struct CoveringNode<'a> {
impl<'a> CoveringNode<'a> {
/// Creates a new `CoveringNode` from a list of ancestor nodes.
/// The ancestors should be ordered from root to the covering node.
pub(crate) fn from_ancestors(ancestors: Vec<AnyNodeRef<'a>>) -> Self {
pub fn from_ancestors(ancestors: Vec<AnyNodeRef<'a>>) -> Self {
Self { nodes: ancestors }
}
/// Returns the covering node found.
pub(crate) fn node(&self) -> AnyNodeRef<'a> {
pub fn node(&self) -> AnyNodeRef<'a> {
*self
.nodes
.last()
@@ -80,7 +77,7 @@ impl<'a> CoveringNode<'a> {
}
/// Returns the node's parent.
pub(crate) fn parent(&self) -> Option<AnyNodeRef<'a>> {
pub fn parent(&self) -> Option<AnyNodeRef<'a>> {
let penultimate = self.nodes.len().checked_sub(2)?;
self.nodes.get(penultimate).copied()
}
@@ -90,7 +87,7 @@ impl<'a> CoveringNode<'a> {
///
/// The "first" here means that the node closest to a leaf is
/// returned.
pub(crate) fn find_first(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
pub fn find_first(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
let Some(index) = self.find_first_index(f) else {
return Err(self);
};
@@ -105,7 +102,7 @@ impl<'a> CoveringNode<'a> {
/// the highest ancestor found satisfying the given predicate is
/// returned. Note that this is *not* the same as finding the node
/// closest to the root that satisfies the given predicate.
pub(crate) fn find_last(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
pub fn find_last(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
let Some(mut index) = self.find_first_index(&f) else {
return Err(self);
};
@@ -118,7 +115,7 @@ impl<'a> CoveringNode<'a> {
/// Returns an iterator over the ancestor nodes, starting with the node itself
/// and walking towards the root.
pub(crate) fn ancestors(&self) -> impl DoubleEndedIterator<Item = AnyNodeRef<'a>> + '_ {
pub fn ancestors(&self) -> impl DoubleEndedIterator<Item = AnyNodeRef<'a>> + '_ {
self.nodes.iter().copied().rev()
}
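For context on what `covering_node` computes: the visitor pushes every node whose range contains the search range, so the resulting stack runs from the root down to the minimal covering node. A self-contained sketch of the same invariant on a toy tree (`Node` and `covering` are illustrative names, not the real API):

```rust
// Toy tree with byte ranges; children are ordered and nested inside
// their parent's range, mirroring an AST.
struct Node {
    range: (u32, u32),
    children: Vec<Node>,
}

// Collect root-to-leaf ancestors whose range fully contains `range`;
// the last element is the minimal covering node.
fn covering<'a>(root: &'a Node, range: (u32, u32), out: &mut Vec<&'a Node>) {
    let contains = root.range.0 <= range.0 && range.1 <= root.range.1;
    if !contains {
        return;
    }
    out.push(root);
    for child in &root.children {
        covering(child, range, out);
    }
}

fn main() {
    // `x = 1 + 2`: assignment (0, 9), target (0, 1), value (4, 9)
    // with operands (4, 5) and (8, 9).
    let root = Node {
        range: (0, 9),
        children: vec![
            Node { range: (0, 1), children: vec![] },
            Node {
                range: (4, 9),
                children: vec![
                    Node { range: (4, 5), children: vec![] },
                    Node { range: (8, 9), children: vec![] },
                ],
            },
        ],
    };
    let mut ancestors = Vec::new();
    covering(&root, (8, 9), &mut ancestors);
    let spans: Vec<_> = ancestors.iter().map(|n| n.range).collect();
    assert_eq!(spans, [(0, 9), (4, 9), (8, 9)]);
}
```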


@@ -12,6 +12,7 @@ pub use python_version::*;
pub mod comparable;
pub mod docstrings;
mod expression;
pub mod find_node;
mod generated;
pub mod helpers;
pub mod identifier;


@@ -2,18 +2,25 @@
use crate::{self as ast, AnyNodeRef, ExceptHandler, Stmt};
/// Given a [`Stmt`] and its parent, return the [`ast::Suite`] that contains the [`Stmt`].
pub fn suite<'a>(stmt: &'a Stmt, parent: &'a Stmt) -> Option<EnclosingSuite<'a>> {
pub fn suite<'a>(
stmt: impl Into<AnyNodeRef<'a>>,
parent: impl Into<AnyNodeRef<'a>>,
) -> Option<EnclosingSuite<'a>> {
// TODO: refactor this to work without a parent, i.e. when `stmt` is at the top level
match parent {
Stmt::FunctionDef(ast::StmtFunctionDef { body, .. }) => EnclosingSuite::new(body, stmt),
Stmt::ClassDef(ast::StmtClassDef { body, .. }) => EnclosingSuite::new(body, stmt),
Stmt::For(ast::StmtFor { body, orelse, .. }) => [body, orelse]
let stmt = stmt.into();
match parent.into() {
AnyNodeRef::ModModule(ast::ModModule { body, .. }) => EnclosingSuite::new(body, stmt),
AnyNodeRef::StmtFunctionDef(ast::StmtFunctionDef { body, .. }) => {
EnclosingSuite::new(body, stmt)
}
AnyNodeRef::StmtClassDef(ast::StmtClassDef { body, .. }) => EnclosingSuite::new(body, stmt),
AnyNodeRef::StmtFor(ast::StmtFor { body, orelse, .. }) => [body, orelse]
.iter()
.find_map(|suite| EnclosingSuite::new(suite, stmt)),
Stmt::While(ast::StmtWhile { body, orelse, .. }) => [body, orelse]
AnyNodeRef::StmtWhile(ast::StmtWhile { body, orelse, .. }) => [body, orelse]
.iter()
.find_map(|suite| EnclosingSuite::new(suite, stmt)),
Stmt::If(ast::StmtIf {
AnyNodeRef::StmtIf(ast::StmtIf {
body,
elif_else_clauses,
..
@@ -21,12 +28,12 @@ pub fn suite<'a>(stmt: &'a Stmt, parent: &'a Stmt) -> Option<EnclosingSuite<'a>>
.into_iter()
.chain(elif_else_clauses.iter().map(|clause| &clause.body))
.find_map(|suite| EnclosingSuite::new(suite, stmt)),
Stmt::With(ast::StmtWith { body, .. }) => EnclosingSuite::new(body, stmt),
Stmt::Match(ast::StmtMatch { cases, .. }) => cases
AnyNodeRef::StmtWith(ast::StmtWith { body, .. }) => EnclosingSuite::new(body, stmt),
AnyNodeRef::StmtMatch(ast::StmtMatch { cases, .. }) => cases
.iter()
.map(|case| &case.body)
.find_map(|body| EnclosingSuite::new(body, stmt)),
Stmt::Try(ast::StmtTry {
AnyNodeRef::StmtTry(ast::StmtTry {
body,
handlers,
orelse,
@@ -51,10 +58,10 @@ pub struct EnclosingSuite<'a> {
}
impl<'a> EnclosingSuite<'a> {
pub fn new(suite: &'a [Stmt], stmt: &'a Stmt) -> Option<Self> {
pub fn new(suite: &'a [Stmt], stmt: AnyNodeRef<'a>) -> Option<Self> {
let position = suite
.iter()
.position(|sibling| AnyNodeRef::ptr_eq(sibling.into(), stmt.into()))?;
.position(|sibling| AnyNodeRef::ptr_eq(sibling.into(), stmt))?;
Some(EnclosingSuite { suite, position })
}
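The signature change above is the standard `impl Into<T>` widening: `suite` now accepts anything convertible to `AnyNodeRef`, so callers can pass a `&Stmt` or a module node without wrapping it themselves. A std-only sketch of the pattern, with hypothetical stand-in types:

```rust
// Hypothetical stand-ins for `AnyNodeRef`, `Stmt`, and `ModModule`.
enum NodeRef<'a> {
    Stmt(&'a str),
    Module(&'a str),
}

struct Stmt(String);
struct Module(String);

impl<'a> From<&'a Stmt> for NodeRef<'a> {
    fn from(stmt: &'a Stmt) -> Self {
        NodeRef::Stmt(&stmt.0)
    }
}

impl<'a> From<&'a Module> for NodeRef<'a> {
    fn from(module: &'a Module) -> Self {
        NodeRef::Module(&module.0)
    }
}

// Accept either concrete type; convert once at the top, as `suite` does.
fn describe<'a>(node: impl Into<NodeRef<'a>>) -> String {
    match node.into() {
        NodeRef::Stmt(s) => format!("stmt: {s}"),
        NodeRef::Module(m) => format!("module: {m}"),
    }
}

fn main() {
    let stmt = Stmt("pass".into());
    let module = Module("main".into());
    assert_eq!(describe(&stmt), "stmt: pass");
    assert_eq!(describe(&module), "module: main");
}
```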


@@ -222,6 +222,17 @@ where
visitor.leave_node(node);
}
pub fn walk_node<'a, V>(visitor: &mut V, node: AnyNodeRef<'a>)
where
V: SourceOrderVisitor<'a> + ?Sized,
{
if visitor.enter_node(node).is_traverse() {
node.visit_source_order(visitor);
}
visitor.leave_node(node);
}
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
pub enum TraversalSignal {
Traverse,
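`walk_node` packages the enter/traverse/leave protocol so a caller cannot forget the `leave_node` call, which is what `covering_node` now relies on to include the root in its ancestor stack. A minimal sketch of the same shape (toy visitor trait, not the real one):

```rust
// Toy version of the enter -> recurse -> leave protocol that
// `walk_node` enforces for `SourceOrderVisitor`.
enum Signal {
    Traverse,
    Skip,
}

trait Visitor {
    fn enter(&mut self, name: &str) -> Signal;
    fn leave(&mut self, name: &str);
}

struct Tree {
    name: &'static str,
    children: Vec<Tree>,
}

fn walk(visitor: &mut impl Visitor, node: &Tree) {
    // Even when the visitor skips the subtree, `leave` still runs,
    // keeping enter/leave calls balanced.
    if matches!(visitor.enter(node.name), Signal::Traverse) {
        for child in &node.children {
            walk(visitor, child);
        }
    }
    visitor.leave(node.name);
}

struct Trace(Vec<String>);

impl Visitor for Trace {
    fn enter(&mut self, name: &str) -> Signal {
        self.0.push(format!("enter {name}"));
        if name == "skipped" { Signal::Skip } else { Signal::Traverse }
    }
    fn leave(&mut self, name: &str) {
        self.0.push(format!("leave {name}"));
    }
}

fn main() {
    let tree = Tree {
        name: "root",
        children: vec![Tree {
            name: "skipped",
            children: vec![Tree { name: "hidden", children: vec![] }],
        }],
    };
    let mut trace = Trace(Vec::new());
    walk(&mut trace, &tree);
    assert_eq!(
        trace.0,
        ["enter root", "enter skipped", "leave skipped", "leave root"]
    );
}
```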


@@ -592,11 +592,23 @@ impl FormatString {
fn parse_literal(text: &str) -> Result<(FormatPart, &str), FormatParseError> {
let mut cur_text = text;
let mut result_string = String::new();
let mut pending_escape = false;
while !cur_text.is_empty() {
if pending_escape
&& let Some((unicode_string, remaining)) =
FormatString::parse_escaped_unicode_string(cur_text)
{
result_string.push_str(unicode_string);
cur_text = remaining;
pending_escape = false;
continue;
}
match FormatString::parse_literal_single(cur_text) {
Ok((next_char, remaining)) => {
result_string.push(next_char);
cur_text = remaining;
pending_escape = next_char == '\\' && !pending_escape;
}
Err(err) => {
return if result_string.is_empty() {
@@ -678,6 +690,13 @@ impl FormatString {
}
Err(FormatParseError::UnmatchedBracket)
}
fn parse_escaped_unicode_string(text: &str) -> Option<(&str, &str)> {
text.strip_prefix("N{")?.find('}').map(|idx| {
let end_idx = idx + 3; // 2 for the stripped "N{" prefix + 1 for the closing "}"
(&text[..end_idx], &text[end_idx..])
})
}
}
pub trait FromTemplate<'a>: Sized {
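The helper above recognizes the tail of a `\N{...}` named-unicode escape so the brace is not mistaken for a replacement field; `idx` is relative to the text after the stripped `N{` prefix. A std-only copy to illustrate the split (same logic, outside the `FormatString` impl):

```rust
// Same splitting logic as `parse_escaped_unicode_string`, shown on its
// own: `text` starts just after the backslash of a `\N{...}` escape.
fn split_named_escape(text: &str) -> Option<(&str, &str)> {
    text.strip_prefix("N{")?.find('}').map(|idx| {
        // Add 2 for the stripped "N{" prefix and 1 to include the
        // closing brace.
        let end_idx = idx + 3;
        (&text[..end_idx], &text[end_idx..])
    })
}

fn main() {
    assert_eq!(
        split_named_escape("N{snowman} and more"),
        Some(("N{snowman}", " and more"))
    );
    // No closing brace, or not an `N{` prefix: no match.
    assert_eq!(split_named_escape("N{snowman"), None);
    assert_eq!(split_named_escape("x{snowman}"), None);
}
```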
@@ -1020,4 +1039,48 @@ mod tests {
Err(FormatParseError::InvalidCharacterAfterRightBracket)
);
}
#[test]
fn test_format_unicode_escape() {
let expected = Ok(FormatString {
format_parts: vec![FormatPart::Literal("I am a \\N{snowman}".to_owned())],
});
assert_eq!(FormatString::from_str("I am a \\N{snowman}"), expected);
}
#[test]
fn test_format_unicode_escape_with_field() {
let expected = Ok(FormatString {
format_parts: vec![
FormatPart::Literal("I am a \\N{snowman}".to_owned()),
FormatPart::Field {
field_name: "snowman".to_owned(),
conversion_spec: None,
format_spec: String::new(),
},
],
});
assert_eq!(
FormatString::from_str("I am a \\N{snowman}{snowman}"),
expected
);
}
#[test]
fn test_format_multiple_escape_with_field() {
let expected = Ok(FormatString {
format_parts: vec![
FormatPart::Literal("I am a \\\\N".to_owned()),
FormatPart::Field {
field_name: "snowman".to_owned(),
conversion_spec: None,
format_spec: String::new(),
},
],
});
assert_eq!(FormatString::from_str("I am a \\\\N{snowman}"), expected);
}
}


@@ -272,7 +272,9 @@ impl SemanticSyntaxChecker {
fn check_annotation<Ctx: SemanticSyntaxContext>(stmt: &ast::Stmt, ctx: &Ctx) {
match stmt {
Stmt::AnnAssign(ast::StmtAnnAssign { annotation, .. }) => {
Stmt::AnnAssign(ast::StmtAnnAssign {
target, annotation, ..
}) => {
if ctx.python_version() > PythonVersion::PY313 {
// test_ok valid_annotation_py313
// # parse_options: {"target-version": "3.13"}
@@ -297,6 +299,18 @@ impl SemanticSyntaxChecker {
};
visitor.visit_expr(annotation);
}
if let Expr::Name(ast::ExprName { id, .. }) = target.as_ref() {
if let Some(global_stmt) = ctx.global(id.as_str()) {
let global_start = global_stmt.start();
if !ctx.in_module_scope() || target.start() < global_start {
Self::add_error(
ctx,
SemanticSyntaxErrorKind::AnnotatedGlobal(id.to_string()),
target.range(),
);
}
}
}
}
Stmt::FunctionDef(ast::StmtFunctionDef {
type_params,
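The new branch flags an annotated name that is also declared `global`: always an error inside a function scope, and an error at module scope only when the annotation precedes the `global` statement. A std-only distillation of that condition (the function name here is illustrative):

```rust
// Distils the guard above: the scope kind and the two byte offsets
// are the only inputs the check needs.
fn annotated_global_is_error(in_module_scope: bool, target_start: u32, global_start: u32) -> bool {
    !in_module_scope || target_start < global_start
}

fn main() {
    // Inside a function, `global x` + `x: int` is always an error.
    assert!(annotated_global_is_error(false, 20, 10));
    // At module scope, `x: int` before `global x` is an error...
    assert!(annotated_global_is_error(true, 0, 10));
    // ...but `global x` before `x: int` is allowed.
    assert!(!annotated_global_is_error(true, 20, 10));
}
```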


@@ -179,42 +179,45 @@ impl LineIndex {
let line = self.line_index(offset);
let line_start = self.line_start(line, text);
let character_offset =
self.characters_between(TextRange::new(line_start, offset), text, encoding);
SourceLocation {
line,
character_offset: OneIndexed::from_zero_indexed(character_offset),
}
}
fn characters_between(
&self,
range: TextRange,
text: &str,
encoding: PositionEncoding,
) -> usize {
if self.is_ascii() {
return SourceLocation {
line,
character_offset: OneIndexed::from_zero_indexed((offset - line_start).to_usize()),
};
return (range.end() - range.start()).to_usize();
}
match encoding {
PositionEncoding::Utf8 => {
let character_offset = offset - line_start;
SourceLocation {
line,
character_offset: OneIndexed::from_zero_indexed(character_offset.to_usize()),
}
}
PositionEncoding::Utf8 => (range.end() - range.start()).to_usize(),
PositionEncoding::Utf16 => {
let up_to_character = &text[TextRange::new(line_start, offset)];
let character = up_to_character.encode_utf16().count();
SourceLocation {
line,
character_offset: OneIndexed::from_zero_indexed(character),
}
let up_to_character = &text[range];
up_to_character.encode_utf16().count()
}
PositionEncoding::Utf32 => {
let up_to_character = &text[TextRange::new(line_start, offset)];
let character = up_to_character.chars().count();
SourceLocation {
line,
character_offset: OneIndexed::from_zero_indexed(character),
}
let up_to_character = &text[range];
up_to_character.chars().count()
}
}
}
/// Returns the length of the line in characters, respecting the given encoding
pub fn line_len(&self, line: OneIndexed, text: &str, encoding: PositionEncoding) -> usize {
let line_range = self.line_range(line, text);
self.characters_between(line_range, text, encoding)
}
/// Return the number of lines in the source code.
pub fn line_count(&self) -> usize {
self.line_starts().len()
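The refactor folds the three encoding cases into a single `characters_between` helper, which the new `line_len` then reuses. The per-encoding counts are exactly what std exposes; a runnable sketch over a non-ASCII line:

```rust
enum PositionEncoding {
    Utf8,
    Utf16,
    Utf32,
}

// Mirrors `characters_between` for a slice that is already the range
// of interest.
fn characters(text: &str, encoding: PositionEncoding) -> usize {
    match encoding {
        // UTF-8 positions are byte offsets.
        PositionEncoding::Utf8 => text.len(),
        // UTF-16 positions count code units (astral chars take two).
        PositionEncoding::Utf16 => text.encode_utf16().count(),
        // UTF-32 positions count Unicode scalar values.
        PositionEncoding::Utf32 => text.chars().count(),
    }
}

fn main() {
    let line = "naïve 🐍";
    assert_eq!(characters(line, PositionEncoding::Utf8), 11);
    assert_eq!(characters(line, PositionEncoding::Utf16), 8);
    assert_eq!(characters(line, PositionEncoding::Utf32), 7);
}
```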


@@ -1,6 +1,6 @@
[package]
name = "ruff_wasm"
version = "0.14.9"
version = "0.14.10"
publish = false
authors = { workspace = true }
edition = { workspace = true }

crates/ty/docs/cli.md (generated)

@@ -56,6 +56,7 @@ over all configuration files.</p>
</dd><dt id="ty-check--exit-zero"><a href="#ty-check--exit-zero"><code>--exit-zero</code></a></dt><dd><p>Always use exit code 0, even when there are error-level diagnostics</p>
</dd><dt id="ty-check--extra-search-path"><a href="#ty-check--extra-search-path"><code>--extra-search-path</code></a> <i>path</i></dt><dd><p>Additional path to use as a module-resolution source (can be passed multiple times).</p>
<p>This is an advanced option that should usually only be used for first-party or third-party modules that are not installed into your Python environment in a conventional way. Use <code>--python</code> to point ty to your Python environment if it is in an unusual location.</p>
</dd><dt id="ty-check--force-exclude"><a href="#ty-check--force-exclude"><code>--force-exclude</code></a></dt><dd><p>Enforce exclusions, even for paths passed to ty directly on the command-line. Use <code>--no-force-exclude</code> to disable</p>
</dd><dt id="ty-check--help"><a href="#ty-check--help"><code>--help</code></a>, <code>-h</code></dt><dd><p>Print help (see a summary with '-h')</p>
</dd><dt id="ty-check--ignore"><a href="#ty-check--ignore"><code>--ignore</code></a> <i>rule</i></dt><dd><p>Disables the rule. Can be specified multiple times.</p>
</dd><dt id="ty-check--no-progress"><a href="#ty-check--no-progress"><code>--no-progress</code></a></dt><dd><p>Hide all progress outputs.</p>


@@ -200,24 +200,22 @@ Configuration override that applies to specific files based on glob patterns.
An override allows you to apply different rule configurations to specific
files or directories. Multiple overrides can match the same file, with
later overrides take precedence.
later overrides taking precedence. Override rules take precedence over global
rules for matching files.
### Precedence
- Later overrides in the array take precedence over earlier ones
- Override rules take precedence over global rules for matching files
### Examples
For example, to relax enforcement of rules in test files:
```toml
# Relax rules for test files
[[tool.ty.overrides]]
include = ["tests/**", "**/test_*.py"]
[tool.ty.overrides.rules]
possibly-unresolved-reference = "warn"
```
# Ignore generated files but still check important ones
Or, to ignore a rule in generated files but retain enforcement in an important file:
```toml
[[tool.ty.overrides]]
include = ["generated/**"]
exclude = ["generated/important.py"]


@@ -2,6 +2,15 @@
ty defines and respects the following environment variables:
### `TY_CONFIG_FILE`
Path to a `ty.toml` configuration file to use.
When set, ty will use this file for configuration instead of
discovering configuration files automatically.
Equivalent to the `--config-file` command-line argument.
### `TY_LOG`
If set, ty will use this value as the log level for its `--verbose` output.

crates/ty/docs/rules.md (generated)

File diff suppressed because it is too large.


@@ -9,6 +9,7 @@ use ty_combine::Combine;
use ty_project::metadata::options::{EnvironmentOptions, Options, SrcOptions, TerminalOptions};
use ty_project::metadata::value::{RangedValue, RelativeGlobPattern, RelativePathBuf, ValueSource};
use ty_python_semantic::lint;
use ty_static::EnvVars;
// Configures Clap v3-style help menu colors
const STYLES: Styles = Styles::styled()
@@ -121,7 +122,7 @@ pub(crate) struct CheckCommand {
/// The path to a `ty.toml` file to use for configuration.
///
/// While ty configuration can be included in a `pyproject.toml` file, it is not allowed in this context.
#[arg(long, env = "TY_CONFIG_FILE", value_name = "PATH")]
#[arg(long, env = EnvVars::TY_CONFIG_FILE, value_name = "PATH")]
pub(crate) config_file: Option<SystemPathBuf>,
/// The format to use for printing diagnostic messages.
@@ -153,6 +154,17 @@ pub(crate) struct CheckCommand {
#[clap(long, overrides_with("respect_ignore_files"), hide = true)]
no_respect_ignore_files: bool,
/// Enforce exclusions, even for paths passed to ty directly on the command-line.
/// Use `--no-force-exclude` to disable.
#[arg(
long,
overrides_with("no_force_exclude"),
help_heading = "File selection"
)]
force_exclude: bool,
#[clap(long, overrides_with("force_exclude"), hide = true)]
no_force_exclude: bool,
/// Glob patterns for files to exclude from type checking.
///
/// Uses gitignore-style syntax to exclude files and directories from type checking.
@@ -177,6 +189,10 @@ pub(crate) struct CheckCommand {
}
impl CheckCommand {
pub(crate) fn force_exclude(&self) -> bool {
resolve_bool_arg(self.force_exclude, self.no_force_exclude).unwrap_or_default()
}
pub(crate) fn into_options(self) -> Options {
let rules = if self.rules.is_empty() {
None
@@ -434,3 +450,12 @@ impl ConfigsArg {
self.0
}
}
fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
match (yes, no) {
(true, false) => Some(true),
(false, true) => Some(false),
(false, false) => None,
(..) => unreachable!("Clap should make this impossible"),
}
}
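`resolve_bool_arg` is the standard mapping for paired `--flag`/`--no-flag` options: with clap's `overrides_with`, at most one of the pair survives parsing, so `(true, true)` is treated as unreachable. A std-only check of the truth table:

```rust
// Same truth table as `resolve_bool_arg`: `None` means "neither flag
// given", letting the caller fall back to a default.
fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
    match (yes, no) {
        (true, false) => Some(true),
        (false, true) => Some(false),
        (false, false) => None,
        (true, true) => unreachable!("clap's `overrides_with` keeps only one flag"),
    }
}

fn main() {
    assert_eq!(resolve_bool_arg(true, false), Some(true)); // --force-exclude
    assert_eq!(resolve_bool_arg(false, true), Some(false)); // --no-force-exclude
    // Neither flag: default to `false` via `unwrap_or_default`.
    assert!(!resolve_bool_arg(false, false).unwrap_or_default());
}
```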


@@ -118,6 +118,7 @@ fn run_check(args: CheckCommand) -> anyhow::Result<ExitStatus> {
.config_file
.as_ref()
.map(|path| SystemPath::absolute(path, &cwd));
let force_exclude = args.force_exclude();
let mut project_metadata = match &config_file {
Some(config_file) => ProjectMetadata::from_config_file(config_file.clone(), &system)?,
@@ -130,11 +131,13 @@ fn run_check(args: CheckCommand) -> anyhow::Result<ExitStatus> {
project_metadata.apply_overrides(&project_options_overrides);
let mut db = ProjectDatabase::new(project_metadata, system)?;
let project = db.project();
project.set_verbose(&mut db, verbosity >= VerbosityLevel::Verbose);
project.set_force_exclude(&mut db, force_exclude);
db.project()
.set_verbose(&mut db, verbosity >= VerbosityLevel::Verbose);
if !check_paths.is_empty() {
db.project().set_included_paths(&mut db, check_paths);
project.set_included_paths(&mut db, check_paths);
}
let (main_loop, main_loop_cancellation_token) =
@@ -273,9 +276,6 @@ impl MainLoop {
let mut revision = 0u64;
while let Ok(message) = self.receiver.recv() {
if self.watcher.is_some() {
Printer::clear_screen()?;
}
match message {
MainLoopMessage::CheckWorkspace => {
let db = db.clone();
@@ -384,12 +384,15 @@ impl MainLoop {
}
MainLoopMessage::ApplyChanges(changes) => {
Printer::clear_screen()?;
revision += 1;
// Automatically cancels any pending queries and waits for them to complete.
db.apply_changes(changes, Some(&self.project_options_overrides));
if let Some(watcher) = self.watcher.as_mut() {
watcher.update(db);
}
self.sender.send(MainLoopMessage::CheckWorkspace).unwrap();
}
MainLoopMessage::Exit => {


@@ -589,6 +589,128 @@ fn explicit_path_overrides_exclude() -> anyhow::Result<()> {
Ok(())
}
/// Test behavior when explicitly checking a path that matches an exclude pattern and `--force-exclude` is provided
#[test]
fn explicit_path_overrides_exclude_force_exclude() -> anyhow::Result<()> {
let case = CliTest::with_files([
(
"src/main.py",
r#"
print(undefined_var) # error: unresolved-reference
"#,
),
(
"tests/generated.py",
r#"
print(dist_undefined_var) # error: unresolved-reference
"#,
),
(
"dist/other.py",
r#"
print(other_undefined_var) # error: unresolved-reference
"#,
),
(
"ty.toml",
r#"
[src]
exclude = ["tests/generated.py", "dist/"]
"#,
),
])?;
// Explicitly checking an excluded file should still check that file
assert_cmd_snapshot!(case.command().arg("tests/generated.py").arg("src/main.py"), @r"
success: false
exit_code: 1
----- stdout -----
error[unresolved-reference]: Name `undefined_var` used when not defined
--> src/main.py:2:7
|
2 | print(undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
error[unresolved-reference]: Name `dist_undefined_var` used when not defined
--> tests/generated.py:2:7
|
2 | print(dist_undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
Found 2 diagnostics
----- stderr -----
");
// Except when `--force-exclude` is set.
assert_cmd_snapshot!(case.command().arg("tests/generated.py").arg("src/main.py").arg("--force-exclude"), @r"
success: false
exit_code: 1
----- stdout -----
error[unresolved-reference]: Name `undefined_var` used when not defined
--> src/main.py:2:7
|
2 | print(undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
Found 1 diagnostic
----- stderr -----
");
// Explicitly checking the entire excluded directory should check all files in it
assert_cmd_snapshot!(case.command().arg("dist/").arg("src/main.py"), @r"
success: false
exit_code: 1
----- stdout -----
error[unresolved-reference]: Name `other_undefined_var` used when not defined
--> dist/other.py:2:7
|
2 | print(other_undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
error[unresolved-reference]: Name `undefined_var` used when not defined
--> src/main.py:2:7
|
2 | print(undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
Found 2 diagnostics
----- stderr -----
");
// Except when using `--force-exclude`
assert_cmd_snapshot!(case.command().arg("dist/").arg("src/main.py").arg("--force-exclude"), @r"
success: false
exit_code: 1
----- stdout -----
error[unresolved-reference]: Name `undefined_var` used when not defined
--> src/main.py:2:7
|
2 | print(undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
Found 1 diagnostic
----- stderr -----
");
Ok(())
}
#[test]
fn cli_and_configuration_exclude() -> anyhow::Result<()> {
let case = CliTest::with_files([


@@ -2703,3 +2703,51 @@ fn pythonpath_multiple_dirs_is_respected() -> anyhow::Result<()> {
Ok(())
}
/// Test behavior when `VIRTUAL_ENV` is set but points to a non-existent path.
#[test]
fn missing_virtual_env() -> anyhow::Result<()> {
let working_venv_package1_path = if cfg!(windows) {
"project/.venv/Lib/site-packages/package1/__init__.py"
} else {
"project/.venv/lib/python3.13/site-packages/package1/__init__.py"
};
let case = CliTest::with_files([
(
"project/test.py",
r#"
from package1 import WorkingVenv
"#,
),
(
"project/.venv/pyvenv.cfg",
r#"
home = ./
"#,
),
(
working_venv_package1_path,
r#"
class WorkingVenv: ...
"#,
),
])?;
assert_cmd_snapshot!(case.command()
.current_dir(case.root().join("project"))
.env("VIRTUAL_ENV", case.root().join("nonexistent-venv")), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ty failed
Cause: Failed to discover local Python environment
Cause: Invalid `VIRTUAL_ENV` environment variable `<temp_dir>/nonexistent-venv`: does not point to a directory on disk
Cause: No such file or directory (os error 2)
");
Ok(())
}


@@ -15,7 +15,7 @@ import-deprioritizes-type_check_only,main.py,3,2
import-deprioritizes-type_check_only,main.py,4,3
import-keyword-completion,main.py,0,1
internal-typeshed-hidden,main.py,0,2
none-completion,main.py,0,2
none-completion,main.py,0,1
numpy-array,main.py,0,159
numpy-array,main.py,1,1
object-attr-instance-methods,main.py,0,1
@@ -28,4 +28,4 @@ scope-simple-long-identifier,main.py,0,1
tstring-completions,main.py,0,1
ty-extensions-lower-stdlib,main.py,0,9
type-var-typing-over-ast,main.py,0,3
type-var-typing-over-ast,main.py,1,251
type-var-typing-over-ast,main.py,1,253


@@ -1,7 +1,8 @@
use crate::{completion, find_node::covering_node};
use crate::completion;
use ruff_db::{files::File, parsed::parsed_module};
use ruff_diagnostics::Edit;
use ruff_python_ast::find_node::covering_node;
use ruff_text_size::TextRange;
use ty_project::Db;
use ty_python_semantic::create_suppression_fix;

File diff suppressed because it is too large.


@@ -5,9 +5,9 @@ pub use crate::goto_type_definition::goto_type_definition;
use std::borrow::Cow;
use crate::find_node::covering_node;
use crate::stub_mapping::StubMapper;
use ruff_db::parsed::ParsedModuleRef;
use ruff_python_ast::find_node::{CoveringNode, covering_node};
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{self as ast, AnyNodeRef};
use ruff_text_size::{Ranged, TextRange, TextSize};
@@ -665,7 +665,7 @@ impl GotoTarget<'_> {
/// Creates a `GotoTarget` from a `CoveringNode` and an offset within the node
pub(crate) fn from_covering_node<'a>(
model: &SemanticModel,
covering_node: &crate::find_node::CoveringNode<'a>,
covering_node: &CoveringNode<'a>,
offset: TextSize,
tokens: &Tokens,
) -> Option<GotoTarget<'a>> {


@@ -386,6 +386,29 @@ FOO = 0
");
}
#[test]
fn goto_declaration_from_import_rhs_is_module() {
let test = CursorTest::builder()
.source("lib/__init__.py", r#""#)
.source("lib/module.py", r#""#)
.source("main.py", r#"from lib import module<CURSOR>"#)
.build();
// Should resolve to the imported module itself, not the import statement
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Go to declaration
--> main.py:1:17
|
1 | from lib import module
| ^^^^^^ Clicking here
|
info: Found 1 declaration
--> lib/module.py:1:1
|
|
");
}
#[test]
fn goto_declaration_from_import_as() {
let test = CursorTest::builder()


@@ -1339,13 +1339,13 @@ f(**kwargs<CURSOR>)
| ^^^^^^ Clicking here
|
info: Found 1 type definition
--> stdlib/builtins.pyi:2920:7
--> stdlib/builtins.pyi:2947:7
|
2919 | @disjoint_base
2920 | class dict(MutableMapping[_KT, _VT]):
2946 | @disjoint_base
2947 | class dict(MutableMapping[_KT, _VT]):
| ----
2921 | """dict() -> new empty dictionary
2922 | dict(mapping) -> new dictionary initialized from a mapping object's
2948 | """dict() -> new empty dictionary
2949 | dict(mapping) -> new dictionary initialized from a mapping object's
|
"#);
}

File diff suppressed because it is too large.


@@ -745,8 +745,17 @@ impl ImportResponseKind<'_> {
fn priority(&self) -> usize {
match *self {
ImportResponseKind::Unqualified { .. } => 0,
ImportResponseKind::Qualified { .. } => 1,
ImportResponseKind::Partial(_) => 2,
ImportResponseKind::Partial(_) => 1,
// N.B. When given the choice between adding a
// name to an existing `from ... import ...`
// statement and using an existing `import ...`
// in a qualified manner, we currently choose
// the former. Originally we preferred qualification,
// but there is some evidence that this violates
// expectations.
//
// Ref: https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
ImportResponseKind::Qualified { .. } => 2,
}
}
}
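With this reordering, a lower `priority()` value wins: `Unqualified` beats `Partial`, which now beats `Qualified`. Selecting among candidate import responses is then a `min_by_key`; a toy sketch of that selection (enum payloads elided):

```rust
// Elided-payload version of `ImportResponseKind`, keeping only the
// ordering that `priority()` encodes after this change.
#[derive(Debug, PartialEq)]
enum ResponseKind {
    Unqualified,
    Partial,
    Qualified,
}

fn priority(kind: &ResponseKind) -> usize {
    match kind {
        ResponseKind::Unqualified => 0,
        ResponseKind::Partial => 1,
        ResponseKind::Qualified => 2,
    }
}

fn main() {
    let candidates = [ResponseKind::Qualified, ResponseKind::Partial];
    let best = candidates.iter().min_by_key(|kind| priority(kind)).unwrap();
    // Extending `from ... import ...` is now preferred over qualifying
    // with an existing `import ...`.
    assert_eq!(*best, ResponseKind::Partial);
}
```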
@@ -860,7 +869,6 @@ mod tests {
use insta::assert_snapshot;
use insta::internals::SettingsBindDropGuard;
use crate::find_node::covering_node;
use crate::tests::{CursorTest, CursorTestBuilder, cursor_test};
use ruff_db::diagnostic::{Diagnostic, DiagnosticFormat, DisplayDiagnosticConfig};
use ruff_db::files::{File, FileRootKind, system_path_to_file};
@@ -868,6 +876,7 @@ mod tests {
use ruff_db::source::source_text;
use ruff_db::system::{DbWithWritableSystem, SystemPath, SystemPathBuf};
use ruff_db::{Db, system};
use ruff_python_ast::find_node::covering_node;
use ruff_python_codegen::Stylist;
use ruff_python_trivia::textwrap::dedent;
use ruff_text_size::TextSize;
@@ -1332,9 +1341,9 @@ import collections
);
assert_snapshot!(
test.import("collections", "defaultdict"), @r"
from collections import OrderedDict
from collections import OrderedDict, defaultdict
import collections
collections.defaultdict
defaultdict
");
}


@@ -1241,12 +1241,12 @@ mod tests {
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2695:7
--> stdlib/builtins.pyi:2722:7
|
2694 | @disjoint_base
2695 | class tuple(Sequence[_T_co]):
2721 | @disjoint_base
2722 | class tuple(Sequence[_T_co]):
| ^^^^^
2696 | """Built-in immutable sequence.
2723 | """Built-in immutable sequence.
|
info: Source
--> main2.py:8:5
@@ -1335,12 +1335,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2695:7
--> stdlib/builtins.pyi:2722:7
|
2694 | @disjoint_base
2695 | class tuple(Sequence[_T_co]):
2721 | @disjoint_base
2722 | class tuple(Sequence[_T_co]):
| ^^^^^
2696 | """Built-in immutable sequence.
2723 | """Built-in immutable sequence.
|
info: Source
--> main2.py:9:5
@@ -1391,12 +1391,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2695:7
--> stdlib/builtins.pyi:2722:7
|
2694 | @disjoint_base
2695 | class tuple(Sequence[_T_co]):
2721 | @disjoint_base
2722 | class tuple(Sequence[_T_co]):
| ^^^^^
2696 | """Built-in immutable sequence.
2723 | """Built-in immutable sequence.
|
info: Source
--> main2.py:10:5
@@ -2217,12 +2217,12 @@ mod tests {
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:2:5
@@ -2270,12 +2270,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:3:5
@@ -2325,12 +2325,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:4:5
@@ -2364,13 +2364,13 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2591:7
--> stdlib/builtins.pyi:2618:7
|
2590 | @final
2591 | class bool(int):
2617 | @final
2618 | class bool(int):
| ^^^^
2592 | """Returns True when the argument is true, False otherwise.
2593 | The builtins True and False are the only two instances of the class bool.
2619 | """Returns True when the argument is true, False otherwise.
2620 | The builtins True and False are the only two instances of the class bool.
|
info: Source
--> main2.py:4:20
@@ -2384,12 +2384,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:5:5
@@ -2443,12 +2443,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:6:5
@@ -2502,12 +2502,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:7:5
@@ -2561,12 +2561,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:8:5
@@ -2620,12 +2620,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:9:5
@@ -2678,12 +2678,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:10:5
@@ -2737,12 +2737,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:11:5
@@ -2811,12 +2811,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:12:5
@@ -2925,12 +2925,12 @@ mod tests {
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2695:7
--> stdlib/builtins.pyi:2722:7
|
2694 | @disjoint_base
2695 | class tuple(Sequence[_T_co]):
2721 | @disjoint_base
2722 | class tuple(Sequence[_T_co]):
| ^^^^^
2696 | """Built-in immutable sequence.
2723 | """Built-in immutable sequence.
|
info: Source
--> main2.py:7:5
@@ -3092,12 +3092,12 @@ mod tests {
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:4:18
@@ -3110,12 +3110,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2695:7
--> stdlib/builtins.pyi:2722:7
|
2694 | @disjoint_base
2695 | class tuple(Sequence[_T_co]):
2721 | @disjoint_base
2722 | class tuple(Sequence[_T_co]):
| ^^^^^
2696 | """Built-in immutable sequence.
2723 | """Built-in immutable sequence.
|
info: Source
--> main2.py:5:18
@@ -3248,12 +3248,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2695:7
--> stdlib/builtins.pyi:2722:7
|
2694 | @disjoint_base
2695 | class tuple(Sequence[_T_co]):
2721 | @disjoint_base
2722 | class tuple(Sequence[_T_co]):
| ^^^^^
2696 | """Built-in immutable sequence.
2723 | """Built-in immutable sequence.
|
info: Source
--> main2.py:8:5
@@ -4250,12 +4250,12 @@ mod tests {
foo([x=]y[0])
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:3:5
@@ -4303,12 +4303,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:4:5
@@ -5609,12 +5609,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:3:14
@@ -6046,13 +6046,13 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2591:7
--> stdlib/builtins.pyi:2618:7
|
2590 | @final
2591 | class bool(int):
2617 | @final
2618 | class bool(int):
| ^^^^
2592 | """Returns True when the argument is true, False otherwise.
2593 | The builtins True and False are the only two instances of the class bool.
2619 | """Returns True when the argument is true, False otherwise.
2620 | The builtins True and False are the only two instances of the class bool.
|
info: Source
--> main2.py:4:25
@@ -6100,12 +6100,12 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/builtins.pyi:2802:7
--> stdlib/builtins.pyi:2829:7
|
2801 | @disjoint_base
2802 | class list(MutableSequence[_T]):
2828 | @disjoint_base
2829 | class list(MutableSequence[_T]):
| ^^^^
2803 | """Built-in mutable sequence.
2830 | """Built-in mutable sequence.
|
info: Source
--> main2.py:4:49
@@ -6216,7 +6216,7 @@ mod tests {
assert_snapshot!(test.inlay_hints(), @r#"
from typing import Literal
a[: <special form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
a[: <special-form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/typing.pyi:351:1
@@ -6232,7 +6232,7 @@ mod tests {
|
2 | from typing import Literal
3 |
4 | a[: <special form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
4 | a[: <special-form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
| ^^^^^^^
|
@@ -6250,7 +6250,7 @@ mod tests {
|
2 | from typing import Literal
3 |
4 | a[: <special form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
4 | a[: <special-form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
| ^^^
|
@@ -6268,7 +6268,7 @@ mod tests {
|
2 | from typing import Literal
3 |
4 | a[: <special form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
4 | a[: <special-form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
| ^^^
|
@@ -6286,7 +6286,7 @@ mod tests {
|
2 | from typing import Literal
3 |
4 | a[: <special form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
4 | a[: <special-form 'Literal["a", "b", "c"]'>] = Literal['a', 'b', 'c']
| ^^^
|
"#);
@@ -6636,7 +6636,7 @@ mod tests {
assert_snapshot!(test.inlay_hints(), @r"
from typing import Protocol, TypeVar
T = TypeVar([name=]'T')
Strange[: <special form 'typing.Protocol[T]'>] = Protocol[T]
Strange[: <special-form 'typing.Protocol[T]'>] = Protocol[T]
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/typing.pyi:276:13
@@ -6654,7 +6654,7 @@ mod tests {
2 | from typing import Protocol, TypeVar
3 | T = TypeVar([name=]'T')
| ^^^^
4 | Strange[: <special form 'typing.Protocol[T]'>] = Protocol[T]
4 | Strange[: <special-form 'typing.Protocol[T]'>] = Protocol[T]
|
info[inlay-hint-location]: Inlay Hint Target
@@ -6671,7 +6671,7 @@ mod tests {
|
2 | from typing import Protocol, TypeVar
3 | T = TypeVar([name=]'T')
4 | Strange[: <special form 'typing.Protocol[T]'>] = Protocol[T]
4 | Strange[: <special-form 'typing.Protocol[T]'>] = Protocol[T]
| ^^^^^^^^^^^^^^^
|
@@ -6688,7 +6688,7 @@ mod tests {
|
2 | from typing import Protocol, TypeVar
3 | T = TypeVar([name=]'T')
4 | Strange[: <special form 'typing.Protocol[T]'>] = Protocol[T]
4 | Strange[: <special-form 'typing.Protocol[T]'>] = Protocol[T]
| ^
|
");
@@ -6739,14 +6739,14 @@ mod tests {
A = TypeAliasType([name=]'A', [value=]str)
---------------------------------------------
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/typing.pyi:2032:26
--> stdlib/typing.pyi:2037:26
|
2030 | """
2031 |
2032 | def __new__(cls, name: str, value: Any, *, type_params: tuple[_TypeParameter, ...] = ()) -> Self: ...
2035 | """
2036 |
2037 | def __new__(cls, name: str, value: Any, *, type_params: tuple[_TypeParameter, ...] = ()) -> Self: ...
| ^^^^
2033 | @property
2034 | def __value__(self) -> Any: ... # AnnotationForm
2038 | @property
2039 | def __value__(self) -> Any: ... # AnnotationForm
|
info: Source
--> main2.py:3:20
@@ -6757,14 +6757,14 @@ mod tests {
|
info[inlay-hint-location]: Inlay Hint Target
--> stdlib/typing.pyi:2032:37
--> stdlib/typing.pyi:2037:37
|
2030 | """
2031 |
2032 | def __new__(cls, name: str, value: Any, *, type_params: tuple[_TypeParameter, ...] = ()) -> Self: ...
2035 | """
2036 |
2037 | def __new__(cls, name: str, value: Any, *, type_params: tuple[_TypeParameter, ...] = ()) -> Self: ...
| ^^^^^
2033 | @property
2034 | def __value__(self) -> Any: ... # AnnotationForm
2038 | @property
2039 | def __value__(self) -> Any: ... # AnnotationForm
|
info: Source
--> main2.py:3:32


@@ -8,7 +8,6 @@ mod completion;
mod doc_highlights;
mod docstring;
mod document_symbols;
mod find_node;
mod find_references;
mod goto;
mod goto_declaration;


@@ -10,10 +10,10 @@
//! all references to these externally-visible symbols therefore requires
//! an expensive search of all source files in the workspace.
use crate::find_node::CoveringNode;
use crate::goto::GotoTarget;
use crate::{Db, NavigationTargets, ReferenceKind, ReferenceTarget};
use ruff_db::files::File;
use ruff_python_ast::find_node::CoveringNode;
use ruff_python_ast::token::Tokens;
use ruff_python_ast::{
self as ast, AnyNodeRef,
@@ -334,10 +334,7 @@ impl LocalReferencesFinder<'_> {
/// Determines whether the given covering node is a reference to
/// the symbol we are searching for
fn check_reference_from_covering_node(
&mut self,
covering_node: &crate::find_node::CoveringNode<'_>,
) {
fn check_reference_from_covering_node(&mut self, covering_node: &CoveringNode<'_>) {
// Use the start of the covering node as the offset. Any offset within
// the node is fine here. Offsets matter only for import statements
// where the identifier might be a multi-part module name.


@@ -1,9 +1,9 @@
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_python_ast::find_node::covering_node;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::Db;
use crate::find_node::covering_node;
/// Returns a list of nested selection ranges, where each range contains the next one.
/// The first range in the list is the largest range containing the cursor position.
@@ -66,20 +66,28 @@ x = 1 + <CURSOR>2
assert_snapshot!(test.selection_range(), @r"
info[selection-range]: Selection Range 0
--> main.py:1:1
|
1 | /
2 | | x = 1 + 2
| |__________^
|
info[selection-range]: Selection Range 1
--> main.py:2:1
|
2 | x = 1 + 2
| ^^^^^^^^^
|
info[selection-range]: Selection Range 1
info[selection-range]: Selection Range 2
--> main.py:2:5
|
2 | x = 1 + 2
| ^^^^^
|
info[selection-range]: Selection Range 2
info[selection-range]: Selection Range 3
--> main.py:2:9
|
2 | x = 1 + 2
@@ -102,20 +110,28 @@ print(\"he<CURSOR>llo\")
assert_snapshot!(test.selection_range(), @r#"
info[selection-range]: Selection Range 0
--> main.py:1:1
|
1 | /
2 | | print("hello")
| |_______________^
|
info[selection-range]: Selection Range 1
--> main.py:2:1
|
2 | print("hello")
| ^^^^^^^^^^^^^^
|
info[selection-range]: Selection Range 1
info[selection-range]: Selection Range 2
--> main.py:2:6
|
2 | print("hello")
| ^^^^^^^^^
|
info[selection-range]: Selection Range 2
info[selection-range]: Selection Range 3
--> main.py:2:7
|
2 | print("hello")
@@ -139,6 +155,15 @@ def my_<CURSOR>function():
assert_snapshot!(test.selection_range(), @r"
info[selection-range]: Selection Range 0
--> main.py:1:1
|
1 | /
2 | | def my_function():
3 | | return 42
| |______________^
|
info[selection-range]: Selection Range 1
--> main.py:2:1
|
2 | / def my_function():
@@ -146,7 +171,7 @@ def my_<CURSOR>function():
| |_____________^
|
info[selection-range]: Selection Range 1
info[selection-range]: Selection Range 2
--> main.py:2:5
|
2 | def my_function():
@@ -172,6 +197,16 @@ class My<CURSOR>Class:
assert_snapshot!(test.selection_range(), @r"
info[selection-range]: Selection Range 0
--> main.py:1:1
|
1 | /
2 | | class MyClass:
3 | | def __init__(self):
4 | | self.value = 1
| |_______________________^
|
info[selection-range]: Selection Range 1
--> main.py:2:1
|
2 | / class MyClass:
@@ -180,7 +215,7 @@ class My<CURSOR>Class:
| |______________________^
|
info[selection-range]: Selection Range 1
info[selection-range]: Selection Range 2
--> main.py:2:7
|
2 | class MyClass:
@@ -205,48 +240,56 @@ result = [(lambda x: x[key.<CURSOR>attr])(item) for item in data if item is not
assert_snapshot!(test.selection_range(), @r"
info[selection-range]: Selection Range 0
--> main.py:1:1
|
1 | /
2 | | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]
| |______________________________________________________________________________^
|
info[selection-range]: Selection Range 1
--> main.py:2:1
|
2 | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
info[selection-range]: Selection Range 1
info[selection-range]: Selection Range 2
--> main.py:2:10
|
2 | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
info[selection-range]: Selection Range 2
info[selection-range]: Selection Range 3
--> main.py:2:11
|
2 | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
info[selection-range]: Selection Range 3
info[selection-range]: Selection Range 4
--> main.py:2:12
|
2 | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]
| ^^^^^^^^^^^^^^^^^^^^^
|
info[selection-range]: Selection Range 4
info[selection-range]: Selection Range 5
--> main.py:2:22
|
2 | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]
| ^^^^^^^^^^^
|
info[selection-range]: Selection Range 5
info[selection-range]: Selection Range 6
--> main.py:2:24
|
2 | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]
| ^^^^^^^^
|
info[selection-range]: Selection Range 6
info[selection-range]: Selection Range 7
--> main.py:2:28
|
2 | result = [(lambda x: x[key.attr])(item) for item in data if item is not None]


@@ -37,13 +37,12 @@ use bitflags::bitflags;
use itertools::Itertools;
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_python_ast as ast;
use ruff_python_ast::visitor::source_order::{
SourceOrderVisitor, TraversalSignal, walk_expr, walk_stmt,
SourceOrderVisitor, TraversalSignal, walk_arguments, walk_expr, walk_stmt,
};
use ruff_python_ast::{
AnyNodeRef, BytesLiteral, Expr, FString, InterpolatedStringElement, Stmt, StringLiteral,
TypeParam,
self as ast, AnyNodeRef, BytesLiteral, Expr, FString, InterpolatedStringElement, Stmt,
StringLiteral, TypeParam,
};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use std::ops::Deref;
@@ -132,6 +131,7 @@ bitflags! {
const DEFINITION = 1 << 0;
const READONLY = 1 << 1;
const ASYNC = 1 << 2;
const DOCUMENTATION = 1 << 3;
}
}
@@ -144,7 +144,7 @@ impl SemanticTokenModifier {
/// highlighting. For details, refer to this LSP specification:
/// <https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#semanticTokenModifiers>
pub fn all_names() -> Vec<&'static str> {
vec!["definition", "readonly", "async"]
vec!["definition", "readonly", "async", "documentation"]
}
}
@@ -190,18 +190,22 @@ pub fn semantic_tokens(db: &dyn Db, file: File, range: Option<TextRange>) -> Sem
let model = SemanticModel::new(db, file);
let mut visitor = SemanticTokenVisitor::new(&model, range);
visitor.expecting_docstring = true;
visitor.visit_body(parsed.suite());
SemanticTokens::new(visitor.tokens)
}
/// AST visitor that collects semantic tokens.
#[expect(clippy::struct_excessive_bools)]
struct SemanticTokenVisitor<'db> {
model: &'db SemanticModel<'db>,
tokens: Vec<SemanticToken>,
in_class_scope: bool,
in_type_annotation: bool,
in_target_creating_definition: bool,
in_docstring: bool,
expecting_docstring: bool,
range_filter: Option<TextRange>,
}
@@ -213,7 +217,9 @@ impl<'db> SemanticTokenVisitor<'db> {
in_class_scope: false,
in_target_creating_definition: false,
in_type_annotation: false,
in_docstring: false,
range_filter,
expecting_docstring: false,
}
}
@@ -254,7 +260,9 @@ impl<'db> SemanticTokenVisitor<'db> {
}
fn is_constant_name(name: &str) -> bool {
name.chars().all(|c| c.is_uppercase() || c == '_') && name.len() > 1
name.chars()
.all(|c| c.is_uppercase() || c == '_' || c.is_numeric())
&& name.len() > 1
}
fn classify_name(&self, name: &ast::ExprName) -> (SemanticTokenType, SemanticTokenModifier) {
@@ -600,6 +608,8 @@ impl SourceOrderVisitor<'_> for SemanticTokenVisitor<'_> {
}
fn visit_stmt(&mut self, stmt: &Stmt) {
let expecting_docstring = self.expecting_docstring;
self.expecting_docstring = false;
match stmt {
ast::Stmt::FunctionDef(func) => {
// Visit decorator expressions
@@ -641,7 +651,9 @@ impl SourceOrderVisitor<'_> for SemanticTokenVisitor<'_> {
let prev_in_class = self.in_class_scope;
self.in_class_scope = false;
self.expecting_docstring = true;
self.visit_body(&func.body);
self.expecting_docstring = false;
self.in_class_scope = prev_in_class;
}
ast::Stmt::ClassDef(class) => {
@@ -666,19 +678,14 @@ impl SourceOrderVisitor<'_> for SemanticTokenVisitor<'_> {
// Handle base classes and type annotations in inheritance
if let Some(arguments) = &class.arguments {
// Visit base class arguments
for arg in &arguments.args {
self.visit_expr(arg);
}
// Visit keyword arguments (for metaclass, etc.)
for keyword in &arguments.keywords {
self.visit_expr(&keyword.value);
}
walk_arguments(self, arguments);
}
let prev_in_class = self.in_class_scope;
self.in_class_scope = true;
self.expecting_docstring = true;
self.visit_body(&class.body);
self.expecting_docstring = false;
self.in_class_scope = prev_in_class;
}
ast::Stmt::TypeAlias(type_alias) => {
@@ -760,6 +767,7 @@ impl SourceOrderVisitor<'_> for SemanticTokenVisitor<'_> {
self.in_target_creating_definition = false;
self.visit_expr(&assignment.value);
self.expecting_docstring = true;
}
ast::Stmt::AnnAssign(assignment) => {
self.in_target_creating_definition = true;
@@ -771,6 +779,7 @@ impl SourceOrderVisitor<'_> for SemanticTokenVisitor<'_> {
if let Some(value) = &assignment.value {
self.visit_expr(value);
}
self.expecting_docstring = true;
}
ast::Stmt::For(for_stmt) => {
self.in_target_creating_definition = true;
@@ -815,7 +824,13 @@ impl SourceOrderVisitor<'_> for SemanticTokenVisitor<'_> {
self.visit_body(&try_stmt.orelse);
self.visit_body(&try_stmt.finalbody);
}
ast::Stmt::Expr(expr) => {
if expecting_docstring && expr.value.is_string_literal_expr() {
self.in_docstring = true;
}
walk_stmt(self, stmt);
self.in_docstring = false;
}
_ => {
// For all other statement types, let the default visitor handle them
walk_stmt(self, stmt);
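The docstring tracking is a one-shot flag: every statement visit saves `expecting_docstring` and clears it immediately, so only the statement directly following a scope header (or an assignment) can claim docstring status. A toy sketch of that save-and-clear pattern:

```rust
// One-shot flag pattern used by the visitor: the flag is consumed by
// the very next statement, whatever it is.
struct Visitor {
    expecting_docstring: bool,
    docstrings: Vec<String>,
}

enum Stmt {
    StringExpr(String),
    Other,
}

impl Visitor {
    fn visit_stmt(&mut self, stmt: &Stmt) {
        // Save and clear first, so nested visits cannot inherit it.
        let expecting = self.expecting_docstring;
        self.expecting_docstring = false;
        match stmt {
            Stmt::StringExpr(text) if expecting => self.docstrings.push(text.clone()),
            _ => {}
        }
    }

    fn visit_body(&mut self, body: &[Stmt]) {
        // A new scope expects its first statement to be a docstring.
        self.expecting_docstring = true;
        for stmt in body {
            self.visit_stmt(stmt);
        }
    }
}

fn main() {
    let body = [
        Stmt::StringExpr("module docstring".into()),
        Stmt::StringExpr("just a string".into()),
    ];
    let mut visitor = Visitor { expecting_docstring: false, docstrings: Vec::new() };
    visitor.visit_body(&body);
    assert_eq!(visitor.docstrings, ["module docstring"]);
}
```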
@@ -909,11 +924,12 @@ impl SourceOrderVisitor<'_> for SemanticTokenVisitor<'_> {
fn visit_string_literal(&mut self, string_literal: &StringLiteral) {
// Emit a semantic token for this string literal part
self.add_token(
string_literal.range(),
SemanticTokenType::String,
SemanticTokenModifier::empty(),
);
let modifiers = if self.in_docstring {
SemanticTokenModifier::DOCUMENTATION
} else {
SemanticTokenModifier::empty()
};
self.add_token(string_literal.range(), SemanticTokenType::String, modifiers);
}
fn visit_bytes_literal(&mut self, bytes_literal: &BytesLiteral) {
@@ -1136,6 +1152,22 @@ mod tests {
"###);
}
#[test]
fn test_semantic_tokens_class_args() {
// This used to cause a panic because of an incorrect
// insertion-order when visiting arguments inside
// class definitions.
let test = SemanticTokenTest::new("class Foo(m=x, m)");
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r###"
"Foo" @ 6..9: Class [definition]
"x" @ 12..13: Variable
"m" @ 15..16: Variable
"###);
}
#[test]
fn test_semantic_tokens_variables() {
let test = SemanticTokenTest::new(
@@ -1842,6 +1874,456 @@ y: Optional[str] = None
"#);
}
#[test]
fn function_docstring_classification() {
let test = SemanticTokenTest::new(
r#"
def my_function(param1: int, param2: str) -> bool:
"""Example function
Args:
param1: The first parameter.
param2: The second parameter.
Returns:
The return value. True for success, False otherwise.
"""
x = "hello"
def other_func(): pass
"""unrelated string"""
return False
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"my_function" @ 5..16: Function [definition]
"param1" @ 17..23: Parameter [definition]
"int" @ 25..28: Class
"param2" @ 30..36: Parameter [definition]
"str" @ 38..41: Class
"bool" @ 46..50: Class
"\"\"\"Example function\n\n Args:\n param1: The first parameter.\n param2: The second parameter.\n\n Returns:\n The return value. True for success, False otherwise.\n\n \"\"\"" @ 56..245: String [documentation]
"x" @ 251..252: Variable [definition]
"\"hello\"" @ 255..262: String
"other_func" @ 271..281: Function [definition]
"\"\"\"unrelated string\"\"\"" @ 295..317: String
"False" @ 330..335: BuiltinConstant
"#);
}
#[test]
fn class_docstring_classification() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
"""Example class
What a good class wowwee
"""
def __init__(self): pass
"""unrelated string"""
x: str = "hello"
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"\"\"Example class\n\n What a good class wowwee\n \"\"\"" @ 20..74: String [documentation]
"__init__" @ 84..92: Method [definition]
"self" @ 93..97: SelfParameter [definition]
"\"\"\"unrelated string\"\"\"" @ 110..132: String
"x" @ 138..139: Variable [definition]
"str" @ 141..144: Class
"\"hello\"" @ 147..154: String
"#);
}
#[test]
fn module_docstring_classification() {
let test = SemanticTokenTest::new(
r#"
"""Example module
What a good module wooo
"""
def my_func(): pass
"""unrelated string"""
x: str = "hello"
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"\"\"\"Example module\n\nWhat a good module wooo\n\"\"\"" @ 1..47: String [documentation]
"my_func" @ 53..60: Function [definition]
"\"\"\"unrelated string\"\"\"" @ 70..92: String
"x" @ 94..95: Variable [definition]
"str" @ 97..100: Class
"\"hello\"" @ 103..110: String
"#);
}
#[test]
fn attribute_docstring_classification() {
let test = SemanticTokenTest::new(
r#"
important_value: str = "wow"
"""This is the most important value
Don't trust the other guy
"""
x = "unrelated string"
other_value: int = 2
"""This is such an import value omg
Trust me
"""
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"important_value" @ 1..16: Variable [definition]
"str" @ 18..21: Class
"\"wow\"" @ 24..29: String
"\"\"\"This is the most important value\n\nDon't trust the other guy\n\"\"\"" @ 30..96: String [documentation]
"x" @ 98..99: Variable [definition]
"\"unrelated string\"" @ 102..120: String
"other_value" @ 122..133: Variable [definition]
"int" @ 135..138: Class
"2" @ 141..142: Number
"\"\"\"This is such an import value omg\n\nTrust me\n\"\"\"" @ 143..192: String [documentation]
"#);
}
#[test]
fn attribute_docstring_classification_spill() {
let test = SemanticTokenTest::new(
r#"
if True:
x = 1
"this shouldn't be a docstring but also it doesn't matter much"
"""
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"True" @ 4..8: BuiltinConstant
"x" @ 14..15: Variable [definition]
"1" @ 18..19: Number
"\"this shouldn't be a docstring but also it doesn't matter much\"" @ 20..83: String [documentation]
"\"\"\"\n" @ 84..88: String
"#);
}
#[test]
fn docstring_classification_concat() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
"""wow cool docs""" """and docs"""
def my_func():
"""wow cool docs""" """and docs"""
x = 1
"""wow cool docs""" """and docs"""
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"\"\"wow cool docs\"\"\"" @ 20..39: String [documentation]
"\"\"\"and docs\"\"\"" @ 40..54: String [documentation]
"my_func" @ 60..67: Function [definition]
"\"\"\"wow cool docs\"\"\"" @ 75..94: String [documentation]
"\"\"\"and docs\"\"\"" @ 95..109: String [documentation]
"x" @ 111..112: Variable [definition]
"1" @ 115..116: Number
"\"\"\"wow cool docs\"\"\"" @ 117..136: String [documentation]
"\"\"\"and docs\"\"\"" @ 137..151: String [documentation]
"#);
}
#[test]
fn docstring_classification_concat_parens() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
(
"""wow cool docs"""
"""and docs"""
)
def my_func():
(
"""wow cool docs"""
"""and docs"""
)
x = 1
(
"""wow cool docs"""
"""and docs"""
)
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"\"\"wow cool docs\"\"\"" @ 30..49: String [documentation]
"\"\"\"and docs\"\"\"" @ 58..72: String [documentation]
"my_func" @ 84..91: Function [definition]
"\"\"\"wow cool docs\"\"\"" @ 109..128: String [documentation]
"\"\"\"and docs\"\"\"" @ 137..151: String [documentation]
"x" @ 159..160: Variable [definition]
"1" @ 163..164: Number
"\"\"\"wow cool docs\"\"\"" @ 171..190: String [documentation]
"\"\"\"and docs\"\"\"" @ 195..209: String [documentation]
"#);
}
#[test]
fn docstring_classification_concat_parens_commented_nextline() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
(
"""wow cool docs"""
# and a comment that shouldn't be included
"""and docs"""
)
def my_func():
(
"""wow cool docs"""
# and a comment that shouldn't be included
"""and docs"""
)
x = 1
(
"""wow cool docs"""
# and a comment that shouldn't be included
"""and docs"""
)
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"\"\"wow cool docs\"\"\"" @ 30..49: String [documentation]
"\"\"\"and docs\"\"\"" @ 109..123: String [documentation]
"my_func" @ 135..142: Function [definition]
"\"\"\"wow cool docs\"\"\"" @ 160..179: String [documentation]
"\"\"\"and docs\"\"\"" @ 239..253: String [documentation]
"x" @ 261..262: Variable [definition]
"1" @ 265..266: Number
"\"\"\"wow cool docs\"\"\"" @ 273..292: String [documentation]
"\"\"\"and docs\"\"\"" @ 344..358: String [documentation]
"#);
}
#[test]
fn docstring_classification_concat_commented_nextline() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
"""wow cool docs"""
# and a comment that shouldn't be included
"""and docs"""
def my_func():
"""wow cool docs"""
# and a comment that shouldn't be included
"""and docs"""
x = 1
"""wow cool docs"""
# and a comment that shouldn't be included
"""and docs"""
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"\"\"wow cool docs\"\"\"" @ 20..39: String [documentation]
"\"\"\"and docs\"\"\"" @ 91..105: String
"my_func" @ 111..118: Function [definition]
"\"\"\"wow cool docs\"\"\"" @ 126..145: String [documentation]
"\"\"\"and docs\"\"\"" @ 197..211: String
"x" @ 213..214: Variable [definition]
"1" @ 217..218: Number
"\"\"\"wow cool docs\"\"\"" @ 219..238: String [documentation]
"\"\"\"and docs\"\"\"" @ 282..296: String
"#);
}
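Contrast this with the parenthesized variant above: inside parentheses both segments belong to one expression, so the comment changes nothing, whereas here each string is its own statement and the intervening comment ends the documentation run. A sketch of the statement-level half of that rule (hypothetical helper; since comments are trivia rather than statements, the comment check has to consult the token stream separately):

```rust
use ruff_python_ast as ast;

// Hypothetical: yield the leading run of plain string-literal statements in
// a slice. A caller would start the slice at a class/function body or just
// after a documented assignment; comments never show up here because they
// are not statements.
fn leading_docstring_run<'a>(
    body: &'a [ast::Stmt],
) -> impl Iterator<Item = &'a ast::ExprStringLiteral> + 'a {
    body.iter().map_while(|stmt| {
        stmt.as_expr_stmt()
            .and_then(|expr_stmt| expr_stmt.value.as_string_literal_expr())
    })
}
```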
#[test]
fn docstring_classification_concat_commented_sameline() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
"""wow cool docs""" # and a comment
"""and docs""" # that shouldn't be included
def my_func():
"""wow cool docs""" # and a comment
"""and docs""" # that shouldn't be included
x = 1
"""wow cool docs""" # and a comment
"""and docs""" # that shouldn't be included
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"\"\"wow cool docs\"\"\"" @ 20..39: String [documentation]
"\"\"\"and docs\"\"\"" @ 60..74: String
"my_func" @ 114..121: Function [definition]
"\"\"\"wow cool docs\"\"\"" @ 129..148: String [documentation]
"\"\"\"and docs\"\"\"" @ 169..183: String
"x" @ 219..220: Variable [definition]
"1" @ 223..224: Number
"\"\"\"wow cool docs\"\"\"" @ 225..244: String [documentation]
"\"\"\"and docs\"\"\"" @ 261..275: String
"#);
}
#[test]
fn docstring_classification_concat_slashed() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
"""wow cool docs""" \
"""and docs"""
def my_func():
"""wow cool docs""" \
"""and docs"""
x = 1
"""wow cool docs""" \
"""and docs"""
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"\"\"wow cool docs\"\"\"" @ 20..39: String [documentation]
"\"\"\"and docs\"\"\"" @ 46..60: String [documentation]
"my_func" @ 66..73: Function [definition]
"\"\"\"wow cool docs\"\"\"" @ 81..100: String [documentation]
"\"\"\"and docs\"\"\"" @ 107..121: String [documentation]
"x" @ 123..124: Variable [definition]
"1" @ 127..128: Number
"\"\"\"wow cool docs\"\"\"" @ 129..148: String [documentation]
"\"\"\"and docs\"\"\"" @ 151..165: String [documentation]
"#);
}
#[test]
fn docstring_classification_plus() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
"wow cool docs" + "and docs"
def my_func():
"wow cool docs" + "and docs"
x = 1
"wow cool docs" + "and docs"
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"\"wow cool docs\"" @ 20..35: String
"\"and docs\"" @ 38..48: String
"my_func" @ 54..61: Function [definition]
"\"wow cool docs\"" @ 69..84: String
"\"and docs\"" @ 87..97: String
"x" @ 99..100: Variable [definition]
"1" @ 103..104: Number
"\"wow cool docs\"" @ 105..120: String
"\"and docs\"" @ 123..133: String
"#);
}
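The `+` form never earns the `[documentation]` modifier because `"a" + "b"` parses as a binary operation rather than a string literal, so it fails the string-literal gate before any docstring handling runs. The gate amounts to a check like this (hypothetical name):

```rust
use ruff_python_ast as ast;

// Only a bare (possibly implicitly concatenated) string literal expression
// can be a docstring; `"a" + "b"` is an ExprBinOp and is rejected here.
fn is_docstring_candidate(expr: &ast::Expr) -> bool {
    expr.is_string_literal_expr()
}
```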
#[test]
fn class_attribute_docstring_classification() {
let test = SemanticTokenTest::new(
r#"
class MyClass:
important_value: str = "wow"
"""This is the most important value
Don't trust the other guy
"""
x = "unrelated string"
other_value: int = 2
"""This is such an import value omg
Trust me
"""
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"MyClass" @ 7..14: Class [definition]
"important_value" @ 20..35: Variable [definition]
"str" @ 37..40: Class
"\"wow\"" @ 43..48: String
"\"\"\"This is the most important value\n\n Don't trust the other guy\n \"\"\"" @ 53..127: String [documentation]
"x" @ 133..134: Variable [definition]
"\"unrelated string\"" @ 137..155: String
"other_value" @ 161..172: Variable [definition]
"int" @ 174..177: Class
"2" @ 180..181: Number
"\"\"\"This is such an import value omg\n\n Trust me\n \"\"\"" @ 186..243: String [documentation]
"#);
}
#[test]
fn test_debug_int_classification() {
let test = SemanticTokenTest::new(
@@ -2230,6 +2712,49 @@ class MyClass:
"###);
}
#[test]
fn test_constant_variations() {
let test = SemanticTokenTest::new(
r#"
A = 1
AB = 1
ABC = 1
A1 = 1
AB1 = 1
ABC1 = 1
A_B = 1
A1_B = 1
A_B1 = 1
A_1 = 1
"#,
);
let tokens = test.highlight_file();
assert_snapshot!(test.to_snapshot(&tokens), @r#"
"A" @ 1..2: Variable [definition]
"1" @ 5..6: Number
"AB" @ 7..9: Variable [definition, readonly]
"1" @ 12..13: Number
"ABC" @ 14..17: Variable [definition, readonly]
"1" @ 20..21: Number
"A1" @ 22..24: Variable [definition, readonly]
"1" @ 27..28: Number
"AB1" @ 29..32: Variable [definition, readonly]
"1" @ 35..36: Number
"ABC1" @ 37..41: Variable [definition, readonly]
"1" @ 44..45: Number
"A_B" @ 46..49: Variable [definition, readonly]
"1" @ 52..53: Number
"A1_B" @ 54..58: Variable [definition, readonly]
"1" @ 61..62: Number
"A_B1" @ 63..67: Variable [definition, readonly]
"1" @ 70..71: Number
"A_1" @ 72..75: Variable [definition, readonly]
"1" @ 78..79: Number
"#);
}
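The snapshot pins down the constant heuristic: a name gets the `readonly` modifier when it consists only of uppercase letters, digits, and underscores and is at least two characters long, which is why the bare `A` stays a plain `Variable`. A predicate reverse-engineered from the snapshot (not the classifier's actual code):

```rust
// Reverse-engineered from the snapshot above: SCREAMING_CASE names of two or
// more characters are treated as constants and flagged readonly.
fn looks_like_constant(name: &str) -> bool {
    name.len() >= 2
        && name
            .chars()
            .all(|c| c.is_ascii_uppercase() || c.is_ascii_digit() || c == '_')
}
```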
#[test]
fn test_implicitly_concatenated_strings() {
let test = SemanticTokenTest::new(
@@ -2803,6 +3328,12 @@ def foo(self, **key, value=10):
if token.modifiers.contains(SemanticTokenModifier::ASYNC) {
mods.push("async");
}
if token
.modifiers
.contains(SemanticTokenModifier::DOCUMENTATION)
{
mods.push("documentation");
}
format!(" [{}]", mods.join(", "))
};


@@ -6,11 +6,12 @@
//! types, and documentation. It supports multiple signatures for union types
//! and overloads.
use crate::Db;
use crate::docstring::Docstring;
use crate::goto::Definitions;
use crate::{Db, find_node::covering_node};
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_python_ast::find_node::covering_node;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::{self as ast, AnyNodeRef};
use ruff_text_size::{Ranged, TextRange, TextSize};
@@ -124,6 +125,11 @@ fn get_call_expr(
})?;
// Find the covering node at the given position that is a function call.
// The range may fall anywhere within the call expression, not just in the
// arguments portion: a user can request signature information from anywhere
// at a call site, including at the function name, and it should still work.
let call = covering_node(root_node, token.range())
.find_first(|node| {
if !node.is_expr_call() {
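The hunk cuts off mid-closure. A hedged sketch of how the lookup plausibly completes, using only the shapes visible above (`covering_call` is hypothetical, and `.ok()` assumes `find_first` returns a `Result` carrying the matching node):

```rust
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::find_node::covering_node; // one of the two import paths shown in this diff
use ruff_text_size::TextRange;

// Assumed continuation, not the file's actual code: climb from the node
// covering the cursor range until an ancestor is a call expression, so the
// request works anywhere in the call, including on the callee name.
fn covering_call<'a>(root: AnyNodeRef<'a>, range: TextRange) -> Option<AnyNodeRef<'a>> {
    covering_node(root, range)
        .find_first(|node| node.is_expr_call())
        .ok()
        .map(|covering| covering.node())
}
```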


@@ -115,6 +115,10 @@ pub struct Project {
#[default]
verbose_flag: bool,
/// Whether to enforce exclusion rules even for files explicitly passed to ty on the command line.
#[default]
force_exclude_flag: bool,
}
/// A progress reporter.
@@ -380,6 +384,16 @@ impl Project {
self.verbose_flag(db)
}
pub fn set_force_exclude(self, db: &mut dyn Db, force: bool) {
if self.force_exclude_flag(db) != force {
self.set_force_exclude_flag(db).to(force);
}
}
pub fn force_exclude(self, db: &dyn Db) -> bool {
self.force_exclude_flag(db)
}
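A caller-side sketch of the new input pair (the wiring below is hypothetical, with `Db` and `Project` as in this crate; note that the setter above only writes the Salsa input when the value actually changed, so unchanged runs don't invalidate dependent queries):

```rust
// Hypothetical wiring, not code from this diff: push the CLI flag into the
// project input once per run.
fn apply_force_exclude(db: &mut dyn Db, project: Project, cli_force_exclude: bool) {
    project.set_force_exclude(db, cli_force_exclude);
    // Path filtering later reads `project.force_exclude(db)` to decide
    // whether explicitly passed files may bypass the exclusion rules.
}
```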
/// Returns the paths that should be checked.
///
/// The default is to check the entire project in which case this method returns


@@ -5,7 +5,7 @@ use ruff_python_ast::name::Name;
use std::sync::Arc;
use thiserror::Error;
use ty_combine::Combine;
use ty_python_semantic::ProgramSettings;
use ty_python_semantic::{MisconfigurationMode, ProgramSettings};
use crate::metadata::options::ProjectOptionsOverrides;
use crate::metadata::pyproject::{Project, PyProject, PyProjectError, ResolveRequiresPythonError};
@@ -37,6 +37,9 @@ pub struct ProjectMetadata {
/// The path ordering doesn't imply precedence.
#[cfg_attr(test, serde(skip_serializing_if = "Vec::is_empty"))]
pub(super) extra_configuration_paths: Vec<SystemPathBuf>,
#[cfg_attr(test, serde(skip))]
pub(super) misconfiguration_mode: MisconfigurationMode,
}
impl ProjectMetadata {
@@ -47,6 +50,7 @@ impl ProjectMetadata {
root,
extra_configuration_paths: Vec::default(),
options: Options::default(),
misconfiguration_mode: MisconfigurationMode::Fail,
}
}
@@ -70,6 +74,7 @@ impl ProjectMetadata {
root: system.current_directory().to_path_buf(),
options,
extra_configuration_paths: vec![path],
misconfiguration_mode: MisconfigurationMode::Fail,
})
}
@@ -82,6 +87,7 @@ impl ProjectMetadata {
pyproject.tool.and_then(|tool| tool.ty).unwrap_or_default(),
root,
pyproject.project.as_ref(),
MisconfigurationMode::Fail,
)
}
@@ -90,6 +96,7 @@ impl ProjectMetadata {
mut options: Options,
root: SystemPathBuf,
project: Option<&Project>,
misconfiguration_mode: MisconfigurationMode,
) -> Result<Self, ResolveRequiresPythonError> {
let name = project
.and_then(|project| project.name.as_deref())
@@ -117,6 +124,7 @@ impl ProjectMetadata {
root,
options,
extra_configuration_paths: Vec::new(),
misconfiguration_mode,
})
}
@@ -194,6 +202,7 @@ impl ProjectMetadata {
pyproject
.as_ref()
.and_then(|pyproject| pyproject.project.as_ref()),
MisconfigurationMode::Fail,
)
.map_err(|err| {
ProjectMetadataError::InvalidRequiresPythonConstraint {
@@ -273,8 +282,13 @@ impl ProjectMetadata {
system: &dyn System,
vendored: &VendoredFileSystem,
) -> anyhow::Result<ProgramSettings> {
self.options
.to_program_settings(self.root(), self.name(), system, vendored)
self.options.to_program_settings(
self.root(),
self.name(),
system,
vendored,
self.misconfiguration_mode,
)
}
pub fn apply_overrides(&mut self, overrides: &ProjectOptionsOverrides) {
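Across these hunks a `MisconfigurationMode` is threaded from every discovery entry point (always `MisconfigurationMode::Fail` here) through `ProjectMetadata` into `Options::to_program_settings`. A sketch of what callers see (`build_settings` is illustrative; only the parameter shapes come from the hunks above):

```rust
// Assumption from this diff alone: MisconfigurationMode is an enum with at
// least a `Fail` variant, fixed at discovery time and stored on the
// metadata, so callers never pass the mode at this layer.
fn build_settings(
    metadata: &ProjectMetadata,
    system: &dyn System,
    vendored: &VendoredFileSystem,
) -> anyhow::Result<ProgramSettings> {
    metadata.to_program_settings(system, vendored)
}
```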

Some files were not shown because too many files have changed in this diff.