Compare commits

...

104 Commits

Author SHA1 Message Date
Claude
6a2f78786a [ty] Fix sub-union optimization to require both flags be No
The optimization condition checked whether the `recursively_defined` flags matched,
which meant it also applied when both were Yes. However, unions with `recursively_defined=Yes`
may have been built during `cycle_recovery`, where simplification is incomplete.

Change the condition to require that both the builder and the union have `recursively_defined=No`,
ensuring the optimization only applies to unions that are definitely fully simplified.
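A rough sketch of the changed condition (illustrative only; the names below are hypothetical, not ty's actual API):

```python
# Hypothetical sketch: booleans stand in for the Yes/No `recursively_defined` flags.
def may_skip_redundancy_check(builder_recursive: bool, union_recursive: bool) -> bool:
    # Before: the flags only had to match, so two recursively-defined unions
    # (both flags Yes) also qualified for the optimization.
    #   return builder_recursive == union_recursive
    # After: both must be No, i.e. the union is definitely fully simplified.
    return not builder_recursive and not union_recursive
```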
2025-12-19 09:24:57 -06:00
Micha Reiser
7b6d0a90bc Apply same check for literals 2025-12-19 11:58:18 +01:00
Micha Reiser
07d352741a [ty] Skip redundancy checks for sub-union-elements 2025-12-19 08:54:01 +01:00
Micha Reiser
e177cc2a5a [ty] Improve union builder performance (#22048) 2025-12-19 08:29:16 +01:00
Micha Reiser
30ce679b9a [ty] Fix rules severity URL (#22069) 2025-12-19 07:25:02 +00:00
Charlie Marsh
76854fdb15 [ty] Unwrap enum.nonmember values (#22025)
## Summary

This PR unwraps the `enum.nonmember` type to match runtime behavior.

Closes https://github.com/astral-sh/ty/issues/1974.
2025-12-18 19:59:49 -05:00
Douglas Creager
5a2d3cda3d [ty] Remove some nondeterminism in constraint set tests (#22064)
We're seeing a lot of nondeterminism in the ecosystem tests at the
moment, which started (or at least got worse) once `Callable` inference
landed.

This PR attempts to remove this nondeterminism. We recently
(https://github.com/astral-sh/ruff/pull/21983) added a `source_order`
field to BDD nodes, which tracks when their constraint was added to the
BDD. Since we build up constraints based on the order that they appear
in the underlying source, that gives us a stable ordering even though we
use an arbitrary salsa-derived ordering for the BDD variables.

The issue (at least for some of the flakiness) is that we add "derived"
constraints when walking a BDD tree, and those derived constraints
inherit or borrow the `source_order` of the "real" constraint that
implied them. That means we can get multiple constraints in our
specialization that all have the same `source_order`. If we're not
careful, those "tied" constraints can be ordered arbitrarily.

The fix requires ~three~ ~four~ several steps:

- When starting to construct a sequent map (the data structure that
stores the derived constraints), we first sort all of the "real"
constraints by their `source_order`. That ensures that we insert things
into the sequent map in a stable order.
- During sequent map construction, derived facts are discovered by a
deterministic process applied to constraints in a (now) stable order. So
derived facts are now also inserted in a stable order.
- We update the fields of `SequentMap` to use `FxOrderSet` instead of
`FxHashSet`, so that we retain that stable insertion order.
- When walking BDD paths when constructing a specialization, we were
already sorting the constraints by their `source_order`. However, we
were not considering that we might get derived constraints, and
therefore constraints with "ties". Because of that, we need to make sure
to use a _stable_ sort that retains the insertion order for those ties.

All together, this...should...fix the nondeterminism. (Unfortunately, I
haven't been able to effectively test this, since I haven't been able to
coerce local tests to flop into the other order that we sometimes see in
CI.)
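As a small illustration of that last sorting step (a Python analogue, not ty's Rust code): a stable sort keeps constraints that tie on `source_order` in their insertion order.

```python
# Derived constraints inherit the `source_order` of the "real" constraint that
# implied them, so they tie with it; a stable sort preserves insertion order.
constraints = [
    {"name": "real", "source_order": 1},
    {"name": "derived-a", "source_order": 1},
    {"name": "derived-b", "source_order": 1},
]
ordered = sorted(constraints, key=lambda c: c["source_order"])  # sorted() is stable
assert [c["name"] for c in ordered] == ["real", "derived-a", "derived-b"]
```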
2025-12-18 19:00:20 -05:00
Jack O'Connor
fa57253980 [ty] Implement disjointness for TypedDicts (#22044)
This is a preliminary step towards tagged union narrowing for `TypedDict`:
https://github.com/astral-sh/ty/issues/1479
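For illustration, the kind of tagged-union code this is building towards (the narrowing itself is future work per the linked issue; this change only adds the underlying disjointness reasoning):

```python
from typing import Literal, TypedDict

class Circle(TypedDict):
    kind: Literal["circle"]
    radius: float

class Square(TypedDict):
    kind: Literal["square"]
    side: float

def area(shape: Circle | Square) -> float:
    # `Circle` and `Square` are disjoint: no dict can satisfy both, because
    # their `kind` fields can never hold the same value.
    if shape["kind"] == "circle":
        return 3.14159 * shape["radius"] ** 2
    return shape["side"] ** 2
```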
2025-12-18 13:20:22 -08:00
Amethyst Reese
b7fbd986bc [ruff] fix preview-since values for RUF103 and RUF104 (#22061)
Missed including this in the follow-up on #21908
2025-12-18 13:18:04 -08:00
Amethyst Reese
3d334a313e Report diagnostics for invalid/unmatched range suppression comments (#21908)
## Summary

- Adds new RUF103 and RUF104 diagnostics for invalid and unmatched
suppression comments
- Reports RUF100 for any unused range suppression
- Reports RUF102 for range suppression comment with invalid rule codes
- Reports RUF103 for range suppression comment with invalid suppression syntax
- Reports RUF104 diagnostics for any unmatched range suppression comment (disable w/o enable)


## Test Plan

Updated snapshots from test cases with unmatched suppression comments

Issue #3711
Fixes #21878
Fixes #21875
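A hedged sketch of the comment forms involved (the exact syntax for attaching rule codes may differ from what's shown):

```python
# ruff: disable
x = 1  # lines in this range are covered by the suppression
# ruff: enable

# ruff: disable
y = 2
# ...no matching `# ruff: enable` follows, so the comment above would be
# reported as unmatched (RUF104)
```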
2025-12-18 12:58:58 -08:00
Aria Desires
2e44a861cb [ty] Disable possibly-missing-imports by default (#22041)
@carljm put forth a reasonably compelling argument that just disabling
this lint might be advisable. If we agree, here's the implementation.

* Fixes https://github.com/astral-sh/ty/issues/309

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-18 20:06:34 +00:00
Dylan
45bbb4cbff Bump 0.14.10 (#22058) 2025-12-18 13:08:17 -06:00
Micha Reiser
42b972753a [ty] Use datatest instead of dirtest (#21937) 2025-12-18 18:05:02 +00:00
Micha Reiser
f7ec178400 [ty] Gracefully handle client requests that can't be deserialized (#22051) 2025-12-18 18:01:01 +00:00
Rasmus Nygren
c315164732 [ty] Don't suggest keyword statements when only expressions are valid
There are cases where the python grammar enforces expressions
after certain statements. In such cases we want to suppress
irrelevant keywords from the auto-complete suggestions.

E.g., for `with a<CURSOR>`, suggesting `raise` never makes sense
because it is not valid according to the grammar.
2025-12-18 12:27:57 -05:00
Andrew Gallant
bb1955e98c [ty] Use cursor context in a few more places...
... and also add a `ContextCursor::covering_node` helper, since it's
used so much.
2025-12-18 11:00:09 -05:00
Andrew Gallant
070e08a043 [ty] Move completion function to the top
This is the main entry point to this module. It should be at the top.
2025-12-18 11:00:09 -05:00
Andrew Gallant
bab3924833 [ty] Refactor completion generation
This refactor is intended to give more structure to how we generate
completions. There's now a `Context` for "how do we figure out what kind
of completions to offer" and also a `CollectionContext` for "how do we
figure out which completions are appropriate or not." We double down on
`Completions` as a collector and a single point of truth for this. It
now handles adding information to `Completion` (based on the context)
and also skipping completions that are inappropriate (instead of
filtering them after-the-fact).

We also bundle a bunch of state into a new `ContextCursor` type, and
then define a bunch of predicates/accessors on that type that were
previously free functions with loads of parameters.

Finally, we introduce more structure to ranking. Instead of an anonymous
tuple, we define an explicit type with some helper types to hopefully
make the influence on ranking from each constituent piece a bit clearer.

This does seem to fix one bug around detecting the target for non-import
completions, but otherwise should not have any changes in behavior.

This is meant to be a precursor to improving completion ranking.
2025-12-18 11:00:09 -05:00
mahiro
10748b2fdb [flake8-pytest-style] Allow match and check keyword arguments without an expected exception type (PT010) (#21964)
## Summary

<!-- What's the purpose of the change? What does it do, and why? -->

Updates PT010 (`pytest-raises-without-exception`) to recognize `match`
and `check` keyword arguments as valid alternatives to specifying an
exception class.

As of pytest 8.4.0, `pytest.raises()` can be called with only `match` or
`check` keyword arguments without an expected exception.

Fixes #18653
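A minimal example of code that should no longer be flagged (the `validate` call is hypothetical):

```python
import pytest

def test_rejects_negative_values():
    # Since pytest 8.4.0, `pytest.raises()` may be used with only the `match`
    # (or `check`) keyword argument, without naming an expected exception type.
    with pytest.raises(match="negative"):
        validate(-1)  # hypothetical function under test
```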

## Test Plan

<!-- How was it tested? -->

- Added test cases for `match`-only, `check`-only, and both arguments.
- `cargo test -p ruff_linter -- "pytestraiseswithoutexception"` passes
2025-12-18 10:42:06 -05:00
Aria Desires
56539db520 [ty] Fix some configuration panics in the LSP (#22040)
## Summary

This is a revival of https://github.com/astral-sh/ruff/pull/21047 now
that we have a reproducer again.

* Fixes https://github.com/astral-sh/ty/issues/2031
* Fixes https://github.com/astral-sh/ty/issues/859

## Test Plan

e2e test from @zanieb

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2025-12-18 09:47:02 -05:00
Aria Desires
8d32ad1cab [ty] Add support for attribute docstrings (#22036)
## Summary

I should have factored this better, but this includes a drive-by move of
`find_node` to `ruff_python_ast` so that `ty_python_semantic` can use it too.

* Fixes https://github.com/astral-sh/ty/issues/2017 
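For reference, the attribute-docstring convention in question looks roughly like this:

```python
class Config:
    timeout: int = 30
    """Number of seconds to wait before giving up."""
    # The string literal immediately after the assignment serves as the
    # attribute's docstring, which can now be surfaced (e.g. on hover).
```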

## Test Plan

Snapshots galore
2025-12-18 12:18:20 +00:00
Micha Reiser
b2a8c42b51 [ty] Correctly encode multiline tokens for clients not supporting multiline tokens (#22033) 2025-12-18 11:38:21 +00:00
Aria Desires
7bb5dd87ff [ty] Fix goto-declaration on the RHS of from module import submodule (#22042) 2025-12-18 08:28:05 +01:00
Charlie Marsh
06305f3c02 Make analyze_single_pattern_predicate a #[salsa::tracked] function (#22045)
## Summary

We had a report of a blowup with the following snippet:

```python
from enum import StrEnum

class BigEnum(StrEnum):
    VALUE_01 = "VALUE_01"
    VALUE_02 = "VALUE_02"
    VALUE_03 = "VALUE_03"
    VALUE_04 = "VALUE_04"
    VALUE_05 = "VALUE_05"
    VALUE_06 = "VALUE_06"
    VALUE_07 = "VALUE_07"
    VALUE_08 = "VALUE_08"
    VALUE_09 = "VALUE_09"
    VALUE_10 = "VALUE_10"
    VALUE_11 = "VALUE_11"
    VALUE_12 = "VALUE_12"
    VALUE_13 = "VALUE_13"
    VALUE_14 = "VALUE_14"
    VALUE_15 = "VALUE_15"
    VALUE_16 = "VALUE_16"
    VALUE_17 = "VALUE_17"
    VALUE_18 = "VALUE_18"
    VALUE_19 = "VALUE_19"
    VALUE_20 = "VALUE_20"
    VALUE_21 = "VALUE_21"
    VALUE_22 = "VALUE_22"
    VALUE_23 = "VALUE_23"
    VALUE_24 = "VALUE_24"
    VALUE_25 = "VALUE_25"
    VALUE_26 = "VALUE_26"
    VALUE_27 = "VALUE_27"
    VALUE_28 = "VALUE_28"
    VALUE_29 = "VALUE_29"
    VALUE_30 = "VALUE_30"
    VALUE_31 = "VALUE_31"
    VALUE_32 = "VALUE_32"
    VALUE_33 = "VALUE_33"
    VALUE_34 = "VALUE_34"
    VALUE_35 = "VALUE_35"
    VALUE_36 = "VALUE_36"
    VALUE_37 = "VALUE_37"
    VALUE_38 = "VALUE_38"
    VALUE_39 = "VALUE_39"
    VALUE_40 = "VALUE_40"
    VALUE_41 = "VALUE_41"
    VALUE_42 = "VALUE_42"
    VALUE_43 = "VALUE_43"
    VALUE_44 = "VALUE_44"
    VALUE_45 = "VALUE_45"
    VALUE_46 = "VALUE_46"
    VALUE_47 = "VALUE_47"
    VALUE_48 = "VALUE_48"
    VALUE_49 = "VALUE_49"
    VALUE_50 = "VALUE_50"
    VALUE_51 = "VALUE_51"
    VALUE_52 = "VALUE_52"
    VALUE_53 = "VALUE_53"
    VALUE_54 = "VALUE_54"
    VALUE_55 = "VALUE_55"
    VALUE_56 = "VALUE_56"
    VALUE_57 = "VALUE_57"
    VALUE_58 = "VALUE_58"
    VALUE_59 = "VALUE_59"
    VALUE_60 = "VALUE_60"
    VALUE_61 = "VALUE_61"
    VALUE_62 = "VALUE_62"
    VALUE_63 = "VALUE_63"
    VALUE_64 = "VALUE_64"
    VALUE_65 = "VALUE_65"
    VALUE_66 = "VALUE_66"
    VALUE_67 = "VALUE_67"
    VALUE_68 = "VALUE_68"
    VALUE_69 = "VALUE_69"
    VALUE_70 = "VALUE_70"
    VALUE_71 = "VALUE_71"
    VALUE_72 = "VALUE_72"
    VALUE_73 = "VALUE_73"
    VALUE_74 = "VALUE_74"
    VALUE_75 = "VALUE_75"
    VALUE_76 = "VALUE_76"
    VALUE_77 = "VALUE_77"
    VALUE_78 = "VALUE_78"
    VALUE_79 = "VALUE_79"
    VALUE_80 = "VALUE_80"
    VALUE_81 = "VALUE_81"
    VALUE_82 = "VALUE_82"
    VALUE_83 = "VALUE_83"
    VALUE_84 = "VALUE_84"
    VALUE_85 = "VALUE_85"
    VALUE_86 = "VALUE_86"
    VALUE_87 = "VALUE_87"
    VALUE_88 = "VALUE_88"
    VALUE_89 = "VALUE_89"
    VALUE_90 = "VALUE_90"
    VALUE_91 = "VALUE_91"
    VALUE_92 = "VALUE_92"
    VALUE_93 = "VALUE_93"
    VALUE_94 = "VALUE_94"
    VALUE_95 = "VALUE_95"
    VALUE_96 = "VALUE_96"
    VALUE_97 = "VALUE_97"
    VALUE_98 = "VALUE_98"
    VALUE_99 = "VALUE_99"

    def get_info(self) -> tuple[str, int]:
        match self:
            case BigEnum.VALUE_01:
                return self.value, 1
            case BigEnum.VALUE_02:
                return self.value, 2
            case BigEnum.VALUE_03:
                return self.value, 3
            case BigEnum.VALUE_04:
                return self.value, 4
            case BigEnum.VALUE_05:
                return self.value, 5
            case BigEnum.VALUE_06:
                return self.value, 6
            case BigEnum.VALUE_07:
                return self.value, 7
            case BigEnum.VALUE_08:
                return self.value, 8
            case BigEnum.VALUE_09:
                return self.value, 9
            case BigEnum.VALUE_10:
                return self.value, 10
            case BigEnum.VALUE_11:
                return self.value, 11
            case BigEnum.VALUE_12:
                return self.value, 12
            case BigEnum.VALUE_13:
                return self.value, 13
            case BigEnum.VALUE_14:
                return self.value, 14
            case BigEnum.VALUE_15:
                return self.value, 15
            case BigEnum.VALUE_16:
                return self.value, 16
            case BigEnum.VALUE_17:
                return self.value, 17
            case BigEnum.VALUE_18:
                return self.value, 18
            case BigEnum.VALUE_19:
                return self.value, 19
            case BigEnum.VALUE_20:
                return self.value, 20
            case BigEnum.VALUE_21:
                return self.value, 21
            case BigEnum.VALUE_22:
                return self.value, 22
            case BigEnum.VALUE_23:
                return self.value, 23
            case BigEnum.VALUE_24:
                return self.value, 24
            case BigEnum.VALUE_25:
                return self.value, 25
            case BigEnum.VALUE_26:
                return self.value, 26
            case BigEnum.VALUE_27:
                return self.value, 27
            case BigEnum.VALUE_28:
                return self.value, 28
            case BigEnum.VALUE_29:
                return self.value, 29
            case BigEnum.VALUE_30:
                return self.value, 30
            case BigEnum.VALUE_31:
                return self.value, 31
            case BigEnum.VALUE_32:
                return self.value, 32
            case BigEnum.VALUE_33:
                return self.value, 33
            case BigEnum.VALUE_34:
                return self.value, 34
            case BigEnum.VALUE_35:
                return self.value, 35
            case BigEnum.VALUE_36:
                return self.value, 36
            case BigEnum.VALUE_37:
                return self.value, 37
            case BigEnum.VALUE_38:
                return self.value, 38
            case BigEnum.VALUE_39:
                return self.value, 39
            case BigEnum.VALUE_40:
                return self.value, 40
            case BigEnum.VALUE_41:
                return self.value, 41
            case BigEnum.VALUE_42:
                return self.value, 42
            case BigEnum.VALUE_43:
                return self.value, 43
            case BigEnum.VALUE_44:
                return self.value, 44
            case BigEnum.VALUE_45:
                return self.value, 45
            case BigEnum.VALUE_46:
                return self.value, 46
            case BigEnum.VALUE_47:
                return self.value, 47
            case BigEnum.VALUE_48:
                return self.value, 48
            case BigEnum.VALUE_49:
                return self.value, 49
            case BigEnum.VALUE_50:
                return self.value, 50
            case BigEnum.VALUE_51:
                return self.value, 51
            case BigEnum.VALUE_52:
                return self.value, 52
            case BigEnum.VALUE_53:
                return self.value, 53
            case BigEnum.VALUE_54:
                return self.value, 54
            case BigEnum.VALUE_55:
                return self.value, 55
            case BigEnum.VALUE_56:
                return self.value, 56
            case BigEnum.VALUE_57:
                return self.value, 57
            case BigEnum.VALUE_58:
                return self.value, 58
            case BigEnum.VALUE_59:
                return self.value, 59
            case BigEnum.VALUE_60:
                return self.value, 60
            case BigEnum.VALUE_61:
                return self.value, 61
            case BigEnum.VALUE_62:
                return self.value, 62
            case BigEnum.VALUE_63:
                return self.value, 63
            case BigEnum.VALUE_64:
                return self.value, 64
            case BigEnum.VALUE_65:
                return self.value, 65
            case BigEnum.VALUE_66:
                return self.value, 66
            case BigEnum.VALUE_67:
                return self.value, 67
            case BigEnum.VALUE_68:
                return self.value, 68
            case BigEnum.VALUE_69:
                return self.value, 69
            case BigEnum.VALUE_70:
                return self.value, 70
            case BigEnum.VALUE_71:
                return self.value, 71
            case BigEnum.VALUE_72:
                return self.value, 72
            case BigEnum.VALUE_73:
                return self.value, 73
            case BigEnum.VALUE_74:
                return self.value, 74
            case BigEnum.VALUE_75:
                return self.value, 75
            case BigEnum.VALUE_76:
                return self.value, 76
            case BigEnum.VALUE_77:
                return self.value, 77
            case BigEnum.VALUE_78:
                return self.value, 78
            case BigEnum.VALUE_79:
                return self.value, 79
            case BigEnum.VALUE_80:
                return self.value, 80
            case BigEnum.VALUE_81:
                return self.value, 81
            case BigEnum.VALUE_82:
                return self.value, 82
            case BigEnum.VALUE_83:
                return self.value, 83
            case BigEnum.VALUE_84:
                return self.value, 84
            case BigEnum.VALUE_85:
                return self.value, 85
            case BigEnum.VALUE_86:
                return self.value, 86
            case BigEnum.VALUE_87:
                return self.value, 87
            case BigEnum.VALUE_88:
                return self.value, 88
            case BigEnum.VALUE_89:
                return self.value, 89
            case BigEnum.VALUE_90:
                return self.value, 90
            case BigEnum.VALUE_91:
                return self.value, 91
            case BigEnum.VALUE_92:
                return self.value, 92
            case BigEnum.VALUE_93:
                return self.value, 93
            case BigEnum.VALUE_94:
                return self.value, 94
            case BigEnum.VALUE_95:
                return self.value, 95
            case BigEnum.VALUE_96:
                return self.value, 96
            case BigEnum.VALUE_97:
                return self.value, 97
            case BigEnum.VALUE_98:
                return self.value, 98
            case BigEnum.VALUE_99:
                return self.value, 99
```

On my machine, memoizing the computation brings us from 70s to 0.6s.
2025-12-17 22:01:50 -05:00
Amethyst Reese
9cc132f098 [eradicate] ignore ruff:disable and ruff:enable comments in ERA001 (#22038)
## Summary

Don't flag `# ruff: disable` or `# ruff: enable` comments as
commented-out code.

## Test Plan

New test cases.

Issue #3711
2025-12-17 17:04:49 -08:00
Shantanu
cf8d2e35a8 New rule to prevent implicit string concatenation in collections (#21972)
This is a common footgun; see the example in
https://github.com/astral-sh/ruff/issues/13014#issuecomment-3411496519

Fixes #13014, fixes #13031
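The classic shape of the footgun, for reference (illustrative snippet, not taken from the linked issue):

```python
names = [
    "alice",
    "bob"    # missing comma: implicit concatenation produces "bobcarol"
    "carol",
]
assert names == ["alice", "bobcarol"]
```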
2025-12-17 17:37:01 -05:00
Brent Westbrook
0290f5dc3b [flake8-bandit] Fix broken link (S704) (#22039)
Summary
--

While going through all the rules I noticed a broken link in the S704
docs. (I then got really confused because it appeared to be correct in
the _other_ `unsafe_markup_use.rs` file for the removed RUF version of
the rule).

Test Plan
--

[Before](https://docs.astral.sh/ruff/rules/unsafe-markup-use/):

<img width="537" height="171" alt="image"
src="https://github.com/user-attachments/assets/01007cab-6673-48e5-b3a5-6006bc78a027"
/>


After:

<img width="451" height="189" alt="image"
src="https://github.com/user-attachments/assets/4e5d0e0d-76be-4f66-b747-e209f11ab11a"
/>

I also did at least a cursory grep for other cases with escaped link
brackets (`\[`) and only turned up this rule on main.
2025-12-17 17:35:04 -05:00
Charlie Marsh
5bb9ee2a9d [ty] Respect deferred values in keyword arguments et al for .pyi files (#22029)
## Summary

Closes https://github.com/astral-sh/ty/issues/2019.
2025-12-17 14:02:10 -08:00
Aria Desires
638f230910 [ty] improve rendering of signatures in hovers (#22007)
This is the return of #21438 because we never found anything better and
I think it would be good to have this for the beta.
2025-12-17 20:09:31 +00:00
Douglas Creager
b36ff75a24 [ty] Don't add identical lower/upper bounds multiple times when inferring specializations (#22030)
When inferring a specialization of a `Callable` type, we use the new
constraint set implementation. In the example in
https://github.com/astral-sh/ty/issues/1968, we end up with a constraint
set that includes all of the following clauses:

```
     U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M1 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M2 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M3 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M4 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M5 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M6 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
M7 ≤ U_co ≤ M1 | M2 | M3 | M4 | M5 | M6 | M7
```

In general, we take the upper bounds of those constraints to get the
specialization. However, the upper bounds of those constraints are not
all guaranteed to be the same, and so first we need to intersect them
all together. In this case, the upper bounds are all identical, so their
intersection is trivial:

```
U_co = M1 | M2 | M3 | M4 | M5 | M6 | M7
```

But we were still doing the work of calculating that trivial
intersection 7 times. And each time we have to do 7^2 comparisons of the
`M*` classes, ending up with O(n^3) overall work.

This pattern is common enough that we can put in a quick heuristic to
prune identical copies of the same type before performing the
intersection.

Fixes https://github.com/astral-sh/ty/issues/1968
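A loose Python sketch of that pruning step (hypothetical helper, not ty's implementation):

```python
def prune_identical(upper_bounds: list[object]) -> list[object]:
    """Drop exact duplicates before the expensive pairwise intersection."""
    distinct: list[object] = []
    for bound in upper_bounds:
        if bound not in distinct:
            distinct.append(bound)
    # In the all-identical case above, the O(n^2) pairwise comparison now runs
    # on a single element instead of seven copies of the same union.
    return distinct
```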
2025-12-17 13:35:52 -05:00
Charlie Marsh
30c3f9aafe [ty] Apply narrowing to len calls based on argument size (#22026)
## Summary

Closes https://github.com/astral-sh/ty/issues/1983.
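Presumably this covers cases along these lines (an illustrative guess based on the title, not the PR's test suite):

```python
pair = ("x", 1)
# With the argument's size known statically, the result of `len()` can be
# narrowed to a literal rather than plain `int`.
reveal_type(len(pair))  # expected: Literal[2]
```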
2025-12-17 13:15:58 -05:00
chiri
883701ae88 [flake8-use-pathlib] Make fixes unsafe when types change in compound statements (PTH104, PTH105, PTH109, PTH115) (#22009)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/21794

## Test Plan

`cargo nextest run flake8_use_pathlib`
2025-12-17 12:31:27 -05:00
Alex Waygood
0bd7a94c27 [ty] Improve unsupported-base and invalid-super-argument diagnostics to avoid extremely long lines when encountering verbose types (#22022) 2025-12-17 14:43:11 +00:00
Bhuminjay Soni
421f88bb32 [refurb] Extend support for Path.open (FURB101, FURB103) (#21080)
## Summary

<!-- What's the purpose of the change? What does it do, and why? -->
This PR fixes https://github.com/astral-sh/ruff/issues/18409

## Test Plan

<!-- How was it tested? -->
I have added tests in FURB103.

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-17 09:18:13 -05:00
David Peter
b0eb39d112 [ty] ecosystem-analyzer: Flush full stderr output in case of panics (#22023)
Pulls in
2e1816eac0
2025-12-17 15:02:59 +01:00
charliecloudberry
260f463edd Update setup.md (#22024) 2025-12-17 14:56:25 +01:00
Bhuminjay Soni
52849a5e68 [syntax-errors] Annotated name cannot be global (#20868)
## Summary

<!-- What's the purpose of the change? What does it do, and why? -->
This PR implements a new semantic syntax error where an annotated name
can't be global. For example:
```
x: int = 1

def f():
    global x
    x: str = "foo"  # SyntaxError: annotated name 'x' can't be global
 ```

## Test Plan

<!-- How was it tested? -->
I have written tests as directed in #17412

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-17 08:39:47 -05:00
David Peter
2a61fe2353 [ty] Handle field specifier functions that accept **kwargs and recognize metaclass-based transformers as instances of DataclassInstance (#22018)
## Summary

This contains two bug fixes:

- [Handle field specifier functions that accept
`**kwargs`](ad6918d505)
- [Recognize metaclass-based transformers as instances of
`DataclassInstance`](1a8e29b23c)

closes https://github.com/astral-sh/ty/issues/1987
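Roughly the shape of code affected by both fixes (a hypothetical `dataclass_transform` setup; `typing.dataclass_transform` requires Python 3.11+):

```python
from typing import Any, dataclass_transform

def field(*, default: Any = None, **kwargs: Any) -> Any:
    """A field specifier whose signature accepts **kwargs (first fix)."""
    ...

@dataclass_transform(field_specifiers=(field,))
class ModelMeta(type): ...

class Model(metaclass=ModelMeta):
    # Instances of a metaclass-based transformer should now also be
    # recognized as instances of `DataclassInstance` (second fix).
    name: str = field(default="anonymous")
```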

## Test Plan

* New Markdown tests
* Made sure that the example in 1987 checks without errors
2025-12-17 14:22:16 +01:00
Alex Waygood
764ad8b29b [ty] Improve disambiguation of types in many cases (#22019) 2025-12-17 11:41:07 +00:00
mahiro
85af715880 Fix playground Share button showing "Copied!" before clipboard copy completes (#21942)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-17 12:16:01 +01:00
Phong Do
b0bc990cbf [pyupgrade] Fix parsing named Unicode escape sequences (UP032) (#21901)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/19771

Fixes incorrect parsing of Unicode named escape sequences like `Hey
\N{snowman}` in `FormatString`, which were being incorrectly split into
separate literal and field parts instead of being treated as a single
literal unit.

## Problem

The `FormatString` parser incorrectly handles Unicode named escape
sequences:
- **Current**: `Hey \N{snowman}` is parsed into 2 parts `Literal("Hey
\N")` & `Field("snowman")`
- **Expected**: `Hey \N{snowman}` should be parsed into 1 part
`Literal("Hey \N{snowman}")`

This affects f-string conversion rules when fixing `UP032` that rely on
proper format string parsing.

## Solution

I modified `parse_literal` to detect and handle Unicode named escape
sequences before parsing single characters:
- Introduced a flag to track when a backslash is "available" to escape
something.
- When the flag is `true`, and the text starts with `N{`, try to parse
the complete Unicode escape sequence as one unit, and set the flag to
`false` after parsing successfully.
- Set the flag to `false` when the backslash is already consumed.

## Manual Verification

`"\N{angle}AOB = {angle}°".format(angle=180)` 

**Result**

```bash
 def foo():
-    "\N{angle}AOB = {angle}°".format(angle=180)
+    f"\N{angle}AOB = {180}°"

Would fix 1 error.
```

`"\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\N{snowman} {snowman}".format(snowman=1)
+    f"\N{snowman} {1}"

Would fix 1 error.
```

`"\\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\\N{snowman} {snowman}".format(snowman=1)
+    f"\\N{1} {1}"

Would fix 1 error.
```

## Test Plan

- Added test cases (happy case, invalid case, edge case) for
`FormatString` when parsing Unicode escape sequence.
- Updated snapshots.
2025-12-16 16:33:39 -05:00
Aria Desires
ad3de4e488 [ty] Improve rendering of default values for function args (#22010)
## Summary

We're actually quite good at computing this, but the main issue is just
that we compute it at the type level and so wrap it in `Literal[...]`.
So just special-case the rendering of these to omit `Literal[...]` and
fall back to `...` in cases where the thing we'll show is probably
useless (i.e. `x: str = str`).

Fixes https://github.com/astral-sh/ty/issues/1882
2025-12-16 13:39:19 -05:00
Douglas Creager
2214a46139 [ty] Don't use implicit superclass annotation when converting a class constructor into a Callable (#22011)
This fixes a bug @zsol found running ty against pyx. His original repro
is:

```py
class Base:
    def __init__(self) -> None: pass

class A(Base):
    pass

def foo[T](callable: Callable[..., T]) -> T:
    return callable()

a: A = foo(A)
```

The call at the bottom would fail, since we would infer `() -> Base` as
the callable type of `A`, when it should be `() -> A`.

The issue was how we add implicit annotations to `self` parameters.
Typically, we turn it into `self: Self`. But in cases where we don't
need to introduce a full typevar, we turn it into `self: [the class
itself]` — in this case, `self: Base`. Then, when turning the class
constructor into a callable, we would see this non-`Self` annotation and
think that it was important and load-bearing.

The fix is that we skip all implicit annotations when determining
whether the `self` annotation should take precedence in the callable's
return type.
2025-12-16 13:37:11 -05:00
Douglas Creager
c02bd11b93 [ty] Infer typevar specializations for Callable types (#21551)
This is a first stab at solving
https://github.com/astral-sh/ty/issues/500, at least in part, with the
old solver. We add a new `TypeRelation` that lets us opt into using
constraint sets to describe when a typevar is assignable to some
type, and then use that to calculate a constraint set that describes
when two callable types are assignable. If the callable types contain
typevars, that constraint set will describe their valid specializations.
We can then walk through all of the ways the constraint set can be
satisfied, and record a type mapping in the old solver for each one.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-16 09:16:49 -08:00
Carl Meyer
eeaaa8e9fe [ty] propagate classmethod-ness through decorators returning Callables (#21958)
Fixes https://github.com/astral-sh/ty/issues/1787

## Summary

Allow method decorators returning Callables to presumptively propagate
"classmethod-ness" in the same way that they already presumptively
propagate "function-like-ness". We can't actually be sure that this is
the case, based on the decorator's annotations, but (along with other
type checkers) we heuristically assume it to be the case for decorators
applied via decorator syntax.
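A sketch of the pattern in question (hypothetical decorator; uses Python 3.12+ generic syntax):

```python
from collections.abc import Callable

def logged[**P, R](f: Callable[P, R]) -> Callable[P, R]:
    # A decorator whose annotations only promise "returns a Callable".
    return f

class Factory:
    @logged
    @classmethod
    def create(cls) -> "Factory":
        return cls()

# Heuristically, `create` is still treated as a classmethod here, even though
# the decorator's return annotation says nothing about classmethods.
Factory.create()
```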

## Test Plan

Added mdtest.
2025-12-16 09:16:40 -08:00
Rob Hand
7f7485d608 [ty] Fixed benchmark markdown auto-linenumbers (#22008) 2025-12-16 16:38:11 +01:00
Aria Desires
d755f3b522 [ty] Improve syntax-highlighting of constants (#22006)
## Summary

BLAH1 getting different highlighting/classification from BLAH is very
distracting and undesirable.
2025-12-16 09:23:40 -05:00
Aria Desires
83168a1bb1 [ty] highlight special type syntax in hovers as xml (#22005)
## Summary

These types look better rendered as XML than python

## Test Plan

<img width="532" height="299" alt="Screenshot 2025-12-16 at 8 40 56 AM"
src="https://github.com/user-attachments/assets/42d9abfa-3f4a-44ba-b6b4-6700ab06832d"
/>
2025-12-16 14:20:35 +00:00
Andrew Gallant
0f373603eb [ty] Suppress keyword argument completions unless we're in the "arguments"
Otherwise, given a case like this:

```
(lambda foo: (<CURSOR> + 1))(2)
```

we'll offer _argument_ completions for `foo` at the cursor position.
While we do actually want to offer completions for `foo` in this
context, it is currently difficult to do so. But we definitely don't
want to offer completions for `foo` as an argument to a function here.
Which is what we were doing.

We also add an end-to-end test here to verify that the actual label we
offer in completion suggestions includes the `=` suffix.

Closes https://github.com/astral-sh/ruff/pull/21970
2025-12-16 08:00:04 -05:00
Andrew Gallant
cc23af944f [ty] Tweak how we show qualified auto-import completions
Specifically, we make two changes:

1. We only show `import ...` when there is an actual import edit.
2. We now show the text we will insert. This means that when we
insert a qualified symbol, the qualification will show in the
completions suggested.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant
0589700ca1 [ty] Prefer unqualified imports over qualified imports
It seems like this is perhaps a better default:
https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790

For me personally, I think I'd prefer the qualified
variant. But I can easily see this changing based on
the specific scenario. I think the thing that pushes
me toward prioritizing the unqualified variant is that
the user could have typed the qualified variant themselves,
but they didn't. So we should perhaps prioritize the
form they typed, which is unqualified.
2025-12-16 08:00:04 -05:00
Andrew Gallant
43d983ecae [ty] Add tests for how qualified auto-imports are shown
Specifically, we want to test that something like `import typing`
should only be shown when we are actually going to insert an import.
*And* that when we insert a qualified name, then we should show it
as such in the completion suggestions.
2025-12-16 08:00:04 -05:00
Andrew Gallant
5c69bb564c [ty] Add a test capturing sub-optimal auto-import heuristic
Specifically, here, we'd probably like to add `TypedDict` to
the existing `from typing import ...` statement instead of
using the fully qualified `typing.TypedDict` form.

To test this, we add another snapshot mode for including imports
that a completion will insert when selected.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant
89fed85a8d [ty] Switch completion tests to use "insertion" text in snapshots
I think this gives us better tests overall, since it
tells us what is actually going to be inserted. Plus,
we'll want this in the next commit.
2025-12-16 08:00:04 -05:00
Zanie Blue
051f6896ac [ty] Remove extra headings and split examples in the overrides configuration docs (#21994)
Having these as markdown headings ends up being weird in the reference
documentation, e.g., before:

<img width="1071" height="779" alt="Screenshot 2025-12-15 at 8 45 25 PM"
src="https://github.com/user-attachments/assets/2118d4f1-f557-46f3-a4b6-56c406cf9aca"
/>
2025-12-16 06:57:06 -06:00
David Peter
5b1d3ac9b9 [ty] Document TY_CONFIG_FILE (#22001) 2025-12-16 13:15:24 +01:00
Micha Reiser
b2b0ad38ea [ty] Cache KnownClass::to_class_literal (#22000) 2025-12-16 13:04:12 +01:00
Micha Reiser
01c0a3e960 [ty] Fix benchmark assertion (#22003) 2025-12-16 12:24:54 +01:00
Zanie Blue
5c942119f8 Add uv and ty to the Ruff README (#21996)
Matching the style of the uv README
2025-12-16 10:37:15 +01:00
David Peter
2acf1cc0fd [ty] Infer precise types for isinstance(…) calls involving typevars (#21999)
## Summary

Infer `Literal[True]` for `isinstance(x, C)` calls when `x: T` and `T`
has a bound `B` that satisfies the `isinstance` check against `C`.
Similar for constrained typevars.

closes https://github.com/astral-sh/ty/issues/1895
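A minimal illustration of the described inference:

```python
class Base: ...
class Sub(Base): ...

def f[T: Sub](x: T) -> None:
    # The bound `Sub` always satisfies an isinstance check against `Base`,
    # so the call can be inferred as `Literal[True]`.
    reveal_type(isinstance(x, Base))
```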

## Test Plan

* New Markdown tests
* Verified that the example in the linked ticket checks without errors
2025-12-16 10:34:30 +01:00
Micha Reiser
4fdbe26445 [ty] Use FxHashMap in Signature::has_relation_to (#21997) 2025-12-16 10:10:45 +01:00
Charlie Marsh
682d29c256 [ty] Avoid enforcing standalone expression for tests in f-strings (#21967)
## Summary

Based on what we do elsewhere and my understanding of "standalone"
here...

Closes https://github.com/astral-sh/ty/issues/1865.
2025-12-15 22:31:04 -05:00
Zanie Blue
8e13765b57 [ty] Use title for configuration code fences in ty reference documentation (#21992)
Part of https://github.com/astral-sh/ty/pull/1904
2025-12-15 16:36:08 -05:00
Douglas Creager
7d3b7c5754 [ty] Consistent ordering of constraint set specializations, take 2 (#21983)
In https://github.com/astral-sh/ruff/pull/21957, we tried to use
`union_or_intersection_elements_ordering` to provide a stable ordering
of the union and intersection elements that are created when determining
which type a typevar should specialize to. @AlexWaygood [pointed
out](https://github.com/astral-sh/ruff/pull/21551#discussion_r2616543762)
that this won't work, since that provides a consistent ordering within a
single process run, but does not provide a stable ordering across runs.

This is an attempt to produce a proper stable ordering for constraint
sets, so that we end up with consistent diagnostic and test output.

We do this by maintaining a new `source_order` field on each interior
BDD node, which records when that node's constraint was added to the
set. Several of the BDD operators (`and`, `or`, etc) now have
`_with_offset` variants, which update each `source_order` in the rhs to
be larger than any of the `source_order`s in the lhs. This is what
causes that field to be in line with (a) when you add each constraint to
the set, and (b) the order of the parameters you provide to `and`, `or`,
etc. Then we sort by that new field before constructing the
union/intersection types when creating a specialization.
2025-12-15 14:24:08 -05:00
RasmusNygren
d6a5bbd91c [ty] Remove invalid statement-keyword completions in for-statements (#21979)
In `for x in <CURSOR>` statements it's only valid to provide expressions
that eventually evaluate to an iterable. While it's extremely difficult
to know if something can evaluate to an iterable in the general case,
there are some suggestions we know can never lead to an iterable. Most
keywords are such, and hence we remove them here.

## Summary
This suppresses statement-keywords from auto-complete suggestions in
`for x in <CURSOR>` statements where we know they can never be valid, as
whatever is typed has to (at some point) evaluate to an iterable.

It handles the core issue from
https://github.com/astral-sh/ty/issues/1774, but there are a lot of related
cases that probably have to be handled piecewise.

## Test Plan
New tests and verifying in the playground.
2025-12-15 12:56:34 -05:00
Micha Reiser
1df6544ad8 [ty] Avoid caching trivial is-redundant-with calls (#21989) 2025-12-15 18:45:03 +01:00
Dylan
4e1cf5747a Fluent formatting of method chains (#21369)
This PR implements a modification (in preview) to fluent formatting for
method chains: We break _at_ the first call instead of _after_.

For example, we have the following diff between `main` and this PR (with
`line-length=8` so I don't have to stretch out the text):

```diff
 x = (
-    df.merge()
+    df
+    .merge()
     .groupby()
     .agg()
     .filter()
 )
```

## Explanation of current implementation

Recall that we traverse the AST to apply formatting. A method chain,
while read left-to-right, is stored in the AST "in reverse". So if we
start with something like

```python
a.b.c.d().e.f()
```

then the first syntax node we meet is essentially `.f()`. So we have to
peek ahead. And we actually _already_ do this in our current fluent
formatting logic: we peek ahead to count how many calls we have in the
chain to see whether we should be using fluent formatting or not.

In this implementation, we actually _record_ this number inside the enum
for `CallChainLayout`. That is, we make the variant `Fluent` hold an
`AttributeState`. This state can either be:

- The number of call-like attributes preceding the current attribute
- The state `FirstCallOrSubscript` which means we are at the first
call-like attribute in the chain (reading from left to right)
- The state `BeforeFirstCallOrSubscript` which means we are in the
"first group" of attributes, preceding that first call.

In our example, here's what it looks like at each attribute:

```
a.b.c.d().e.f @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d().e @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d @ Fluent(FirstCallOrSubscript)
a.b.c @ Fluent(BeforeFirstCallOrSubscript)
a.b @ Fluent(BeforeFirstCallOrSubscript)
```

Now, as we descend down from the parent expression, we pass along this
little piece of state and modify it as we go to track where we are. This
state doesn't do anything except when we are in `FirstCallOrSubscript`,
in which case we add a soft line break.

Closes #8598

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-15 09:29:50 -06:00
Douglas Creager
cbfecfaf41 [ty] Avoid stack overflow when calculating inferable typevars (#21971)
When we calculate which typevars are inferable in a generic context, the
result might include more than the typevars bound by the generic
context. The canonical example is a generic method of a generic class:

```py
class C[A]:
    def method[T](self, t: T): ...
```

Here, the inferable typevar set of `method` contains `Self` and `T`, as
you'd expect. (Those are the typevars bound by the method.) But it also
contains `A@C`, since the implicit `Self` typevar is defined as `Self:
C[A]`. That means when we call `method`, we need to mark `A@C` as
inferable, so that we can determine the correct mapping for `A@C` at the
call site.

Fixes https://github.com/astral-sh/ty/issues/1874
2025-12-15 10:25:33 -05:00
Aria Desires
8f530a7ab0 [ty] Add "qualify ..." code fix for undefined references (#21968)
## Summary

If `import warnings` exists in the file, we will suggest an edit of
`deprecated -> warnings.deprecated` as "qualify warnings.deprecated"

## Test Plan

Should test more cases...
2025-12-15 10:14:36 -05:00
Micha Reiser
5372bb3440 [ty] Use jemalloc on linux (#21975)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 16:04:34 +01:00
Micha Reiser
d08e414179 Update MSRV to 1.90 (#21987) 2025-12-15 14:29:11 +01:00
Alex Waygood
0b918ae4d5 [ty] Improve check enforcing that an overloaded function must have an implementation (#21978)
## Summary

- Treat `if TYPE_CHECKING` blocks the same as stub files (the feature
requested in https://github.com/astral-sh/ty/issues/1216)
- We currently only allow `@abstractmethod`-decorated methods to omit
the implementation if they're methods in classes that have _exactly_
`ABCMeta` as their metaclass. That seems wrong -- `@abstractmethod` has
the same semantics if a class has a subclass of `ABCMeta` as its
metaclass. This PR fixes that too. (I'm actually not _totally_ sure we
should care what the class's metaclass is at all -- see discussion in
https://github.com/astral-sh/ty/issues/1877#issue-3725937441... but the
change this PR is making seems less wrong than what we have currently,
anyway.)

Fixes https://github.com/astral-sh/ty/issues/1216
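For example, overloads like these should no longer require an implementation:

```python
from typing import TYPE_CHECKING, overload

if TYPE_CHECKING:
    # Inside an `if TYPE_CHECKING` block, overloads may omit the implementation,
    # just as they can in a stub file (the real implementation may live
    # elsewhere, e.g. in an extension module).
    @overload
    def parse(data: bytes) -> str: ...
    @overload
    def parse(data: str) -> str: ...
```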

## Test Plan

Mdtests and snapshots
2025-12-15 08:56:35 +00:00
renovate[bot]
9838f81baf Update actions/checkout digest to 8e8c483 (#21982)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-15 06:52:52 +00:00
Dhruv Manilawala
ba47349c2e [ty] Use ParamSpec without the attr for inferable check (#21934)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1820

## Test Plan

Add new mdtests.

Ecosystem changes remove all false positives.
2025-12-15 11:04:28 +05:30
Bhuminjay Soni
04f9949711 [ty] Emit diagnostic when a type variable with a default is followed by one without a default (#21787)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-14 19:35:37 +00:00
Leandro Braga
8bc753b842 [ty] Fix callout syntax in configuration mkdocs (#1875) (#21961) 2025-12-14 10:21:54 +01:00
Peter Law
c7eea1f2e3 Update debug_assert which pointed at missing method (#21969)
## Summary

I assume that the class has been renamed or split since this assertion
was created.

## Test Plan

Compiled locally, nothing more. Relying on CI given the triviality of
this change.
2025-12-13 17:56:59 -05:00
Charlie Marsh
be8eb92946 [ty] Add support for __qualname__ and other implicit class attributes (#21966)
## Summary

Closes https://github.com/astral-sh/ty/issues/1873
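An illustrative case (assuming `__qualname__` on a class literal is among the newly supported attributes):

```python
class Outer:
    class Inner: ...

# Implicit class attributes such as `__qualname__` are now understood.
reveal_type(Outer.Inner.__qualname__)  # str
```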
2025-12-13 17:10:25 -05:00
Simon Lamon
a544c59186 [ty] Emit a diagnostic when frozen dataclass inherits a non-frozen dataclass and the other way around (#21962)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-13 20:59:26 +00:00
Alex Waygood
bb464ed924 [ty] Use unqualified names for displays of TypeAliasTypes and unbound ParamSpecs/TypeVars (#21960) 2025-12-13 20:23:16 +00:00
Alex Waygood
f57917becd fix typo in fuzz/README.md (#21963) 2025-12-13 18:21:46 +00:00
David Peter
82a7598aa8 [ty] Remove now-unnecessary Divergent check (#21935)
## Summary

This check is not necessary thanks to
https://github.com/astral-sh/ruff/pull/21906.
2025-12-13 16:32:09 +01:00
Micha Reiser
e2ec2bc306 Use datatest for formatter tests (#21933) 2025-12-13 08:02:22 +00:00
Douglas Creager
b413a6dec4 [ty] Allow gradual lower/upper bounds in a constraint set (#21957)
We now allow the lower and upper bounds of a constraint to be gradual.
Before, we would take the top/bottom materializations of the bounds.
This required us to pass in whether the constraint was intended for a
subtyping check or an assignability check, since that would control
whether we took the "restrictive" or "permissive" materializations,
respectively.

Unfortunately, doing so means that we lost information about whether the
original query involves a non-fully-static type. This would cause us to
create specializations like `T = object` for the constraint `T ≤ Any`,
when it would be nicer to carry through the gradual type and produce `T
= Any`.

We're not currently using constraint sets for subtyping checks, nor are
we going to in the very near future. So for now, we're going to assume
that constraint sets are always used for assignability checks, and allow
the lower/upper bounds to not be fully static. Once we get to the point
where we need to use constraint sets for subtyping checks, we will
consider how best to record this information in constraints.
2025-12-12 22:18:30 -05:00
Shunsuke Shibayama
e19c050386 [ty] disallow explicit specialization of type variables themselves (#21938)
## Summary

This PR makes explicit specialization of a type variable itself an
error, and the result of the specialization is `Unknown`.

The change also fixes https://github.com/astral-sh/ty/issues/1794.

## Test Plan

mdtests updated
new corpus test

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-12 15:49:20 -08:00
Alex Waygood
5a2aba237b [ty] Improve diagnostics for unsupported binary operations and unsupported augmented assignments (#21947)
## Summary

This PR takes the improvements we made to unsupported-comparison
diagnostics in https://github.com/astral-sh/ruff/pull/21737, and extends
them to other `unsupported-operator` diagnostics.

## Test Plan

Mdtests and snapshots
2025-12-12 21:53:29 +00:00
Aria Desires
ca5f099481 [ty] update implicit root docs (#21955)
## Summary

./tests is now no longer an implicit root, per
https://github.com/astral-sh/ruff/pull/21817
2025-12-12 16:30:23 -05:00
Alex Waygood
a722df6a73 [ty] Enable even more goto-definition on inlay hints (#21950)
## Summary

Working on py-fuzzer recently (AKA, a Python project!) reminded me how
cool our "inlay hint goto-definition feature" is. So this PR adds a
bunch more of that!

I also made a couple of other minor changes to type display. For
example, in the playground, this snippet:

```py
def f(): ...
reveal_type(f.__get__)
```

currently leads to this diagnostic:

```
Revealed type: `<method-wrapper `__get__` of `f`>` (revealed-type) [Ln 2, Col 13]
```

But the fact that we have backticks both around the type display and
inside the type display isn't _great_ there. This PR changes it to

```
Revealed type: `<method-wrapper '__get__' of function 'f'>` (revealed-type) [Ln 2, Col 13]
```

which avoids the nested-backticks issue in diagnostics, and is more
similar to our display for various other `Type` variants such as
class-literal types (`<class 'Foo'>`, etc., not ``<class `Foo`>``).

## Test Plan

inlay snapshots added; mdtests updated
2025-12-12 12:57:38 -05:00
Brent Westbrook
dec4154c8a Document known lambda formatting deviations from Black (#21954)
Summary
--

Following #8179, we now format long lambda expressions a bit more like
Black, preferring to keep long parameter lists on a single line, but we
go one step further to break the body itself across multiple lines and
parenthesize it if it's still too long. This PR documents both the
stable deviation that breaks parameters across multiple lines, and the
new preview deviation that breaks the body instead.

I also fixed a couple of typos in the section immediately above my
addition.

Test Plan
--

I tested all of the snippets here against `main` for the preview
behavior, our playground for the stable behavior, and Black's playground
for their behavior
2025-12-12 12:57:09 -05:00
Carl Meyer
69d1bfbebc [ty] fix hover type on named expression target (#21952)
## Summary

What it says on the tin.

## Test Plan

Added hover test.
2025-12-12 09:30:50 -08:00
Micha Reiser
90b29c9e87 Bump benchmark dependencies (#21951) 2025-12-12 17:05:57 +00:00
Brent Westbrook
0ebdebddd8 Keep lambda parameters on one line and parenthesize the body if it expands (#21385)
## Summary

This PR makes two changes to our formatting of `lambda` expressions:
1. We now parenthesize the body expression if it expands
2. We now try to keep the parameters on a single line

The latter of these fixes #8179:

Black formatting and this PR's formatting:

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        ),
    )
```

Stable Ruff formatting

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self,
        *args,
        **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs),
    )
```

We don't parenthesize the body expression here because the call to
`aaaa...` has its own parentheses, but adding a binary operator shows
the new parenthesization:

```diff
@@ -3,7 +3,7 @@
         c,
         d,
         e,
-        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
-            *args, **kwargs
-        ) + 1,
+        f=lambda self, *args, **kwargs: (
+            aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
+        ),
     )
```

This is actually a new divergence from Black, which formats this input
like this:

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        )
        + 1,
    )
```

But I think this is an improvement, unlike the case from #8179.

One other, smaller benefit is that because we now add parentheses to
lambda bodies, we also remove redundant parentheses:

```diff
 @pytest.mark.parametrize(
     "f",
     [
-        lambda x: (x.expanding(min_periods=5).cov(x, pairwise=True)),
-        lambda x: (x.expanding(min_periods=5).corr(x, pairwise=True)),
+        lambda x: x.expanding(min_periods=5).cov(x, pairwise=True),
+        lambda x: x.expanding(min_periods=5).corr(x, pairwise=True),
     ],
 )
 def test_moment_functions_zero_length_pairwise(f):
```

## Test Plan

New tests taken from #8465 and probably a few more I should grab from
the ecosystem results.

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-12 12:02:25 -05:00
Aria Desires
d5546508cf [ty] Improve resolution of absolute imports in tests (#21817)
By teaching desperate resolution to try every possible ancestor that
doesn't have an `__init__.py(i)` when resolving absolute imports.

* Fixes https://github.com/astral-sh/ty/issues/1782
2025-12-12 11:59:06 -05:00
Andrew Gallant
3ac58b47bd [ty] Support __all__ += submodule.__all__
... and also `__all__.extend(submodule.__all__)`.

I originally left out support for this since I was unclear on whether
we'd really need it. But it turns out this is used somewhat frequently.
For example, in `numpy`.

See the comments on the new `Imports` type for how we approach this.
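For reference, the now-supported pattern looks like:

```python
# package/__init__.py
from . import submodule

__all__ = ["top_level_helper"]
__all__ += submodule.__all__           # now understood
__all__.extend(submodule.__all__)      # the .extend() form is handled too
```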
2025-12-12 10:11:04 -05:00
Andrew Gallant
a2b138e789 [ty] Change frequency of invalid __all__ debug message
This was being emitted for every symbol we checked, which
is clearly too frequent. This switches to emitting it once
per module.
2025-12-12 10:11:04 -05:00
Alex Waygood
ff0ed4e752 [ty] Add KnownUnion::to_type() (#21948) 2025-12-12 14:06:35 +00:00
Micha Reiser
bc8efa2fd8 [ty] Classify cls as class parameter (#21944) 2025-12-12 13:54:37 +01:00
Micha Reiser
4249736d74 [ty] Stabilize rename (#21940) 2025-12-12 13:52:47 +01:00
Andrew Gallant
0181568fb5 [ty] Ignore __all__ for document and workspace symbol requests
We also ignore names introduced by import statements, which seems to
match pylance behavior.

Fixes astral-sh/ty#1856
2025-12-12 07:29:29 -05:00
Micha Reiser
8cc7c993de [ty] Attach db to background request handler task (#21941) 2025-12-12 11:31:13 +00:00
Micha Reiser
315bf80eed [ty] Fix outdated version in publish diagnostics after didChange (#21943) 2025-12-12 11:30:56 +00:00
Carl Meyer
0138cd238a [ty] avoid fixpoint unioning of types containing current-cycle Divergent (#21910)
Partially addresses https://github.com/astral-sh/ty/issues/1732

## Summary

Don't union the previous type in fixpoint iteration if the previous type
contains a `Divergent` from the current cycle and the latest type does
not. The theory here, as outlined by @mtshiba at
https://github.com/astral-sh/ty/issues/1732#issuecomment-3609937420, is
that oscillation can't occur by removing and then reintroducing a
`Divergent` type repeatedly, since `Divergent` types are only introduced
at the start of fixpoint iteration.

## Test Plan

Removes a `Divergent` type from the added mdtest and doesn't otherwise
regress any tests.
2025-12-11 19:52:34 -08:00
Shunsuke Shibayama
5e42926eee [ty] improve bad specialization results & error messages (#21840)
## Summary

This PR includes the following changes:

* When attempting to specialize a non-generic type (or a type that is
already specialized), the result is `Unknown`. Also, the error message
is improved.
* When an implicit type alias is incorrectly specialized, the result is
`Unknown`. Also, the error message is improved.
* When only some of the type alias bounds and constraints are not
satisfied, not all substitutions are `Unknown`.
* Double specialization is prohibited. e.g. `G[int][int]`

Furthermore, after applying this PR, the fuzzing tests for seeds 1052
and 4419, which panic in main, now pass.
This is because the false recursions on type variables have been
removed.

```python
# name_2[0] => Unknown
class name_1[name_2: name_2[0]]:
    def name_4(name_3: name_2, /):
        if name_3:
            pass

#  (name_5 if unique_name_0 else name_1)[0] => Unknown
def name_4[name_5: (name_5 if unique_name_0 else name_1)[0], **name_1](): ...
```

## Test Plan

New corpus test
mdtest files updated
2025-12-11 19:21:34 -08:00
Jack O'Connor
ddb7645e9d [ty] support NewTypes of float and complex (#21886)
Fixes https://github.com/astral-sh/ty/issues/1818.
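A minimal example of the newly supported pattern:

```python
from typing import NewType

Seconds = NewType("Seconds", float)  # a NewType over `float`, as in the fix

def wait(delay: Seconds) -> None: ...

wait(Seconds(1.5))
```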
2025-12-12 00:43:09 +00:00
325 changed files with 20685 additions and 5683 deletions

View File

@@ -4,5 +4,6 @@
# Enable off-by-default rules.
[rules]
possibly-unresolved-reference = "warn"
possibly-missing-import = "warn"
unused-ignore-comment = "warn"
division-by-zero = "warn"

View File

@@ -60,7 +60,7 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive
@@ -123,7 +123,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive
@@ -174,7 +174,7 @@ jobs:
outputs:
val: ${{ steps.host.outputs.manifest }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive
@@ -250,7 +250,7 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive

View File

@@ -67,7 +67,7 @@ jobs:
cd ..
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@55df3c868f3fa9ab34cff0498dd6106722aac205"
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@2e1816eac09c90140b1ba51d19afc5f59da460f5"
ecosystem-analyzer \
--repository ruff \

View File

@@ -52,7 +52,7 @@ jobs:
cd ..
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@55df3c868f3fa9ab34cff0498dd6106722aac205"
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@2e1816eac09c90140b1ba51d19afc5f59da460f5"
ecosystem-analyzer \
--verbose \

View File

@@ -1,5 +1,54 @@
# Changelog
## 0.14.10
Released on 2025-12-18.
### Preview features
- [formatter] Fluent formatting of method chains ([#21369](https://github.com/astral-sh/ruff/pull/21369))
- [formatter] Keep lambda parameters on one line and parenthesize the body if it expands ([#21385](https://github.com/astral-sh/ruff/pull/21385))
- \[`flake8-implicit-str-concat`\] New rule to prevent implicit string concatenation in collections (`ISC004`) ([#21972](https://github.com/astral-sh/ruff/pull/21972))
- \[`flake8-use-pathlib`\] Make fixes unsafe when types change in compound statements (`PTH104`, `PTH105`, `PTH109`, `PTH115`) ([#22009](https://github.com/astral-sh/ruff/pull/22009))
- \[`refurb`\] Extend support for `Path.open` (`FURB101`, `FURB103`) ([#21080](https://github.com/astral-sh/ruff/pull/21080))
### Bug fixes
- \[`pyupgrade`\] Fix parsing named Unicode escape sequences (`UP032`) ([#21901](https://github.com/astral-sh/ruff/pull/21901))
### Rule changes
- \[`eradicate`\] Ignore `ruff:disable` and `ruff:enable` comments in `ERA001` ([#22038](https://github.com/astral-sh/ruff/pull/22038))
- \[`flake8-pytest-style`\] Allow `match` and `check` keyword arguments without an expected exception type (`PT010`) ([#21964](https://github.com/astral-sh/ruff/pull/21964))
- [syntax-errors] Annotated name cannot be global ([#20868](https://github.com/astral-sh/ruff/pull/20868))
### Documentation
- Add `uv` and `ty` to the Ruff README ([#21996](https://github.com/astral-sh/ruff/pull/21996))
- Document known lambda formatting deviations from Black ([#21954](https://github.com/astral-sh/ruff/pull/21954))
- Update `setup.md` ([#22024](https://github.com/astral-sh/ruff/pull/22024))
- \[`flake8-bandit`\] Fix broken link (`S704`) ([#22039](https://github.com/astral-sh/ruff/pull/22039))
### Other changes
- Fix playground Share button showing "Copied!" before clipboard copy completes ([#21942](https://github.com/astral-sh/ruff/pull/21942))
### Contributors
- [@dylwil3](https://github.com/dylwil3)
- [@charliecloudberry](https://github.com/charliecloudberry)
- [@charliermarsh](https://github.com/charliermarsh)
- [@chirizxc](https://github.com/chirizxc)
- [@ntBre](https://github.com/ntBre)
- [@zanieb](https://github.com/zanieb)
- [@amyreese](https://github.com/amyreese)
- [@hauntsaninja](https://github.com/hauntsaninja)
- [@11happy](https://github.com/11happy)
- [@mahiro72](https://github.com/mahiro72)
- [@MichaReiser](https://github.com/MichaReiser)
- [@phongddo](https://github.com/phongddo)
- [@PeterJCLaw](https://github.com/PeterJCLaw)
## 0.14.9
Released on 2025-12-11.

86
Cargo.lock generated
View File

@@ -254,6 +254,21 @@ dependencies = [
"syn",
]
[[package]]
name = "bit-set"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3"
dependencies = [
"bit-vec",
]
[[package]]
name = "bit-vec"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e764a1d40d510daf35e07be9eb06e75770908c27d411ee6c92109c9840eaaf7"
[[package]]
name = "bitflags"
version = "1.3.2"
@@ -944,6 +959,18 @@ dependencies = [
"parking_lot_core",
]
[[package]]
name = "datatest-stable"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a867d7322eb69cf3a68a5426387a25b45cb3b9c5ee41023ee6cea92e2afadd82"
dependencies = [
"camino",
"fancy-regex",
"libtest-mimic 0.8.1",
"walkdir",
]
[[package]]
name = "derive-where"
version = "1.6.0"
@@ -977,27 +1004,6 @@ dependencies = [
"crypto-common",
]
[[package]]
name = "dir-test"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62c013fe825864f3e4593f36426c1fa7a74f5603f13ca8d1af7a990c1cd94a79"
dependencies = [
"dir-test-macros",
]
[[package]]
name = "dir-test-macros"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d42f54d7b4a6bc2400fe5b338e35d1a335787585375322f49c5d5fe7b243da7e"
dependencies = [
"glob",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "dirs"
version = "6.0.0"
@@ -1138,6 +1144,17 @@ dependencies = [
"windows-sys 0.61.0",
]
[[package]]
name = "fancy-regex"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e24cb5a94bcae1e5408b0effca5cd7172ea3c5755049c5f3af4cd283a165298"
dependencies = [
"bit-set",
"regex-automata",
"regex-syntax",
]
[[package]]
name = "fastrand"
version = "2.3.0"
@@ -1625,7 +1642,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46fdb647ebde000f43b5b53f773c30cf9b0cb4300453208713fa38b2c70935a0"
dependencies = [
"console 0.15.11",
"globset",
"once_cell",
"pest",
"pest_derive",
@@ -1633,7 +1649,6 @@ dependencies = [
"ron",
"serde",
"similar",
"walkdir",
]
[[package]]
@@ -1919,6 +1934,18 @@ dependencies = [
"threadpool",
]
[[package]]
name = "libtest-mimic"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5297962ef19edda4ce33aaa484386e0a5b3d7f2f4e037cbeee00503ef6b29d33"
dependencies = [
"anstream",
"anstyle",
"clap",
"escape8259",
]
[[package]]
name = "linux-raw-sys"
version = "0.11.0"
@@ -2860,7 +2887,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.14.9"
version = "0.14.10"
dependencies = [
"anyhow",
"argfile",
@@ -3118,7 +3145,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.14.9"
version = "0.14.10"
dependencies = [
"aho-corasick",
"anyhow",
@@ -3278,6 +3305,7 @@ dependencies = [
"anyhow",
"clap",
"countme",
"datatest-stable",
"insta",
"itertools 0.14.0",
"memchr",
@@ -3347,6 +3375,7 @@ dependencies = [
"bitflags 2.10.0",
"bstr",
"compact_str",
"datatest-stable",
"get-size2",
"insta",
"itertools 0.14.0",
@@ -3475,7 +3504,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.14.9"
version = "0.14.10"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -4311,7 +4340,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5fe242ee9e646acec9ab73a5c540e8543ed1b107f0ce42be831e0775d423c396"
dependencies = [
"ignore",
"libtest-mimic",
"libtest-mimic 0.7.3",
"snapbox",
]
@@ -4340,6 +4369,7 @@ dependencies = [
"ruff_python_trivia",
"salsa",
"tempfile",
"tikv-jemallocator",
"toml",
"tracing",
"tracing-flame",
@@ -4462,7 +4492,7 @@ dependencies = [
"camino",
"colored 3.0.0",
"compact_str",
"dir-test",
"datatest-stable",
"drop_bomb",
"get-size2",
"glob",

View File

@@ -5,7 +5,7 @@ resolver = "2"
[workspace.package]
# Please update rustfmt.toml when bumping the Rust edition
edition = "2024"
rust-version = "1.89"
rust-version = "1.90"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@@ -81,7 +81,7 @@ compact_str = "0.9.0"
criterion = { version = "0.7.0", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
dir-test = { version = "0.4.0" }
datatest-stable = { version = "0.3.3" }
dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" }
etcetera = { version = "0.11.0" }

View File

@@ -57,8 +57,11 @@ Ruff is extremely actively developed and used in major open-source projects like
...and [many more](#whos-using-ruff).
Ruff is backed by [Astral](https://astral.sh). Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff),
or the original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
Ruff is backed by [Astral](https://astral.sh), the creators of
[uv](https://github.com/astral-sh/uv) and [ty](https://github.com/astral-sh/ty).
Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff), or the
original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
## Testimonials
@@ -147,8 +150,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.14.9/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.9/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.14.10/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.10/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -181,7 +184,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.9
rev: v0.14.10
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -4,6 +4,7 @@ extend-exclude = [
"crates/ty_vendored/vendor/**/*",
"**/resources/**/*",
"**/snapshots/**/*",
"crates/ruff_linter/src/rules/flake8_implicit_str_concat/rules/collection_literal.rs",
# Completion tests tend to have a lot of incomplete
# words naturally. It's annoying to have to make all
# of them actually words. So just ignore typos here.

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.14.9"
version = "0.14.10"
publish = true
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -10,7 +10,7 @@ use anyhow::bail;
use clap::builder::Styles;
use clap::builder::styling::{AnsiColor, Effects};
use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{Parser, Subcommand, command};
use clap::{Parser, Subcommand};
use colored::Colorize;
use itertools::Itertools;
use path_absolutize::path_dedot;

View File

@@ -9,7 +9,7 @@ use std::sync::mpsc::channel;
use anyhow::Result;
use clap::CommandFactory;
use colored::Colorize;
use log::{error, warn};
use log::error;
use notify::{RecursiveMode, Watcher, recommended_watcher};
use args::{GlobalConfigArgs, ServerCommand};

View File

@@ -194,7 +194,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
13030,
13100,
);
static TANJUN: Benchmark = Benchmark::new(
@@ -223,7 +223,7 @@ static STATIC_FRAME: Benchmark = Benchmark::new(
max_dep_date: "2025-08-09",
python_version: PythonVersion::PY311,
},
950,
1100,
);
#[track_caller]

View File

@@ -1,5 +1,6 @@
use glob::PatternError;
use ruff_notebook::{Notebook, NotebookError};
use rustc_hash::FxHashMap;
use std::panic::RefUnwindSafe;
use std::sync::{Arc, Mutex};
@@ -20,18 +21,44 @@ use super::walk_directory::WalkDirectoryBuilder;
///
/// ## Warning
/// Don't use this system for production code. It's intended for testing only.
#[derive(Debug, Clone)]
#[derive(Debug)]
pub struct TestSystem {
inner: Arc<dyn WritableSystem + RefUnwindSafe + Send + Sync>,
/// Environment variable overrides. If a key is present here, it takes precedence
/// over the inner system's environment variables.
env_overrides: Arc<Mutex<FxHashMap<String, Option<String>>>>,
}
impl Clone for TestSystem {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
env_overrides: self.env_overrides.clone(),
}
}
}
impl TestSystem {
pub fn new(inner: impl WritableSystem + RefUnwindSafe + Send + Sync + 'static) -> Self {
Self {
inner: Arc::new(inner),
env_overrides: Arc::new(Mutex::new(FxHashMap::default())),
}
}
/// Sets an environment variable override. This takes precedence over the inner system.
pub fn set_env_var(&self, name: impl Into<String>, value: impl Into<String>) {
self.env_overrides
.lock()
.unwrap()
.insert(name.into(), Some(value.into()));
}
/// Removes an environment variable override, making it appear as not set.
pub fn remove_env_var(&self, name: impl Into<String>) {
self.env_overrides.lock().unwrap().insert(name.into(), None);
}
/// Returns the [`InMemorySystem`].
///
/// ## Panics
@@ -147,6 +174,18 @@ impl System for TestSystem {
self.system().case_sensitivity()
}
fn env_var(&self, name: &str) -> std::result::Result<String, std::env::VarError> {
// Check overrides first
if let Some(override_value) = self.env_overrides.lock().unwrap().get(name) {
return match override_value {
Some(value) => Ok(value.clone()),
None => Err(std::env::VarError::NotPresent),
};
}
// Fall back to inner system
self.system().env_var(name)
}
fn dyn_clone(&self) -> Box<dyn System> {
Box::new(self.clone())
}
@@ -156,6 +195,7 @@ impl Default for TestSystem {
fn default() -> Self {
Self {
inner: Arc::new(InMemorySystem::default()),
env_overrides: Arc::new(Mutex::new(FxHashMap::default())),
}
}
}

View File

@@ -144,8 +144,8 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n');
if let Some(deprecated) = &field.deprecated {
output.push_str("> [!WARN] \"Deprecated\"\n");
output.push_str("> This option has been deprecated");
output.push_str("!!! warning \"Deprecated\"\n");
output.push_str(" This option has been deprecated");
if let Some(since) = deprecated.since {
write!(output, " in {since}").unwrap();
@@ -166,8 +166,9 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n');
let _ = writeln!(output, "**Type**: `{}`", field.value_type);
output.push('\n');
output.push_str("**Example usage** (`pyproject.toml`):\n\n");
output.push_str("**Example usage**:\n\n");
output.push_str(&format_example(
"pyproject.toml",
&format_header(
field.scope,
field.example,
@@ -179,11 +180,11 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n');
}
fn format_example(header: &str, content: &str) -> String {
fn format_example(title: &str, header: &str, content: &str) -> String {
if header.is_empty() {
format!("```toml\n{content}\n```\n",)
format!("```toml title=\"{title}\"\n{content}\n```\n",)
} else {
format!("```toml\n{header}\n{content}\n```\n",)
format!("```toml title=\"{title}\"\n{header}\n{content}\n```\n",)
}
}

View File

@@ -119,7 +119,7 @@ fn generate_markdown() -> String {
let _ = writeln!(
&mut output,
r#"<small>
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of '{level}'."><code>{level}</code></a> ·
Default level: <a href="../../rules#rule-levels" title="This lint has a default level of '{level}'."><code>{level}</code></a> ·
{status_text} ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20{encoded_name}" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/{file}#L{line}" target="_blank">View source</a>

View File

@@ -39,7 +39,7 @@ impl Edit {
/// Creates an edit that replaces the content in `range` with `content`.
pub fn range_replacement(content: String, range: TextRange) -> Self {
debug_assert!(!content.is_empty(), "Prefer `Fix::deletion`");
debug_assert!(!content.is_empty(), "Prefer `Edit::deletion`");
Self {
content: Some(Box::from(content)),

View File

@@ -337,7 +337,7 @@ macro_rules! best_fitting {
#[cfg(test)]
mod tests {
use crate::prelude::*;
use crate::{FormatState, SimpleFormatOptions, VecBuffer, write};
use crate::{FormatState, SimpleFormatOptions, VecBuffer};
struct TestFormat;
@@ -385,8 +385,8 @@ mod tests {
#[test]
fn best_fitting_variants_print_as_lists() {
use crate::Formatted;
use crate::prelude::*;
use crate::{Formatted, format, format_args};
// The second variant below should be selected when printing at a width of 30
let formatted_best_fitting = format!(

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.9"
version = "0.14.10"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -0,0 +1,66 @@
facts = (
"Lobsters have blue blood.",
"The liver is the only human organ that can fully regenerate itself.",
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon.",
)
facts = [
"Lobsters have blue blood.",
"The liver is the only human organ that can fully regenerate itself.",
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon.",
]
facts = {
"Lobsters have blue blood.",
"The liver is the only human organ that can fully regenerate itself.",
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon.",
}
facts = {
(
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon."
),
}
facts = (
"Octopuses have three hearts."
# Missing comma here.
"Honey never spoils.",
)
facts = [
"Octopuses have three hearts."
# Missing comma here.
"Honey never spoils.",
]
facts = {
"Octopuses have three hearts."
# Missing comma here.
"Honey never spoils.",
}
facts = (
(
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon."
),
)
facts = [
(
"Clarinets are made almost entirely out of wood from the mpingo tree."
"In 1971, astronaut Alan Shepard played golf on the moon."
),
]
facts = (
"Lobsters have blue blood.\n"
"The liver is the only human organ that can fully regenerate itself.\n"
"Clarinets are made almost entirely out of wood from the mpingo tree.\n"
"In 1971, astronaut Alan Shepard played golf on the moon.\n"
)

View File

@@ -9,3 +9,15 @@ def test_ok():
def test_error():
with pytest.raises(UnicodeError):
pass
def test_match_only():
with pytest.raises(match="some error message"):
pass
def test_check_only():
with pytest.raises(check=lambda e: True):
pass
def test_match_and_check():
with pytest.raises(match="some error message", check=lambda e: True):
pass

View File

@@ -136,4 +136,38 @@ os.chmod("pth1_file", 0o700, None, True, 1, *[1], **{"x": 1}, foo=1)
os.rename("pth1_file", "pth1_file1", None, None, 1, *[1], **{"x": 1}, foo=1)
os.replace("pth1_file1", "pth1_file", None, None, 1, *[1], **{"x": 1}, foo=1)
os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
# See: https://github.com/astral-sh/ruff/issues/21794
import sys
if os.rename("pth1.py", "pth1.py.bak"):
print("rename: truthy")
else:
print("rename: falsey")
if os.replace("pth1.py.bak", "pth1.py"):
print("replace: truthy")
else:
print("replace: falsey")
try:
for _ in os.getcwd():
print("getcwd: iterable")
break
except TypeError as e:
print("getcwd: not iterable")
try:
for _ in os.getcwdb():
print("getcwdb: iterable")
break
except TypeError as e:
print("getcwdb: not iterable")
try:
for _ in os.readlink(sys.executable):
print("readlink: iterable")
break
except TypeError as e:
print("readlink: not iterable")

View File

@@ -132,7 +132,6 @@ async def c():
# Non-errors
###
# False-negative: RustPython doesn't parse the `\N{snowman}`.
"\N{snowman} {}".format(a)
"{".format(a)
@@ -276,3 +275,6 @@ if __name__ == "__main__":
number = 0
string = "{}".format(number := number + 1)
print(string)
# Unicode escape
"\N{angle}AOB = {angle}°".format(angle=180)

View File

@@ -138,5 +138,6 @@ with open("file.txt", encoding="utf-8") as f:
with open("file.txt", encoding="utf-8") as f:
contents = process_contents(f.read())
with open("file.txt", encoding="utf-8") as f:
with open("file1.txt", encoding="utf-8") as f:
contents: str = process_contents(f.read())

View File

@@ -0,0 +1,8 @@
from pathlib import Path
with Path("file.txt").open() as f:
contents = f.read()
with Path("file.txt").open("r") as f:
contents = f.read()

View File

@@ -0,0 +1,26 @@
from pathlib import Path
with Path("file.txt").open("w") as f:
f.write("test")
with Path("file.txt").open("wb") as f:
f.write(b"test")
with Path("file.txt").open(mode="w") as f:
f.write("test")
with Path("file.txt").open("w", encoding="utf8") as f:
f.write("test")
with Path("file.txt").open("w", errors="ignore") as f:
f.write("test")
with Path(foo()).open("w") as f:
f.write("test")
p = Path("file.txt")
with p.open("w") as f:
f.write("test")
with Path("foo", "bar", "baz").open("w") as f:
f.write("test")

View File

@@ -86,3 +86,26 @@ def f():
# Multiple codes but none are used
# ruff: disable[E741, F401, F841]
print("hello")
def f():
# Unknown rule codes
# ruff: disable[YF829]
# ruff: disable[F841, RQW320]
value = 0
# ruff: enable[F841, RQW320]
# ruff: enable[YF829]
def f():
# External rule codes should be ignored
# ruff: disable[TK421]
print("hello")
# ruff: enable[TK421]
def f():
# Empty or missing rule codes
# ruff: disable
# ruff: disable[]
print("hello")

View File

@@ -0,0 +1,38 @@
a: int = 1
def f1():
global a
a: str = "foo" # error
b: int = 1
def outer():
def inner():
global b
b: str = "nested" # error
c: int = 1
def f2():
global c
c: list[str] = [] # error
d: int = 1
def f3():
global d
d: str # error
e: int = 1
def f4():
e: str = "happy" # okay
global f
f: int = 1 # okay
g: int = 1
global g # error
class C:
x: str
global x # error
class D:
global x # error
x: str

View File

@@ -214,6 +214,13 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
range: _,
node_index: _,
}) => {
if checker.is_rule_enabled(Rule::ImplicitStringConcatenationInCollectionLiteral) {
flake8_implicit_str_concat::rules::implicit_string_concatenation_in_collection_literal(
checker,
expr,
elts,
);
}
if ctx.is_store() {
let check_too_many_expressions =
checker.is_rule_enabled(Rule::ExpressionsInStarAssignment);
@@ -1329,6 +1336,13 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
}
}
Expr::Set(set) => {
if checker.is_rule_enabled(Rule::ImplicitStringConcatenationInCollectionLiteral) {
flake8_implicit_str_concat::rules::implicit_string_concatenation_in_collection_literal(
checker,
expr,
&set.elts,
);
}
if checker.is_rule_enabled(Rule::DuplicateValue) {
flake8_bugbear::rules::duplicate_value(checker, set);
}

View File

@@ -454,6 +454,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8ImplicitStrConcat, "001") => rules::flake8_implicit_str_concat::rules::SingleLineImplicitStringConcatenation,
(Flake8ImplicitStrConcat, "002") => rules::flake8_implicit_str_concat::rules::MultiLineImplicitStringConcatenation,
(Flake8ImplicitStrConcat, "003") => rules::flake8_implicit_str_concat::rules::ExplicitStringConcatenation,
(Flake8ImplicitStrConcat, "004") => rules::flake8_implicit_str_concat::rules::ImplicitStringConcatenationInCollectionLiteral,
// flake8-print
(Flake8Print, "1") => rules::flake8_print::rules::Print,
@@ -1063,6 +1064,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "100") => rules::ruff::rules::UnusedNOQA,
(Ruff, "101") => rules::ruff::rules::RedirectedNOQA,
(Ruff, "102") => rules::ruff::rules::InvalidRuleCode,
(Ruff, "103") => rules::ruff::rules::InvalidSuppressionComment,
(Ruff, "104") => rules::ruff::rules::UnmatchedSuppressionComment,
(Ruff, "200") => rules::ruff::rules::InvalidPyprojectToml,
#[cfg(any(feature = "test-rules", test))]

View File

@@ -286,12 +286,7 @@ pub(crate) fn add_argument(argument: &str, arguments: &Arguments, tokens: &Token
/// Generic function to add a (regular) parameter to a function definition.
pub(crate) fn add_parameter(parameter: &str, parameters: &Parameters, source: &str) -> Edit {
if let Some(last) = parameters
.args
.iter()
.filter(|arg| arg.default.is_none())
.next_back()
{
if let Some(last) = parameters.args.iter().rfind(|arg| arg.default.is_none()) {
// Case 1: at least one regular parameter, so append after the last one.
Edit::insertion(format!(", {parameter}"), last.end())
} else if !parameters.args.is_empty() {

View File

@@ -1001,6 +1001,7 @@ mod tests {
#[test_case(Path::new("write_to_debug.py"), PythonVersion::PY310)]
#[test_case(Path::new("invalid_expression.py"), PythonVersion::PY312)]
#[test_case(Path::new("global_parameter.py"), PythonVersion::PY310)]
#[test_case(Path::new("annotated_global.py"), PythonVersion::PY314)]
fn test_semantic_errors(path: &Path, python_version: PythonVersion) -> Result<()> {
let snapshot = format!(
"semantic_syntax_error_{}_{}",

View File

@@ -22,6 +22,7 @@ static ALLOWLIST_REGEX: LazyLock<Regex> = LazyLock::new(|| {
# Case-sensitive
pyright
| pyrefly
| ruff\s*:\s*(disable|enable)
| mypy:
| type:\s*ignore
| SPDX-License-Identifier:
@@ -148,6 +149,8 @@ mod tests {
assert!(!comment_contains_code("# 123", &[]));
assert!(!comment_contains_code("# 123.1", &[]));
assert!(!comment_contains_code("# 1, 2, 3", &[]));
assert!(!comment_contains_code("# ruff: disable[E501]", &[]));
assert!(!comment_contains_code("#ruff:enable[E501, F84]", &[]));
assert!(!comment_contains_code(
"# pylint: disable=redefined-outer-name",
&[]

View File

@@ -70,7 +70,7 @@ fn is_open_call(func: &Expr, semantic: &SemanticModel) -> bool {
}
/// Returns `true` if an expression resolves to a call to `pathlib.Path.open`.
fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
pub(crate) fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
let Expr::Attribute(ast::ExprAttribute { attr, value, .. }) = func else {
return false;
};

View File

@@ -18,7 +18,7 @@ mod async_zero_sleep;
mod blocking_http_call;
mod blocking_http_call_httpx;
mod blocking_input;
mod blocking_open_call;
pub(crate) mod blocking_open_call;
mod blocking_path_methods;
mod blocking_process_invocation;
mod blocking_sleep;

View File

@@ -12,7 +12,7 @@ use crate::{checkers::ast::Checker, settings::LinterSettings};
/// Checks for non-literal strings being passed to [`markupsafe.Markup`][markupsafe-markup].
///
/// ## Why is this bad?
/// [`markupsafe.Markup`] does not perform any escaping, so passing dynamic
/// [`markupsafe.Markup`][markupsafe-markup] does not perform any escaping, so passing dynamic
/// content, like f-strings, variables or interpolated strings will potentially
/// lead to XSS vulnerabilities.
///

View File

@@ -32,6 +32,10 @@ mod tests {
Path::new("ISC_syntax_error_2.py")
)]
#[test_case(Rule::ExplicitStringConcatenation, Path::new("ISC.py"))]
#[test_case(
Rule::ImplicitStringConcatenationInCollectionLiteral,
Path::new("ISC004.py")
)]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -0,0 +1,103 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{Expr, StringLike};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::{Edit, Fix, FixAvailability, Violation};
/// ## What it does
/// Checks for implicitly concatenated strings inside list, tuple, and set literals.
///
/// ## Why is this bad?
/// In collection literals, implicit string concatenation is often the result of
/// a missing comma between elements, which can silently merge items together.
///
/// ## Example
/// ```python
/// facts = (
/// "Lobsters have blue blood.",
/// "The liver is the only human organ that can fully regenerate itself.",
/// "Clarinets are made almost entirely out of wood from the mpingo tree."
/// "In 1971, astronaut Alan Shepard played golf on the moon.",
/// )
/// ```
///
/// Instead, you likely intended:
/// ```python
/// facts = (
/// "Lobsters have blue blood.",
/// "The liver is the only human organ that can fully regenerate itself.",
/// "Clarinets are made almost entirely out of wood from the mpingo tree.",
/// "In 1971, astronaut Alan Shepard played golf on the moon.",
/// )
/// ```
///
/// If the concatenation is intentional, wrap it in parentheses to make it
/// explicit:
/// ```python
/// facts = (
/// "Lobsters have blue blood.",
/// "The liver is the only human organ that can fully regenerate itself.",
/// (
/// "Clarinets are made almost entirely out of wood from the mpingo tree."
/// "In 1971, astronaut Alan Shepard played golf on the moon."
/// ),
/// )
/// ```
///
/// ## Fix safety
/// The fix is safe in that it does not change the semantics of your code.
/// However, the issue is that you may often want to change semantics
/// by adding a missing comma.
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.10")]
pub(crate) struct ImplicitStringConcatenationInCollectionLiteral;
impl Violation for ImplicitStringConcatenationInCollectionLiteral {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Always;
#[derive_message_formats]
fn message(&self) -> String {
"Unparenthesized implicit string concatenation in collection".to_string()
}
fn fix_title(&self) -> Option<String> {
Some("Wrap implicitly concatenated strings in parentheses".to_string())
}
}
/// ISC004
pub(crate) fn implicit_string_concatenation_in_collection_literal(
checker: &Checker,
expr: &Expr,
elements: &[Expr],
) {
for element in elements {
let Ok(string_like) = StringLike::try_from(element) else {
continue;
};
if !string_like.is_implicit_concatenated() {
continue;
}
if parenthesized_range(
string_like.as_expression_ref(),
expr.into(),
checker.tokens(),
)
.is_some()
{
continue;
}
let mut diagnostic = checker.report_diagnostic(
ImplicitStringConcatenationInCollectionLiteral,
string_like.range(),
);
diagnostic.help("Did you forget a comma?");
diagnostic.set_fix(Fix::unsafe_edits(
Edit::insertion("(".to_string(), string_like.range().start()),
[Edit::insertion(")".to_string(), string_like.range().end())],
));
}
}

View File

@@ -1,5 +1,7 @@
pub(crate) use collection_literal::*;
pub(crate) use explicit::*;
pub(crate) use implicit::*;
mod collection_literal;
mod explicit;
mod implicit;

View File

@@ -0,0 +1,149 @@
---
source: crates/ruff_linter/src/rules/flake8_implicit_str_concat/mod.rs
---
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:4:5
|
2 | "Lobsters have blue blood.",
3 | "The liver is the only human organ that can fully regenerate itself.",
4 | / "Clarinets are made almost entirely out of wood from the mpingo tree."
5 | | "In 1971, astronaut Alan Shepard played golf on the moon.",
| |______________________________________________________________^
6 | )
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
1 | facts = (
2 | "Lobsters have blue blood.",
3 | "The liver is the only human organ that can fully regenerate itself.",
- "Clarinets are made almost entirely out of wood from the mpingo tree."
- "In 1971, astronaut Alan Shepard played golf on the moon.",
4 + ("Clarinets are made almost entirely out of wood from the mpingo tree."
5 + "In 1971, astronaut Alan Shepard played golf on the moon."),
6 | )
7 |
8 | facts = [
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:11:5
|
9 | "Lobsters have blue blood.",
10 | "The liver is the only human organ that can fully regenerate itself.",
11 | / "Clarinets are made almost entirely out of wood from the mpingo tree."
12 | | "In 1971, astronaut Alan Shepard played golf on the moon.",
| |______________________________________________________________^
13 | ]
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
8 | facts = [
9 | "Lobsters have blue blood.",
10 | "The liver is the only human organ that can fully regenerate itself.",
- "Clarinets are made almost entirely out of wood from the mpingo tree."
- "In 1971, astronaut Alan Shepard played golf on the moon.",
11 + ("Clarinets are made almost entirely out of wood from the mpingo tree."
12 + "In 1971, astronaut Alan Shepard played golf on the moon."),
13 | ]
14 |
15 | facts = {
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:18:5
|
16 | "Lobsters have blue blood.",
17 | "The liver is the only human organ that can fully regenerate itself.",
18 | / "Clarinets are made almost entirely out of wood from the mpingo tree."
19 | | "In 1971, astronaut Alan Shepard played golf on the moon.",
| |______________________________________________________________^
20 | }
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
15 | facts = {
16 | "Lobsters have blue blood.",
17 | "The liver is the only human organ that can fully regenerate itself.",
- "Clarinets are made almost entirely out of wood from the mpingo tree."
- "In 1971, astronaut Alan Shepard played golf on the moon.",
18 + ("Clarinets are made almost entirely out of wood from the mpingo tree."
19 + "In 1971, astronaut Alan Shepard played golf on the moon."),
20 | }
21 |
22 | facts = {
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:30:5
|
29 | facts = (
30 | / "Octopuses have three hearts."
31 | | # Missing comma here.
32 | | "Honey never spoils.",
| |_________________________^
33 | )
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
27 | }
28 |
29 | facts = (
- "Octopuses have three hearts."
30 + ("Octopuses have three hearts."
31 | # Missing comma here.
- "Honey never spoils.",
32 + "Honey never spoils."),
33 | )
34 |
35 | facts = [
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:36:5
|
35 | facts = [
36 | / "Octopuses have three hearts."
37 | | # Missing comma here.
38 | | "Honey never spoils.",
| |_________________________^
39 | ]
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
33 | )
34 |
35 | facts = [
- "Octopuses have three hearts."
36 + ("Octopuses have three hearts."
37 | # Missing comma here.
- "Honey never spoils.",
38 + "Honey never spoils."),
39 | ]
40 |
41 | facts = {
note: This is an unsafe fix and may change runtime behavior
ISC004 [*] Unparenthesized implicit string concatenation in collection
--> ISC004.py:42:5
|
41 | facts = {
42 | / "Octopuses have three hearts."
43 | | # Missing comma here.
44 | | "Honey never spoils.",
| |_________________________^
45 | }
|
help: Wrap implicitly concatenated strings in parentheses
help: Did you forget a comma?
39 | ]
40 |
41 | facts = {
- "Octopuses have three hearts."
42 + ("Octopuses have three hearts."
43 | # Missing comma here.
- "Honey never spoils.",
44 + "Honey never spoils."),
45 | }
46 |
47 | facts = (
note: This is an unsafe fix and may change runtime behavior

View File

@@ -125,6 +125,9 @@ impl Violation for PytestRaisesTooBroad {
/// ## Why is this bad?
/// `pytest.raises` expects to receive an expected exception as its first
/// argument. If omitted, the `pytest.raises` call will fail at runtime.
/// The rule will also accept calls without an expected exception but with
/// `match` and/or `check` keyword arguments, which are also valid after
/// pytest version 8.4.0.
///
/// ## Example
/// ```python
@@ -181,6 +184,8 @@ pub(crate) fn raises_call(checker: &Checker, call: &ast::ExprCall) {
.arguments
.find_argument("expected_exception", 0)
.is_none()
&& call.arguments.find_keyword("match").is_none()
&& call.arguments.find_keyword("check").is_none()
{
checker.report_diagnostic(PytestRaisesWithoutException, call.func.range());
}

View File

@@ -146,7 +146,7 @@ fn reverse_comparison(expr: &Expr, locator: &Locator, stylist: &Stylist) -> Resu
let left = (*comparison.left).clone();
// Copy the right side to the left side.
comparison.left = Box::new(comparison.comparisons[0].comparator.clone());
*comparison.left = comparison.comparisons[0].comparator.clone();
// Copy the left side to the right side.
comparison.comparisons[0].comparator = left;

View File

@@ -210,6 +210,7 @@ pub(crate) fn is_argument_non_default(arguments: &Arguments, name: &str, positio
/// Returns `true` if the given call is a top-level expression in its statement.
/// This means the call's return value is not used, so return type changes don't matter.
pub(crate) fn is_top_level_expression_call(checker: &Checker) -> bool {
pub(crate) fn is_top_level_expression_in_statement(checker: &Checker) -> bool {
checker.semantic().current_expression_parent().is_none()
&& checker.semantic().current_statement().is_expr_stmt()
}
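For context, a hedged Python sketch of the distinction this renamed helper draws (the calls are illustrative):

```python
import os

# Bare expression statement: the return value is discarded, so rewriting to
# `pathlib.Path.cwd()` does not change observable behavior.
os.getcwd()

# The `str` return value is used; rewriting to `Path.cwd()` would change its
# type to `Path`, so the autofix is marked unsafe in this position.
cwd = os.getcwd()
```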

View File

@@ -6,7 +6,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::importer::ImportRequest;
use crate::preview::is_fix_os_getcwd_enabled;
use crate::rules::flake8_use_pathlib::helpers::is_top_level_expression_call;
use crate::rules::flake8_use_pathlib::helpers::is_top_level_expression_in_statement;
use crate::{FixAvailability, Violation};
/// ## What it does
@@ -89,7 +89,7 @@ pub(crate) fn os_getcwd(checker: &Checker, call: &ExprCall, segments: &[&str]) {
// Unsafe when the fix would delete comments or change a used return value
let applicability = if checker.comment_ranges().intersects(range)
|| !is_top_level_expression_call(checker)
|| !is_top_level_expression_in_statement(checker)
{
Applicability::Unsafe
} else {

View File

@@ -6,7 +6,7 @@ use crate::checkers::ast::Checker;
use crate::preview::is_fix_os_readlink_enabled;
use crate::rules::flake8_use_pathlib::helpers::{
check_os_pathlib_single_arg_calls, is_keyword_only_argument_non_default,
is_top_level_expression_call,
is_top_level_expression_in_statement,
};
use crate::{FixAvailability, Violation};
@@ -86,7 +86,7 @@ pub(crate) fn os_readlink(checker: &Checker, call: &ExprCall, segments: &[&str])
return;
}
let applicability = if !is_top_level_expression_call(checker) {
let applicability = if !is_top_level_expression_in_statement(checker) {
// Unsafe because the return type changes (str/bytes -> Path)
Applicability::Unsafe
} else {

View File

@@ -6,7 +6,7 @@ use crate::checkers::ast::Checker;
use crate::preview::is_fix_os_rename_enabled;
use crate::rules::flake8_use_pathlib::helpers::{
check_os_pathlib_two_arg_calls, has_unknown_keywords_or_starred_expr,
is_keyword_only_argument_non_default, is_top_level_expression_call,
is_keyword_only_argument_non_default, is_top_level_expression_in_statement,
};
use crate::{FixAvailability, Violation};
@@ -92,7 +92,7 @@ pub(crate) fn os_rename(checker: &Checker, call: &ExprCall, segments: &[&str]) {
);
// Unsafe when the fix would delete comments or change a used return value
let applicability = if !is_top_level_expression_call(checker) {
let applicability = if !is_top_level_expression_in_statement(checker) {
// Unsafe because the return type changes (None -> Path)
Applicability::Unsafe
} else {

View File

@@ -6,7 +6,7 @@ use crate::checkers::ast::Checker;
use crate::preview::is_fix_os_replace_enabled;
use crate::rules::flake8_use_pathlib::helpers::{
check_os_pathlib_two_arg_calls, has_unknown_keywords_or_starred_expr,
is_keyword_only_argument_non_default, is_top_level_expression_call,
is_keyword_only_argument_non_default, is_top_level_expression_in_statement,
};
use crate::{FixAvailability, Violation};
@@ -95,7 +95,7 @@ pub(crate) fn os_replace(checker: &Checker, call: &ExprCall, segments: &[&str])
);
// Unsafe when the fix would delete comments or change a used return value
let applicability = if !is_top_level_expression_call(checker) {
let applicability = if !is_top_level_expression_in_statement(checker) {
// Unsafe because the return type changes (None -> Path)
Applicability::Unsafe
} else {

View File

@@ -567,5 +567,64 @@ PTH121 `os.path.samefile()` should be replaced by `Path.samefile()`
138 |
139 | os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
| ^^^^^^^^^^^^^^^^
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
|
help: Replace with `Path(...).samefile()`
PTH104 `os.rename()` should be replaced by `Path.rename()`
--> full_name.py:144:4
|
142 | import sys
143 |
144 | if os.rename("pth1.py", "pth1.py.bak"):
| ^^^^^^^^^
145 | print("rename: truthy")
146 | else:
|
help: Replace with `Path(...).rename(...)`
PTH105 `os.replace()` should be replaced by `Path.replace()`
--> full_name.py:149:4
|
147 | print("rename: falsey")
148 |
149 | if os.replace("pth1.py.bak", "pth1.py"):
| ^^^^^^^^^^
150 | print("replace: truthy")
151 | else:
|
help: Replace with `Path(...).replace(...)`
PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:155:14
|
154 | try:
155 | for _ in os.getcwd():
| ^^^^^^^^^
156 | print("getcwd: iterable")
157 | break
|
help: Replace with `Path.cwd()`
PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:162:14
|
161 | try:
162 | for _ in os.getcwdb():
| ^^^^^^^^^^
163 | print("getcwdb: iterable")
164 | break
|
help: Replace with `Path.cwd()`
PTH115 `os.readlink()` should be replaced by `Path.readlink()`
--> full_name.py:169:14
|
168 | try:
169 | for _ in os.readlink(sys.executable):
| ^^^^^^^^^^^
170 | print("readlink: iterable")
171 | break
|
help: Replace with `Path(...).readlink()`

View File

@@ -1037,5 +1037,142 @@ PTH121 `os.path.samefile()` should be replaced by `Path.samefile()`
138 |
139 | os.path.samefile("pth1_file", "pth1_link", 1, *[1], **{"x": 1}, foo=1)
| ^^^^^^^^^^^^^^^^
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
|
help: Replace with `Path(...).samefile()`
PTH104 [*] `os.rename()` should be replaced by `Path.rename()`
--> full_name.py:144:4
|
142 | import sys
143 |
144 | if os.rename("pth1.py", "pth1.py.bak"):
| ^^^^^^^^^
145 | print("rename: truthy")
146 | else:
|
help: Replace with `Path(...).rename(...)`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
- if os.rename("pth1.py", "pth1.py.bak"):
145 + if pathlib.Path("pth1.py").rename("pth1.py.bak"):
146 | print("rename: truthy")
147 | else:
148 | print("rename: falsey")
note: This is an unsafe fix and may change runtime behavior
PTH105 [*] `os.replace()` should be replaced by `Path.replace()`
--> full_name.py:149:4
|
147 | print("rename: falsey")
148 |
149 | if os.replace("pth1.py.bak", "pth1.py"):
| ^^^^^^^^^^
150 | print("replace: truthy")
151 | else:
|
help: Replace with `Path(...).replace(...)`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
147 | else:
148 | print("rename: falsey")
149 |
- if os.replace("pth1.py.bak", "pth1.py"):
150 + if pathlib.Path("pth1.py.bak").replace("pth1.py"):
151 | print("replace: truthy")
152 | else:
153 | print("replace: falsey")
note: This is an unsafe fix and may change runtime behavior
PTH109 [*] `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:155:14
|
154 | try:
155 | for _ in os.getcwd():
| ^^^^^^^^^
156 | print("getcwd: iterable")
157 | break
|
help: Replace with `Path.cwd()`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
--------------------------------------------------------------------------------
153 | print("replace: falsey")
154 |
155 | try:
- for _ in os.getcwd():
156 + for _ in pathlib.Path.cwd():
157 | print("getcwd: iterable")
158 | break
159 | except TypeError as e:
note: This is an unsafe fix and may change runtime behavior
PTH109 [*] `os.getcwd()` should be replaced by `Path.cwd()`
--> full_name.py:162:14
|
161 | try:
162 | for _ in os.getcwdb():
| ^^^^^^^^^^
163 | print("getcwdb: iterable")
164 | break
|
help: Replace with `Path.cwd()`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
--------------------------------------------------------------------------------
160 | print("getcwd: not iterable")
161 |
162 | try:
- for _ in os.getcwdb():
163 + for _ in pathlib.Path.cwd():
164 | print("getcwdb: iterable")
165 | break
166 | except TypeError as e:
note: This is an unsafe fix and may change runtime behavior
PTH115 [*] `os.readlink()` should be replaced by `Path.readlink()`
--> full_name.py:169:14
|
168 | try:
169 | for _ in os.readlink(sys.executable):
| ^^^^^^^^^^^
170 | print("readlink: iterable")
171 | break
|
help: Replace with `Path(...).readlink()`
140 |
141 | # See: https://github.com/astral-sh/ruff/issues/21794
142 | import sys
143 + import pathlib
144 |
145 | if os.rename("pth1.py", "pth1.py.bak"):
146 | print("rename: truthy")
--------------------------------------------------------------------------------
167 | print("getcwdb: not iterable")
168 |
169 | try:
- for _ in os.readlink(sys.executable):
170 + for _ in pathlib.Path(sys.executable).readlink():
171 | print("readlink: iterable")
172 | break
173 | except TypeError as e:
note: This is an unsafe fix and may change runtime behavior

View File

@@ -902,56 +902,76 @@ help: Convert to f-string
132 | # Non-errors
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:160:1
--> UP032_0.py:135:1
|
158 | r'"\N{snowman} {}".format(a)'
159 |
160 | / "123456789 {}".format(
161 | | 11111111111111111111111111111111111111111111111111111111111111111111111111,
162 | | )
| |_^
163 |
164 | """
133 | ###
134 |
135 | "\N{snowman} {}".format(a)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
136 |
137 | "{".format(a)
|
help: Convert to f-string
157 |
158 | r'"\N{snowman} {}".format(a)'
159 |
132 | # Non-errors
133 | ###
134 |
- "\N{snowman} {}".format(a)
135 + f"\N{snowman} {a}"
136 |
137 | "{".format(a)
138 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:159:1
|
157 | r'"\N{snowman} {}".format(a)'
158 |
159 | / "123456789 {}".format(
160 | | 11111111111111111111111111111111111111111111111111111111111111111111111111,
161 | | )
| |_^
162 |
163 | """
|
help: Convert to f-string
156 |
157 | r'"\N{snowman} {}".format(a)'
158 |
- "123456789 {}".format(
- 11111111111111111111111111111111111111111111111111111111111111111111111111,
- )
160 + f"123456789 {11111111111111111111111111111111111111111111111111111111111111111111111111}"
161 |
162 | """
163 | {}
159 + f"123456789 {11111111111111111111111111111111111111111111111111111111111111111111111111}"
160 |
161 | """
162 | {}
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:164:1
--> UP032_0.py:163:1
|
162 | )
163 |
164 | / """
161 | )
162 |
163 | / """
164 | | {}
165 | | {}
166 | | {}
167 | | {}
168 | | """.format(
169 | | 1,
170 | | 2,
171 | | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
172 | | )
167 | | """.format(
168 | | 1,
169 | | 2,
170 | | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
171 | | )
| |_^
173 |
174 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
172 |
173 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
|
help: Convert to f-string
161 | 11111111111111111111111111111111111111111111111111111111111111111111111111,
162 | )
163 |
164 + f"""
165 + {1}
166 + {2}
167 + {111111111111111111111111111111111111111111111111111111111111111111111111111111111111111}
168 | """
160 | 11111111111111111111111111111111111111111111111111111111111111111111111111,
161 | )
162 |
163 + f"""
164 + {1}
165 + {2}
166 + {111111111111111111111111111111111111111111111111111111111111111111111111111111111111111}
167 | """
- {}
- {}
- {}
@@ -960,392 +980,408 @@ help: Convert to f-string
- 2,
- 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
- )
169 |
170 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
171 | """.format(
168 |
169 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
170 | """.format(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:174:84
--> UP032_0.py:173:84
|
172 | )
173 |
174 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
171 | )
172 |
173 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
| ____________________________________________________________________________________^
175 | | """.format(
176 | | 111111
177 | | )
174 | | """.format(
175 | | 111111
176 | | )
| |_^
178 |
179 | "{}".format(
177 |
178 | "{}".format(
|
help: Convert to f-string
171 | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
172 | )
173 |
170 | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
171 | )
172 |
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
- """.format(
- 111111
- )
174 + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = f"""{111111}
175 + """
176 |
177 | "{}".format(
178 | [
173 + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = f"""{111111}
174 + """
175 |
176 | "{}".format(
177 | [
UP032 Use f-string instead of `format` call
--> UP032_0.py:202:1
--> UP032_0.py:201:1
|
200 | "{}".format(**c)
201 |
202 | / "{}".format(
203 | | 1 # comment
204 | | )
199 | "{}".format(**c)
200 |
201 | / "{}".format(
202 | | 1 # comment
203 | | )
| |_^
|
help: Convert to f-string
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:209:1
--> UP032_0.py:208:1
|
207 | # The fixed string will exceed the line length, but it's still smaller than the
208 | # existing line length, so it's fine.
209 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
206 | # The fixed string will exceed the line length, but it's still smaller than the
207 | # existing line length, so it's fine.
208 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
210 |
211 | # When fixing, trim the trailing empty string.
209 |
210 | # When fixing, trim the trailing empty string.
|
help: Convert to f-string
206 |
207 | # The fixed string will exceed the line length, but it's still smaller than the
208 | # existing line length, so it's fine.
205 |
206 | # The fixed string will exceed the line length, but it's still smaller than the
207 | # existing line length, so it's fine.
- "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
209 + f"<Customer: {self.internal_ids}, {self.external_ids}, {self.properties}, {self.tags}, {self.others}>"
210 |
211 | # When fixing, trim the trailing empty string.
212 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
208 + f"<Customer: {self.internal_ids}, {self.external_ids}, {self.properties}, {self.tags}, {self.others}>"
209 |
210 | # When fixing, trim the trailing empty string.
211 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:212:18
--> UP032_0.py:211:18
|
211 | # When fixing, trim the trailing empty string.
212 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
210 | # When fixing, trim the trailing empty string.
211 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
213 | | "".format(new_dict, d))
212 | | "".format(new_dict, d))
| |_______________________________________^
214 |
215 | # When fixing, trim the trailing empty string.
213 |
214 | # When fixing, trim the trailing empty string.
|
help: Convert to f-string
209 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
210 |
211 | # When fixing, trim the trailing empty string.
208 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
209 |
210 | # When fixing, trim the trailing empty string.
- raise ValueError("Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d))
212 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
211 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
212 |
213 | # When fixing, trim the trailing empty string.
214 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:215:18
|
214 | # When fixing, trim the trailing empty string.
215 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
216 | | .format(new_dict, d))
| |_____________________________________^
217 |
218 | raise ValueError(
|
help: Convert to f-string
212 | "".format(new_dict, d))
213 |
214 | # When fixing, trim the trailing empty string.
215 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:216:18
|
215 | # When fixing, trim the trailing empty string.
216 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
217 | | .format(new_dict, d))
| |_____________________________________^
218 |
219 | raise ValueError(
|
help: Convert to f-string
213 | "".format(new_dict, d))
214 |
215 | # When fixing, trim the trailing empty string.
- raise ValueError("Conflicting configuration dicts: {!r} {!r}"
- .format(new_dict, d))
216 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
217 + )
218 |
219 | raise ValueError(
220 | "Conflicting configuration dicts: {!r} {!r}"
215 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
216 + )
217 |
218 | raise ValueError(
219 | "Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:220:5
--> UP032_0.py:219:5
|
219 | raise ValueError(
220 | / "Conflicting configuration dicts: {!r} {!r}"
221 | | "".format(new_dict, d)
218 | raise ValueError(
219 | / "Conflicting configuration dicts: {!r} {!r}"
220 | | "".format(new_dict, d)
| |__________________________^
222 | )
221 | )
|
help: Convert to f-string
217 | .format(new_dict, d))
218 |
219 | raise ValueError(
216 | .format(new_dict, d))
217 |
218 | raise ValueError(
- "Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d)
220 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
219 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
220 | )
221 |
222 | raise ValueError(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:224:5
|
223 | raise ValueError(
224 | / "Conflicting configuration dicts: {!r} {!r}"
225 | | "".format(new_dict, d)
| |__________________________^
226 |
227 | )
|
help: Convert to f-string
221 | )
222 |
223 | raise ValueError(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:225:5
|
224 | raise ValueError(
225 | / "Conflicting configuration dicts: {!r} {!r}"
226 | | "".format(new_dict, d)
| |__________________________^
227 |
228 | )
|
help: Convert to f-string
222 | )
223 |
224 | raise ValueError(
- "Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d)
225 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
226 |
227 | )
228 |
224 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
225 |
226 | )
227 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:231:1
--> UP032_0.py:230:1
|
230 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
231 | / (
232 | | "{}"
233 | | "{{}}"
234 | | ).format(a)
229 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
230 | / (
231 | | "{}"
232 | | "{{}}"
233 | | ).format(a)
| |___________^
235 |
236 | ("{}" "{{}}").format(a)
234 |
235 | ("{}" "{{}}").format(a)
|
help: Convert to f-string
229 |
230 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
231 | (
232 + f"{a}"
233 | "{}"
228 |
229 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
230 | (
231 + f"{a}"
232 | "{}"
- "{{}}"
- ).format(a)
234 + )
235 |
236 | ("{}" "{{}}").format(a)
237 |
233 + )
234 |
235 | ("{}" "{{}}").format(a)
236 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:236:1
--> UP032_0.py:235:1
|
234 | ).format(a)
235 |
236 | ("{}" "{{}}").format(a)
233 | ).format(a)
234 |
235 | ("{}" "{{}}").format(a)
| ^^^^^^^^^^^^^^^^^^^^^^^
|
help: Convert to f-string
233 | "{{}}"
234 | ).format(a)
235 |
232 | "{{}}"
233 | ).format(a)
234 |
- ("{}" "{{}}").format(a)
236 + (f"{a}" "{}")
235 + (f"{a}" "{}")
236 |
237 |
238 |
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:240:1
--> UP032_0.py:239:1
|
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
240 | / (
241 | | "{}"
242 | | "{{{}}}"
243 | | ).format(a, b)
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
239 | / (
240 | | "{}"
241 | | "{{{}}}"
242 | | ).format(a, b)
| |______________^
244 |
245 | ("{}" "{{{}}}").format(a, b)
243 |
244 | ("{}" "{{{}}}").format(a, b)
|
help: Convert to f-string
238 |
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
240 | (
237 |
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
239 | (
- "{}"
- "{{{}}}"
- ).format(a, b)
241 + f"{a}"
242 + f"{{{b}}}"
243 + )
244 |
245 | ("{}" "{{{}}}").format(a, b)
246 |
240 + f"{a}"
241 + f"{{{b}}}"
242 + )
243 |
244 | ("{}" "{{{}}}").format(a, b)
245 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:245:1
--> UP032_0.py:244:1
|
243 | ).format(a, b)
244 |
245 | ("{}" "{{{}}}").format(a, b)
242 | ).format(a, b)
243 |
244 | ("{}" "{{{}}}").format(a, b)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
246 |
247 | # The dictionary should be parenthesized.
245 |
246 | # The dictionary should be parenthesized.
|
help: Convert to f-string
242 | "{{{}}}"
243 | ).format(a, b)
244 |
241 | "{{{}}}"
242 | ).format(a, b)
243 |
- ("{}" "{{{}}}").format(a, b)
245 + (f"{a}" f"{{{b}}}")
246 |
247 | # The dictionary should be parenthesized.
248 | "{}".format({0: 1}[0])
244 + (f"{a}" f"{{{b}}}")
245 |
246 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:248:1
--> UP032_0.py:247:1
|
247 | # The dictionary should be parenthesized.
248 | "{}".format({0: 1}[0])
246 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
| ^^^^^^^^^^^^^^^^^^^^^^
249 |
250 | # The dictionary should be parenthesized.
248 |
249 | # The dictionary should be parenthesized.
|
help: Convert to f-string
245 | ("{}" "{{{}}}").format(a, b)
246 |
247 | # The dictionary should be parenthesized.
244 | ("{}" "{{{}}}").format(a, b)
245 |
246 | # The dictionary should be parenthesized.
- "{}".format({0: 1}[0])
248 + f"{({0: 1}[0])}"
249 |
250 | # The dictionary should be parenthesized.
251 | "{}".format({0: 1}.bar)
247 + f"{({0: 1}[0])}"
248 |
249 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:251:1
--> UP032_0.py:250:1
|
250 | # The dictionary should be parenthesized.
251 | "{}".format({0: 1}.bar)
249 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
| ^^^^^^^^^^^^^^^^^^^^^^^
252 |
253 | # The dictionary should be parenthesized.
251 |
252 | # The dictionary should be parenthesized.
|
help: Convert to f-string
248 | "{}".format({0: 1}[0])
249 |
250 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
248 |
249 | # The dictionary should be parenthesized.
- "{}".format({0: 1}.bar)
251 + f"{({0: 1}.bar)}"
252 |
253 | # The dictionary should be parenthesized.
254 | "{}".format({0: 1}())
250 + f"{({0: 1}.bar)}"
251 |
252 | # The dictionary should be parenthesized.
253 | "{}".format({0: 1}())
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:254:1
--> UP032_0.py:253:1
|
253 | # The dictionary should be parenthesized.
254 | "{}".format({0: 1}())
252 | # The dictionary should be parenthesized.
253 | "{}".format({0: 1}())
| ^^^^^^^^^^^^^^^^^^^^^
255 |
256 | # The string shouldn't be converted, since it would require repeating the function call.
254 |
255 | # The string shouldn't be converted, since it would require repeating the function call.
|
help: Convert to f-string
251 | "{}".format({0: 1}.bar)
252 |
253 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
251 |
252 | # The dictionary should be parenthesized.
- "{}".format({0: 1}())
254 + f"{({0: 1}())}"
255 |
256 | # The string shouldn't be converted, since it would require repeating the function call.
257 | "{x} {x}".format(x=foo())
253 + f"{({0: 1}())}"
254 |
255 | # The string shouldn't be converted, since it would require repeating the function call.
256 | "{x} {x}".format(x=foo())
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:261:1
--> UP032_0.py:260:1
|
260 | # The string _should_ be converted, since the function call is repeated in the arguments.
261 | "{0} {1}".format(foo(), foo())
259 | # The string _should_ be converted, since the function call is repeated in the arguments.
260 | "{0} {1}".format(foo(), foo())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
262 |
263 | # The call should be removed, but the string itself should remain.
261 |
262 | # The call should be removed, but the string itself should remain.
|
help: Convert to f-string
258 | "{0} {0}".format(foo())
259 |
260 | # The string _should_ be converted, since the function call is repeated in the arguments.
257 | "{0} {0}".format(foo())
258 |
259 | # The string _should_ be converted, since the function call is repeated in the arguments.
- "{0} {1}".format(foo(), foo())
261 + f"{foo()} {foo()}"
262 |
263 | # The call should be removed, but the string itself should remain.
264 | ''.format(self.project)
260 + f"{foo()} {foo()}"
261 |
262 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:264:1
--> UP032_0.py:263:1
|
263 | # The call should be removed, but the string itself should remain.
264 | ''.format(self.project)
262 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
| ^^^^^^^^^^^^^^^^^^^^^^^
265 |
266 | # The call should be removed, but the string itself should remain.
264 |
265 | # The call should be removed, but the string itself should remain.
|
help: Convert to f-string
261 | "{0} {1}".format(foo(), foo())
262 |
263 | # The call should be removed, but the string itself should remain.
260 | "{0} {1}".format(foo(), foo())
261 |
262 | # The call should be removed, but the string itself should remain.
- ''.format(self.project)
264 + ''
265 |
266 | # The call should be removed, but the string itself should remain.
267 | "".format(self.project)
263 + ''
264 |
265 | # The call should be removed, but the string itself should remain.
266 | "".format(self.project)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:267:1
--> UP032_0.py:266:1
|
266 | # The call should be removed, but the string itself should remain.
267 | "".format(self.project)
265 | # The call should be removed, but the string itself should remain.
266 | "".format(self.project)
| ^^^^^^^^^^^^^^^^^^^^^^^
268 |
269 | # Not a valid type annotation but this test shouldn't result in a panic.
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
|
help: Convert to f-string
264 | ''.format(self.project)
265 |
266 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
264 |
265 | # The call should be removed, but the string itself should remain.
- "".format(self.project)
267 + ""
268 |
269 | # Not a valid type annotation but this test shouldn't result in a panic.
270 | # Refer: https://github.com/astral-sh/ruff/issues/11736
266 + ""
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:271:5
--> UP032_0.py:270:5
|
269 | # Not a valid type annotation but this test shouldn't result in a panic.
270 | # Refer: https://github.com/astral-sh/ruff/issues/11736
271 | x: "'{} + {}'.format(x, y)"
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
270 | x: "'{} + {}'.format(x, y)"
| ^^^^^^^^^^^^^^^^^^^^^^
272 |
273 | # Regression https://github.com/astral-sh/ruff/issues/21000
271 |
272 | # Regression https://github.com/astral-sh/ruff/issues/21000
|
help: Convert to f-string
268 |
269 | # Not a valid type annotation but this test shouldn't result in a panic.
270 | # Refer: https://github.com/astral-sh/ruff/issues/11736
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
- x: "'{} + {}'.format(x, y)"
271 + x: "f'{x} + {y}'"
272 |
273 | # Regression https://github.com/astral-sh/ruff/issues/21000
274 | # Fix should parenthesize walrus
270 + x: "f'{x} + {y}'"
271 |
272 | # Regression https://github.com/astral-sh/ruff/issues/21000
273 | # Fix should parenthesize walrus
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:277:14
--> UP032_0.py:276:14
|
275 | if __name__ == "__main__":
276 | number = 0
277 | string = "{}".format(number := number + 1)
274 | if __name__ == "__main__":
275 | number = 0
276 | string = "{}".format(number := number + 1)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
278 | print(string)
277 | print(string)
|
help: Convert to f-string
274 | # Fix should parenthesize walrus
275 | if __name__ == "__main__":
276 | number = 0
273 | # Fix should parenthesize walrus
274 | if __name__ == "__main__":
275 | number = 0
- string = "{}".format(number := number + 1)
277 + string = f"{(number := number + 1)}"
278 | print(string)
276 + string = f"{(number := number + 1)}"
277 | print(string)
278 |
279 | # Unicode escape
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:280:1
|
279 | # Unicode escape
280 | "\N{angle}AOB = {angle}°".format(angle=180)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Convert to f-string
277 | print(string)
278 |
279 | # Unicode escape
- "\N{angle}AOB = {angle}°".format(angle=180)
280 + f"\N{angle}AOB = {180}°"
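
For orientation, a minimal standalone sketch of the rewrite UP032 proposes, distilled from the `new_dict`/`d` case in the snapshot above (simplified to a plain assignment so it runs on its own):

```python
new_dict, d = {"a": 1}, {"a": 2}

# Before: implicit string concatenation plus str.format, flagged by UP032.
message = (
    "Conflicting configuration dicts: {!r} {!r}"
    "".format(new_dict, d)
)

# After the suggested fix: one f-string with the !r conversions preserved.
message = f"Conflicting configuration dicts: {new_dict!r} {d!r}"
```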

View File

@@ -3,10 +3,11 @@ use std::borrow::Cow;
use ruff_python_ast::PythonVersion;
use ruff_python_ast::{self as ast, Expr, name::Name, token::parenthesized_range};
use ruff_python_codegen::Generator;
use ruff_python_semantic::{BindingId, ResolvedReference, SemanticModel};
use ruff_python_semantic::{ResolvedReference, SemanticModel};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::rules::flake8_async::rules::blocking_open_call::is_open_call_from_pathlib;
use crate::{Applicability, Edit, Fix};
/// Format a code snippet to call `name.method()`.
@@ -119,14 +120,13 @@ impl OpenMode {
pub(super) struct FileOpen<'a> {
/// With item where the open happens, we use it for the reporting range.
pub(super) item: &'a ast::WithItem,
/// Filename expression used as the first argument in `open`, we use it in the diagnostic message.
pub(super) filename: &'a Expr,
/// The file open mode.
pub(super) mode: OpenMode,
/// The file open keywords.
pub(super) keywords: Vec<&'a ast::Keyword>,
/// We only check `open` operations whose file handles are used exactly once.
pub(super) reference: &'a ResolvedReference,
pub(super) argument: OpenArgument<'a>,
}
impl FileOpen<'_> {
@@ -137,6 +137,45 @@ impl FileOpen<'_> {
}
}
#[derive(Debug, Clone, Copy)]
pub(super) enum OpenArgument<'a> {
/// The filename argument to `open`, e.g. "foo.txt" in:
///
/// ```py
/// f = open("foo.txt")
/// ```
Builtin { filename: &'a Expr },
/// The `Path` receiver of a `pathlib.Path.open` call, e.g. the `p` in the
/// context manager in:
///
/// ```py
/// p = Path("foo.txt")
/// with p.open() as f: ...
/// ```
///
/// or `Path("foo.txt")` in
///
/// ```py
/// with Path("foo.txt").open() as f: ...
/// ```
Pathlib { path: &'a Expr },
}
impl OpenArgument<'_> {
pub(super) fn display<'src>(&self, source: &'src str) -> &'src str {
&source[self.range()]
}
}
impl Ranged for OpenArgument<'_> {
fn range(&self) -> TextRange {
match self {
OpenArgument::Builtin { filename } => filename.range(),
OpenArgument::Pathlib { path } => path.range(),
}
}
}
/// Find and return all `open` operations in the given `with` statement.
pub(super) fn find_file_opens<'a>(
with: &'a ast::StmtWith,
@@ -146,10 +185,65 @@ pub(super) fn find_file_opens<'a>(
) -> Vec<FileOpen<'a>> {
with.items
.iter()
.filter_map(|item| find_file_open(item, with, semantic, read_mode, python_version))
.filter_map(|item| {
find_file_open(item, with, semantic, read_mode, python_version)
.or_else(|| find_path_open(item, with, semantic, read_mode, python_version))
})
.collect()
}
fn resolve_file_open<'a>(
item: &'a ast::WithItem,
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
read_mode: bool,
mode: OpenMode,
keywords: Vec<&'a ast::Keyword>,
argument: OpenArgument<'a>,
) -> Option<FileOpen<'a>> {
match mode {
OpenMode::ReadText | OpenMode::ReadBytes => {
if !read_mode {
return None;
}
}
OpenMode::WriteText | OpenMode::WriteBytes => {
if read_mode {
return None;
}
}
}
if matches!(mode, OpenMode::ReadBytes | OpenMode::WriteBytes) && !keywords.is_empty() {
return None;
}
let var = item.optional_vars.as_deref()?.as_name_expr()?;
let scope = semantic.current_scope();
let binding = scope.get_all(var.id.as_str()).find_map(|id| {
let b = semantic.binding(id);
(b.range() == var.range()).then_some(b)
})?;
let references: Vec<&ResolvedReference> = binding
.references
.iter()
.map(|id| semantic.reference(*id))
.filter(|reference| with.range().contains_range(reference.range()))
.collect();
let [reference] = references.as_slice() else {
return None;
};
Some(FileOpen {
item,
mode,
keywords,
reference,
argument,
})
}
/// Find `open` operation in the given `with` item.
fn find_file_open<'a>(
item: &'a ast::WithItem,
@@ -165,8 +259,6 @@ fn find_file_open<'a>(
..
} = item.context_expr.as_call_expr()?;
let var = item.optional_vars.as_deref()?.as_name_expr()?;
// Ignore calls with `*args` and `**kwargs`. In the exact case of `open(*filename, mode="w")`,
// it could be a match; but in all other cases, the call _could_ contain unsupported keyword
// arguments, like `buffering`.
@@ -187,58 +279,57 @@ fn find_file_open<'a>(
let (keywords, kw_mode) = match_open_keywords(keywords, read_mode, python_version)?;
let mode = kw_mode.unwrap_or(pos_mode);
match mode {
OpenMode::ReadText | OpenMode::ReadBytes => {
if !read_mode {
return None;
}
}
OpenMode::WriteText | OpenMode::WriteBytes => {
if read_mode {
return None;
}
}
}
// Path.read_bytes and Path.write_bytes do not support any kwargs.
if matches!(mode, OpenMode::ReadBytes | OpenMode::WriteBytes) && !keywords.is_empty() {
return None;
}
// Now we need to find what is this variable bound to...
let scope = semantic.current_scope();
let bindings: Vec<BindingId> = scope.get_all(var.id.as_str()).collect();
let binding = bindings
.iter()
.map(|id| semantic.binding(*id))
// We might have many bindings with the same name, but we only care
// for the one we are looking at right now.
.find(|binding| binding.range() == var.range())?;
// Since many references can share the same binding, we can limit our attention span
// exclusively to the body of the current `with` statement.
let references: Vec<&ResolvedReference> = binding
.references
.iter()
.map(|id| semantic.reference(*id))
.filter(|reference| with.range().contains_range(reference.range()))
.collect();
// And even with all these restrictions, if the file handle gets used not exactly once,
// it doesn't fit the bill.
let [reference] = references.as_slice() else {
return None;
};
Some(FileOpen {
resolve_file_open(
item,
filename,
with,
semantic,
read_mode,
mode,
keywords,
reference,
})
OpenArgument::Builtin { filename },
)
}
fn find_path_open<'a>(
item: &'a ast::WithItem,
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
read_mode: bool,
python_version: PythonVersion,
) -> Option<FileOpen<'a>> {
let ast::ExprCall {
func,
arguments: ast::Arguments { args, keywords, .. },
..
} = item.context_expr.as_call_expr()?;
if args.iter().any(Expr::is_starred_expr)
|| keywords.iter().any(|keyword| keyword.arg.is_none())
{
return None;
}
if !is_open_call_from_pathlib(func, semantic) {
return None;
}
let attr = func.as_attribute_expr()?;
let mode = if args.is_empty() {
OpenMode::ReadText
} else {
match_open_mode(args.first()?)?
};
let (keywords, kw_mode) = match_open_keywords(keywords, read_mode, python_version)?;
let mode = kw_mode.unwrap_or(mode);
resolve_file_open(
item,
with,
semantic,
read_mode,
mode,
keywords,
OpenArgument::Pathlib {
path: attr.value.as_ref(),
},
)
}
/// Match positional arguments. Return expression for the file name and open mode.
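
As a rough illustration of what the new `OpenArgument::Pathlib` variant and `find_path_open` enable (a sketch distilled from the FURB101_0/FURB101_1 and FURB103_1 fixtures further down, not code from this file): the refurb rules now recognize `Path(...).open()` context managers in addition to the builtin `open(...)` form.

```python
from pathlib import Path

Path("file.txt").write_text("hello")  # create the file so the snippets below run

# Builtin form, matched as OpenArgument::Builtin (already flagged by FURB101).
with open("file.txt") as f:
    contents = f.read()

# Pathlib form, now matched as OpenArgument::Pathlib via find_path_open.
with Path("file.txt").open() as f:
    contents = f.read()

# Suggested replacement for both reads; FURB103 handles writes analogously,
# e.g. `with Path("file.txt").open("w") as f: f.write("test")` becomes
# `Path("file.txt").write_text("test")`.
contents = Path("file.txt").read_text()
```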

View File

@@ -15,7 +15,8 @@ mod tests {
use crate::test::test_path;
use crate::{assert_diagnostics, settings};
#[test_case(Rule::ReadWholeFile, Path::new("FURB101.py"))]
#[test_case(Rule::ReadWholeFile, Path::new("FURB101_0.py"))]
#[test_case(Rule::ReadWholeFile, Path::new("FURB101_1.py"))]
#[test_case(Rule::RepeatedAppend, Path::new("FURB113.py"))]
#[test_case(Rule::IfExpInsteadOfOrOperator, Path::new("FURB110.py"))]
#[test_case(Rule::ReimplementedOperator, Path::new("FURB118.py"))]
@@ -46,7 +47,8 @@ mod tests {
#[test_case(Rule::MetaClassABCMeta, Path::new("FURB180.py"))]
#[test_case(Rule::HashlibDigestHex, Path::new("FURB181.py"))]
#[test_case(Rule::ListReverseCopy, Path::new("FURB187.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103_0.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103_1.py"))]
#[test_case(Rule::FStringNumberFormat, Path::new("FURB116.py"))]
#[test_case(Rule::SortedMinMax, Path::new("FURB192.py"))]
#[test_case(Rule::SliceToRemovePrefixOrSuffix, Path::new("FURB188.py"))]
@@ -65,7 +67,7 @@ mod tests {
#[test]
fn write_whole_file_python_39() -> Result<()> {
let diagnostics = test_path(
Path::new("refurb/FURB103.py"),
Path::new("refurb/FURB103_0.py"),
&settings::LinterSettings::for_rule(Rule::WriteWholeFile)
.with_target_version(PythonVersion::PY39),
)?;

View File

@@ -10,7 +10,7 @@ use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
use crate::importer::ImportRequest;
use crate::rules::refurb::helpers::{FileOpen, find_file_opens};
use crate::rules::refurb::helpers::{FileOpen, OpenArgument, find_file_opens};
use crate::{FixAvailability, Violation};
/// ## What it does
@@ -42,27 +42,41 @@ use crate::{FixAvailability, Violation};
/// - [Python documentation: `Path.read_text`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.read_text)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "v0.1.2")]
pub(crate) struct ReadWholeFile {
pub(crate) struct ReadWholeFile<'a> {
filename: SourceCodeSnippet,
suggestion: SourceCodeSnippet,
argument: OpenArgument<'a>,
}
impl Violation for ReadWholeFile {
impl Violation for ReadWholeFile<'_> {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
format!("`open` and `read` should be replaced by `Path({filename}).{suggestion}`")
match self.argument {
OpenArgument::Pathlib { .. } => {
format!(
"`Path.open()` followed by `read()` can be replaced by `{filename}.{suggestion}`"
)
}
OpenArgument::Builtin { .. } => {
format!("`open` and `read` should be replaced by `Path({filename}).{suggestion}`")
}
}
}
fn fix_title(&self) -> Option<String> {
Some(format!(
"Replace with `Path({}).{}`",
self.filename.truncated_display(),
self.suggestion.truncated_display(),
))
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
match self.argument {
OpenArgument::Pathlib { .. } => Some(format!("Replace with `{filename}.{suggestion}`")),
OpenArgument::Builtin { .. } => {
Some(format!("Replace with `Path({filename}).{suggestion}`"))
}
}
}
}
@@ -114,13 +128,13 @@ impl<'a> Visitor<'a> for ReadMatcher<'a, '_> {
.position(|open| open.is_ref(read_from))
{
let open = self.candidates.remove(open);
let filename_display = open.argument.display(self.checker.source());
let suggestion = make_suggestion(&open, self.checker.generator());
let mut diagnostic = self.checker.report_diagnostic(
ReadWholeFile {
filename: SourceCodeSnippet::from_str(
&self.checker.generator().expr(open.filename),
),
filename: SourceCodeSnippet::from_str(filename_display),
suggestion: SourceCodeSnippet::from_str(&suggestion),
argument: open.argument,
},
open.item.range(),
);
@@ -188,8 +202,6 @@ fn generate_fix(
let locator = checker.locator();
let filename_code = locator.slice(open.filename.range());
let (import_edit, binding) = checker
.importer()
.get_or_import_symbol(
@@ -206,10 +218,15 @@ fn generate_fix(
[Stmt::Assign(ast::StmtAssign { targets, value, .. })] if value.range() == expr.range() => {
match targets.as_slice() {
[Expr::Name(name)] => {
format!(
"{name} = {binding}({filename_code}).{suggestion}",
name = name.id
)
let target = match open.argument {
OpenArgument::Builtin { filename } => {
let filename_code = locator.slice(filename.range());
format!("{binding}({filename_code})")
}
OpenArgument::Pathlib { path } => locator.slice(path.range()).to_string(),
};
format!("{name} = {target}.{suggestion}", name = name.id)
}
_ => return None,
}
@@ -223,8 +240,16 @@ fn generate_fix(
}),
] if value.range() == expr.range() => match target.as_ref() {
Expr::Name(name) => {
let target = match open.argument {
OpenArgument::Builtin { filename } => {
let filename_code = locator.slice(filename.range());
format!("{binding}({filename_code})")
}
OpenArgument::Pathlib { path } => locator.slice(path.range()).to_string(),
};
format!(
"{var}: {ann} = {binding}({filename_code}).{suggestion}",
"{var}: {ann} = {target}.{suggestion}",
var = name.id,
ann = locator.slice(annotation.range())
)

View File

@@ -176,7 +176,7 @@ fn match_consecutive_appends<'a>(
let suite = if semantic.at_top_level() {
// If the statement is at the top level, we should go to the parent module.
// Module is available in the definitions list.
EnclosingSuite::new(semantic.definitions.python_ast()?, stmt)?
EnclosingSuite::new(semantic.definitions.python_ast()?, stmt.into())?
} else {
// Otherwise, go to the parent, and take its body as a sequence of siblings.
semantic

View File

@@ -9,7 +9,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
use crate::importer::ImportRequest;
use crate::rules::refurb::helpers::{FileOpen, find_file_opens};
use crate::rules::refurb::helpers::{FileOpen, OpenArgument, find_file_opens};
use crate::{FixAvailability, Locator, Violation};
/// ## What it does
@@ -42,26 +42,40 @@ use crate::{FixAvailability, Locator, Violation};
/// - [Python documentation: `Path.write_text`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.write_text)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "v0.3.6")]
pub(crate) struct WriteWholeFile {
pub(crate) struct WriteWholeFile<'a> {
filename: SourceCodeSnippet,
suggestion: SourceCodeSnippet,
argument: OpenArgument<'a>,
}
impl Violation for WriteWholeFile {
impl Violation for WriteWholeFile<'_> {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
format!("`open` and `write` should be replaced by `Path({filename}).{suggestion}`")
match self.argument {
OpenArgument::Pathlib { .. } => {
format!(
"`Path.open()` followed by `write()` can be replaced by `{filename}.{suggestion}`"
)
}
OpenArgument::Builtin { .. } => {
format!("`open` and `write` should be replaced by `Path({filename}).{suggestion}`")
}
}
}
fn fix_title(&self) -> Option<String> {
Some(format!(
"Replace with `Path({}).{}`",
self.filename.truncated_display(),
self.suggestion.truncated_display(),
))
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
match self.argument {
OpenArgument::Pathlib { .. } => Some(format!("Replace with `{filename}.{suggestion}`")),
OpenArgument::Builtin { .. } => {
Some(format!("Replace with `Path({filename}).{suggestion}`"))
}
}
}
}
@@ -125,16 +139,15 @@ impl<'a> Visitor<'a> for WriteMatcher<'a, '_> {
.position(|open| open.is_ref(write_to))
{
let open = self.candidates.remove(open);
if self.loop_counter == 0 {
let filename_display = open.argument.display(self.checker.source());
let suggestion = make_suggestion(&open, content, self.checker.locator());
let mut diagnostic = self.checker.report_diagnostic(
WriteWholeFile {
filename: SourceCodeSnippet::from_str(
&self.checker.generator().expr(open.filename),
),
filename: SourceCodeSnippet::from_str(filename_display),
suggestion: SourceCodeSnippet::from_str(&suggestion),
argument: open.argument,
},
open.item.range(),
);
@@ -198,7 +211,6 @@ fn generate_fix(
}
let locator = checker.locator();
let filename_code = locator.slice(open.filename.range());
let (import_edit, binding) = checker
.importer()
@@ -209,7 +221,15 @@ fn generate_fix(
)
.ok()?;
let replacement = format!("{binding}({filename_code}).{suggestion}");
let target = match open.argument {
OpenArgument::Builtin { filename } => {
let filename_code = locator.slice(filename.range());
format!("{binding}({filename_code})")
}
OpenArgument::Pathlib { path } => locator.slice(path.range()).to_string(),
};
let replacement = format!("{target}.{suggestion}");
let applicability = if checker.comment_ranges().intersects(with_stmt.range()) {
Applicability::Unsafe

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:12:6
--> FURB101_0.py:12:6
|
11 | # FURB101
12 | with open("file.txt") as f:
@@ -26,7 +26,7 @@ help: Replace with `Path("file.txt").read_text()`
16 | with open("file.txt", "rb") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:16:6
--> FURB101_0.py:16:6
|
15 | # FURB101
16 | with open("file.txt", "rb") as f:
@@ -50,7 +50,7 @@ help: Replace with `Path("file.txt").read_bytes()`
20 | with open("file.txt", mode="rb") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:20:6
--> FURB101_0.py:20:6
|
19 | # FURB101
20 | with open("file.txt", mode="rb") as f:
@@ -74,7 +74,7 @@ help: Replace with `Path("file.txt").read_bytes()`
24 | with open("file.txt", encoding="utf8") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf8")`
--> FURB101.py:24:6
--> FURB101_0.py:24:6
|
23 | # FURB101
24 | with open("file.txt", encoding="utf8") as f:
@@ -98,7 +98,7 @@ help: Replace with `Path("file.txt").read_text(encoding="utf8")`
28 | with open("file.txt", errors="ignore") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(errors="ignore")`
--> FURB101.py:28:6
--> FURB101_0.py:28:6
|
27 | # FURB101
28 | with open("file.txt", errors="ignore") as f:
@@ -122,7 +122,7 @@ help: Replace with `Path("file.txt").read_text(errors="ignore")`
32 | with open("file.txt", mode="r") as f: # noqa: FURB120
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:32:6
--> FURB101_0.py:32:6
|
31 | # FURB101
32 | with open("file.txt", mode="r") as f: # noqa: FURB120
@@ -147,7 +147,7 @@ help: Replace with `Path("file.txt").read_text()`
note: This is an unsafe fix and may change runtime behavior
FURB101 `open` and `read` should be replaced by `Path(foo()).read_bytes()`
--> FURB101.py:36:6
--> FURB101_0.py:36:6
|
35 | # FURB101
36 | with open(foo(), "rb") as f:
@@ -158,7 +158,7 @@ FURB101 `open` and `read` should be replaced by `Path(foo()).read_bytes()`
help: Replace with `Path(foo()).read_bytes()`
FURB101 `open` and `read` should be replaced by `Path("a.txt").read_text()`
--> FURB101.py:44:6
--> FURB101_0.py:44:6
|
43 | # FURB101
44 | with open("a.txt") as a, open("b.txt", "rb") as b:
@@ -169,7 +169,7 @@ FURB101 `open` and `read` should be replaced by `Path("a.txt").read_text()`
help: Replace with `Path("a.txt").read_text()`
FURB101 `open` and `read` should be replaced by `Path("b.txt").read_bytes()`
--> FURB101.py:44:26
--> FURB101_0.py:44:26
|
43 | # FURB101
44 | with open("a.txt") as a, open("b.txt", "rb") as b:
@@ -180,7 +180,7 @@ FURB101 `open` and `read` should be replaced by `Path("b.txt").read_bytes()`
help: Replace with `Path("b.txt").read_bytes()`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:49:18
--> FURB101_0.py:49:18
|
48 | # FURB101
49 | with foo() as a, open("file.txt") as b, foo() as c:
@@ -191,7 +191,7 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
help: Replace with `Path("file.txt").read_text()`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:130:6
--> FURB101_0.py:130:6
|
129 | # FURB101
130 | with open("file.txt", encoding="utf-8") as f:
@@ -215,7 +215,7 @@ help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
134 | with open("file.txt", encoding="utf-8") as f:
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:134:6
--> FURB101_0.py:134:6
|
133 | # FURB101 but no fix because it would remove the assignment to `x`
134 | with open("file.txt", encoding="utf-8") as f:
@@ -225,7 +225,7 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(enco
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:138:6
--> FURB101_0.py:138:6
|
137 | # FURB101 but no fix because it would remove the `process_contents` call
138 | with open("file.txt", encoding="utf-8") as f:
@@ -234,13 +234,13 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(enco
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:141:6
FURB101 `open` and `read` should be replaced by `Path("file1.txt").read_text(encoding="utf-8")`
--> FURB101_0.py:141:6
|
139 | contents = process_contents(f.read())
140 |
141 | with open("file.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
141 | with open("file1.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
142 | contents: str = process_contents(f.read())
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
help: Replace with `Path("file1.txt").read_text(encoding="utf-8")`

View File

@@ -0,0 +1,39 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB101 [*] `Path.open()` followed by `read()` can be replaced by `Path("file.txt").read_text()`
--> FURB101_1.py:4:6
|
2 | from pathlib import Path
3 |
4 | with Path("file.txt").open() as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5 | contents = f.read()
|
help: Replace with `Path("file.txt").read_text()`
1 |
2 | from pathlib import Path
3 |
- with Path("file.txt").open() as f:
- contents = f.read()
4 + contents = Path("file.txt").read_text()
5 |
6 | with Path("file.txt").open("r") as f:
7 | contents = f.read()
FURB101 [*] `Path.open()` followed by `read()` can be replaced by `Path("file.txt").read_text()`
--> FURB101_1.py:7:6
|
5 | contents = f.read()
6 |
7 | with Path("file.txt").open("r") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
8 | contents = f.read()
|
help: Replace with `Path("file.txt").read_text()`
4 | with Path("file.txt").open() as f:
5 | contents = f.read()
6 |
- with Path("file.txt").open("r") as f:
- contents = f.read()
7 + contents = Path("file.txt").read_text()

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
--> FURB103.py:12:6
--> FURB103_0.py:12:6
|
11 | # FURB103
12 | with open("file.txt", "w") as f:
@@ -26,7 +26,7 @@ help: Replace with `Path("file.txt").write_text("test")`
16 | with open("file.txt", "wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
--> FURB103.py:16:6
--> FURB103_0.py:16:6
|
15 | # FURB103
16 | with open("file.txt", "wb") as f:
@@ -50,7 +50,7 @@ help: Replace with `Path("file.txt").write_bytes(foobar)`
20 | with open("file.txt", mode="wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
--> FURB103.py:20:6
--> FURB103_0.py:20:6
|
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
@@ -74,7 +74,7 @@ help: Replace with `Path("file.txt").write_bytes(b"abc")`
24 | with open("file.txt", "w", encoding="utf8") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
--> FURB103.py:24:6
--> FURB103_0.py:24:6
|
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
@@ -98,7 +98,7 @@ help: Replace with `Path("file.txt").write_text(foobar, encoding="utf8")`
28 | with open("file.txt", "w", errors="ignore") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
--> FURB103.py:28:6
--> FURB103_0.py:28:6
|
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
@@ -122,7 +122,7 @@ help: Replace with `Path("file.txt").write_text(foobar, errors="ignore")`
32 | with open("file.txt", mode="w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
--> FURB103.py:32:6
--> FURB103_0.py:32:6
|
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
@@ -146,7 +146,7 @@ help: Replace with `Path("file.txt").write_text(foobar)`
36 | with open(foo(), "wb") as f:
FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())`
--> FURB103.py:36:6
--> FURB103_0.py:36:6
|
35 | # FURB103
36 | with open(foo(), "wb") as f:
@@ -157,7 +157,7 @@ FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())
help: Replace with `Path(foo()).write_bytes(bar())`
FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
--> FURB103.py:44:6
--> FURB103_0.py:44:6
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -168,7 +168,7 @@ FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
help: Replace with `Path("a.txt").write_text(x)`
FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
--> FURB103.py:44:31
--> FURB103_0.py:44:31
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -179,7 +179,7 @@ FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
help: Replace with `Path("b.txt").write_bytes(y)`
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(bar(bar(a + x)))`
--> FURB103.py:49:18
--> FURB103_0.py:49:18
|
48 | # FURB103
49 | with foo() as a, open("file.txt", "w") as b, foo() as c:
@@ -190,7 +190,7 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(ba
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:58:6
--> FURB103_0.py:58:6
|
57 | # FURB103
58 | with open("file.txt", "w", newline="\r\n") as f:
@@ -214,7 +214,7 @@ help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
62 | import builtins
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:66:6
--> FURB103_0.py:66:6
|
65 | # FURB103
66 | with builtins.open("file.txt", "w", newline="\r\n") as f:
@@ -237,7 +237,7 @@ help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
70 | from builtins import open as o
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:74:6
--> FURB103_0.py:74:6
|
73 | # FURB103
74 | with o("file.txt", "w", newline="\r\n") as f:
@@ -260,7 +260,7 @@ help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
78 |
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
--> FURB103_0.py:154:6
|
152 | data = {"price": 100}
153 |
@@ -284,7 +284,7 @@ help: Replace with `Path("test.json")....`
158 | with open("tmp_path/pyproject.toml", "w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("tmp_path/pyproject.toml")....`
--> FURB103.py:158:6
--> FURB103_0.py:158:6
|
157 | # See: https://github.com/astral-sh/ruff/issues/21381
158 | with open("tmp_path/pyproject.toml", "w") as f:

View File

@@ -0,0 +1,157 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test")`
--> FURB103_1.py:3:6
|
1 | from pathlib import Path
2 |
3 | with Path("file.txt").open("w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test")`
1 | from pathlib import Path
2 |
- with Path("file.txt").open("w") as f:
- f.write("test")
3 + Path("file.txt").write_text("test")
4 |
5 | with Path("file.txt").open("wb") as f:
6 | f.write(b"test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_bytes(b"test")`
--> FURB103_1.py:6:6
|
4 | f.write("test")
5 |
6 | with Path("file.txt").open("wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
7 | f.write(b"test")
|
help: Replace with `Path("file.txt").write_bytes(b"test")`
3 | with Path("file.txt").open("w") as f:
4 | f.write("test")
5 |
- with Path("file.txt").open("wb") as f:
- f.write(b"test")
6 + Path("file.txt").write_bytes(b"test")
7 |
8 | with Path("file.txt").open(mode="w") as f:
9 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test")`
--> FURB103_1.py:9:6
|
7 | f.write(b"test")
8 |
9 | with Path("file.txt").open(mode="w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
10 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test")`
6 | with Path("file.txt").open("wb") as f:
7 | f.write(b"test")
8 |
- with Path("file.txt").open(mode="w") as f:
- f.write("test")
9 + Path("file.txt").write_text("test")
10 |
11 | with Path("file.txt").open("w", encoding="utf8") as f:
12 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test", encoding="utf8")`
--> FURB103_1.py:12:6
|
10 | f.write("test")
11 |
12 | with Path("file.txt").open("w", encoding="utf8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test", encoding="utf8")`
9 | with Path("file.txt").open(mode="w") as f:
10 | f.write("test")
11 |
- with Path("file.txt").open("w", encoding="utf8") as f:
- f.write("test")
12 + Path("file.txt").write_text("test", encoding="utf8")
13 |
14 | with Path("file.txt").open("w", errors="ignore") as f:
15 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("file.txt").write_text("test", errors="ignore")`
--> FURB103_1.py:15:6
|
13 | f.write("test")
14 |
15 | with Path("file.txt").open("w", errors="ignore") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
16 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test", errors="ignore")`
12 | with Path("file.txt").open("w", encoding="utf8") as f:
13 | f.write("test")
14 |
- with Path("file.txt").open("w", errors="ignore") as f:
- f.write("test")
15 + Path("file.txt").write_text("test", errors="ignore")
16 |
17 | with Path(foo()).open("w") as f:
18 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path(foo()).write_text("test")`
--> FURB103_1.py:18:6
|
16 | f.write("test")
17 |
18 | with Path(foo()).open("w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
19 | f.write("test")
|
help: Replace with `Path(foo()).write_text("test")`
15 | with Path("file.txt").open("w", errors="ignore") as f:
16 | f.write("test")
17 |
- with Path(foo()).open("w") as f:
- f.write("test")
18 + Path(foo()).write_text("test")
19 |
20 | p = Path("file.txt")
21 | with p.open("w") as f:
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `p.write_text("test")`
--> FURB103_1.py:22:6
|
21 | p = Path("file.txt")
22 | with p.open("w") as f:
| ^^^^^^^^^^^^^^^^
23 | f.write("test")
|
help: Replace with `p.write_text("test")`
19 | f.write("test")
20 |
21 | p = Path("file.txt")
- with p.open("w") as f:
- f.write("test")
22 + p.write_text("test")
23 |
24 | with Path("foo", "bar", "baz").open("w") as f:
25 | f.write("test")
FURB103 [*] `Path.open()` followed by `write()` can be replaced by `Path("foo", "bar", "baz").write_text("test")`
--> FURB103_1.py:25:6
|
23 | f.write("test")
24 |
25 | with Path("foo", "bar", "baz").open("w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
26 | f.write("test")
|
help: Replace with `Path("foo", "bar", "baz").write_text("test")`
22 | with p.open("w") as f:
23 | f.write("test")
24 |
- with Path("foo", "bar", "baz").open("w") as f:
- f.write("test")
25 + Path("foo", "bar", "baz").write_text("test")

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
--> FURB103.py:12:6
--> FURB103_0.py:12:6
|
11 | # FURB103
12 | with open("file.txt", "w") as f:
@@ -26,7 +26,7 @@ help: Replace with `Path("file.txt").write_text("test")`
16 | with open("file.txt", "wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
--> FURB103.py:16:6
--> FURB103_0.py:16:6
|
15 | # FURB103
16 | with open("file.txt", "wb") as f:
@@ -50,7 +50,7 @@ help: Replace with `Path("file.txt").write_bytes(foobar)`
20 | with open("file.txt", mode="wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
--> FURB103.py:20:6
--> FURB103_0.py:20:6
|
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
@@ -74,7 +74,7 @@ help: Replace with `Path("file.txt").write_bytes(b"abc")`
24 | with open("file.txt", "w", encoding="utf8") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
--> FURB103.py:24:6
--> FURB103_0.py:24:6
|
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
@@ -98,7 +98,7 @@ help: Replace with `Path("file.txt").write_text(foobar, encoding="utf8")`
28 | with open("file.txt", "w", errors="ignore") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
--> FURB103.py:28:6
--> FURB103_0.py:28:6
|
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
@@ -122,7 +122,7 @@ help: Replace with `Path("file.txt").write_text(foobar, errors="ignore")`
32 | with open("file.txt", mode="w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
--> FURB103.py:32:6
--> FURB103_0.py:32:6
|
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
@@ -146,7 +146,7 @@ help: Replace with `Path("file.txt").write_text(foobar)`
36 | with open(foo(), "wb") as f:
FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())`
--> FURB103.py:36:6
--> FURB103_0.py:36:6
|
35 | # FURB103
36 | with open(foo(), "wb") as f:
@@ -157,7 +157,7 @@ FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())
help: Replace with `Path(foo()).write_bytes(bar())`
FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
--> FURB103.py:44:6
--> FURB103_0.py:44:6
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -168,7 +168,7 @@ FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
help: Replace with `Path("a.txt").write_text(x)`
FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
--> FURB103.py:44:31
--> FURB103_0.py:44:31
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
@@ -179,7 +179,7 @@ FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
help: Replace with `Path("b.txt").write_bytes(y)`
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(bar(bar(a + x)))`
--> FURB103.py:49:18
--> FURB103_0.py:49:18
|
48 | # FURB103
49 | with foo() as a, open("file.txt", "w") as b, foo() as c:
@@ -190,7 +190,7 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(ba
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
--> FURB103_0.py:154:6
|
152 | data = {"price": 100}
153 |
@@ -214,7 +214,7 @@ help: Replace with `Path("test.json")....`
158 | with open("tmp_path/pyproject.toml", "w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("tmp_path/pyproject.toml")....`
--> FURB103.py:158:6
--> FURB103_0.py:158:6
|
157 | # See: https://github.com/astral-sh/ruff/issues/21381
158 | with open("tmp_path/pyproject.toml", "w") as f:

View File

@@ -313,12 +313,20 @@ mod tests {
Rule::UnusedVariable,
Rule::AmbiguousVariableName,
Rule::UnusedNOQA,
]),
Rule::InvalidRuleCode,
Rule::InvalidSuppressionComment,
Rule::UnmatchedSuppressionComment,
])
.with_external_rules(&["TK421"]),
&settings::LinterSettings::for_rules(vec![
Rule::UnusedVariable,
Rule::AmbiguousVariableName,
Rule::UnusedNOQA,
Rule::InvalidRuleCode,
Rule::InvalidSuppressionComment,
Rule::UnmatchedSuppressionComment,
])
.with_external_rules(&["TK421"])
.with_preview_mode(),
);
Ok(())

View File

@@ -9,6 +9,21 @@ use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::{AlwaysFixableViolation, Edit, Fix};
#[derive(Debug, PartialEq, Eq)]
pub(crate) enum InvalidRuleCodeKind {
Noqa,
Suppression,
}
impl InvalidRuleCodeKind {
fn as_str(&self) -> &str {
match self {
InvalidRuleCodeKind::Noqa => "`# noqa`",
InvalidRuleCodeKind::Suppression => "suppression",
}
}
}
/// ## What it does
/// Checks for `noqa` codes that are invalid.
///
@@ -36,12 +51,17 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
#[violation_metadata(preview_since = "0.11.4")]
pub(crate) struct InvalidRuleCode {
pub(crate) rule_code: String,
pub(crate) kind: InvalidRuleCodeKind,
}
impl AlwaysFixableViolation for InvalidRuleCode {
#[derive_message_formats]
fn message(&self) -> String {
format!("Invalid rule code in `# noqa`: {}", self.rule_code)
format!(
"Invalid rule code in {}: {}",
self.kind.as_str(),
self.rule_code
)
}
fn fix_title(&self) -> String {
@@ -61,7 +81,9 @@ pub(crate) fn invalid_noqa_code(
continue;
};
let all_valid = directive.iter().all(|code| code_is_valid(code, external));
let all_valid = directive
.iter()
.all(|code| code_is_valid(code.as_str(), external));
if all_valid {
continue;
@@ -69,7 +91,7 @@ pub(crate) fn invalid_noqa_code(
let (valid_codes, invalid_codes): (Vec<_>, Vec<_>) = directive
.iter()
.partition(|&code| code_is_valid(code, external));
.partition(|&code| code_is_valid(code.as_str(), external));
if valid_codes.is_empty() {
all_codes_invalid_diagnostic(directive, invalid_codes, context);
@@ -81,10 +103,9 @@ pub(crate) fn invalid_noqa_code(
}
}
fn code_is_valid(code: &Code, external: &[String]) -> bool {
let code_str = code.as_str();
Rule::from_code(get_redirect_target(code_str).unwrap_or(code_str)).is_ok()
|| external.iter().any(|ext| code_str.starts_with(ext))
pub(crate) fn code_is_valid(code: &str, external: &[String]) -> bool {
Rule::from_code(get_redirect_target(code).unwrap_or(code)).is_ok()
|| external.iter().any(|ext| code.starts_with(ext))
}
fn all_codes_invalid_diagnostic(
@@ -100,6 +121,7 @@ fn all_codes_invalid_diagnostic(
.map(Code::as_str)
.collect::<Vec<_>>()
.join(", "),
kind: InvalidRuleCodeKind::Noqa,
},
directive.range(),
)
@@ -116,6 +138,7 @@ fn some_codes_are_invalid_diagnostic(
.report_diagnostic(
InvalidRuleCode {
rule_code: invalid_code.to_string(),
kind: InvalidRuleCodeKind::Noqa,
},
invalid_code.range(),
)
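
A small sketch of how the RUF102 message now varies by comment kind, with the wording taken from `InvalidRuleCodeKind::as_str` above; `XYZ111` is a placeholder unknown code, while `YF829` mirrors the fixture used in the snapshots below.

```python
# `# noqa` kind -> "Invalid rule code in `# noqa`: XYZ111"
import os  # noqa: XYZ111

# suppression kind -> "Invalid rule code in suppression: YF829"
# ruff: disable[YF829]
value = 0
# ruff: enable[YF829]
```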

View File

@@ -0,0 +1,59 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use crate::AlwaysFixableViolation;
use crate::suppression::{InvalidSuppressionKind, ParseErrorKind};
/// ## What it does
/// Checks for invalid suppression comments.
///
/// ## Why is this bad?
/// Invalid suppression comments are ignored by Ruff, and should either
/// be fixed or removed to avoid confusion.
///
/// ## Example
/// ```python
/// # ruff: disable  # missing codes
/// ```
///
/// Use instead:
/// ```python
/// # ruff: disable[E501]
/// ```
///
/// Or delete the invalid suppression comment.
///
/// ## References
/// - [Ruff error suppression](https://docs.astral.sh/ruff/linter/#error-suppression)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.11")]
pub(crate) struct InvalidSuppressionComment {
pub(crate) kind: InvalidSuppressionCommentKind,
}
impl AlwaysFixableViolation for InvalidSuppressionComment {
#[derive_message_formats]
fn message(&self) -> String {
let msg = match self.kind {
InvalidSuppressionCommentKind::Invalid(InvalidSuppressionKind::Indentation) => {
"unexpected indentation".to_string()
}
InvalidSuppressionCommentKind::Invalid(InvalidSuppressionKind::Trailing) => {
"trailing comments are not supported".to_string()
}
InvalidSuppressionCommentKind::Invalid(InvalidSuppressionKind::Unmatched) => {
"no matching 'disable' comment".to_string()
}
InvalidSuppressionCommentKind::Error(error) => format!("{error}"),
};
format!("Invalid suppression comment: {msg}")
}
fn fix_title(&self) -> String {
"Remove suppression comment".to_string()
}
}
pub(crate) enum InvalidSuppressionCommentKind {
Invalid(InvalidSuppressionKind),
Error(ParseErrorKind),
}

View File

@@ -22,6 +22,7 @@ pub(crate) use invalid_formatter_suppression_comment::*;
pub(crate) use invalid_index_type::*;
pub(crate) use invalid_pyproject_toml::*;
pub(crate) use invalid_rule_code::*;
pub(crate) use invalid_suppression_comment::*;
pub(crate) use legacy_form_pytest_raises::*;
pub(crate) use logging_eager_conversion::*;
pub(crate) use map_int_version_parsing::*;
@@ -46,6 +47,7 @@ pub(crate) use starmap_zip::*;
pub(crate) use static_key_dict_comprehension::*;
#[cfg(any(feature = "test-rules", test))]
pub(crate) use test_rules::*;
pub(crate) use unmatched_suppression_comment::*;
pub(crate) use unnecessary_cast_to_int::*;
pub(crate) use unnecessary_iterable_allocation_for_first_element::*;
pub(crate) use unnecessary_key_check::*;
@@ -87,6 +89,7 @@ mod invalid_formatter_suppression_comment;
mod invalid_index_type;
mod invalid_pyproject_toml;
mod invalid_rule_code;
mod invalid_suppression_comment;
mod legacy_form_pytest_raises;
mod logging_eager_conversion;
mod map_int_version_parsing;
@@ -113,6 +116,7 @@ mod static_key_dict_comprehension;
mod suppression_comment_visitor;
#[cfg(any(feature = "test-rules", test))]
pub(crate) mod test_rules;
mod unmatched_suppression_comment;
mod unnecessary_cast_to_int;
mod unnecessary_iterable_allocation_for_first_element;
mod unnecessary_key_check;

View File

@@ -0,0 +1,42 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use crate::Violation;
/// ## What it does
/// Checks for unmatched range suppression comments.
///
/// ## Why is this bad?
/// Unmatched range suppression comments can inadvertently suppress violations
/// over larger sections of code than intended, particularly at module scope.
///
/// ## Example
/// ```python
/// def foo():
/// # ruff: disable[E501] # unmatched
/// REALLY_LONG_VALUES = [...]
///
/// print(REALLY_LONG_VALUES)
/// ```
///
/// Use instead:
/// ```python
/// def foo():
/// # ruff: disable[E501]
/// REALLY_LONG_VALUES = [...]
/// # ruff: enable[E501]
///
/// print(REALLY_LONG_VALUES)
/// ```
///
/// ## References
/// - [Ruff error suppression](https://docs.astral.sh/ruff/linter/#error-suppression)
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.11")]
pub(crate) struct UnmatchedSuppressionComment;
impl Violation for UnmatchedSuppressionComment {
#[derive_message_formats]
fn message(&self) -> String {
"Suppression comment without matching `#ruff:enable` comment".to_string()
}
}

View File

@@ -6,8 +6,8 @@ source: crates/ruff_linter/src/rules/ruff/mod.rs
+linter.preview = enabled
--- Summary ---
Removed: 14
Added: 11
Removed: 15
Added: 23
--- Removed ---
E741 Ambiguous variable name: `I`
@@ -238,8 +238,60 @@ help: Remove assignment to unused variable `I`
note: This is an unsafe fix and may change runtime behavior
F841 [*] Local variable `value` is assigned to but never used
--> suppressions.py:95:5
|
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
95 | value = 0
| ^^^^^
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
|
help: Remove assignment to unused variable `value`
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
- value = 0
95 + pass
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
98 |
note: This is an unsafe fix and may change runtime behavior
--- Added ---
RUF104 Suppression comment without matching `#ruff:enable` comment
--> suppressions.py:11:5
|
9 | # These should both be ignored by the implicit range suppression.
10 | # Should also generate an "unmatched suppression" warning.
11 | # ruff:disable[E741,F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^
12 | I = 1
|
RUF103 [*] Invalid suppression comment: no matching 'disable' comment
--> suppressions.py:19:5
|
17 | # should be generated.
18 | I = 1
19 | # ruff: enable[E741, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Remove suppression comment
16 | # Neither warning is ignored, and an "unmatched suppression"
17 | # should be generated.
18 | I = 1
- # ruff: enable[E741, F841]
19 |
20 |
21 | def f():
note: This is an unsafe fix and may change runtime behavior
RUF100 [*] Unused suppression (non-enabled: `E501`)
--> suppressions.py:46:5
|
@@ -298,6 +350,17 @@ help: Remove unused `noqa` directive
58 |
RUF104 Suppression comment without matching `#ruff:enable` comment
--> suppressions.py:61:5
|
59 | def f():
60 | # TODO: Duplicate codes should be counted as duplicate, not unused
61 | # ruff: disable[F841, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
62 | foo = 0
|
RUF100 [*] Unused suppression (unused: `F841`)
--> suppressions.py:61:21
|
@@ -318,6 +381,18 @@ help: Remove unused suppression
64 |
RUF104 Suppression comment without matching `#ruff:enable` comment
--> suppressions.py:68:5
|
66 | # Overlapping range suppressions, one should be marked as used,
67 | # and the other should trigger an unused suppression diagnostic
68 | # ruff: disable[F841]
| ^^^^^^^^^^^^^^^^^^^^^
69 | # ruff: disable[F841]
70 | foo = 0
|
RUF100 [*] Unused suppression (unused: `F841`)
--> suppressions.py:69:5
|
@@ -337,6 +412,17 @@ help: Remove unused suppression
71 |
RUF104 Suppression comment without matching `#ruff:enable` comment
--> suppressions.py:75:5
|
73 | def f():
74 | # Multiple codes but only one is used
75 | # ruff: disable[E741, F401, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
76 | foo = 0
|
RUF100 [*] Unused suppression (unused: `E741`)
--> suppressions.py:75:21
|
@@ -377,6 +463,17 @@ help: Remove unused suppression
78 |
RUF104 Suppression comment without matching `#ruff:enable` comment
--> suppressions.py:81:5
|
79 | def f():
80 | # Multiple codes but only two are used
81 | # ruff: disable[E741, F401, F841]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
82 | I = 0
|
RUF100 [*] Unused suppression (non-enabled: `F401`)
--> suppressions.py:81:27
|
@@ -413,6 +510,8 @@ help: Remove unused suppression
- # ruff: disable[E741, F401, F841]
87 + # ruff: disable[F401, F841]
88 | print("hello")
89 |
90 |
RUF100 [*] Unused suppression (non-enabled: `F401`)
@@ -431,6 +530,8 @@ help: Remove unused suppression
- # ruff: disable[E741, F401, F841]
87 + # ruff: disable[E741, F841]
88 | print("hello")
89 |
90 |
RUF100 [*] Unused suppression (unused: `F841`)
@@ -449,3 +550,122 @@ help: Remove unused suppression
- # ruff: disable[E741, F401, F841]
87 + # ruff: disable[E741, F401]
88 | print("hello")
89 |
90 |
RUF102 [*] Invalid rule code in suppression: YF829
--> suppressions.py:93:21
|
91 | def f():
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
| ^^^^^
94 | # ruff: disable[F841, RQW320]
95 | value = 0
|
help: Remove the rule code
90 |
91 | def f():
92 | # Unknown rule codes
- # ruff: disable[YF829]
93 | # ruff: disable[F841, RQW320]
94 | value = 0
95 | # ruff: enable[F841, RQW320]
RUF102 [*] Invalid rule code in suppression: RQW320
--> suppressions.py:94:27
|
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
| ^^^^^^
95 | value = 0
96 | # ruff: enable[F841, RQW320]
|
help: Remove the rule code
91 | def f():
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
- # ruff: disable[F841, RQW320]
94 + # ruff: disable[F841]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
RUF102 [*] Invalid rule code in suppression: RQW320
--> suppressions.py:96:26
|
94 | # ruff: disable[F841, RQW320]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
| ^^^^^^
97 | # ruff: enable[YF829]
|
help: Remove the rule code
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
95 | value = 0
- # ruff: enable[F841, RQW320]
96 + # ruff: enable[F841]
97 | # ruff: enable[YF829]
98 |
99 |
RUF102 [*] Invalid rule code in suppression: YF829
--> suppressions.py:97:20
|
95 | value = 0
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
| ^^^^^
|
help: Remove the rule code
94 | # ruff: disable[F841, RQW320]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
- # ruff: enable[YF829]
97 |
98 |
99 | def f():
RUF103 [*] Invalid suppression comment: missing suppression codes like `[E501, ...]`
--> suppressions.py:109:5
|
107 | def f():
108 | # Empty or missing rule codes
109 | # ruff: disable
| ^^^^^^^^^^^^^^^
110 | # ruff: disable[]
111 | print("hello")
|
help: Remove suppression comment
106 |
107 | def f():
108 | # Empty or missing rule codes
- # ruff: disable
109 | # ruff: disable[]
110 | print("hello")
note: This is an unsafe fix and may change runtime behavior
RUF103 [*] Invalid suppression comment: missing suppression codes like `[E501, ...]`
--> suppressions.py:110:5
|
108 | # Empty or missing rule codes
109 | # ruff: disable
110 | # ruff: disable[]
| ^^^^^^^^^^^^^^^^^
111 | print("hello")
|
help: Remove suppression comment
107 | def f():
108 | # Empty or missing rule codes
109 | # ruff: disable
- # ruff: disable[]
110 | print("hello")
note: This is an unsafe fix and may change runtime behavior
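For orientation, a minimal sketch of the range-suppression comments exercised by the snapshots above, with the diagnostics they trigger noted inline (rule codes as shown above; the function bodies are placeholders):

```python
def f():
    # ruff: disable[F841]
    foo = 0  # F841 is suppressed within the disable/enable range
    # ruff: enable[F841]


def g():
    # ruff: disable[F841]  # no matching `# ruff: enable[...]` -> RUF104
    foo = 0


def h():
    # ruff: disable[YF829]  # unknown rule code -> RUF102
    # ruff: disable         # missing codes like `[E501, ...]` -> RUF103
    print("hello")
    # ruff: enable[YF829]
```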

View File

@@ -471,6 +471,13 @@ impl LinterSettings {
self
}
#[must_use]
pub fn with_external_rules(mut self, rules: &[&str]) -> Self {
self.external
.extend(rules.iter().map(std::string::ToString::to_string));
self
}
/// Resolve the [`TargetVersion`] to use for linting.
///
/// This method respects the per-file version overrides in

View File

@@ -0,0 +1,74 @@
---
source: crates/ruff_linter/src/linter.rs
---
invalid-syntax: annotated name `a` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:4:5
|
2 | def f1():
3 | global a
4 | a: str = "foo" # error
| ^
5 |
6 | b: int = 1
|
invalid-syntax: annotated name `b` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:10:9
|
8 | def inner():
9 | global b
10 | b: str = "nested" # error
| ^
11 |
12 | c: int = 1
|
invalid-syntax: annotated name `c` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:15:5
|
13 | def f2():
14 | global c
15 | c: list[str] = [] # error
| ^
16 |
17 | d: int = 1
|
invalid-syntax: annotated name `d` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:20:5
|
18 | def f3():
19 | global d
20 | d: str # error
| ^
21 |
22 | e: int = 1
|
invalid-syntax: annotated name `g` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:29:1
|
27 | f: int = 1 # okay
28 |
29 | g: int = 1
| ^
30 | global g # error
|
invalid-syntax: annotated name `x` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:33:5
|
32 | class C:
33 | x: str
| ^
34 | global x # error
|
invalid-syntax: annotated name `x` can't be global
--> resources/test/fixtures/semantic_errors/annotated_global.py:38:5
|
36 | class D:
37 | global x # error
38 | x: str
| ^
|
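A condensed sketch of the pattern these diagnostics cover: annotating a name that is declared `global` in the same scope is a syntax error, while a plain (unannotated) assignment is allowed.

```python
a: int = 0


def f():
    global a
    a: str = "foo"  # invalid-syntax: annotated name `a` can't be global


def g():
    global a
    a = "bar"  # allowed: assignment without an annotation
```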

View File

@@ -4,6 +4,7 @@ use ruff_db::diagnostic::Diagnostic;
use ruff_diagnostics::{Edit, Fix};
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::whitespace::indentation;
use rustc_hash::FxHashSet;
use std::cell::Cell;
use std::{error::Error, fmt::Formatter};
use thiserror::Error;
@@ -17,7 +18,11 @@ use crate::checkers::ast::LintContext;
use crate::codes::Rule;
use crate::fix::edits::delete_comment;
use crate::preview::is_range_suppressions_enabled;
use crate::rules::ruff::rules::{UnusedCodes, UnusedNOQA, UnusedNOQAKind};
use crate::rule_redirects::get_redirect_target;
use crate::rules::ruff::rules::{
InvalidRuleCode, InvalidRuleCodeKind, InvalidSuppressionComment, InvalidSuppressionCommentKind,
UnmatchedSuppressionComment, UnusedCodes, UnusedNOQA, UnusedNOQAKind, code_is_valid,
};
use crate::settings::LinterSettings;
#[derive(Clone, Debug, Eq, PartialEq)]
@@ -130,7 +135,7 @@ impl Suppressions {
}
pub(crate) fn is_empty(&self) -> bool {
self.valid.is_empty()
self.valid.is_empty() && self.invalid.is_empty() && self.errors.is_empty()
}
/// Check if a diagnostic is suppressed by any known range suppressions
@@ -150,7 +155,9 @@ impl Suppressions {
};
for suppression in &self.valid {
if *code == suppression.code.as_str() && suppression.range.contains_range(range) {
let suppression_code =
get_redirect_target(suppression.code.as_str()).unwrap_or(suppression.code.as_str());
if *code == suppression_code && suppression.range.contains_range(range) {
suppression.used.set(true);
return true;
}
@@ -159,81 +166,140 @@ impl Suppressions {
}
pub(crate) fn check_suppressions(&self, context: &LintContext, locator: &Locator) {
if !context.any_rule_enabled(&[Rule::UnusedNOQA, Rule::InvalidRuleCode]) {
return;
}
let unused = self
.valid
.iter()
.filter(|suppression| !suppression.used.get());
for suppression in unused {
let Ok(rule) = Rule::from_code(&suppression.code) else {
continue; // TODO: invalid code
};
for comment in &suppression.comments {
let mut range = comment.range;
let edit = if comment.codes.len() == 1 {
delete_comment(comment.range, locator)
} else {
let code_index = comment
.codes
.iter()
.position(|range| locator.slice(range) == suppression.code)
.unwrap();
range = comment.codes[code_index];
let code_range = if code_index < (comment.codes.len() - 1) {
TextRange::new(
comment.codes[code_index].start(),
comment.codes[code_index + 1].start(),
)
} else {
TextRange::new(
comment.codes[code_index - 1].end(),
comment.codes[code_index].end(),
)
let mut unmatched_ranges = FxHashSet::default();
for suppression in &self.valid {
if !code_is_valid(&suppression.code, &context.settings().external) {
// InvalidRuleCode
if context.is_rule_enabled(Rule::InvalidRuleCode) {
for comment in &suppression.comments {
let (range, edit) = Suppressions::delete_code_or_comment(
locator,
suppression,
comment,
true,
);
context
.report_diagnostic(
InvalidRuleCode {
rule_code: suppression.code.to_string(),
kind: InvalidRuleCodeKind::Suppression,
},
range,
)
.set_fix(Fix::safe_edit(edit));
}
}
} else if !suppression.used.get() {
// UnusedNOQA
if context.is_rule_enabled(Rule::UnusedNOQA) {
let Ok(rule) = Rule::from_code(
get_redirect_target(&suppression.code).unwrap_or(&suppression.code),
) else {
continue; // "external" lint code, don't treat it as unused
};
Edit::range_deletion(code_range)
};
for comment in &suppression.comments {
let (range, edit) = Suppressions::delete_code_or_comment(
locator,
suppression,
comment,
false,
);
let codes = if context.is_rule_enabled(rule) {
UnusedCodes {
unmatched: vec![suppression.code.to_string()],
..Default::default()
}
} else {
UnusedCodes {
disabled: vec![suppression.code.to_string()],
..Default::default()
}
};
let codes = if context.is_rule_enabled(rule) {
UnusedCodes {
unmatched: vec![suppression.code.to_string()],
..Default::default()
}
} else {
UnusedCodes {
disabled: vec![suppression.code.to_string()],
..Default::default()
}
};
let mut diagnostic = context.report_diagnostic(
UnusedNOQA {
codes: Some(codes),
kind: UnusedNOQAKind::Suppression,
},
range,
);
diagnostic.set_fix(Fix::safe_edit(edit));
context
.report_diagnostic(
UnusedNOQA {
codes: Some(codes),
kind: UnusedNOQAKind::Suppression,
},
range,
)
.set_fix(Fix::safe_edit(edit));
}
}
} else if suppression.comments.len() == 1 {
// UnmatchedSuppressionComment
let range = suppression.comments[0].range;
if unmatched_ranges.insert(range) {
context.report_diagnostic_if_enabled(UnmatchedSuppressionComment {}, range);
}
}
}
for error in self
.errors
.iter()
.filter(|error| error.kind == ParseErrorKind::MissingCodes)
{
let mut diagnostic = context.report_diagnostic(
UnusedNOQA {
codes: Some(UnusedCodes::default()),
kind: UnusedNOQAKind::Suppression,
},
error.range,
);
diagnostic.set_fix(Fix::safe_edit(delete_comment(error.range, locator)));
if context.is_rule_enabled(Rule::InvalidSuppressionComment) {
for error in &self.errors {
context
.report_diagnostic(
InvalidSuppressionComment {
kind: InvalidSuppressionCommentKind::Error(error.kind),
},
error.range,
)
.set_fix(Fix::unsafe_edit(delete_comment(error.range, locator)));
}
}
if context.is_rule_enabled(Rule::InvalidSuppressionComment) {
for invalid in &self.invalid {
context
.report_diagnostic(
InvalidSuppressionComment {
kind: InvalidSuppressionCommentKind::Invalid(invalid.kind),
},
invalid.comment.range,
)
.set_fix(Fix::unsafe_edit(delete_comment(
invalid.comment.range,
locator,
)));
}
}
}
fn delete_code_or_comment(
locator: &Locator<'_>,
suppression: &Suppression,
comment: &SuppressionComment,
highlight_only_code: bool,
) -> (TextRange, Edit) {
let mut range = comment.range;
let edit = if comment.codes.len() == 1 {
if highlight_only_code {
range = comment.codes[0];
}
delete_comment(comment.range, locator)
} else {
let code_index = comment
.codes
.iter()
.position(|range| locator.slice(range) == suppression.code)
.unwrap();
range = comment.codes[code_index];
let code_range = if code_index < (comment.codes.len() - 1) {
TextRange::new(
comment.codes[code_index].start(),
comment.codes[code_index + 1].start(),
)
} else {
TextRange::new(
comment.codes[code_index - 1].end(),
comment.codes[code_index].end(),
)
};
Edit::range_deletion(code_range)
};
(range, edit)
}
}
@@ -391,7 +457,7 @@ impl<'a> SuppressionsBuilder<'a> {
}
#[derive(Copy, Clone, Debug, Eq, Error, PartialEq)]
enum ParseErrorKind {
pub(crate) enum ParseErrorKind {
#[error("not a suppression comment")]
NotASuppression,
@@ -401,7 +467,7 @@ enum ParseErrorKind {
#[error("unknown ruff directive")]
UnknownAction,
#[error("missing suppression codes")]
#[error("missing suppression codes like `[E501, ...]`")]
MissingCodes,
#[error("missing closing bracket")]

View File

@@ -1,5 +1,5 @@
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::visitor::source_order::{SourceOrderVisitor, TraversalSignal};
use crate::AnyNodeRef;
use crate::visitor::source_order::{SourceOrderVisitor, TraversalSignal, walk_node};
use ruff_text_size::{Ranged, TextRange};
use std::fmt;
use std::fmt::Formatter;
@@ -11,7 +11,7 @@ use std::fmt::Formatter;
///
/// ## Panics
/// Panics if `range` is not contained within `root`.
pub(crate) fn covering_node(root: AnyNodeRef, range: TextRange) -> CoveringNode {
pub fn covering_node(root: AnyNodeRef, range: TextRange) -> CoveringNode {
struct Visitor<'a> {
range: TextRange,
found: bool,
@@ -48,15 +48,12 @@ pub(crate) fn covering_node(root: AnyNodeRef, range: TextRange) -> CoveringNode
ancestors: Vec::new(),
};
root.visit_source_order(&mut visitor);
if visitor.ancestors.is_empty() {
visitor.ancestors.push(root);
}
walk_node(&mut visitor, root);
CoveringNode::from_ancestors(visitor.ancestors)
}
/// The node with a minimal range that fully contains the search range.
pub(crate) struct CoveringNode<'a> {
pub struct CoveringNode<'a> {
/// The covering node, along with all of its ancestors up to the
/// root. The root is always the first element and the covering
/// node found is always the last node. This sequence is guaranteed
@@ -67,12 +64,12 @@ pub(crate) struct CoveringNode<'a> {
impl<'a> CoveringNode<'a> {
/// Creates a new `CoveringNode` from a list of ancestor nodes.
/// The ancestors should be ordered from root to the covering node.
pub(crate) fn from_ancestors(ancestors: Vec<AnyNodeRef<'a>>) -> Self {
pub fn from_ancestors(ancestors: Vec<AnyNodeRef<'a>>) -> Self {
Self { nodes: ancestors }
}
/// Returns the covering node found.
pub(crate) fn node(&self) -> AnyNodeRef<'a> {
pub fn node(&self) -> AnyNodeRef<'a> {
*self
.nodes
.last()
@@ -80,7 +77,7 @@ impl<'a> CoveringNode<'a> {
}
/// Returns the node's parent.
pub(crate) fn parent(&self) -> Option<AnyNodeRef<'a>> {
pub fn parent(&self) -> Option<AnyNodeRef<'a>> {
let penultimate = self.nodes.len().checked_sub(2)?;
self.nodes.get(penultimate).copied()
}
@@ -90,7 +87,7 @@ impl<'a> CoveringNode<'a> {
///
/// The "first" here means that the node closest to a leaf is
/// returned.
pub(crate) fn find_first(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
pub fn find_first(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
let Some(index) = self.find_first_index(f) else {
return Err(self);
};
@@ -105,7 +102,7 @@ impl<'a> CoveringNode<'a> {
/// the highest ancestor found satisfying the given predicate is
/// returned. Note that this is *not* the same as finding the node
/// closest to the root that satisfies the given predicate.
pub(crate) fn find_last(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
pub fn find_last(mut self, f: impl Fn(AnyNodeRef<'a>) -> bool) -> Result<Self, Self> {
let Some(mut index) = self.find_first_index(&f) else {
return Err(self);
};
@@ -118,7 +115,7 @@ impl<'a> CoveringNode<'a> {
/// Returns an iterator over the ancestor nodes, starting with the node itself
/// and walking towards the root.
pub(crate) fn ancestors(&self) -> impl DoubleEndedIterator<Item = AnyNodeRef<'a>> + '_ {
pub fn ancestors(&self) -> impl DoubleEndedIterator<Item = AnyNodeRef<'a>> + '_ {
self.nodes.iter().copied().rev()
}

View File

@@ -12,6 +12,7 @@ pub use python_version::*;
pub mod comparable;
pub mod docstrings;
mod expression;
pub mod find_node;
mod generated;
pub mod helpers;
pub mod identifier;

View File

@@ -2,18 +2,25 @@
use crate::{self as ast, AnyNodeRef, ExceptHandler, Stmt};
/// Given a [`Stmt`] and its parent, return the [`ast::Suite`] that contains the [`Stmt`].
pub fn suite<'a>(stmt: &'a Stmt, parent: &'a Stmt) -> Option<EnclosingSuite<'a>> {
pub fn suite<'a>(
stmt: impl Into<AnyNodeRef<'a>>,
parent: impl Into<AnyNodeRef<'a>>,
) -> Option<EnclosingSuite<'a>> {
// TODO: refactor this to work without a parent, i.e. when `stmt` is at the top level
match parent {
Stmt::FunctionDef(ast::StmtFunctionDef { body, .. }) => EnclosingSuite::new(body, stmt),
Stmt::ClassDef(ast::StmtClassDef { body, .. }) => EnclosingSuite::new(body, stmt),
Stmt::For(ast::StmtFor { body, orelse, .. }) => [body, orelse]
let stmt = stmt.into();
match parent.into() {
AnyNodeRef::ModModule(ast::ModModule { body, .. }) => EnclosingSuite::new(body, stmt),
AnyNodeRef::StmtFunctionDef(ast::StmtFunctionDef { body, .. }) => {
EnclosingSuite::new(body, stmt)
}
AnyNodeRef::StmtClassDef(ast::StmtClassDef { body, .. }) => EnclosingSuite::new(body, stmt),
AnyNodeRef::StmtFor(ast::StmtFor { body, orelse, .. }) => [body, orelse]
.iter()
.find_map(|suite| EnclosingSuite::new(suite, stmt)),
Stmt::While(ast::StmtWhile { body, orelse, .. }) => [body, orelse]
AnyNodeRef::StmtWhile(ast::StmtWhile { body, orelse, .. }) => [body, orelse]
.iter()
.find_map(|suite| EnclosingSuite::new(suite, stmt)),
Stmt::If(ast::StmtIf {
AnyNodeRef::StmtIf(ast::StmtIf {
body,
elif_else_clauses,
..
@@ -21,12 +28,12 @@ pub fn suite<'a>(stmt: &'a Stmt, parent: &'a Stmt) -> Option<EnclosingSuite<'a>>
.into_iter()
.chain(elif_else_clauses.iter().map(|clause| &clause.body))
.find_map(|suite| EnclosingSuite::new(suite, stmt)),
Stmt::With(ast::StmtWith { body, .. }) => EnclosingSuite::new(body, stmt),
Stmt::Match(ast::StmtMatch { cases, .. }) => cases
AnyNodeRef::StmtWith(ast::StmtWith { body, .. }) => EnclosingSuite::new(body, stmt),
AnyNodeRef::StmtMatch(ast::StmtMatch { cases, .. }) => cases
.iter()
.map(|case| &case.body)
.find_map(|body| EnclosingSuite::new(body, stmt)),
Stmt::Try(ast::StmtTry {
AnyNodeRef::StmtTry(ast::StmtTry {
body,
handlers,
orelse,
@@ -51,10 +58,10 @@ pub struct EnclosingSuite<'a> {
}
impl<'a> EnclosingSuite<'a> {
pub fn new(suite: &'a [Stmt], stmt: &'a Stmt) -> Option<Self> {
pub fn new(suite: &'a [Stmt], stmt: AnyNodeRef<'a>) -> Option<Self> {
let position = suite
.iter()
.position(|sibling| AnyNodeRef::ptr_eq(sibling.into(), stmt.into()))?;
.position(|sibling| AnyNodeRef::ptr_eq(sibling.into(), stmt))?;
Some(EnclosingSuite { suite, position })
}

View File

@@ -222,6 +222,17 @@ where
visitor.leave_node(node);
}
pub fn walk_node<'a, V>(visitor: &mut V, node: AnyNodeRef<'a>)
where
V: SourceOrderVisitor<'a> + ?Sized,
{
if visitor.enter_node(node).is_traverse() {
node.visit_source_order(visitor);
}
visitor.leave_node(node);
}
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
pub enum TraversalSignal {
Traverse,

View File

@@ -1247,6 +1247,7 @@ impl<'a> Generator<'a> {
self.p_bytes_repr(&bytes_literal.value, bytes_literal.flags);
}
}
#[expect(clippy::eq_op)]
Expr::NumberLiteral(ast::ExprNumberLiteral { value, .. }) => {
static INF_STR: &str = "1e309";
assert_eq!(f64::MAX_10_EXP, 308);

View File

@@ -43,7 +43,8 @@ tracing = { workspace = true }
[dev-dependencies]
ruff_formatter = { workspace = true }
insta = { workspace = true, features = ["glob"] }
datatest-stable = { workspace = true }
insta = { workspace = true }
regex = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
@@ -54,8 +55,8 @@ similar = { workspace = true }
ignored = ["ruff_cache"]
[[test]]
name = "ruff_python_formatter_fixtures"
path = "tests/fixtures.rs"
name = "fixtures"
harness = false
test = true
required-features = ["serde"]

View File

@@ -125,6 +125,13 @@ lambda a, /, c: a
*x: x
)
(
lambda
# comment
*x,
**y: x
)
(
lambda
# comment 1
@@ -196,6 +203,17 @@ lambda: ( # comment
x
)
(
lambda # 1
# 2
x, # 3
# 4
y
: # 5
# 6
x
)
(
lambda
x,
@@ -204,6 +222,71 @@ lambda: ( # comment
z
)
# Leading
lambda x: (
lambda y: lambda z: x
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ z # Trailing
) # Trailing
# Leading
lambda x: lambda y: lambda z: [
x,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
z
] # Trailing
# Trailing
lambda self, araa, kkkwargs=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs), e=1, f=2, g=2: d
# Regression tests for https://github.com/astral-sh/ruff/issues/8179
@@ -228,6 +311,441 @@ def a():
g = 10
)
def a():
return b(
c,
d,
e,
f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
*args, **kwargs
) + 1,
)
# Additional ecosystem cases from https://github.com/astral-sh/ruff/pull/21385
class C:
def foo():
mock_service.return_value.bucket.side_effect = lambda name: (
source_bucket
if name == source_bucket_name
else storage.Bucket(mock_service, destination_bucket_name)
)
class C:
function_dict: Dict[Text, Callable[[CRFToken], Any]] = {
CRFEntityExtractorOptions.POS2: lambda crf_token: crf_token.pos_tag[:2]
if crf_token.pos_tag is not None
else None,
}
name = re.sub(r"[^\x21\x23-\x5b\x5d-\x7e]...............", lambda m: f"\\{m.group(0)}", p["name"])
def foo():
if True:
if True:
return (
lambda x: np.exp(cs(np.log(x.to(u.MeV).value))) * u.MeV * u.cm**2 / u.g
)
class C:
_is_recognized_dtype: Callable[[DtypeObj], bool] = lambda x: lib.is_np_dtype(
x, "M"
) or isinstance(x, DatetimeTZDtype)
class C:
def foo():
if True:
transaction_count = self._query_txs_for_range(
get_count_fn=lambda from_ts, to_ts, _chain_id=chain_id: db_evmtx.count_transactions_in_range(
chain_id=_chain_id,
from_ts=from_ts,
to_ts=to_ts,
),
)
transaction_count = self._query_txs_for_range(
get_count_fn=lambda from_ts, to_ts, _chain_id=chain_id: db_evmtx.count_transactions_in_range[_chain_id, from_ts, to_ts],
)
def ddb():
sql = (
lambda var, table, n=N: f"""
CREATE TABLE {table} AS
SELECT ROW_NUMBER() OVER () AS id, {var}
FROM (
SELECT {var}
FROM RANGE({n}) _ ({var})
ORDER BY RANDOM()
)
"""
)
long_assignment_target.with_attribute.and_a_slice[with_an_index] = ( # 1
# 2
lambda x, y, z: # 3
# 4
x + y + z # 5
# 6
)
long_assignment_target.with_attribute.and_a_slice[with_an_index] = (
lambda x, y, z: x + y + z
)
long_assignment_target.with_attribute.and_a_slice[with_an_index] = lambda x, y, z: x + y + z
very_long_variable_name_x, very_long_variable_name_y = lambda a: a + some_very_long_expression, lambda b: b * another_very_long_expression_here
very_long_variable_name_for_result += lambda x: very_long_function_call_that_should_definitely_be_parenthesized_now(x, more_args, additional_parameters)
if 1:
if 2:
if 3:
if self.location in EVM_EVMLIKE_LOCATIONS and database is not None:
exported_dict["notes"] = EVM_ADDRESS_REGEX.sub(
repl=lambda matched_address: self._maybe_add_label_with_address(
database=database,
matched_address=matched_address,
),
string=exported_dict["notes"],
)
class C:
def f():
return dict(
filter(
lambda intent_response: self.is_retrieval_intent_response(
intent_response
),
self.responses.items(),
)
)
@pytest.mark.parametrize(
"op",
[
# Not fluent
param(
lambda left, right: (
ibis.timestamp("2017-04-01")
),
),
# These four are fluent and fit on one line inside the parenthesized
# lambda body
param(
lambda left, right: (
ibis.timestamp("2017-04-01").cast(dt.date)
),
),
param(
lambda left, right: (
ibis.timestamp("2017-04-01").cast(dt.date).between(left, right)
),
),
param(lambda left, right: ibis.timestamp("2017-04-01").cast(dt.date)),
param(lambda left, right: ibis.timestamp("2017-04-01").cast(dt.date).between(left, right)),
# This is too long on one line in the lambda body and gets wrapped
# inside the body.
param(
lambda left, right: (
ibis.timestamp("2017-04-01").cast(dt.date).between(left, right).between(left, right)
),
),
],
)
def test_string_temporal_compare_between(con, op, left, right): ...
[
(
lambda eval_df, _: MetricValue(
scores=eval_df["prediction"].tolist(),
aggregate_results={"prediction_sum": sum(eval_df["prediction"])},
)
),
]
# reuses the list parentheses
lambda xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz: [xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz]
# adds parentheses around the body
lambda xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz: xxxxxxxxxxxxxxxxxxxx + yyyyyyyyyyyyyyyyyyyy + zzzzzzzzzzzzzzzzzzzz
# removes parentheses around the body
lambda xxxxxxxxxxxxxxxxxxxx: (xxxxxxxxxxxxxxxxxxxx + 1)
mapper = lambda x: dict_with_default[np.nan if isinstance(x, float) and np.isnan(x) else x]
lambda x, y, z: (
x + y + z
)
lambda x, y, z: (
x + y + z
# trailing body
)
lambda x, y, z: (
x + y + z # trailing eol body
)
lambda x, y, z: (
x + y + z
) # trailing lambda
lambda x, y, z: (
# leading body
x + y + z
)
lambda x, y, z: ( # leading eol body
x + y + z
)
(
lambda name:
source_bucket # trailing eol comment
if name == source_bucket_name
else storage.Bucket(mock_service, destination_bucket_name)
)
(
lambda name:
# dangling header comment
source_bucket
if name == source_bucket_name
else storage.Bucket(mock_service, destination_bucket_name)
)
x = (
lambda name:
# dangling header comment
source_bucket
if name == source_bucket_name
else storage.Bucket(mock_service, destination_bucket_name)
)
(
lambda name: # dangling header comment
(
source_bucket
if name == source_bucket_name
else storage.Bucket(mock_service, destination_bucket_name)
)
)
(
lambda from_ts, to_ts, _chain_id=chain_id: # dangling eol header comment
db_evmtx.count_transactions_in_range(
chain_id=_chain_id,
from_ts=from_ts,
to_ts=to_ts,
)
)
(
lambda from_ts, to_ts, _chain_id=chain_id:
# dangling header comment before call
db_evmtx.count_transactions_in_range(
chain_id=_chain_id,
from_ts=from_ts,
to_ts=to_ts,
)
)
(
lambda left, right:
# comment
ibis.timestamp("2017-04-01").cast(dt.date).between(left, right)
)
(
lambda left, right:
ibis.timestamp("2017-04-01") # comment
.cast(dt.date)
.between(left, right)
)
(
lambda xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy:
# comment
[xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz]
)
(
lambda x, y:
# comment
{
"key": x,
"another": y,
}
)
(
lambda x, y:
# comment
(
x,
y,
z
)
)
(
lambda x:
# comment
dict_with_default[np.nan if isinstance(x, float) and np.isnan(x) else x]
)
(
lambda from_ts, to_ts, _chain_id=chain_id:
db_evmtx.count_transactions_in_range[
# comment
_chain_id, from_ts, to_ts
]
)
(
lambda
# comment
*args, **kwargs:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
)
(
lambda # comment
*args, **kwargs:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
)
(
lambda # comment 1
# comment 2
*args, **kwargs: # comment 3
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
)
(
lambda # comment 1
*args, **kwargs: # comment 3
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
)
(
lambda *args, **kwargs:
# comment 1
( # comment 2
# comment 3
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1 # comment 4
# comment 5
) # comment 6
)
(
lambda *brgs, **kwargs:
# comment 1
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa( # comment 2
# comment 3
*brgs, **kwargs) + 1 # comment 4
# comment 5
)
(
lambda *crgs, **kwargs: # comment 1
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*crgs, **kwargs) + 1
)
(
lambda *drgs, **kwargs: # comment 1
(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*drgs, **kwargs) + 1
)
)
(
lambda * # comment 1
ergs, **
# comment 2
kwargs # comment 3
: # comment 4
(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*ergs, **kwargs) + 1
)
)
(
lambda # 1
# 2
left, # 3
# 4
right: # 5
# 6
ibis.timestamp("2017-04-01").cast(dt.date).between(left, right)
)
(
lambda x: # outer comment 1
(
lambda y: # inner comment 1
# inner comment 2
lambda z: (
# innermost comment
x + y + z
)
)
)
foo(
lambda from_ts, # comment prevents collapsing the parameters to one line
to_ts, _chain_id=chain_id: db_evmtx.count_transactions_in_range(
chain_id=_chain_id,
from_ts=from_ts,
to_ts=to_ts,
)
)
foo(
lambda from_ts, # but still wrap the body if it gets too long
to_ts,
_chain_id=chain_id: db_evmtx.count_transactions_in_rangeeeeeeeeeeeeeeeeeeeeeeeeeeeee(
chain_id=_chain_id,
from_ts=from_ts,
to_ts=to_ts,
)
)
transform = lambda left, right: ibis.timestamp("2017-04-01").cast(dt.date).between(left, right).between(left, right) # trailing comment
(
lambda: # comment
1
)
(
lambda # comment
:
1
)
(
lambda:
# comment
1
)
(
lambda: # comment 1
# comment 2
1
)
(
lambda # comment 1
# comment 2
: # comment 3
# comment 4
1
)
(
lambda
* # comment 2
@@ -271,3 +789,18 @@ def a():
x:
x
)
(
lambda: # dangling-end-of-line
# dangling-own-line
( # leading-body-end-of-line
x
)
)
(
lambda: # dangling-end-of-line
( # leading-body-end-of-line
x
)
)

View File

@@ -0,0 +1 @@
[{"line_width":8}]

View File

@@ -0,0 +1,35 @@
# Fixtures for fluent formatting of call chains
# Note that `fluent.options.json` sets line width to 8
x = a.b()
x = a.b().c()
x = a.b().c().d
x = a.b.c.d().e()
x = a.b.c().d.e().f.g()
# Consecutive calls/subscripts are grouped together
# for the purposes of fluent formatting (though, as of 2025.12.15,
# there may be a break inside one of these
# calls/subscripts, but that is unrelated to the fluent format).
x = a()[0]().b().c()
x = a.b()[0].c.d()[1]().e
# Parentheses affect both where the root of the call
# chain is and how many calls we require before applying
# fluent formatting (just 1, in the presence of a parenthesized
# root, as of 2025.12.15.)
x = (a).b()
x = (a()).b()
x = (a.b()).d.e()
x = (a.b().d).e()

View File

@@ -216,3 +216,69 @@ max_message_id = (
.baz()
)
# Note that in preview we split at `pl`, which some
# folks may dislike. (Similarly with common
# `np` and `pd` invocations).
#
# This is because we cannot reliably predict,
# just from syntax, whether a short identifier
# is being used as a 'namespace' or as an 'object'.
#
# As of 2025.12.15, we do not indent methods in
# fluent formatting. If we ever decide to do so,
# it may make sense to special case call chain roots
# that are shorter than the indent-width (like Prettier does).
# This would have the benefit of handling these common
# two-letter aliases for libraries.
expr = (
pl.scan_parquet("/data/pypi-parquet/*.parquet")
.filter(
[
pl.col("path").str.contains(
r"\.(asm|c|cc|cpp|cxx|h|hpp|rs|[Ff][0-9]{0,2}(?:or)?|go)$"
),
~pl.col("path").str.contains(r"(^|/)test(|s|ing)"),
~pl.col("path").str.contains("/site-packages/", literal=True),
]
)
.with_columns(
month=pl.col("uploaded_on").dt.truncate("1mo"),
ext=pl.col("path")
.str.extract(pattern=r"\.([a-z0-9]+)$", group_index=1)
.str.replace_all(pattern=r"cxx|cpp|cc|c|hpp|h", value="C/C++")
.str.replace_all(pattern="^f.*$", value="Fortran")
.str.replace("rs", "Rust", literal=True)
.str.replace("go", "Go", literal=True)
.str.replace("asm", "Assembly", literal=True)
.replace({"": None}),
)
.group_by(["month", "ext"])
.agg(project_count=pl.col("project_name").n_unique())
.drop_nulls(["ext"])
.sort(["month", "project_count"], descending=True)
)
def indentation_matching_for_loop_in_preview():
if make_this:
if more_nested_because_line_length:
identical_hidden_layer_sizes = all(
current_hidden_layer_sizes == first_hidden_layer_sizes
for current_hidden_layer_sizes in self.component_config[
HIDDEN_LAYERS_SIZES
].values().attr
)
def indentation_matching_walrus_in_preview():
if make_this:
if more_nested_because_line_length:
with self.read_ctx(book_type) as cursor:
if (entry_count := len(names := cursor.execute(
'SELECT name FROM address_book WHERE address=?',
(address,),
).fetchall().some_attr)) == 0 or len(set(names)) > 1:
return
# behavior with parenthesized roots
x = (aaaaaaaaaaaaaaaaaaaaaa).bbbbbbbbbbbbbbbbbbb.cccccccccccccccccccccccc().dddddddddddddddddddddddd().eeeeeeeeeeee

View File

@@ -1,4 +1,4 @@
use ruff_formatter::{Argument, Arguments, write};
use ruff_formatter::{Argument, Arguments, format_args, write};
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::context::{NodeLevel, WithNodeLevel};
@@ -33,20 +33,27 @@ impl<'ast> Format<PyFormatContext<'ast>> for ParenthesizeIfExpands<'_, 'ast> {
{
let mut f = WithNodeLevel::new(NodeLevel::ParenthesizedExpression, f);
write!(
f,
[group(&format_with(|f| {
if_group_breaks(&token("(")).fmt(f)?;
if self.indent {
soft_block_indent(&Arguments::from(&self.inner)).fmt(f)?;
} else {
Arguments::from(&self.inner).fmt(f)?;
}
if_group_breaks(&token(")")).fmt(f)
}))]
)
if self.indent {
let parens_id = f.group_id("indented_parenthesize_if_expands");
group(&format_args![
if_group_breaks(&token("(")),
indent_if_group_breaks(
&format_args![soft_line_break(), &Arguments::from(&self.inner)],
parens_id
),
soft_line_break(),
if_group_breaks(&token(")"))
])
.with_id(Some(parens_id))
.fmt(&mut f)
} else {
group(&format_args![
if_group_breaks(&token("(")),
Arguments::from(&self.inner),
if_group_breaks(&token(")")),
])
.fmt(&mut f)
}
}
}
}

View File

@@ -3,7 +3,7 @@
use std::path::{Path, PathBuf};
use anyhow::{Context, Result};
use clap::{Parser, ValueEnum, command};
use clap::{Parser, ValueEnum};
use ruff_formatter::SourceCode;
use ruff_python_ast::{PySourceType, PythonVersion};

View File

@@ -10,6 +10,7 @@ use crate::expression::parentheses::{
NeedsParentheses, OptionalParentheses, Parentheses, is_expression_parenthesized,
};
use crate::prelude::*;
use crate::preview::is_fluent_layout_split_first_call_enabled;
#[derive(Default)]
pub struct FormatExprAttribute {
@@ -47,20 +48,26 @@ impl FormatNodeRule<ExprAttribute> for FormatExprAttribute {
)
};
if call_chain_layout == CallChainLayout::Fluent {
if call_chain_layout.is_fluent() {
if parenthesize_value {
// Don't propagate the call chain layout.
value.format().with_options(Parentheses::Always).fmt(f)?;
} else {
match value.as_ref() {
Expr::Attribute(expr) => {
expr.format().with_options(call_chain_layout).fmt(f)?;
expr.format()
.with_options(call_chain_layout.transition_after_attribute())
.fmt(f)?;
}
Expr::Call(expr) => {
expr.format().with_options(call_chain_layout).fmt(f)?;
expr.format()
.with_options(call_chain_layout.transition_after_attribute())
.fmt(f)?;
}
Expr::Subscript(expr) => {
expr.format().with_options(call_chain_layout).fmt(f)?;
expr.format()
.with_options(call_chain_layout.transition_after_attribute())
.fmt(f)?;
}
_ => {
value.format().with_options(Parentheses::Never).fmt(f)?;
@@ -105,8 +112,30 @@ impl FormatNodeRule<ExprAttribute> for FormatExprAttribute {
// Allow the `.` on its own line if this is a fluent call chain
// and the value either requires parenthesizing or is a call or subscript expression
// (it's a fluent chain but not the first element).
else if call_chain_layout == CallChainLayout::Fluent {
if parenthesize_value || value.is_call_expr() || value.is_subscript_expr() {
//
// In preview we also break _at_ the first call in the chain.
// For example:
//
// ```diff
// # stable formatting vs. preview
// x = (
// - df.merge()
// + df
// + .merge()
// .groupby()
// .agg()
// .filter()
// )
// ```
else if call_chain_layout.is_fluent() {
if parenthesize_value
|| value.is_call_expr()
|| value.is_subscript_expr()
// Remember to update the doc-comment above when
// stabilizing this behavior.
|| (is_fluent_layout_split_first_call_enabled(f.context())
&& call_chain_layout.is_first_call_like())
{
soft_line_break().fmt(f)?;
}
}
@@ -148,8 +177,8 @@ impl FormatNodeRule<ExprAttribute> for FormatExprAttribute {
)
});
let is_call_chain_root = self.call_chain_layout == CallChainLayout::Default
&& call_chain_layout == CallChainLayout::Fluent;
let is_call_chain_root =
self.call_chain_layout == CallChainLayout::Default && call_chain_layout.is_fluent();
if is_call_chain_root {
write!(f, [group(&format_inner)])
} else {
@@ -169,7 +198,8 @@ impl NeedsParentheses for ExprAttribute {
self.into(),
context.comments().ranges(),
context.source(),
) == CallChainLayout::Fluent
)
.is_fluent()
{
OptionalParentheses::Multiline
} else if context.comments().has_dangling(self) {

View File

@@ -47,7 +47,10 @@ impl FormatNodeRule<ExprCall> for FormatExprCall {
func.format().with_options(Parentheses::Always).fmt(f)
} else {
match func.as_ref() {
Expr::Attribute(expr) => expr.format().with_options(call_chain_layout).fmt(f),
Expr::Attribute(expr) => expr
.format()
.with_options(call_chain_layout.decrement_call_like_count())
.fmt(f),
Expr::Call(expr) => expr.format().with_options(call_chain_layout).fmt(f),
Expr::Subscript(expr) => expr.format().with_options(call_chain_layout).fmt(f),
_ => func.format().with_options(Parentheses::Never).fmt(f),
@@ -67,9 +70,7 @@ impl FormatNodeRule<ExprCall> for FormatExprCall {
// queryset.distinct().order_by(field.name).values_list(field_name_flat_long_long=True)
// )
// ```
if call_chain_layout == CallChainLayout::Fluent
&& self.call_chain_layout == CallChainLayout::Default
{
if call_chain_layout.is_fluent() && self.call_chain_layout == CallChainLayout::Default {
group(&fmt_func).fmt(f)
} else {
fmt_func.fmt(f)
@@ -87,7 +88,8 @@ impl NeedsParentheses for ExprCall {
self.into(),
context.comments().ranges(),
context.source(),
) == CallChainLayout::Fluent
)
.is_fluent()
{
OptionalParentheses::Multiline
} else if context.comments().has_dangling(self) {

View File

@@ -1,15 +1,21 @@
use ruff_formatter::write;
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::ExprLambda;
use ruff_formatter::{FormatRuleWithOptions, RemoveSoftLinesBuffer, format_args, write};
use ruff_python_ast::{AnyNodeRef, Expr, ExprLambda};
use ruff_text_size::Ranged;
use crate::comments::dangling_comments;
use crate::expression::parentheses::{NeedsParentheses, OptionalParentheses};
use crate::builders::parenthesize_if_expands;
use crate::comments::{SourceComment, dangling_comments, leading_comments, trailing_comments};
use crate::expression::parentheses::{
NeedsParentheses, OptionalParentheses, Parentheses, is_expression_parenthesized,
};
use crate::expression::{CallChainLayout, has_own_parentheses};
use crate::other::parameters::ParametersParentheses;
use crate::prelude::*;
use crate::preview::is_parenthesize_lambda_bodies_enabled;
#[derive(Default)]
pub struct FormatExprLambda;
pub struct FormatExprLambda {
layout: ExprLambdaLayout,
}
impl FormatNodeRule<ExprLambda> for FormatExprLambda {
fn fmt_fields(&self, item: &ExprLambda, f: &mut PyFormatter) -> FormatResult<()> {
@@ -20,13 +26,19 @@ impl FormatNodeRule<ExprLambda> for FormatExprLambda {
body,
} = item;
let body = &**body;
let parameters = parameters.as_deref();
let comments = f.context().comments().clone();
let dangling = comments.dangling(item);
let preview = is_parenthesize_lambda_bodies_enabled(f.context());
write!(f, [token("lambda")])?;
if let Some(parameters) = parameters {
// In this context, a dangling comment can either be a comment between the `lambda` the
// Format any dangling comments before the parameters, but save any dangling comments after
// the parameters/after the header to be formatted with the body below.
let dangling_header_comments = if let Some(parameters) = parameters {
// In this context, a dangling comment can either be a comment between the `lambda` and the
// parameters, or a comment between the parameters and the body.
let (dangling_before_parameters, dangling_after_parameters) = dangling
.split_at(dangling.partition_point(|comment| comment.end() < parameters.start()));
@@ -86,7 +98,7 @@ impl FormatNodeRule<ExprLambda> for FormatExprLambda {
// *x: x
// )
// ```
if comments.has_leading(&**parameters) {
if comments.has_leading(parameters) {
hard_line_break().fmt(f)?;
} else {
write!(f, [space()])?;
@@ -95,32 +107,90 @@ impl FormatNodeRule<ExprLambda> for FormatExprLambda {
write!(f, [dangling_comments(dangling_before_parameters)])?;
}
write!(
f,
[parameters
.format()
.with_options(ParametersParentheses::Never)]
)?;
write!(f, [token(":")])?;
if dangling_after_parameters.is_empty() {
write!(f, [space()])?;
// Try to keep the parameters on a single line, unless there are intervening comments.
if preview && !comments.contains_comments(parameters.into()) {
let mut buffer = RemoveSoftLinesBuffer::new(f);
write!(
buffer,
[parameters
.format()
.with_options(ParametersParentheses::Never)]
)?;
} else {
write!(f, [dangling_comments(dangling_after_parameters)])?;
write!(
f,
[parameters
.format()
.with_options(ParametersParentheses::Never)]
)?;
}
dangling_after_parameters
} else {
write!(f, [token(":")])?;
dangling
};
// In this context, a dangling comment is a comment between the `lambda` and the body.
if dangling.is_empty() {
write!(f, [space()])?;
} else {
write!(f, [dangling_comments(dangling)])?;
}
write!(f, [token(":")])?;
if dangling_header_comments.is_empty() {
write!(f, [space()])?;
} else if !preview {
write!(f, [dangling_comments(dangling_header_comments)])?;
}
write!(f, [body.format()])
if !preview {
return body.format().fmt(f);
}
let fmt_body = FormatBody {
body,
dangling_header_comments,
};
match self.layout {
ExprLambdaLayout::Assignment => fits_expanded(&fmt_body).fmt(f),
ExprLambdaLayout::Default => fmt_body.fmt(f),
}
}
}
#[derive(Debug, Default, Copy, Clone)]
pub enum ExprLambdaLayout {
#[default]
Default,
/// The [`ExprLambda`] is the direct child of an assignment expression, so it needs to use
/// `fits_expanded` to prefer parenthesizing its own body before the assignment tries to
/// parenthesize the whole lambda. For example, we want this formatting:
///
/// ```py
/// long_assignment_target = lambda x, y, z: (
/// x + y + z
/// )
/// ```
///
/// instead of either of these:
///
/// ```py
/// long_assignment_target = (
/// lambda x, y, z: (
/// x + y + z
/// )
/// )
///
/// long_assignment_target = (
/// lambda x, y, z: x + y + z
/// )
/// ```
Assignment,
}
impl FormatRuleWithOptions<ExprLambda, PyFormatContext<'_>> for FormatExprLambda {
type Options = ExprLambdaLayout;
fn with_options(mut self, options: Self::Options) -> Self {
self.layout = options;
self
}
}
@@ -137,3 +207,267 @@ impl NeedsParentheses for ExprLambda {
}
}
}
struct FormatBody<'a> {
body: &'a Expr,
/// Dangling comments attached to the lambda header that should be formatted with the body.
///
/// These can include both own-line and end-of-line comments. For lambdas with parameters, this
/// means comments after the parameters:
///
/// ```py
/// (
/// lambda x, y # 1
/// # 2
/// : # 3
/// # 4
/// x + y
/// )
/// ```
///
/// Or all dangling comments for lambdas without parameters:
///
/// ```py
/// (
/// lambda # 1
/// # 2
/// : # 3
/// # 4
/// 1
/// )
/// ```
///
/// In most cases these should be formatted within the parenthesized body, as in:
///
/// ```py
/// (
/// lambda: ( # 1
/// # 2
/// # 3
/// # 4
/// 1
/// )
/// )
/// ```
///
/// or without `# 2`:
///
/// ```py
/// (
/// lambda: ( # 1 # 3
/// # 4
/// 1
/// )
/// )
/// ```
dangling_header_comments: &'a [SourceComment],
}
impl Format<PyFormatContext<'_>> for FormatBody<'_> {
fn fmt(&self, f: &mut PyFormatter) -> FormatResult<()> {
let FormatBody {
dangling_header_comments,
body,
} = self;
let body = *body;
let comments = f.context().comments().clone();
let body_comments = comments.leading_dangling_trailing(body);
if !dangling_header_comments.is_empty() {
// Split the dangling header comments into trailing comments formatted with the lambda
// header (1) and leading comments formatted with the body (2, 3, 4).
//
// ```python
// (
// lambda # 1
// # 2
// : # 3
// # 4
// y
// )
// ```
//
// Note that these are split based on their line position rather than using
// `partition_point` based on a range, for example.
let (trailing_header_comments, leading_body_comments) = dangling_header_comments
.split_at(
dangling_header_comments
.iter()
.position(|comment| comment.line_position().is_own_line())
.unwrap_or(dangling_header_comments.len()),
);
// If the body is parenthesized and has its own leading comments, preserve the
// separation between the dangling lambda comments and the body comments. For
// example, preserve this comment positioning:
//
// ```python
// (
// lambda: # 1
// # 2
// ( # 3
// x
// )
// )
// ```
//
// 1 and 2 are dangling on the lambda and emitted first, followed by a hard line
// break and the parenthesized body with its leading comments.
//
// However, when removing 2, 1 and 3 can instead be formatted on the same line:
//
// ```python
// (
// lambda: ( # 1 # 3
// x
// )
// )
// ```
let comments = f.context().comments();
if is_expression_parenthesized(body.into(), comments.ranges(), f.context().source())
&& comments.has_leading(body)
{
trailing_comments(dangling_header_comments).fmt(f)?;
// Note that `leading_body_comments` have already been formatted as part of
// `dangling_header_comments` above, but their presence still determines the spacing
// here.
if leading_body_comments.is_empty() {
space().fmt(f)?;
} else {
hard_line_break().fmt(f)?;
}
body.format().with_options(Parentheses::Always).fmt(f)
} else {
write!(
f,
[
space(),
token("("),
trailing_comments(trailing_header_comments),
block_indent(&format_args!(
leading_comments(leading_body_comments),
body.format().with_options(Parentheses::Never)
)),
token(")")
]
)
}
}
// If the body has comments, we always want to preserve the parentheses. This also
// ensures that we correctly handle parenthesized comments, and don't need to worry
// about them in the implementation below.
else if body_comments.has_leading() || body_comments.has_trailing_own_line() {
body.format().with_options(Parentheses::Always).fmt(f)
}
// Calls and subscripts require special formatting because they have their own
// parentheses, but they can also have an arbitrary amount of text before the
// opening parenthesis. We want to avoid cases where we keep a long callable on the
// same line as the lambda parameters. For example, `db_evmtx...` in:
//
// ```py
// transaction_count = self._query_txs_for_range(
// get_count_fn=lambda from_ts, to_ts, _chain_id=chain_id: db_evmtx.count_transactions_in_range(
// chain_id=_chain_id,
// from_ts=from_ts,
// to_ts=to_ts,
// ),
// )
// ```
//
// should cause the whole lambda body to be parenthesized instead:
//
// ```py
// transaction_count = self._query_txs_for_range(
// get_count_fn=lambda from_ts, to_ts, _chain_id=chain_id: (
// db_evmtx.count_transactions_in_range(
// chain_id=_chain_id,
// from_ts=from_ts,
// to_ts=to_ts,
// )
// ),
// )
// ```
else if matches!(body, Expr::Call(_) | Expr::Subscript(_)) {
let unparenthesized = body.format().with_options(Parentheses::Never);
if CallChainLayout::from_expression(
body.into(),
comments.ranges(),
f.context().source(),
)
.is_fluent()
{
parenthesize_if_expands(&unparenthesized).fmt(f)
} else {
let unparenthesized = unparenthesized.memoized();
if unparenthesized.inspect(f)?.will_break() {
expand_parent().fmt(f)?;
}
best_fitting![
// body all flat
unparenthesized,
// body expanded
group(&unparenthesized).should_expand(true),
// parenthesized
format_args![token("("), block_indent(&unparenthesized), token(")")]
]
.fmt(f)
}
}
// For other cases with their own parentheses, such as lists, sets, dicts, tuples,
// etc., we can just format the body directly. Their own formatting results in the
// lambda being formatted well too. For example:
//
// ```py
// lambda xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz: [xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz]
// ```
//
// gets formatted as:
//
// ```py
// lambda xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz: [
// xxxxxxxxxxxxxxxxxxxx,
// yyyyyyyyyyyyyyyyyyyy,
// zzzzzzzzzzzzzzzzzzzz
// ]
// ```
else if has_own_parentheses(body, f.context()).is_some() {
body.format().fmt(f)
}
// Finally, for expressions without their own parentheses, use
// `parenthesize_if_expands` to add parentheses around the body, only if it expands
// across multiple lines. The `Parentheses::Never` here also removes unnecessary
// parentheses around lambda bodies that fit on one line. For example:
//
// ```py
// lambda xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz: xxxxxxxxxxxxxxxxxxxx + yyyyyyyyyyyyyyyyyyyy + zzzzzzzzzzzzzzzzzzzz
// ```
//
// is formatted as:
//
// ```py
// lambda xxxxxxxxxxxxxxxxxxxx, yyyyyyyyyyyyyyyyyyyy, zzzzzzzzzzzzzzzzzzzz: (
// xxxxxxxxxxxxxxxxxxxx + yyyyyyyyyyyyyyyyyyyy + zzzzzzzzzzzzzzzzzzzz
// )
// ```
//
// while
//
// ```py
// lambda xxxxxxxxxxxxxxxxxxxx: (xxxxxxxxxxxxxxxxxxxx + 1)
// ```
//
// is formatted as:
//
// ```py
// lambda xxxxxxxxxxxxxxxxxxxx: xxxxxxxxxxxxxxxxxxxx + 1
// ```
else {
parenthesize_if_expands(&body.format().with_options(Parentheses::Never)).fmt(f)
}
}
}
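A small input/output sketch of the preview behavior described in the `ExprLambdaLayout::Assignment` doc comment above, assuming the statement exceeds the configured line width (the output shape is taken from that comment):

```python
# Input:
long_assignment_target = lambda x, y, z: x + y + z

# Preferred preview output: parenthesize the lambda body, not the whole lambda.
long_assignment_target = lambda x, y, z: (
    x + y + z
)
```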

View File

@@ -51,7 +51,10 @@ impl FormatNodeRule<ExprSubscript> for FormatExprSubscript {
value.format().with_options(Parentheses::Always).fmt(f)
} else {
match value.as_ref() {
Expr::Attribute(expr) => expr.format().with_options(call_chain_layout).fmt(f),
Expr::Attribute(expr) => expr
.format()
.with_options(call_chain_layout.decrement_call_like_count())
.fmt(f),
Expr::Call(expr) => expr.format().with_options(call_chain_layout).fmt(f),
Expr::Subscript(expr) => expr.format().with_options(call_chain_layout).fmt(f),
_ => value.format().with_options(Parentheses::Never).fmt(f),
@@ -71,8 +74,8 @@ impl FormatNodeRule<ExprSubscript> for FormatExprSubscript {
.fmt(f)
});
let is_call_chain_root = self.call_chain_layout == CallChainLayout::Default
&& call_chain_layout == CallChainLayout::Fluent;
let is_call_chain_root =
self.call_chain_layout == CallChainLayout::Default && call_chain_layout.is_fluent();
if is_call_chain_root {
write!(f, [group(&format_inner)])
} else {
@@ -92,7 +95,8 @@ impl NeedsParentheses for ExprSubscript {
self.into(),
context.comments().ranges(),
context.source(),
) == CallChainLayout::Fluent
)
.is_fluent()
{
OptionalParentheses::Multiline
} else if is_expression_parenthesized(

View File

@@ -876,6 +876,22 @@ impl<'a> First<'a> {
/// )
/// ).all()
/// ```
///
/// In [`preview`](crate::preview::is_fluent_layout_split_first_call_enabled), we also track the position of the leftmost call or
/// subscript on an attribute in the chain and break just before the dot.
///
/// So, for example, the right-hand summand in the above expression
/// would get formatted as:
/// ```python
/// Blog.objects
/// .filter(
/// entry__headline__contains="McCartney",
/// )
/// .limit_results[:10]
/// .filter(
/// entry__pub_date__year=2010,
/// )
/// ```
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq)]
pub enum CallChainLayout {
/// The root of a call chain
@@ -883,19 +899,149 @@ pub enum CallChainLayout {
Default,
/// A nested call chain element that uses fluent style.
Fluent,
Fluent(AttributeState),
/// A nested call chain element not using fluent style.
NonFluent,
}
/// Records information about the current position within
/// a call chain.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum AttributeState {
/// Stores the number of calls or subscripts
/// to the left of the current position in a chain.
///
/// Consecutive calls/subscripts on a single
/// object only count once. For example, if we are at
/// `c` in `a.b()[0]()().c()` then this number would be 1.
///
/// Caveat: If the root of the chain is parenthesized,
/// it contributes +1 to this count, even if it is not
/// a call or subscript. But the name
/// `CallLikeOrParenthesizedRootPreceding`
/// is a tad unwieldy, and this also rarely occurs.
CallLikePreceding(u32),
/// Indicates that we are at the first called or
/// subscripted object in the chain
///
/// For example, if we are at `b` in `a.b()[0]()().c()`
FirstCallLike,
/// Indicates that we are to the left of the first
/// called or subscripted object in the chain, and therefore
/// need not break.
///
/// For example, if we are at `a` in `a.b()[0]()().c()`
BeforeFirstCallLike,
}
impl CallChainLayout {
/// Returns a new state, decreasing the count of remaining calls/subscripts
/// to traverse, or the state `FirstCallLike`, as appropriate.
#[must_use]
pub(crate) fn decrement_call_like_count(self) -> Self {
match self {
Self::Fluent(AttributeState::CallLikePreceding(x)) => {
if x > 1 {
// Recall that we traverse call chains from right to
// left. So after moving from a call/subscript into
// an attribute, we _decrease_ the count of
// _remaining_ calls or subscripts to the left of our
// current position.
Self::Fluent(AttributeState::CallLikePreceding(x - 1))
} else {
Self::Fluent(AttributeState::FirstCallLike)
}
}
_ => self,
}
}
/// Returns with the state change
/// `FirstCallLike` -> `BeforeFirstCallLike`
/// and otherwise returns `self` unchanged.
#[must_use]
pub(crate) fn transition_after_attribute(self) -> Self {
match self {
Self::Fluent(AttributeState::FirstCallLike) => {
Self::Fluent(AttributeState::BeforeFirstCallLike)
}
_ => self,
}
}
pub(crate) fn is_first_call_like(self) -> bool {
matches!(self, Self::Fluent(AttributeState::FirstCallLike))
}
/// Returns either `Fluent` or `NonFluent` depending on a
/// heuristic computed for the whole chain.
///
/// Explicitly, the criterion to return `Fluent` is
/// as follows:
///
/// 1. Beginning from the right (i.e. the `expr` itself),
/// traverse inwards past calls, subscripts, and attribute
/// expressions until we meet the first expression that is
/// either none of these or else is parenthesized. This will
/// be the _root_ of the call chain.
/// 2. Count the number of _attribute values_ that are _called
/// or subscripted_ in the chain (note that this includes the
/// root but excludes the rightmost attribute in the chain since
/// it is not the _value_ of some attribute).
/// 3. If the root is parenthesized, add 1 to that value.
/// 4. If the total is at least 2, return `Fluent`. Otherwise
/// return `NonFluent`
pub(crate) fn from_expression(
mut expr: ExprRef,
comment_ranges: &CommentRanges,
source: &str,
) -> Self {
let mut attributes_after_parentheses = 0;
// TODO(dylan): Once the fluent layout preview style is
// stabilized, see if it is possible to simplify some of
// the logic around parenthesized roots. (While supporting
// both styles it is more difficult to do this.)
// Count of attribute _values_ which are called or
// subscripted, after the leftmost parenthesized
// value.
//
// Examples:
// ```
// # Count of 3 - notice that .d()
// # does not contribute
// a().b().c[0]()().d()
// # Count of 2 - notice that a()
// # does not contribute
// (a()).b().c[0].d
// ```
let mut computed_attribute_values_after_parentheses = 0;
// Similar to the above, but instead looks at all calls
// and subscripts rather than looking only at those on
// _attribute values_. So this count can differ from the
// above.
//
// Examples of `computed_attribute_values_after_parentheses` vs
// `call_like_count`:
//
// a().b ---> 1 vs 1
// a.b().c --> 1 vs 1
// a.b() ---> 0 vs 1
let mut call_like_count = 0;
// Going from right to left, we traverse calls, subscripts,
// and attributes until we get to an expression of a different
// kind _or_ to a parenthesized expression. This records
// the case where we end the traversal at a parenthesized expression.
//
// In these cases, the inferred semantics of the chain are different.
// We interpret this as the user indicating:
// "this parenthesized value is the object of interest and we are
// doing transformations on it". This increases our confidence that
// this should be fluently formatted, and also means we should make
// our first break after this value.
let mut root_value_parenthesized = false;
loop {
match expr {
ExprRef::Attribute(ast::ExprAttribute { value, .. }) => {
@@ -907,10 +1053,10 @@ impl CallChainLayout {
// ```
if is_expression_parenthesized(value.into(), comment_ranges, source) {
// `(a).b`. We preserve these parentheses so don't recurse
attributes_after_parentheses += 1;
root_value_parenthesized = true;
break;
} else if matches!(value.as_ref(), Expr::Call(_) | Expr::Subscript(_)) {
attributes_after_parentheses += 1;
computed_attribute_values_after_parentheses += 1;
}
expr = ExprRef::from(value.as_ref());
@@ -925,31 +1071,68 @@ impl CallChainLayout {
// ```
ExprRef::Call(ast::ExprCall { func: inner, .. })
| ExprRef::Subscript(ast::ExprSubscript { value: inner, .. }) => {
// We preserve these parentheses so don't recurse
// e.g. (a)[0].x().y().z()
// ^stop here
if is_expression_parenthesized(inner.into(), comment_ranges, source) {
break;
}
// Accumulate the `call_like_count`, but we only
// want to count things like `a()[0]()()` once.
if !inner.is_call_expr() && !inner.is_subscript_expr() {
call_like_count += 1;
}
expr = ExprRef::from(inner.as_ref());
}
_ => {
// We want to format the following in fluent style:
// ```
// f2 = (a).w().t(1,)
// ^ expr
// ```
if is_expression_parenthesized(expr, comment_ranges, source) {
attributes_after_parentheses += 1;
}
break;
}
}
// We preserve these parentheses so don't recurse
if is_expression_parenthesized(expr, comment_ranges, source) {
break;
}
}
if attributes_after_parentheses < 2 {
if computed_attribute_values_after_parentheses + u32::from(root_value_parenthesized) < 2 {
CallChainLayout::NonFluent
} else {
CallChainLayout::Fluent
CallChainLayout::Fluent(AttributeState::CallLikePreceding(
// We count a parenthesized root value as an extra
// call for the purposes of tracking state.
//
// The reason is that, in this case, we want the first
// "special" break to happen right after the root, as
// opposed to right after the first called/subscripted
// attribute.
//
// For example:
//
// ```
// (object_of_interest)
// .data.filter()
// .agg()
// .etc()
// ```
//
// instead of (in preview):
//
// ```
// (object_of_interest)
// .data
// .filter()
// .etc()
// ```
//
// For comparison, if we didn't have parentheses around
// the root, we want (and get, in preview):
//
// ```
// object_of_interest.data
// .filter()
// .agg()
// .etc()
// ```
call_like_count + u32::from(root_value_parenthesized),
))
}
}
@@ -972,9 +1155,13 @@ impl CallChainLayout {
CallChainLayout::NonFluent
}
}
layout @ (CallChainLayout::Fluent | CallChainLayout::NonFluent) => layout,
layout @ (CallChainLayout::Fluent(_) | CallChainLayout::NonFluent) => layout,
}
}
pub(crate) fn is_fluent(self) -> bool {
matches!(self, CallChainLayout::Fluent(_))
}
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
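A worked sketch of the fluent-layout criterion described in the `from_expression` doc comment above (counts applied by hand from the stated rules; no particular formatter output is implied):

```python
# `a.b().c().d()`: attribute values that are calls or subscripts:
#   `.d`'s value `a.b().c()` is a call -> +1
#   `.c`'s value `a.b()`     is a call -> +1
#   `.b`'s value `a`         is not    -> +0
# The root `a` is unparenthesized (+0), so the total is 2 >= 2: Fluent.
x = a.b().c().d()

# `a.b()`: `.b`'s value `a` is neither a call nor a subscript, and the root
# is unparenthesized, so the total is 0 < 2: NonFluent.
x = a.b()
```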

View File

@@ -52,3 +52,17 @@ pub(crate) const fn is_avoid_parens_for_long_as_captures_enabled(
) -> bool {
context.is_preview()
}
/// Returns `true` if the
/// [`parenthesize_lambda_bodies`](https://github.com/astral-sh/ruff/pull/21385) preview style is
/// enabled.
pub(crate) const fn is_parenthesize_lambda_bodies_enabled(context: &PyFormatContext) -> bool {
context.is_preview()
}
/// Returns `true` if the
/// [`fluent_layout_split_first_call`](https://github.com/astral-sh/ruff/pull/21369) preview
/// style is enabled.
pub(crate) const fn is_fluent_layout_split_first_call_enabled(context: &PyFormatContext) -> bool {
context.is_preview()
}

View File

@@ -9,6 +9,7 @@ use crate::comments::{
Comments, LeadingDanglingTrailingComments, SourceComment, trailing_comments,
};
use crate::context::{NodeLevel, WithNodeLevel};
use crate::expression::expr_lambda::ExprLambdaLayout;
use crate::expression::parentheses::{
NeedsParentheses, OptionalParentheses, Parentheses, Parenthesize, is_expression_parenthesized,
optional_parentheses,
@@ -18,6 +19,7 @@ use crate::expression::{
maybe_parenthesize_expression,
};
use crate::other::interpolated_string::InterpolatedStringLayout;
use crate::preview::is_parenthesize_lambda_bodies_enabled;
use crate::statement::trailing_semicolon;
use crate::string::StringLikeExtensions;
use crate::string::implicit::{
@@ -303,12 +305,7 @@ impl Format<PyFormatContext<'_>> for FormatStatementsLastExpression<'_> {
&& format_implicit_flat.is_none()
&& format_interpolated_string.is_none()
{
return maybe_parenthesize_expression(
value,
*statement,
Parenthesize::IfBreaks,
)
.fmt(f);
return maybe_parenthesize_value(value, *statement).fmt(f);
}
let comments = f.context().comments().clone();
@@ -586,11 +583,7 @@ impl Format<PyFormatContext<'_>> for FormatStatementsLastExpression<'_> {
space(),
operator,
space(),
maybe_parenthesize_expression(
value,
*statement,
Parenthesize::IfBreaks
)
maybe_parenthesize_value(value, *statement)
]
);
}
@@ -1369,3 +1362,32 @@ fn is_attribute_with_parenthesized_value(target: &Expr, context: &PyFormatContex
_ => false,
}
}
/// Like [`maybe_parenthesize_expression`] but with special handling for lambdas in preview.
fn maybe_parenthesize_value<'a>(
expression: &'a Expr,
parent: AnyNodeRef<'a>,
) -> MaybeParenthesizeValue<'a> {
MaybeParenthesizeValue { expression, parent }
}
struct MaybeParenthesizeValue<'a> {
expression: &'a Expr,
parent: AnyNodeRef<'a>,
}
impl Format<PyFormatContext<'_>> for MaybeParenthesizeValue<'_> {
fn fmt(&self, f: &mut PyFormatter) -> FormatResult<()> {
let MaybeParenthesizeValue { expression, parent } = self;
if is_parenthesize_lambda_bodies_enabled(f.context())
&& let Expr::Lambda(lambda) = expression
&& !f.context().comments().has_leading(lambda)
{
parenthesize_if_expands(&lambda.format().with_options(ExprLambdaLayout::Assignment))
.fmt(f)
} else {
maybe_parenthesize_expression(expression, *parent, Parenthesize::IfBreaks).fmt(f)
}
}
}


@@ -1,4 +1,7 @@
use crate::normalizer::Normalizer;
use anyhow::anyhow;
use datatest_stable::Utf8Path;
use insta::assert_snapshot;
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticFormat, DiagnosticId, DisplayDiagnosticConfig,
DisplayDiagnostics, DummyFileResolver, Severity, Span, SubDiagnostic, SubDiagnosticSeverity,
@@ -24,26 +27,27 @@ use std::{fmt, fs};
mod normalizer;
#[test]
fn black_compatibility() {
let test_file = |input_path: &Path| {
let content = fs::read_to_string(input_path).unwrap();
#[expect(clippy::needless_pass_by_value)]
fn black_compatibility(input_path: &Utf8Path, content: String) -> datatest_stable::Result<()> {
let test_name = input_path
.strip_prefix("./resources/test/fixtures/black")
.unwrap_or(input_path)
.as_str();
let options_path = input_path.with_extension("options.json");
let options_path = input_path.with_extension("options.json");
let options: PyFormatOptions = if let Ok(options_file) = fs::File::open(&options_path) {
let reader = BufReader::new(options_file);
serde_json::from_reader(reader).unwrap_or_else(|_| {
panic!("Expected option file {options_path:?} to be a valid Json file")
})
} else {
PyFormatOptions::from_extension(input_path)
};
let options: PyFormatOptions = if let Ok(options_file) = fs::File::open(&options_path) {
let reader = BufReader::new(options_file);
serde_json::from_reader(reader).map_err(|err| {
anyhow!("Expected option file {options_path:?} to be a valid Json file: {err}")
})?
} else {
PyFormatOptions::from_extension(input_path.as_std_path())
};
let first_line = content.lines().next().unwrap_or_default();
let formatted_code = if first_line.starts_with("# flags:")
&& first_line.contains("--line-ranges=")
{
let first_line = content.lines().next().unwrap_or_default();
let formatted_code =
if first_line.starts_with("# flags:") && first_line.contains("--line-ranges=") {
let line_index = LineIndex::from_source_text(&content);
let ranges = first_line
@@ -69,13 +73,9 @@ fn black_compatibility() {
let mut formatted_code = content.clone();
for range in ranges {
let formatted =
format_range(&content, range, options.clone()).unwrap_or_else(|err| {
panic!(
"Range-formatting of {} to succeed but encountered error {err}",
input_path.display()
)
});
let formatted = format_range(&content, range, options.clone()).map_err(|err| {
anyhow!("Range-formatting to succeed but encountered error {err}")
})?;
let range = formatted.source_range();
@@ -86,12 +86,8 @@ fn black_compatibility() {
formatted_code
} else {
let printed = format_module_source(&content, options.clone()).unwrap_or_else(|err| {
panic!(
"Formatting of {} to succeed but encountered error {err}",
input_path.display()
)
});
let printed = format_module_source(&content, options.clone())
.map_err(|err| anyhow!("Formatting to succeed but encountered error {err}"))?;
let formatted_code = printed.into_code();
@@ -100,191 +96,133 @@ fn black_compatibility() {
formatted_code
};
let extension = input_path
.extension()
.expect("Test file to have py or pyi extension")
.to_string_lossy();
let expected_path = input_path.with_extension(format!("{extension}.expect"));
let expected_output = fs::read_to_string(&expected_path)
.unwrap_or_else(|_| panic!("Expected Black output file '{expected_path:?}' to exist"));
let extension = input_path
.extension()
.expect("Test file to have py or pyi extension");
let expected_path = input_path.with_extension(format!("{extension}.expect"));
let expected_output = fs::read_to_string(&expected_path)
.unwrap_or_else(|_| panic!("Expected Black output file '{expected_path:?}' to exist"));
let unsupported_syntax_errors =
ensure_unchanged_ast(&content, &formatted_code, &options, input_path);
let unsupported_syntax_errors =
ensure_unchanged_ast(&content, &formatted_code, &options, input_path);
if formatted_code == expected_output {
// Black and Ruff formatting matches. Delete any existing snapshot files because the Black output
// already perfectly captures the expected output.
// The following code mimics insta's logic generating the snapshot name for a test.
let workspace_path = std::env::var("CARGO_MANIFEST_DIR").unwrap();
// Black and Ruff formatting matches. Delete any existing snapshot files because the Black output
// already perfectly captures the expected output.
// The following code mimics insta's logic generating the snapshot name for a test.
let workspace_path = std::env::var("CARGO_MANIFEST_DIR").unwrap();
let mut components = input_path.components().rev();
let file_name = components.next().unwrap();
let test_suite = components.next().unwrap();
let full_snapshot_name = format!("black_compatibility@{test_name}.snap");
let snapshot_name = format!(
"black_compatibility@{}__{}.snap",
test_suite.as_os_str().to_string_lossy(),
file_name.as_os_str().to_string_lossy()
);
let snapshot_path = Path::new(&workspace_path)
.join("tests/snapshots")
.join(full_snapshot_name);
let snapshot_path = Path::new(&workspace_path)
.join("tests/snapshots")
.join(snapshot_name);
if snapshot_path.exists() && snapshot_path.is_file() {
// SAFETY: This is a convenience feature. That's why we don't want to abort
// when deleting a no longer needed snapshot fails.
fs::remove_file(&snapshot_path).ok();
}
let new_snapshot_path = snapshot_path.with_extension("snap.new");
if new_snapshot_path.exists() && new_snapshot_path.is_file() {
// SAFETY: This is a convenience feature. That's why we don't want to abort
// when deleting a no longer needed snapshot fails.
fs::remove_file(&new_snapshot_path).ok();
}
} else {
// Black and Ruff have different formatting. Write out a snapshot that covers the differences
// today.
let mut snapshot = String::new();
write!(snapshot, "{}", Header::new("Input")).unwrap();
write!(snapshot, "{}", CodeFrame::new("python", &content)).unwrap();
write!(snapshot, "{}", Header::new("Black Differences")).unwrap();
let diff = TextDiff::from_lines(expected_output.as_str(), &formatted_code)
.unified_diff()
.header("Black", "Ruff")
.to_string();
write!(snapshot, "{}", CodeFrame::new("diff", &diff)).unwrap();
write!(snapshot, "{}", Header::new("Ruff Output")).unwrap();
write!(snapshot, "{}", CodeFrame::new("python", &formatted_code)).unwrap();
write!(snapshot, "{}", Header::new("Black Output")).unwrap();
write!(snapshot, "{}", CodeFrame::new("python", &expected_output)).unwrap();
if !unsupported_syntax_errors.is_empty() {
write!(snapshot, "{}", Header::new("New Unsupported Syntax Errors")).unwrap();
writeln!(
snapshot,
"{}",
DisplayDiagnostics::new(
&DummyFileResolver,
&DisplayDiagnosticConfig::default().format(DiagnosticFormat::Full),
&unsupported_syntax_errors
)
)
.unwrap();
}
insta::with_settings!({
omit_expression => true,
input_file => input_path,
prepend_module_to_snapshot => false,
}, {
insta::assert_snapshot!(snapshot);
});
if formatted_code == expected_output {
if snapshot_path.exists() && snapshot_path.is_file() {
// SAFETY: This is a convenience feature. That's why we don't want to abort
// when deleting a no longer needed snapshot fails.
fs::remove_file(&snapshot_path).ok();
}
};
insta::glob!(
"../resources",
"test/fixtures/black/**/*.{py,pyi}",
test_file
);
let new_snapshot_path = snapshot_path.with_extension("snap.new");
if new_snapshot_path.exists() && new_snapshot_path.is_file() {
// SAFETY: This is a convenience feature. That's why we don't want to abort
// when deleting a no longer needed snapshot fails.
fs::remove_file(&new_snapshot_path).ok();
}
} else {
// Black and Ruff have different formatting. Write out a snapshot that covers the differences
// today.
let mut snapshot = String::new();
write!(snapshot, "{}", Header::new("Input")).unwrap();
write!(snapshot, "{}", CodeFrame::new("python", &content)).unwrap();
write!(snapshot, "{}", Header::new("Black Differences")).unwrap();
let diff = TextDiff::from_lines(expected_output.as_str(), &formatted_code)
.unified_diff()
.header("Black", "Ruff")
.to_string();
write!(snapshot, "{}", CodeFrame::new("diff", &diff)).unwrap();
write!(snapshot, "{}", Header::new("Ruff Output")).unwrap();
write!(snapshot, "{}", CodeFrame::new("python", &formatted_code)).unwrap();
write!(snapshot, "{}", Header::new("Black Output")).unwrap();
write!(snapshot, "{}", CodeFrame::new("python", &expected_output)).unwrap();
if !unsupported_syntax_errors.is_empty() {
write!(snapshot, "{}", Header::new("New Unsupported Syntax Errors")).unwrap();
writeln!(
snapshot,
"{}",
DisplayDiagnostics::new(
&DummyFileResolver,
&DisplayDiagnosticConfig::default().format(DiagnosticFormat::Full),
&unsupported_syntax_errors
)
)
.unwrap();
}
let mut settings = insta::Settings::clone_current();
settings.set_omit_expression(true);
settings.set_input_file(input_path);
settings.set_prepend_module_to_snapshot(false);
settings.set_snapshot_suffix(test_name);
let _settings = settings.bind_to_scope();
assert_snapshot!(snapshot);
}
Ok(())
}
#[test]
fn format() {
let test_file = |input_path: &Path| {
let content = fs::read_to_string(input_path).unwrap();
#[expect(clippy::needless_pass_by_value)]
fn format(input_path: &Utf8Path, content: String) -> datatest_stable::Result<()> {
let test_name = input_path
.strip_prefix("./resources/test/fixtures/ruff")
.unwrap_or(input_path)
.as_str();
let mut snapshot = format!("## Input\n{}", CodeFrame::new("python", &content));
let options_path = input_path.with_extension("options.json");
let mut snapshot = format!("## Input\n{}", CodeFrame::new("python", &content));
let options_path = input_path.with_extension("options.json");
if let Ok(options_file) = fs::File::open(&options_path) {
let reader = BufReader::new(options_file);
let options: Vec<PyFormatOptions> =
serde_json::from_reader(reader).unwrap_or_else(|_| {
panic!("Expected option file {options_path:?} to be a valid Json file")
});
if let Ok(options_file) = fs::File::open(&options_path) {
let reader = BufReader::new(options_file);
let options: Vec<PyFormatOptions> = serde_json::from_reader(reader).map_err(|_| {
anyhow!("Expected option file {options_path:?} to be a valid Json file")
})?;
writeln!(snapshot, "## Outputs").unwrap();
writeln!(snapshot, "## Outputs").unwrap();
for (i, options) in options.into_iter().enumerate() {
let (formatted_code, unsupported_syntax_errors) =
format_file(&content, &options, input_path);
writeln!(
snapshot,
"### Output {}\n{}{}",
i + 1,
CodeFrame::new("", &DisplayPyOptions(&options)),
CodeFrame::new("python", &formatted_code)
)
.unwrap();
if options.preview().is_enabled() {
continue;
}
// We want to capture the differences in the preview style in our fixtures
let options_preview = options.with_preview(PreviewMode::Enabled);
let (formatted_preview, _) = format_file(&content, &options_preview, input_path);
if formatted_code != formatted_preview {
// Having both snapshots makes it hard to see the difference, so we're keeping only
// the diff.
writeln!(
snapshot,
"#### Preview changes\n{}",
CodeFrame::new(
"diff",
TextDiff::from_lines(&formatted_code, &formatted_preview)
.unified_diff()
.header("Stable", "Preview")
)
)
.unwrap();
}
if !unsupported_syntax_errors.is_empty() {
writeln!(
snapshot,
"### Unsupported Syntax Errors\n{}",
DisplayDiagnostics::new(
&DummyFileResolver,
&DisplayDiagnosticConfig::default().format(DiagnosticFormat::Full),
&unsupported_syntax_errors
)
)
.unwrap();
}
}
} else {
// We want to capture the differences in the preview style in our fixtures
let options = PyFormatOptions::from_extension(input_path);
for (i, options) in options.into_iter().enumerate() {
let (formatted_code, unsupported_syntax_errors) =
format_file(&content, &options, input_path);
writeln!(
snapshot,
"### Output {}\n{}{}",
i + 1,
CodeFrame::new("", &DisplayPyOptions(&options)),
CodeFrame::new("python", &formatted_code)
)
.unwrap();
if options.preview().is_enabled() {
continue;
}
// We want to capture the differences in the preview style in our fixtures
let options_preview = options.with_preview(PreviewMode::Enabled);
let (formatted_preview, _) = format_file(&content, &options_preview, input_path);
if formatted_code == formatted_preview {
writeln!(
snapshot,
"## Output\n{}",
CodeFrame::new("python", &formatted_code)
)
.unwrap();
} else {
if formatted_code != formatted_preview {
// Having both snapshots makes it hard to see the difference, so we're keeping only
// the diff.
writeln!(
snapshot,
"## Output\n{}\n## Preview changes\n{}",
CodeFrame::new("python", &formatted_code),
"#### Preview changes\n{}",
CodeFrame::new(
"diff",
TextDiff::from_lines(&formatted_code, &formatted_preview)
@@ -298,7 +236,7 @@ fn format() {
if !unsupported_syntax_errors.is_empty() {
writeln!(
snapshot,
"## Unsupported Syntax Errors\n{}",
"### Unsupported Syntax Errors\n{}",
DisplayDiagnostics::new(
&DummyFileResolver,
&DisplayDiagnosticConfig::default().format(DiagnosticFormat::Full),
@@ -308,27 +246,74 @@ fn format() {
.unwrap();
}
}
} else {
// We want to capture the differences in the preview style in our fixtures
let options = PyFormatOptions::from_extension(input_path.as_std_path());
let (formatted_code, unsupported_syntax_errors) =
format_file(&content, &options, input_path);
insta::with_settings!({
omit_expression => true,
input_file => input_path,
prepend_module_to_snapshot => false,
}, {
insta::assert_snapshot!(snapshot);
});
};
let options_preview = options.with_preview(PreviewMode::Enabled);
let (formatted_preview, _) = format_file(&content, &options_preview, input_path);
insta::glob!(
"../resources",
"test/fixtures/ruff/**/*.{py,pyi}",
test_file
);
if formatted_code == formatted_preview {
writeln!(
snapshot,
"## Output\n{}",
CodeFrame::new("python", &formatted_code)
)
.unwrap();
} else {
// Having both snapshots makes it hard to see the difference, so we're keeping only
// the diff.
writeln!(
snapshot,
"## Output\n{}\n## Preview changes\n{}",
CodeFrame::new("python", &formatted_code),
CodeFrame::new(
"diff",
TextDiff::from_lines(&formatted_code, &formatted_preview)
.unified_diff()
.header("Stable", "Preview")
)
)
.unwrap();
}
if !unsupported_syntax_errors.is_empty() {
writeln!(
snapshot,
"## Unsupported Syntax Errors\n{}",
DisplayDiagnostics::new(
&DummyFileResolver,
&DisplayDiagnosticConfig::default().format(DiagnosticFormat::Full),
&unsupported_syntax_errors
)
)
.unwrap();
}
}
let mut settings = insta::Settings::clone_current();
settings.set_omit_expression(true);
settings.set_input_file(input_path);
settings.set_prepend_module_to_snapshot(false);
settings.set_snapshot_suffix(test_name);
let _settings = settings.bind_to_scope();
assert_snapshot!(snapshot);
Ok(())
}
datatest_stable::harness! {
{ test = black_compatibility, root = "./resources/test/fixtures/black", pattern = r".+\.pyi?$" },
{ test = format, root="./resources/test/fixtures/ruff", pattern = r".+\.pyi?$" }
}
fn format_file(
source: &str,
options: &PyFormatOptions,
input_path: &Path,
input_path: &Utf8Path,
) -> (String, Vec<Diagnostic>) {
let (unformatted, formatted_code) = if source.contains("<RANGE_START>") {
let mut content = source.to_string();
@@ -363,8 +348,7 @@ fn format_file(
let formatted =
format_range(&format_input, range, options.clone()).unwrap_or_else(|err| {
panic!(
"Range-formatting of {} to succeed but encountered error {err}",
input_path.display()
"Range-formatting of {input_path} to succeed but encountered error {err}",
)
});
@@ -377,10 +361,7 @@ fn format_file(
(Cow::Owned(without_markers), content)
} else {
let printed = format_module_source(source, options.clone()).unwrap_or_else(|err| {
panic!(
"Formatting `{input_path} was expected to succeed but it failed: {err}",
input_path = input_path.display()
)
panic!("Formatting `{input_path} was expected to succeed but it failed: {err}",)
});
let formatted_code = printed.into_code();
@@ -399,22 +380,20 @@ fn format_file(
fn ensure_stability_when_formatting_twice(
formatted_code: &str,
options: &PyFormatOptions,
input_path: &Path,
input_path: &Utf8Path,
) {
let reformatted = match format_module_source(formatted_code, options.clone()) {
Ok(reformatted) => reformatted,
Err(err) => {
let mut diag = Diagnostic::from(&err);
if let Some(range) = err.range() {
let file =
SourceFileBuilder::new(input_path.to_string_lossy(), formatted_code).finish();
let file = SourceFileBuilder::new(input_path.as_str(), formatted_code).finish();
let span = Span::from(file).with_range(range);
diag.annotate(Annotation::primary(span));
}
panic!(
"Expected formatted code of {} to be valid syntax: {err}:\
"Expected formatted code of {input_path} to be valid syntax: {err}:\
\n---\n{formatted_code}---\n{}",
input_path.display(),
diag.display(&DummyFileResolver, &DisplayDiagnosticConfig::default()),
);
}
@@ -440,7 +419,6 @@ Formatted once:
Formatted twice:
---
{reformatted}---"#,
input_path = input_path.display(),
options = &DisplayPyOptions(options),
reformatted = reformatted.as_code(),
);
@@ -467,7 +445,7 @@ fn ensure_unchanged_ast(
unformatted_code: &str,
formatted_code: &str,
options: &PyFormatOptions,
input_path: &Path,
input_path: &Utf8Path,
) -> Vec<Diagnostic> {
let source_type = options.source_type();
@@ -499,11 +477,7 @@ fn ensure_unchanged_ast(
formatted_unsupported_syntax_errors
.retain(|fingerprint, _| !unformatted_unsupported_syntax_errors.contains_key(fingerprint));
let file = SourceFileBuilder::new(
input_path.file_name().unwrap().to_string_lossy(),
formatted_code,
)
.finish();
let file = SourceFileBuilder::new(input_path.file_name().unwrap(), formatted_code).finish();
let diagnostics = formatted_unsupported_syntax_errors
.values()
.map(|error| {
@@ -533,11 +507,10 @@ fn ensure_unchanged_ast(
.header("Unformatted", "Formatted")
.to_string();
panic!(
r#"Reformatting the unformatted code of {} resulted in AST changes.
r#"Reformatting the unformatted code of {input_path} resulted in AST changes.
---
{diff}
"#,
input_path.display(),
);
}


@@ -192,7 +192,7 @@ class Random:
}
x = {
"foobar": (123) + 456,
@@ -97,24 +94,20 @@
@@ -97,24 +94,21 @@
my_dict = {
@@ -221,13 +221,14 @@ class Random:
- .second_call()
- .third_call(some_args="some value")
- )
+ "a key in my dict": MyClass.some_attribute.first_call()
+ "a key in my dict": MyClass.some_attribute
+ .first_call()
+ .second_call()
+ .third_call(some_args="some value")
}
{
@@ -139,17 +132,17 @@
@@ -139,17 +133,17 @@
class Random:
def func():
@@ -363,7 +364,8 @@ my_dict = {
/ 100000.0
}
my_dict = {
"a key in my dict": MyClass.some_attribute.first_call()
"a key in my dict": MyClass.some_attribute
.first_call()
.second_call()
.third_call(some_args="some value")
}


@@ -906,11 +906,10 @@ x = {
-)
+string_with_escaped_nameescape = "........................................................................... \\N{LAO KO LA}"
-msg = lambda x: (
msg = lambda x: (
- f"this is a very very very very long lambda value {x} that doesn't fit on a"
- " single line"
+msg = (
+ lambda x: f"this is a very very very very long lambda value {x} that doesn't fit on a single line"
+ f"this is a very very very very long lambda value {x} that doesn't fit on a single line"
)
dict_with_lambda_values = {
@@ -1403,8 +1402,8 @@ string_with_escaped_nameescape = "..............................................
string_with_escaped_nameescape = "........................................................................... \\N{LAO KO LA}"
msg = (
lambda x: f"this is a very very very very long lambda value {x} that doesn't fit on a single line"
msg = lambda x: (
f"this is a very very very very long lambda value {x} that doesn't fit on a single line"
)
dict_with_lambda_values = {


@@ -375,7 +375,7 @@ a = b if """
# Another use case
data = yaml.load("""\
a: 1
@@ -77,19 +106,23 @@
@@ -77,10 +106,12 @@
b: 2
""",
)
@@ -390,19 +390,7 @@ a = b if """
MULTILINE = """
foo
""".replace("\n", "")
-generated_readme = lambda project_name: """
+generated_readme = (
+ lambda project_name: """
{}
<Add content here!>
""".strip().format(project_name)
+)
parser.usage += """
Custom extra help summary.
@@ -156,16 +189,24 @@
@@ -156,16 +187,24 @@
10 LOAD_CONST 0 (None)
12 RETURN_VALUE
""" % (_C.__init__.__code__.co_firstlineno + 1,)
@@ -433,7 +421,7 @@ a = b if """
[
"""cow
moos""",
@@ -206,7 +247,9 @@
@@ -206,7 +245,9 @@
"c"
)
@@ -444,7 +432,7 @@ a = b if """
assert some_var == expected_result, """
test
@@ -224,10 +267,8 @@
@@ -224,10 +265,8 @@
"""Sxxxxxxx xxxxxxxx, xxxxxxx xx xxxxxxxxx
xxxxxxxxxxxxx xxxxxxx xxxxxxxxx xxx-xxxxxxxxxx xxxxxx xx xxx-xxxxxx"""
),
@@ -457,7 +445,7 @@ a = b if """
},
}
@@ -246,14 +287,12 @@
@@ -246,14 +285,12 @@
a
a"""
),
@@ -597,13 +585,11 @@ data = yaml.load(
MULTILINE = """
foo
""".replace("\n", "")
generated_readme = (
lambda project_name: """
generated_readme = lambda project_name: """
{}
<Add content here!>
""".strip().format(project_name)
)
parser.usage += """
Custom extra help summary.


@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/await.py
snapshot_kind: text
---
## Input
```python
@@ -142,3 +141,20 @@ test_data = await (
.to_list()
)
```
## Preview changes
```diff
--- Stable
+++ Preview
@@ -65,7 +65,8 @@
# https://github.com/astral-sh/ruff/issues/8644
test_data = await (
- Stream.from_async(async_data)
+ Stream
+ .from_async(async_data)
.flat_map_async()
.map()
.filter_async(is_valid_data)
```


@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/call.py
snapshot_kind: text
---
## Input
```python
@@ -557,3 +556,20 @@ result = (
result = (object[complicate_caller])("argument").a["b"].test(argument)
```
## Preview changes
```diff
--- Stable
+++ Preview
@@ -57,7 +57,8 @@
# Call chains/fluent interface (https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#call-chains)
result = (
- session.query(models.Customer.id)
+ session
+ .query(models.Customer.id)
.filter(
models.Customer.account_id == 10000,
models.Customer.email == "user@example.org",
```


@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/split_empty_brackets.py
snapshot_kind: text
---
## Input
```python


@@ -0,0 +1,163 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/fluent.py
---
## Input
```python
# Fixtures for fluent formatting of call chains
# Note that `fluent.options.json` sets line width to 8
x = a.b()
x = a.b().c()
x = a.b().c().d
x = a.b.c.d().e()
x = a.b.c().d.e().f.g()
# Consecutive calls/subscripts are grouped together
# for the purposes of fluent formatting (though, as of 2025.12.15,
# there may be a break inside of one of these
# calls/subscripts, but that is unrelated to the fluent format.)
x = a()[0]().b().c()
x = a.b()[0].c.d()[1]().e
# Parentheses affect both where the root of the call
# chain is and how many calls we require before applying
# fluent formatting (just 1, in the presence of a parenthesized
# root, as of 2025.12.15.)
x = (a).b()
x = (a()).b()
x = (a.b()).d.e()
x = (a.b().d).e()
```
## Outputs
### Output 1
```
indent-style = space
line-width = 8
indent-width = 4
quote-style = Double
line-ending = LineFeed
magic-trailing-comma = Respect
docstring-code = Disabled
docstring-code-line-width = "dynamic"
preview = Disabled
target_version = 3.10
source_type = Python
```
```python
# Fixtures for fluent formatting of call chains
# Note that `fluent.options.json` sets line width to 8
x = a.b()
x = a.b().c()
x = (
a.b()
.c()
.d
)
x = a.b.c.d().e()
x = (
a.b.c()
.d.e()
.f.g()
)
# Consecutive calls/subscripts are grouped together
# for the purposes of fluent formatting (though, as of 2025.12.15,
# there may be a break inside of one of these
# calls/subscripts, but that is unrelated to the fluent format.)
x = (
a()[
0
]()
.b()
.c()
)
x = (
a.b()[
0
]
.c.d()[
1
]()
.e
)
# Parentheses affect both where the root of the call
# chain is and how many calls we require before applying
# fluent formatting (just 1, in the presence of a parenthesized
# root, as of 2025.12.15.)
x = (
a
).b()
x = (
a()
).b()
x = (
a.b()
).d.e()
x = (
a.b().d
).e()
```
#### Preview changes
```diff
--- Stable
+++ Preview
@@ -7,7 +7,8 @@
x = a.b().c()
x = (
- a.b()
+ a
+ .b()
.c()
.d
)
@@ -15,7 +16,8 @@
x = a.b.c.d().e()
x = (
- a.b.c()
+ a.b
+ .c()
.d.e()
.f.g()
)
@@ -34,7 +36,8 @@
)
x = (
- a.b()[
+ a
+ .b()[
0
]
.c.d()[
```


@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/multiline_string_deviations.py
snapshot_kind: text
---
## Input
```python
@@ -106,3 +105,22 @@ generated_readme = (
""".strip().format(project_name)
)
```
## Preview changes
```diff
--- Stable
+++ Preview
@@ -44,10 +44,8 @@
# this by changing `Lambda::needs_parentheses` to return `BestFit` but it causes
# issues when the lambda has comments.
# Let's keep this as a known deviation for now.
-generated_readme = (
- lambda project_name: """
+generated_readme = lambda project_name: """
{}
<Add content here!>
""".strip().format(project_name)
-)
```


@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/parentheses/call_chains.py
snapshot_kind: text
---
## Input
```python
@@ -223,6 +222,72 @@ max_message_id = (
.baz()
)
# Note in preview we split at `pl` which some
# folks may dislike. (Similarly with common
# `np` and `pd` invocations).
#
# This is because we cannot reliably predict,
# just from syntax, whether a short identifier
# is being used as a 'namespace' or as an 'object'.
#
# As of 2025.12.15, we do not indent methods in
# fluent formatting. If we ever decide to do so,
# it may make sense to special case call chain roots
# that are shorter than the indent-width (like Prettier does).
# This would have the benefit of handling these common
# two-letter aliases for libraries.
expr = (
pl.scan_parquet("/data/pypi-parquet/*.parquet")
.filter(
[
pl.col("path").str.contains(
r"\.(asm|c|cc|cpp|cxx|h|hpp|rs|[Ff][0-9]{0,2}(?:or)?|go)$"
),
~pl.col("path").str.contains(r"(^|/)test(|s|ing)"),
~pl.col("path").str.contains("/site-packages/", literal=True),
]
)
.with_columns(
month=pl.col("uploaded_on").dt.truncate("1mo"),
ext=pl.col("path")
.str.extract(pattern=r"\.([a-z0-9]+)$", group_index=1)
.str.replace_all(pattern=r"cxx|cpp|cc|c|hpp|h", value="C/C++")
.str.replace_all(pattern="^f.*$", value="Fortran")
.str.replace("rs", "Rust", literal=True)
.str.replace("go", "Go", literal=True)
.str.replace("asm", "Assembly", literal=True)
.replace({"": None}),
)
.group_by(["month", "ext"])
.agg(project_count=pl.col("project_name").n_unique())
.drop_nulls(["ext"])
.sort(["month", "project_count"], descending=True)
)
def indentation_matching_for_loop_in_preview():
if make_this:
if more_nested_because_line_length:
identical_hidden_layer_sizes = all(
current_hidden_layer_sizes == first_hidden_layer_sizes
for current_hidden_layer_sizes in self.component_config[
HIDDEN_LAYERS_SIZES
].values().attr
)
def indentation_matching_walrus_in_preview():
if make_this:
if more_nested_because_line_length:
with self.read_ctx(book_type) as cursor:
if (entry_count := len(names := cursor.execute(
'SELECT name FROM address_book WHERE address=?',
(address,),
).fetchall().some_attr)) == 0 or len(set(names)) > 1:
return
# behavior with parenthesized roots
x = (aaaaaaaaaaaaaaaaaaaaaa).bbbbbbbbbbbbbbbbbbb.cccccccccccccccccccccccc().dddddddddddddddddddddddd().eeeeeeeeeeee
```
## Output
@@ -466,4 +531,237 @@ max_message_id = (
.sum()
.baz()
)
# Note in preview we split at `pl` which some
# folks may dislike. (Similarly with common
# `np` and `pd` invocations).
#
# This is because we cannot reliably predict,
# just from syntax, whether a short identifier
# is being used as a 'namespace' or as an 'object'.
#
# As of 2025.12.15, we do not indent methods in
# fluent formatting. If we ever decide to do so,
# it may make sense to special case call chain roots
# that are shorter than the indent-width (like Prettier does).
# This would have the benefit of handling these common
# two-letter aliases for libraries.
expr = (
pl.scan_parquet("/data/pypi-parquet/*.parquet")
.filter(
[
pl.col("path").str.contains(
r"\.(asm|c|cc|cpp|cxx|h|hpp|rs|[Ff][0-9]{0,2}(?:or)?|go)$"
),
~pl.col("path").str.contains(r"(^|/)test(|s|ing)"),
~pl.col("path").str.contains("/site-packages/", literal=True),
]
)
.with_columns(
month=pl.col("uploaded_on").dt.truncate("1mo"),
ext=pl.col("path")
.str.extract(pattern=r"\.([a-z0-9]+)$", group_index=1)
.str.replace_all(pattern=r"cxx|cpp|cc|c|hpp|h", value="C/C++")
.str.replace_all(pattern="^f.*$", value="Fortran")
.str.replace("rs", "Rust", literal=True)
.str.replace("go", "Go", literal=True)
.str.replace("asm", "Assembly", literal=True)
.replace({"": None}),
)
.group_by(["month", "ext"])
.agg(project_count=pl.col("project_name").n_unique())
.drop_nulls(["ext"])
.sort(["month", "project_count"], descending=True)
)
def indentation_matching_for_loop_in_preview():
if make_this:
if more_nested_because_line_length:
identical_hidden_layer_sizes = all(
current_hidden_layer_sizes == first_hidden_layer_sizes
for current_hidden_layer_sizes in self.component_config[
HIDDEN_LAYERS_SIZES
]
.values()
.attr
)
def indentation_matching_walrus_in_preview():
if make_this:
if more_nested_because_line_length:
with self.read_ctx(book_type) as cursor:
if (
entry_count := len(
names := cursor.execute(
"SELECT name FROM address_book WHERE address=?",
(address,),
)
.fetchall()
.some_attr
)
) == 0 or len(set(names)) > 1:
return
# behavior with parenthesized roots
x = (
(aaaaaaaaaaaaaaaaaaaaaa)
.bbbbbbbbbbbbbbbbbbb.cccccccccccccccccccccccc()
.dddddddddddddddddddddddd()
.eeeeeeeeeeee
)
```
## Preview changes
```diff
--- Stable
+++ Preview
@@ -21,7 +21,8 @@
)
raise OsError("") from (
- Blog.objects.filter(
+ Blog.objects
+ .filter(
entry__headline__contains="Lennon",
)
.filter(
@@ -33,7 +34,8 @@
)
raise OsError("sökdjffffsldkfjlhsakfjhalsökafhsöfdahsödfjösaaksjdllllllllllllll") from (
- Blog.objects.filter(
+ Blog.objects
+ .filter(
entry__headline__contains="Lennon",
)
.filter(
@@ -46,7 +48,8 @@
# Break only after calls and indexing
b1 = (
- session.query(models.Customer.id)
+ session
+ .query(models.Customer.id)
.filter(
models.Customer.account_id == account_id, models.Customer.email == email_address
)
@@ -54,7 +57,8 @@
)
b2 = (
- Blog.objects.filter(
+ Blog.objects
+ .filter(
entry__headline__contains="Lennon",
)
.limit_results[:10]
@@ -70,7 +74,8 @@
).filter(
entry__pub_date__year=2008,
)
- + Blog.objects.filter(
+ + Blog.objects
+ .filter(
entry__headline__contains="McCartney",
)
.limit_results[:10]
@@ -89,7 +94,8 @@
d11 = x.e().e().e() #
d12 = x.e().e().e() #
d13 = (
- x.e() #
+ x
+ .e() #
.e()
.e()
)
@@ -101,7 +107,8 @@
# Doesn't fit, fluent style
d3 = (
- x.e() #
+ x
+ .e() #
.esadjkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk()
.esadjkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk()
)
@@ -218,7 +225,8 @@
(
(
- df1_aaaaaaaaaaaa.merge()
+ df1_aaaaaaaaaaaa
+ .merge()
.groupby(
1,
)
@@ -228,7 +236,8 @@
(
(
- df1_aaaaaaaaaaaa.merge()
+ df1_aaaaaaaaaaaa
+ .merge()
.groupby(
1,
)
@@ -255,19 +264,19 @@
expr = (
- pl.scan_parquet("/data/pypi-parquet/*.parquet")
- .filter(
- [
- pl.col("path").str.contains(
- r"\.(asm|c|cc|cpp|cxx|h|hpp|rs|[Ff][0-9]{0,2}(?:or)?|go)$"
- ),
- ~pl.col("path").str.contains(r"(^|/)test(|s|ing)"),
- ~pl.col("path").str.contains("/site-packages/", literal=True),
- ]
- )
+ pl
+ .scan_parquet("/data/pypi-parquet/*.parquet")
+ .filter([
+ pl.col("path").str.contains(
+ r"\.(asm|c|cc|cpp|cxx|h|hpp|rs|[Ff][0-9]{0,2}(?:or)?|go)$"
+ ),
+ ~pl.col("path").str.contains(r"(^|/)test(|s|ing)"),
+ ~pl.col("path").str.contains("/site-packages/", literal=True),
+ ])
.with_columns(
month=pl.col("uploaded_on").dt.truncate("1mo"),
- ext=pl.col("path")
+ ext=pl
+ .col("path")
.str.extract(pattern=r"\.([a-z0-9]+)$", group_index=1)
.str.replace_all(pattern=r"cxx|cpp|cc|c|hpp|h", value="C/C++")
.str.replace_all(pattern="^f.*$", value="Fortran")
@@ -288,9 +297,8 @@
if more_nested_because_line_length:
identical_hidden_layer_sizes = all(
current_hidden_layer_sizes == first_hidden_layer_sizes
- for current_hidden_layer_sizes in self.component_config[
- HIDDEN_LAYERS_SIZES
- ]
+ for current_hidden_layer_sizes in self
+ .component_config[HIDDEN_LAYERS_SIZES]
.values()
.attr
)
@@ -302,7 +310,8 @@
with self.read_ctx(book_type) as cursor:
if (
entry_count := len(
- names := cursor.execute(
+ names := cursor
+ .execute(
"SELECT name FROM address_book WHERE address=?",
(address,),
)
```

Some files were not shown because too many files have changed in this diff.