Compare commits

...

47 Commits

Author SHA1 Message Date
David Peter
e956acf371 [ty] Public types: remove special-case for stub files 2025-10-01 16:28:37 +02:00
David Peter
439ffc1f15 Review comments 2025-10-01 16:26:57 +02:00
David Peter
b6e9af50f1 [ty] No union with Unknown for module-global symbols 2025-10-01 14:42:53 +02:00
David Peter
963bc8c228 [ty] Reformulation of public symbol inference test suite (#20667)
## Summary

Reformulation of the public symbol type inference test suite to use
class scopes instead of module scopes. This is in preparation for an
upcoming change to module-global scopes (#20664).

## Test Plan

Updated tests
2025-10-01 14:26:17 +02:00
Alex Waygood
20eb5b5b35 [ty] Fix subtyping of invariant generics specialized with Any (#20650) 2025-10-01 10:05:54 +00:00
github-actions[bot]
d9473a2fcf [ty] Sync vendored typeshed stubs (#20658)
---------

Co-authored-by: typeshedbot <>
Co-authored-by: David Peter <mail@david-peter.de>
2025-10-01 10:11:48 +02:00
Douglas Creager
a422716267 [ty] Fix flaky constraint set rendering (#20653)
This doesn't seem to be flaky in the sense of tests failing
non-deterministically, but they are flaky in the sense of unrelated
changes causing test failures from the clauses of a constraint set
being rendered in different orders. This flakiness is because we're
using Salsa IDs to determine the order in which typevars appear in a
constraint set BDD, and those IDs are assigned non-deterministically.

The fix is ham-fisted but effective: sort the constraints in each
clause, and the clauses in each set, as part of the rendering process.
Constraint sets are only rendered in our test cases, so we don't need to
over-optimize this.
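The sorting approach can be sketched in miniature (a hypothetical Python stand-in; the real code renders Rust data structures):

```python
# Hypothetical sketch of order-independent rendering: sort the constraints in
# each clause, then sort the rendered clauses themselves, so the output does
# not depend on the non-deterministic order in which clauses were created.
def render_constraint_set(clauses):
    """clauses: iterable of iterables of constraint strings (DNF form)."""
    rendered = sorted(" ∧ ".join(sorted(clause)) for clause in clauses)
    return " ∨ ".join(f"({clause})" for clause in rendered)
```

Rendering the same clauses in any input order yields an identical string, which is all the snapshot tests need.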
2025-10-01 09:14:35 +02:00
David Peter
a3e5c72537 [ty] Use release mode for ecosystem report (#20663)
## Summary

Prevent the ecosystem report workflow from timing out by running ty in
release mode.

## Test Plan

Manual workflow run:
https://github.com/astral-sh/ruff/actions/runs/18154035186
2025-10-01 09:06:24 +02:00
Igor Drokin
11dae2cf1b [pyupgrade] Prevent infinite loop with I002 and UP029 (#20634)
## Summary
Closes #20601

Do not treat imports as unused for the rule [unnecessary-builtin-import
(UP029)](https://docs.astral.sh/ruff/rules/unnecessary-builtin-import/)
if they are required by
`isort`([missing-required-import](https://docs.astral.sh/ruff/rules/missing-required-import/))
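A miniature sketch of the conflict (hypothetical rule implementations, not Ruff's actual fix machinery): one fix removes a line that the other is configured to re-add, so repeatedly applying fixes never converges.

```python
# Hypothetical required import configured via isort's `required-imports`.
REQUIRED = ["from builtins import str"]

def apply_up029(lines):
    # UP029 removes "unnecessary" builtin imports...
    return [line for line in lines if not line.startswith("from builtins import")]

def apply_i002(lines):
    # ...and I002 re-adds any configured required import that is missing.
    return lines + [imp for imp in REQUIRED if imp not in lines]

src = ["from builtins import str", "x = 1"]
after = apply_i002(apply_up029(src))
# `after` contains the same lines as `src`: each pass undoes the other,
# which is why UP029 now exempts required imports.
```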

## Test Plan
- Added test case `i002_up029_conflict` to ensure there is no conflict

Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
2025-09-30 17:11:34 -04:00
Wei Lee
7fee877c50 [airflow]: rename AutoImport as Rename (internal) (#20563)
<!--
Thank you for contributing to Ruff/ty! To help us out with reviewing,
please consider the following:

- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title? (Please prefix
with `[ty]` for ty pull
  requests.)
- Does this pull request include references to any relevant issues?
-->

## Summary

<!-- What's the purpose of the change? What does it do, and why? -->

Since we are trying to import both `AutoImport` and `SourceModuleMoved`,
the previous naming was not as descriptive. Renaming it to `Rename`
better reflects the intention.

## Test Plan

<!-- How was it tested? -->

no functionality change
2025-09-30 15:56:26 -04:00
Dan Parizher
7c87b31533 [ruff] Do not flag %r + repr() combinations (RUF065) (#20600)
## Summary

Fixes the first part of #20583
2025-09-30 15:49:50 -04:00
Brent Westbrook
2b1d3c60fa Display diffs for ruff format --check and add support for different output formats (#20443)
## Summary

This PR uses the new `Diagnostic` type for rendering formatter
diagnostics. This allows the formatter to inherit all of the output
formats already implemented in the linter and ty. For example, here's
the new `full` output format, with the formatting diff displayed using
the same infrastructure as the linter:

<img width="592" height="364" alt="image"
src="https://github.com/user-attachments/assets/6d09817d-3f27-4960-aa8b-41ba47fb4dc0"
/>


<details><summary>Resolved TODOs</summary>
<p>

~~There are several limitations/todos here still, especially around the
`OutputFormat` type~~:
- [x] A few literal `todo!`s for the remaining `OutputFormat`s without
matching `DiagnosticFormat`s
- [x] The default output format is `full` instead of something more
concise like the current output
- [x] Some of the output formats (namely JSON) have information that
doesn't make much sense for these diagnostics

The first of these is definitely resolved, and I think the other two are
as well, based on discussion on the design document. In brief, we're
okay inheriting the default `OutputFormat` and can separate the global
option into `lint.output-format` and `format.output-format` in the
future, if needed; and we're okay including redundant information in the
non-human-readable output formats.

My last major concern is with the performance of the new code, as
discussed in the `Benchmarks` section below.

A smaller question is whether we should use `Diagnostic`s for formatting
errors too. I think the answer to this is yes, in line with changes
we're making in the linter too. I still need to implement that here.

</p>
</details> 

<details><summary>Benchmarks</summary>
<p>


The values in the table are from a large benchmark on the CPython 3.10
code
base, which involves checking 2011 files, 1872 of which need to be
reformatted.
`stable` corresponds to the same code used on `main`, while
`preview-full` and
`preview-concise` use the new `Diagnostic` code gated behind `--preview`
for the
`full` and `concise` output formats, respectively. `stable-diff` uses
the `--diff` flag to compare the two diff rendering approaches. See the
full hyperfine
command below for more details. For a sense of scale, the `stable`
output format
produces 1873 lines on stdout, compared to 855,278 for `preview-full`
and
857,798 for `stable-diff`.

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:------------------|--------------:|---------:|---------:|-------------:|
| `stable` | 201.2 ± 6.8 | 192.9 | 220.6 | 1.00 |
| `preview-full` | 9113.2 ± 31.2 | 9076.1 | 9152.0 | 45.29 ± 1.54 |
| `preview-concise` | 214.2 ± 1.4 | 212.0 | 217.6 | 1.06 ± 0.04 |
| `stable-diff` | 3308.6 ± 20.2 | 3278.6 | 3341.8 | 16.44 ± 0.56 |

In summary, the `preview-concise` diagnostics are ~6% slower than the
stable
output format, increasing the average runtime from 201.2 ms to 214.2 ms.
The
`full` preview diagnostics are much more expensive, taking over 9113.2
ms to
complete, which is ~3x more expensive even than the stable diffs
produced by the
`--diff` flag.

My main takeaways here are:
1. Rendering `Edit`s is much more expensive than rendering the diffs
from `--diff`
2. Constructing `Edit`s actually isn't too bad

### Constructing `Edit`s

I also took a closer look at `Edit` construction by modifying the code
and
repeating the `preview-concise` benchmark and found that the main issue
is
constructing a `SourceFile` for use in the `Edit` rendering. Commenting
out the
`Edit` construction itself has basically no effect:

| Command   |   Mean [ms] | Min [ms] | Max [ms] |    Relative |
|:----------|------------:|---------:|---------:|------------:|
| `stable`  | 197.5 ± 1.6 |    195.0 |    200.3 |        1.00 |
| `no-edit` | 208.9 ± 2.2 |    204.8 |    212.2 | 1.06 ± 0.01 |

However, also omitting the source text from the `SourceFile`
construction
resolves the slowdown compared to `stable`. So it seems that copying the
full
source text into a `SourceFile` is the main cause of the slowdown for
non-`full`
diagnostics.

| Command          |   Mean [ms] | Min [ms] | Max [ms] |    Relative |
|:-----------------|------------:|---------:|---------:|------------:|
| `stable`         | 202.4 ± 2.9 |    197.6 |    207.9 |        1.00 |
| `no-source-text` | 202.7 ± 3.3 |    196.3 |    209.1 | 1.00 ± 0.02 |

### Rendering diffs

The main difference between `stable-diff` and `preview-full` seems to be
the diffing strategy we use from `similar`. Both versions use the same
algorithm, but in the existing
[`CodeDiff`](https://github.com/astral-sh/ruff/blob/main/crates/ruff_linter/src/source_kind.rs#L259)
rendering for the `--diff` flag, we only do line-level diffing, whereas
for `Diagnostic`s we use `TextDiff::iter_inline_changes` to highlight
word-level changes too. Skipping the word diff for `Diagnostic`s closes
most of the gap:

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `stable-diff` | 3.323 ± 0.015 | 3.297 | 3.341 | 1.00 |
| `preview-full` | 3.654 ± 0.019 | 3.618 | 3.682 | 1.10 ± 0.01 |

(In some repeated runs, I've seen a difference as small as ~5%, down
from 10% in the table.)

This doesn't actually change any of our snapshots, but it would
obviously change the rendered result in a terminal since we wouldn't
highlight the specific words that changed within a line.
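The cost difference can be illustrated with Python's `difflib` as a stand-in for the `similar` crate: line-level diffing only compares whole lines, while inline refinement does extra per-line work to locate the changed spans.

```python
import difflib

old = "x = compute( a,b )"
new = "x = compute(a, b)"

# Line-level diffing stops here: the line changed, with no intra-line detail.
line_changed = old != new

# Word/character-level refinement walks the changed line again to find the
# exact differing spans -- the extra work that was skipped to close the gap.
matcher = difflib.SequenceMatcher(None, old, new)
changed_spans = [
    (tag, old[i1:i2], new[j1:j2])
    for tag, i1, i2, j1, j2 in matcher.get_opcodes()
    if tag != "equal"
]
```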

Another much smaller change that we can try is removing the deadline
from the `iter_inline_changes` call. It looks like there's a fair amount
of overhead from the default 500 ms deadline for computing these, and
using `iter_inline_changes(op, None)` (`None` for the optional deadline
argument) improves the runtime quite a bit:

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `stable-diff` | 3.322 ± 0.013 | 3.298 | 3.341 | 1.00 |
| `preview-full` | 5.296 ± 0.030 | 5.251 | 5.366 | 1.59 ± 0.01 |

<hr>

<details><summary>hyperfine command</summary>

```shell
cargo build --release --bin ruff && hyperfine --ignore-failure --warmup 10 --export-markdown /tmp/table.md \
  -n stable -n preview-full -n preview-concise -n stable-diff \
  "./target/release/ruff format --check ./crates/ruff_linter/resources/test/cpython/ --no-cache" \
  "./target/release/ruff format --check ./crates/ruff_linter/resources/test/cpython/ --no-cache --preview --output-format=full" \
  "./target/release/ruff format --check ./crates/ruff_linter/resources/test/cpython/ --no-cache --preview --output-format=concise" \
  "./target/release/ruff format --check ./crates/ruff_linter/resources/test/cpython/ --no-cache --diff"
```

</details>

</p>
</details> 

## Test Plan

Some new CLI tests and manual testing
2025-09-30 12:00:51 -04:00
David Peter
b483d3b0b9 [ty] Literal promotion refactor (#20646)
## Summary

Not sure if this was the original intention, but it looks to me like the
previous `Type::literal_promotion_type` was more of an implementation
detail for the actual operation of promoting all literals in a
possibly-nested position of a type.

This is not a pure refactor, as I'm technically changing the behavior
for that protocols diagnostic message suggestion.

## Test Plan

New Markdown test
2025-09-30 14:22:36 +02:00
David Peter
130a794c2b [ty] Add tests for nested generic functions (#20631)
## Summary

Add two simple tests that we recently discussed with @dcreager. They
demonstrate that the `TypeMapping::MarkTypeVarsInferable` operation
really does need to keep track of the binding context.

## Test Plan

Made sure that those tests fail if we create
`TypeMapping::MarkTypeVarsInferable(None)`s everywhere.
2025-09-30 08:44:18 +02:00
Dan Parizher
1c08f71a00 [cli] Add conflict between --add-noqa and --diff options (#20642) 2025-09-30 08:34:18 +02:00
Alex Waygood
8664842d00 [ty] Ensure first-party search paths always appear in a sensible order (#20629)
This PR ensures that we always put `./src` before `.` in our list of
first-party search paths. This better emulates the fact that at runtime,
the module name of a file `src/foo.py` would almost certainly be `foo`
rather than `src.foo`.

I wondered if fixing this might fix
https://github.com/astral-sh/ruff/pull/20603#issuecomment-3345317444. It
seems like that's not the case, but it also seems like it leads to
better diagnostics because we report much more intuitive module names to
the user in our error messages -- so, it's probably a good change
anyway.
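The intended ordering can be sketched with a hypothetical helper (not ty's actual API):

```python
# Hypothetical sketch: ensure `<root>/src` sorts before `<root>` itself, so a
# file `src/foo.py` resolves to module `foo` rather than `src.foo`.
def order_first_party_paths(paths):
    return sorted(paths, key=lambda p: (0 if p.rstrip("/").endswith("/src") else 1, p))
```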
2025-09-29 21:19:13 +01:00
David Peter
0092794302 [ty] Use typing.Self for the first parameter of instance methods (#20517)
## Summary

Modify the (external) signature of instance methods such that the first
parameter uses `Self` unless it is explicitly annotated. This allows us
to correctly type-check more code, and allows us to infer correct return
types for many functions that return `Self`. For example:

```py
from pathlib import Path
from datetime import datetime, timedelta

reveal_type(Path(".config") / ".ty")  # now Path, previously Unknown

def _(dt: datetime, delta: timedelta):
    reveal_type(dt - delta)  # now datetime, previously Unknown
```

part of https://github.com/astral-sh/ty/issues/159

## Performance

I ran benchmarks locally on `attrs`, `freqtrade` and `colour`, the
projects with the largest regressions on CodSpeed. I see much smaller
effects locally, but can definitely reproduce the regression on `attrs`.
From looking at the profiling results (on Codspeed), it seems that we
simply do more type inference work, which seems plausible, given that we
now understand much more return types (of many stdlib functions). In
particular, whenever a function uses an implicit `self` and returns
`Self` (without mentioning `Self` anywhere else in its signature), we
will now infer the correct type, whereas we would previously return
`Unknown`. This also means that we need to invoke the generics solver in
more cases. Comparing half a million lines of log output on attrs, I can
see that we do 5% more "work" (number of lines in the log), and have a
lot more `apply_specialization` events (7108 vs 4304). On freqtrade, I
see similar numbers for `apply_specialization` (11360 vs 5138 calls).
Given these results, I'm not sure if it's generally worth doing more
performance work, especially since none of the code modifications
themselves seem to be likely candidates for regressions.

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `./ty_main check /home/shark/ecosystem/attrs` | 92.6 ± 3.6 | 85.9 | 102.6 | 1.00 |
| `./ty_self check /home/shark/ecosystem/attrs` | 101.7 ± 3.5 | 96.9 | 113.8 | 1.10 ± 0.06 |

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `./ty_main check /home/shark/ecosystem/freqtrade` | 599.0 ± 20.2 | 568.2 | 627.5 | 1.00 |
| `./ty_self check /home/shark/ecosystem/freqtrade` | 607.9 ± 11.5 | 594.9 | 626.4 | 1.01 ± 0.04 |

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `./ty_main check /home/shark/ecosystem/colour` | 423.9 ± 17.9 | 394.6 | 447.4 | 1.00 |
| `./ty_self check /home/shark/ecosystem/colour` | 426.9 ± 24.9 | 373.8 | 456.6 | 1.01 ± 0.07 |

## Test Plan

New Markdown tests

## Ecosystem report

* apprise: ~300 new diagnostics related to problematic stubs in apprise
😩
* attrs: a new true positive, since [this
function](4e2c89c823/tests/test_make.py (L2135))
is missing a `@staticmethod`?
* Some legitimate true positives
* sympy: lots of new `invalid-operator` false positives in [matrix
multiplication](cf9f4b6805/sympy/matrices/matrixbase.py (L3267-L3269))
due to our limited understanding of [generic `Callable[[Callable[[T1,
T2], T3]], Callable[[T1, T2], T3]]` "identity"
types](cf9f4b6805/sympy/core/decorators.py (L83-L84))
of decorators. This is not related to type-of-self.

## Typing conformance results

The changes are all correct, except for
```diff
+generics_self_usage.py:50:5: error[invalid-assignment] Object of type `def foo(self) -> int` is not assignable to `(typing.Self, /) -> int`
```
which is related to an assignability problem involving type variables on
both sides:
```py
class CallableAttribute:
    def foo(self) -> int:
        return 0

    bar: Callable[[Self], int] = foo  # <- we currently error on this assignment
```

---------

Co-authored-by: Shaygan Hooshyari <sh.hooshyari@gmail.com>
2025-09-29 21:08:08 +02:00
Alex Waygood
1d3e4a9153 [ty] Remove unnecessary parsed_module() calls (#20630) 2025-09-29 16:05:12 +01:00
Brent Westbrook
00c8851ef8 Remove TextEmitter (#20595)
## Summary

Addresses
https://github.com/astral-sh/ruff/pull/20443#discussion_r2381237640 by
factoring out the `match` on the ruff output format in a way that should
be reusable by the formatter.

I didn't think this was going to work at first, but the fact that the
config holds options that apply only to certain output formats works in
our favor here. We can set up a single config for all of the output
formats and then use `try_from` to convert the `OutputFormat` to a
`DiagnosticFormat` later.

## Test Plan

Existing tests, plus a few new ones to make sure relocating the
`SHOW_FIX_SUMMARY` rendering worked, which was untested before. I deleted
a bunch of test code along with the `text` module, but I believe all of
it is now well-covered by the `full` and `concise` tests in `ruff_db`.

I also merged this branch into
https://github.com/astral-sh/ruff/pull/20443 locally and made sure that
the API actually helps. `render_diagnostics` dropped in perfectly and
passed the tests there too.
2025-09-29 08:46:25 -04:00
Alex Waygood
1cf19732b9 [ty] Use fully qualified names to distinguish ambiguous protocols in diagnostics (#20627) 2025-09-29 12:02:07 +00:00
David Peter
803d61e21f [ty] Ecosystem analyzer: relax timeout thresholds (#20626)
## Summary

Pull in a small upstream change
(6ce3a60957),
because some type check times were close to the previous limits, which
prevents us from seeing diagnostics diffs (in case they run into a
timeout).
2025-09-29 11:36:14 +00:00
Douglas Creager
cf2b083668 [ty] Apply type mappings to functions eagerly (#20596)
`TypeMapping` is no longer cow-shaped.

Before, `TypeMapping` defined a `to_owned` method, which would make an
owned copy of the type mapping. This let us apply type mappings to
function literals lazily. The primary part of a function that you have
to apply the type mapping to is its signature. The hypothesis was that
doing this lazily would prevent us from constructing the signature of a
function just to apply a type mapping; if you never ended up needing the
updated function signature, that would be extraneous work.

But looking at the CI for this PR, it looks like that hypothesis is
wrong! And this definitely cleans up the code quite a bit. It also means
that over time we can consider replacing all of these `TypeMapping` enum
variants with separate `TypeTransformer` impls.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-09-29 13:24:40 +02:00
Alex Waygood
3f640dacd4 [ty] Improve disambiguation of class names in diagnostics (#20603) 2025-09-29 11:43:11 +01:00
Micha Reiser
81f43a1fc8 Add the *The Basics* title back to CONTRIBUTING.md (#20624) 2025-09-29 07:46:01 +00:00
Dan Parizher
053c750c93 [playground] Fix quick fixes for empty ranges in playground (#20599)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-09-29 07:38:32 +00:00
renovate[bot]
e5faf6c268 Update dependency ruff to v0.13.2 (#20622)
Coming soon: The Renovate bot (GitHub App) will be renamed to Mend. PRs
from Renovate will soon appear from 'Mend'. Learn more
[here](https://redirect.github.com/renovatebot/renovate/discussions/37842).

This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [ruff](https://docs.astral.sh/ruff) ([source](https://redirect.github.com/astral-sh/ruff), [changelog](https://redirect.github.com/astral-sh/ruff/blob/main/CHANGELOG.md)) | `==0.13.1` -> `==0.13.2` | [![age](https://developer.mend.io/api/mc/badges/age/pypi/ruff/0.13.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/ruff/0.13.1/0.13.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>astral-sh/ruff (ruff)</summary>

###
[`v0.13.2`](https://redirect.github.com/astral-sh/ruff/blob/HEAD/CHANGELOG.md#0132)

[Compare
Source](https://redirect.github.com/astral-sh/ruff/compare/0.13.1...0.13.2)

Released on 2025-09-25.

##### Preview features

- \[`flake8-async`] Implement `blocking-path-method` (`ASYNC240`)
([#&#8203;20264](https://redirect.github.com/astral-sh/ruff/pull/20264))
- \[`flake8-bugbear`] Implement `map-without-explicit-strict` (`B912`)
([#&#8203;20429](https://redirect.github.com/astral-sh/ruff/pull/20429))
- \[`flake8-builtins`] Detect class-scope builtin shadowing in
decorators, default args, and attribute initializers (`A003`)
([#&#8203;20178](https://redirect.github.com/astral-sh/ruff/pull/20178))
- \[`ruff`] Implement `logging-eager-conversion` (`RUF065`)
([#&#8203;19942](https://redirect.github.com/astral-sh/ruff/pull/19942))
- Include `.pyw` files by default when linting and formatting
([#&#8203;20458](https://redirect.github.com/astral-sh/ruff/pull/20458))

##### Bug fixes

- Deduplicate input paths
([#&#8203;20105](https://redirect.github.com/astral-sh/ruff/pull/20105))
- \[`flake8-comprehensions`] Preserve trailing commas for single-element
lists (`C409`)
([#&#8203;19571](https://redirect.github.com/astral-sh/ruff/pull/19571))
- \[`flake8-pyi`] Avoid syntax error from conflict with `PIE790`
(`PYI021`)
([#&#8203;20010](https://redirect.github.com/astral-sh/ruff/pull/20010))
- \[`flake8-simplify`] Correct fix for positive `maxsplit` without
separator (`SIM905`)
([#&#8203;20056](https://redirect.github.com/astral-sh/ruff/pull/20056))
- \[`pyupgrade`] Fix `UP008` not to apply when `__class__` is a local
variable
([#&#8203;20497](https://redirect.github.com/astral-sh/ruff/pull/20497))
- \[`ruff`] Fix `B004` to skip invalid `hasattr`/`getattr` calls
([#&#8203;20486](https://redirect.github.com/astral-sh/ruff/pull/20486))
- \[`ruff`] Replace `-nan` with `nan` when using the value to construct
a `Decimal` (`FURB164` )
([#&#8203;20391](https://redirect.github.com/astral-sh/ruff/pull/20391))

##### Documentation

- Add 'Finding ways to help' to CONTRIBUTING.md
([#&#8203;20567](https://redirect.github.com/astral-sh/ruff/pull/20567))
- Update import path to `ruff-wasm-web`
([#&#8203;20539](https://redirect.github.com/astral-sh/ruff/pull/20539))
- \[`flake8-bandit`] Clarify the supported hashing functions (`S324`)
([#&#8203;20534](https://redirect.github.com/astral-sh/ruff/pull/20534))

##### Other changes

- \[`playground`] Allow hover quick fixes to appear for overlapping
diagnostics
([#&#8203;20527](https://redirect.github.com/astral-sh/ruff/pull/20527))
- \[`playground`] Fix non‑BMP code point handling in quick fixes and
markers
([#&#8203;20526](https://redirect.github.com/astral-sh/ruff/pull/20526))

##### Contributors

- [@&#8203;BurntSushi](https://redirect.github.com/BurntSushi)
- [@&#8203;mtshiba](https://redirect.github.com/mtshiba)
- [@&#8203;second-ed](https://redirect.github.com/second-ed)
- [@&#8203;danparizher](https://redirect.github.com/danparizher)
- [@&#8203;ShikChen](https://redirect.github.com/ShikChen)
- [@&#8203;PieterCK](https://redirect.github.com/PieterCK)
- [@&#8203;GDYendell](https://redirect.github.com/GDYendell)
- [@&#8203;RazerM](https://redirect.github.com/RazerM)
- [@&#8203;TaKO8Ki](https://redirect.github.com/TaKO8Ki)
- [@&#8203;amyreese](https://redirect.github.com/amyreese)
- [@&#8203;ntbre](https://redirect.github.com/ntBre)
- [@&#8203;MichaReiser](https://redirect.github.com/MichaReiser)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-29 08:59:50 +02:00
Takayuki Maeda
666f53331f [ruff] Fix minor typos in doc comments (#20623) 2025-09-29 08:56:23 +02:00
renovate[bot]
65e805de62 Update dependency PyYAML to v6.0.3 (#20621)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-29 08:55:39 +02:00
renovate[bot]
256b520a52 Update cargo-bins/cargo-binstall action to v1.15.6 (#20620)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-29 08:55:13 +02:00
Rahul Sahoo
4e33501115 Fixed documentation for try_consider_else (#20587)

## Summary

This PR addresses #20570. In the example of correct usage, the except
block returned `None` after logging the exception, which made linters
flag the code. Adding a bare `raise` solves the issue.

## Test Plan

Tested it by building the doc locally.
2025-09-27 13:50:51 +00:00
Alex Waygood
6b3c493cff [ty] Use Top materializations for TypeIs special form (#20591) 2025-09-26 17:24:43 +00:00
Alex Waygood
e4de179cdd [ty] Simplify Any | (Any & T) to Any (#20593) 2025-09-26 17:00:10 +01:00
Dylan
57e1ff8294 [pyflakes] Handle some common submodule import situations for unused-import (F401) (#20200)
# Summary

The PR under review attempts to make progress towards the age-old
problem of submodule imports, specifically with regards to their
treatment by the rule [`unused-import`
(`F401`)](https://docs.astral.sh/ruff/rules/unused-import/).

Some related issues:
- https://github.com/astral-sh/ruff/issues/60
- https://github.com/astral-sh/ruff/issues/4656

Prior art:
- https://github.com/astral-sh/ruff/pull/13965
- https://github.com/astral-sh/ruff/pull/5010
- https://github.com/astral-sh/ruff/pull/5011
- https://github.com/astral-sh/ruff/pull/666

See the PR summary for a detailed description.
2025-09-26 08:22:26 -05:00
David Peter
3932f7c849 [ty] Fix subtyping for dynamic specializations (#20592)
## Summary

Fixes a bug observed by @AlexWaygood where `C[Any] <: C[object]` should
hold for a class that is covariant in its type parameter (and similar
subtyping relations involving dynamic types for other variance
configurations).

## Test Plan

New and updated Markdown tests
2025-09-26 15:05:03 +02:00
Alex Waygood
2af8c53110 [ty] Add more tests for subtyping/assignability between two protocol types (#20573) 2025-09-26 12:07:57 +01:00
Dan Parizher
0bae7e613d Use Annotation::tags instead of hardcoded rule matching in ruff server (#20565)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-09-26 09:06:26 +02:00
Douglas Creager
02ebb2ee61 [ty] Change to BDD representation for constraint sets (#20533)
While working on #20093, I kept running into test failures due to
constraint sets not simplifying as much as they could, and therefore not
being easily testable against "always true" and "always false".

This PR updates our constraint set representation to use BDDs. Because
BDDs are reduced and ordered, they are canonical — equivalent boolean
formulas are represented by the same interned BDD node.

That said, there is a wrinkle, in that the "variables" that we use in
these BDDs — the individual constraints like `Lower ≤ T ≤ Upper` — are
not always independent of each other.
As an example, given types `A ≤ B ≤ C ≤ D` and a typevar `T`, the
constraints `A ≤ T ≤ C` and `B ≤ T ≤ D` "overlap" — their intersection
is non-empty. So we should be able to simplify

```
(A ≤ T ≤ C) ∧ (B ≤ T ≤ D) == (B ≤ T ≤ C)
```

That's not a simplification that the BDD structure can perform itself,
since those three constraints are modeled as separate BDD variables, and
are therefore "opaque" to the BDD algorithms.

That means we need to perform this kind of simplification ourselves. We
look at pairs of constraints that appear in a BDD and see if they can be
simplified relative to each other, and if so, replace the pair with the
simplification. A large part of the toil of getting this PR to work was
identifying all of those patterns and getting that substitution logic
correct.
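The overlap rule above can be illustrated with a numeric stand-in for the type lattice (an analogy only: the real bounds are types, not numbers):

```python
# Numeric stand-in for `(A ≤ T ≤ C) ∧ (B ≤ T ≤ D) == (B ≤ T ≤ C)`: the
# conjunction of two overlapping range constraints collapses to the single
# constraint formed by the tighter bound on each side.
def intersect(c1, c2):
    lo = max(c1[0], c2[0])
    hi = min(c1[1], c2[1])
    return (lo, hi) if lo <= hi else None  # None: the constraints are disjoint
```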

With this new representation, all existing tests pass, as well as some
new ones that represent test failures that were occurring on #20093.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-09-25 21:55:35 -04:00
Francesco Giacometti
e66a872c14 [ty] Coalesce allocations for parameter info in ArgumentMatcher (#20586)

## Summary

Follow up on #20495. The improvement suggested by @AlexWaygood cannot be
applied as-is since the `argument_matches` vector is indexed by argument
number, while the two boolean vectors are indexed by parameter number.
Still, coalescing the latter two saves one allocation.
2025-09-25 20:56:59 -04:00
Dan Parizher
589a674a8d [isort] Fix infinite loop when checking equivalent imports (I002, PLR0402) (#20381)
## Summary

Fixes #20380

The fix exempts required imports from `PLR0402`
2025-09-25 16:08:15 -05:00
Brent Westbrook
e4ac9e9041 Replace two more uses of unsafe with const Option::unwrap (#20584)
I guess I missed these in #20007, but I found them today while grepping
for something else. `Option::unwrap` has been const since 1.83, so we
can use it here and avoid some unsafe code.
2025-09-25 15:35:13 -04:00
Dylan
f2b7c82534 Handle t-string prefixes in SimpleTokenizer (#20578)
The simple tokenizer is meant to skip strings, but it was recording a
`Name` token for t-strings (from the `t`). This PR fixes that.
2025-09-25 14:33:37 -05:00
Bhuminjay Soni
cfc64d1707 [syntax-errors]: future-feature-not-defined (F407) (#20554)

## Summary

This PR implements
https://docs.astral.sh/ruff/rules/future-feature-not-defined/ (F407) as
a semantic syntax error.

## Test Plan

I have written inline tests as directed in #17412

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
2025-09-25 13:52:24 -04:00
Brent Westbrook
6b7a9dc2f2 [isort] Clarify dependency between order-by-type and case-sensitive settings (#20559)
Summary
--

Fixes #20536 by linking between the isort options `case-sensitive` and
`order-by-type`. The latter takes precedence over the former, so it
seems good to clarify this somewhere.

I tweaked the wording slightly, but this is otherwise based on the patch
from @SkylerWittman in
https://github.com/astral-sh/ruff/issues/20536#issuecomment-3326097324
(thank you!)

Test Plan
--

N/a

---------

Co-authored-by: Skyler Wittman <skyler.wittman@gmail.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-09-25 16:25:12 +00:00
Brent Westbrook
9903104328 [pylint] Fix missing max-nested-blocks in settings display (#20574)
Summary
--

This fixes a bug pointed out in #20560 where one of the `pylint`
settings wasn't used in its `Display` implementation.

Test Plan
--

Existing tests with updated snapshots
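
The class of bug is easy to reproduce: a field is added to a settings struct but never threaded through its `Display` impl. A minimal sketch (hypothetical `PylintSettings`, not the actual ruff type):

```rust
use std::fmt::{self, Display, Formatter};

struct PylintSettings {
    max_args: usize,
    max_nested_blocks: usize,
}

impl Display for PylintSettings {
    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        writeln!(f, "max-args = {}", self.max_args)?;
        // The bug: omitting this line makes the setting silently
        // disappear from `--show-settings`-style output.
        writeln!(f, "max-nested-blocks = {}", self.max_nested_blocks)
    }
}

fn main() {
    let s = PylintSettings { max_args: 5, max_nested_blocks: 5 };
    print!("{s}");
}
```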
2025-09-25 12:14:28 -04:00
Giovani Moutinho
beec2f2dbb [flake8-simplify] Improve help message clarity (SIM105) (#20548)
## Summary

Improve the SIM105 rule message to prevent user confusion about how to
properly use `contextlib.suppress`.

The previous message "Replace with `contextlib.suppress(ValueError)`"
was ambiguous and led users to incorrectly use
`contextlib.suppress(ValueError)` as a statement inside except blocks
instead of replacing the entire try-except-pass block with `with
contextlib.suppress(ValueError):`.

This change makes the message more explicit:
- **Before**: `"Use \`contextlib.suppress({exception})\` instead of
\`try\`-\`except\`-\`pass\`"`
- **After**: `"Replace \`try\`-\`except\`-\`pass\` block with \`with
contextlib.suppress({exception})\`"`

The fix title is also updated to be more specific:
- **Before**: `"Replace with \`contextlib.suppress({exception})\`"`  
- **After**: `"Replace \`try\`-\`except\`-\`pass\` with \`with
contextlib.suppress({exception})\`"`

Fixes #20462
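
A short illustration of the correct rewrite versus the misuse the old message invited:

```python
import contextlib

# SIM105 flags this pattern:
#
#     try:
#         int("not a number")
#     except ValueError:
#         pass
#
# The fix replaces the *entire* try-except-pass block:
with contextlib.suppress(ValueError):
    int("not a number")

# The old message led some users to call suppress() *inside* the except
# handler instead, where the context manager is created and immediately
# discarded without suppressing anything.
print("execution continues")
```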

## Test Plan

- All existing SIM105 tests pass with updated snapshots
- Cargo clippy passes without warnings
- Full test suite passes
- The new messages clearly indicate that the entire try-except-pass
block should be replaced with a `with` statement, preventing the misuse
described in the issue

---------

Co-authored-by: Giovani Moutinho <e@mgiovani.dev>
2025-09-25 11:19:26 -04:00
Micha Reiser
c256c7943c [ty] Update salsa to fix hang when cycle head panics (#20577) 2025-09-25 17:13:07 +02:00
Dhruv Manilawala
35ed55ec8c [ty] Filter overloads using variadic parameters (#20547)
## Summary

Closes: https://github.com/astral-sh/ty/issues/551

This PR adds support for step 4 of the overload call evaluation
algorithm which states that:

> If the argument list is compatible with two or more overloads,
determine whether one or more of the overloads has a variadic parameter
(either `*args` or `**kwargs`) that maps to a corresponding argument
that supplies an indeterminate number of positional or keyword
arguments. If so, eliminate overloads that do not have a variadic
parameter.

And, with that, the overload call evaluation algorithm has been
implemented completely end to end as stated in the typing spec.
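
A small example of the call shape step 4 addresses (hypothetical `f`; the filtering itself happens in the type checker, not at runtime):

```python
from typing import overload

# Two overloads: one fixed-arity, one variadic.
@overload
def f(x: int, y: int) -> int: ...
@overload
def f(*args: int) -> int: ...
def f(*args: int) -> int:
    return sum(args)

# Unpacking a variable-length tuple supplies an indeterminate number of
# positional arguments, so when both overloads would otherwise match,
# step 4 eliminates the fixed-arity (x, y) overload and keeps `*args`.
values: tuple[int, ...] = (1, 2, 3)
print(f(*values))  # prints 6
```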

## Test Plan

Expand the overload call test suite.
2025-09-25 14:58:00 +00:00
240 changed files with 7262 additions and 4200 deletions

View File

@@ -452,7 +452,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@20aa316bab4942180bbbabe93237858e8d77f1ed # v1.15.5
uses: cargo-bins/cargo-binstall@38e8f5e4c386b611d51e8aa997b9a06a3c8eb67a # v1.15.6
- name: "Install cargo-fuzz"
# Download the latest version from quick install and not the github releases because github releases only has MUSL targets.
run: cargo binstall cargo-fuzz --force --disable-strategies crate-meta-data --no-confirm
@@ -703,7 +703,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: cargo-bins/cargo-binstall@20aa316bab4942180bbbabe93237858e8d77f1ed # v1.15.5
- uses: cargo-bins/cargo-binstall@38e8f5e4c386b611d51e8aa997b9a06a3c8eb67a # v1.15.6
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear

View File

@@ -64,7 +64,7 @@ jobs:
cd ..
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@fc0f612798710b0dd69bb7528bc9b361dc60bd43"
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@279f8a15b0e7f77213bf9096dbc2335a19ef89c5"
ecosystem-analyzer \
--repository ruff \

View File

@@ -49,12 +49,13 @@ jobs:
cd ..
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@fc0f612798710b0dd69bb7528bc9b361dc60bd43"
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@279f8a15b0e7f77213bf9096dbc2335a19ef89c5"
ecosystem-analyzer \
--verbose \
--repository ruff \
analyze \
--profile=release \
--projects ruff/crates/ty_python_semantic/resources/primer/good.txt \
--output ecosystem-diagnostics.json
@@ -62,7 +63,7 @@ jobs:
ecosystem-analyzer \
generate-report \
--max-diagnostics-per-project=1200 \
--max-diagnostics-per-project=1000 \
ecosystem-diagnostics.json \
--output dist/index.html

View File

@@ -37,6 +37,8 @@ exploration of new features, we will often close these pull requests immediately
new feature to ruff creates a long-term maintenance burden and requires strong consensus from the ruff
team before it is appropriate to begin work on an implementation.
## The Basics
### Prerequisites
Ruff is written in Rust. You'll need to install the

Cargo.lock generated
View File

@@ -3463,7 +3463,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "salsa"
version = "0.23.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3713cd7eb30821c0c086591832dd6f59f2af7fe7#3713cd7eb30821c0c086591832dd6f59f2af7fe7"
source = "git+https://github.com/salsa-rs/salsa.git?rev=29ab321b45d00daa4315fa2a06f7207759a8c87e#29ab321b45d00daa4315fa2a06f7207759a8c87e"
dependencies = [
"boxcar",
"compact_str",
@@ -3487,12 +3487,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.23.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3713cd7eb30821c0c086591832dd6f59f2af7fe7#3713cd7eb30821c0c086591832dd6f59f2af7fe7"
source = "git+https://github.com/salsa-rs/salsa.git?rev=29ab321b45d00daa4315fa2a06f7207759a8c87e#29ab321b45d00daa4315fa2a06f7207759a8c87e"
[[package]]
name = "salsa-macros"
version = "0.23.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3713cd7eb30821c0c086591832dd6f59f2af7fe7#3713cd7eb30821c0c086591832dd6f59f2af7fe7"
source = "git+https://github.com/salsa-rs/salsa.git?rev=29ab321b45d00daa4315fa2a06f7207759a8c87e#29ab321b45d00daa4315fa2a06f7207759a8c87e"
dependencies = [
"proc-macro2",
"quote",

View File

@@ -144,7 +144,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "3713cd7eb30821c0c086591832dd6f59f2af7fe7", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "29ab321b45d00daa4315fa2a06f7207759a8c87e", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",

View File

@@ -416,6 +416,7 @@ pub struct CheckCommand {
conflicts_with = "stdin_filename",
conflicts_with = "watch",
conflicts_with = "fix",
conflicts_with = "diff",
)]
pub add_noqa: bool,
/// See the files Ruff will be run against with the current settings.
@@ -537,6 +538,14 @@ pub struct FormatCommand {
/// Exit with a non-zero status code if any files were modified via format, even if all files were formatted successfully.
#[arg(long, help_heading = "Miscellaneous", alias = "exit-non-zero-on-fix")]
pub exit_non_zero_on_format: bool,
/// Output serialization format for violations, when used with `--check`.
/// The default serialization format is "full".
///
/// Note that this option is currently only respected in preview mode. A warning will be emitted
/// if this flag is used on stable.
#[arg(long, value_enum, env = "RUFF_OUTPUT_FORMAT")]
pub output_format: Option<OutputFormat>,
}
#[derive(Copy, Clone, Debug, clap::Parser)]
@@ -784,6 +793,7 @@ impl FormatCommand {
target_version: self.target_version.map(ast::PythonVersion::from),
cache_dir: self.cache_dir,
extension: self.extension,
output_format: self.output_format,
..ExplicitConfigOverrides::default()
};

View File

@@ -9,11 +9,10 @@ use ignore::Error;
use log::{debug, warn};
#[cfg(not(target_family = "wasm"))]
use rayon::prelude::*;
use ruff_linter::message::create_panic_diagnostic;
use rustc_hash::FxHashMap;
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticId, Span, SubDiagnostic, SubDiagnosticSeverity,
};
use ruff_db::diagnostic::Diagnostic;
use ruff_db::panic::catch_unwind;
use ruff_linter::package::PackageRoot;
use ruff_linter::registry::Rule;
@@ -195,23 +194,7 @@ fn lint_path(
match result {
Ok(inner) => inner,
Err(error) => {
let message = match error.payload.as_str() {
Some(summary) => format!("Fatal error while linting: {summary}"),
_ => "Fatal error while linting".to_owned(),
};
let mut diagnostic = Diagnostic::new(
DiagnosticId::Panic,
ruff_db::diagnostic::Severity::Fatal,
message,
);
let span = Span::from(SourceFileBuilder::new(path.to_string_lossy(), "").finish());
let mut annotation = Annotation::primary(span);
annotation.set_file_level(true);
diagnostic.annotate(annotation);
diagnostic.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!("{error}"),
));
let diagnostic = create_panic_diagnostic(&error, Some(path));
Ok(Diagnostics::new(vec![diagnostic], FxHashMap::default()))
}
}
@@ -227,7 +210,8 @@ mod test {
use rustc_hash::FxHashMap;
use tempfile::TempDir;
use ruff_linter::message::{Emitter, EmitterContext, TextEmitter};
use ruff_db::diagnostic::{DiagnosticFormat, DisplayDiagnosticConfig, DisplayDiagnostics};
use ruff_linter::message::EmitterContext;
use ruff_linter::registry::Rule;
use ruff_linter::settings::types::UnsafeFixes;
use ruff_linter::settings::{LinterSettings, flags};
@@ -280,19 +264,16 @@ mod test {
UnsafeFixes::Enabled,
)
.unwrap();
let mut output = Vec::new();
TextEmitter::default()
.with_show_fix_status(true)
.with_color(false)
.emit(
&mut output,
&diagnostics.inner,
&EmitterContext::new(&FxHashMap::default()),
)
.unwrap();
let messages = String::from_utf8(output).unwrap();
let config = DisplayDiagnosticConfig::default()
.format(DiagnosticFormat::Concise)
.hide_severity(true);
let messages = DisplayDiagnostics::new(
&EmitterContext::new(&FxHashMap::default()),
&config,
&diagnostics.inner,
)
.to_string();
insta::with_settings!({
omit_expression => true,

View File

@@ -11,13 +11,19 @@ use itertools::Itertools;
use log::{error, warn};
use rayon::iter::Either::{Left, Right};
use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticId, DisplayDiagnosticConfig, Severity, Span,
};
use ruff_linter::message::{EmitterContext, create_panic_diagnostic, render_diagnostics};
use ruff_linter::settings::types::OutputFormat;
use ruff_notebook::NotebookIndex;
use ruff_python_parser::ParseError;
use rustc_hash::FxHashSet;
use rustc_hash::{FxHashMap, FxHashSet};
use thiserror::Error;
use tracing::debug;
use ruff_db::panic::{PanicError, catch_unwind};
use ruff_diagnostics::SourceMap;
use ruff_diagnostics::{Edit, Fix, SourceMap};
use ruff_linter::fs;
use ruff_linter::logging::{DisplayParseError, LogLevel};
use ruff_linter::package::PackageRoot;
@@ -27,14 +33,15 @@ use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_python_formatter::{FormatModuleError, QuoteStyle, format_module_source, format_range};
use ruff_source_file::LineIndex;
use ruff_source_file::{LineIndex, LineRanges, OneIndexed, SourceFileBuilder};
use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_workspace::FormatterSettings;
use ruff_workspace::resolver::{ResolvedFile, Resolver, match_exclusion, python_files_in_path};
use ruff_workspace::resolver::{
PyprojectConfig, ResolvedFile, Resolver, match_exclusion, python_files_in_path,
};
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::cache::{Cache, FileCacheKey, PackageCacheMap, PackageCaches};
use crate::resolve::resolve;
use crate::{ExitStatus, resolve_default_files};
#[derive(Debug, Copy, Clone, is_macro::Is)]
@@ -63,11 +70,14 @@ impl FormatMode {
pub(crate) fn format(
cli: FormatArguments,
config_arguments: &ConfigArguments,
pyproject_config: &PyprojectConfig,
) -> Result<ExitStatus> {
let pyproject_config = resolve(config_arguments, cli.stdin_filename.as_deref())?;
let mode = FormatMode::from_cli(&cli);
let files = resolve_default_files(cli.files, false);
let (paths, resolver) = python_files_in_path(&files, &pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(&files, pyproject_config, config_arguments)?;
let output_format = pyproject_config.settings.output_format;
let preview = pyproject_config.settings.formatter.preview;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");
@@ -184,17 +194,26 @@ pub(crate) fn format(
caches.persist()?;
// Report on any errors.
errors.sort_unstable_by(|a, b| a.path().cmp(&b.path()));
//
// We only convert errors to `Diagnostic`s in `Check` mode with preview enabled, otherwise we
// fall back on printing simple messages.
if !(preview.is_enabled() && mode.is_check()) {
errors.sort_unstable_by(|a, b| a.path().cmp(&b.path()));
for error in &errors {
error!("{error}");
for error in &errors {
error!("{error}");
}
}
let results = FormatResults::new(results.as_slice(), mode);
match mode {
FormatMode::Write => {}
FormatMode::Check => {
results.write_changed(&mut stdout().lock())?;
if preview.is_enabled() {
results.write_changed_preview(&mut stdout().lock(), output_format, &errors)?;
} else {
results.write_changed(&mut stdout().lock())?;
}
}
FormatMode::Diff => {
results.write_diff(&mut stdout().lock())?;
@@ -206,7 +225,7 @@ pub(crate) fn format(
if mode.is_diff() {
// Allow piping the diff to e.g. a file by writing the summary to stderr
results.write_summary(&mut stderr().lock())?;
} else {
} else if !preview.is_enabled() || output_format.is_human_readable() {
results.write_summary(&mut stdout().lock())?;
}
}
@@ -295,8 +314,7 @@ pub(crate) fn format_path(
FormatResult::Formatted
}
FormatMode::Check => FormatResult::Formatted,
FormatMode::Diff => FormatResult::Diff {
FormatMode::Check | FormatMode::Diff => FormatResult::Diff {
unformatted,
formatted,
},
@@ -329,7 +347,7 @@ pub(crate) enum FormattedSource {
impl From<FormattedSource> for FormatResult {
fn from(value: FormattedSource) -> Self {
match value {
FormattedSource::Formatted(_) => FormatResult::Formatted,
FormattedSource::Formatted { .. } => FormatResult::Formatted,
FormattedSource::Unchanged => FormatResult::Unchanged,
}
}
@@ -477,10 +495,10 @@ pub(crate) fn format_source(
/// The result of an individual formatting operation.
#[derive(Debug, Clone, is_macro::Is)]
pub(crate) enum FormatResult {
/// The file was formatted.
/// The file was formatted and written back to disk.
Formatted,
/// The file was formatted, [`SourceKind`] contains the formatted code
/// The file needs to be formatted, as the `formatted` and `unformatted` contents differ.
Diff {
unformatted: SourceKind,
formatted: SourceKind,
@@ -552,7 +570,7 @@ impl<'a> FormatResults<'a> {
.results
.iter()
.filter_map(|result| {
if result.result.is_formatted() {
if result.result.is_diff() {
Some(result.path.as_path())
} else {
None
@@ -566,6 +584,30 @@ impl<'a> FormatResults<'a> {
Ok(())
}
/// Write a list of the files that would be changed and any errors to the given writer.
fn write_changed_preview(
&self,
f: &mut impl Write,
output_format: OutputFormat,
errors: &[FormatCommandError],
) -> io::Result<()> {
let mut notebook_index = FxHashMap::default();
let diagnostics: Vec<_> = errors
.iter()
.map(Diagnostic::from)
.chain(self.to_diagnostics(&mut notebook_index))
.sorted_unstable_by(Diagnostic::ruff_start_ordering)
.collect();
let context = EmitterContext::new(&notebook_index);
let config = DisplayDiagnosticConfig::default()
.hide_severity(true)
.show_fix_diff(true)
.color(!cfg!(test) && colored::control::SHOULD_COLORIZE.should_colorize());
render_diagnostics(f, output_format, config, &context, &diagnostics)
}
/// Write a summary of the formatting results to the given writer.
fn write_summary(&self, f: &mut impl Write) -> io::Result<()> {
// Compute the number of changed and unchanged files.
@@ -628,6 +670,155 @@ impl<'a> FormatResults<'a> {
Ok(())
}
}
/// Convert formatted files into [`Diagnostic`]s.
fn to_diagnostics(
&self,
notebook_index: &mut FxHashMap<String, NotebookIndex>,
) -> impl Iterator<Item = Diagnostic> {
/// The number of unmodified context lines rendered in diffs.
///
/// Note that this should be kept in sync with the argument to `TextDiff::grouped_ops` in
/// the diff rendering in `ruff_db` (currently 3). The `similar` crate uses two times that
/// argument as a cutoff for rendering unmodified lines.
const CONTEXT_LINES: u32 = 6;
self.results.iter().filter_map(|result| {
let (unformatted, formatted) = match &result.result {
FormatResult::Skipped | FormatResult::Unchanged => return None,
FormatResult::Diff {
unformatted,
formatted,
} => (unformatted, formatted),
FormatResult::Formatted => {
debug_assert!(
false,
"Expected `FormatResult::Diff` for changed files in check mode"
);
return None;
}
};
let mut diagnostic = Diagnostic::new(
DiagnosticId::Unformatted,
Severity::Error,
"File would be reformatted",
);
// Locate the first and last characters that differ to use as the diagnostic
// range and to narrow the `Edit` range.
let modified_range = ModifiedRange::new(unformatted, formatted);
let path = result.path.to_string_lossy();
// For scripts, this is a single `Edit` using the `ModifiedRange` above, but notebook
// edits must be split by cell in order to render them as diffs.
//
// We also attempt to estimate the line number width for aligning the
// annotate-snippets header. This is only an estimate because we don't actually know
// if the maximum line number present in the document will be rendered as part of
// the diff, either as a changed line or as an unchanged context line. For
// notebooks, we refine our estimate by checking the number of lines in each cell
// individually, otherwise we could use `formatted.source_code().count_lines(...)`
// in both cases.
let (fix, line_count) = if let SourceKind::IpyNotebook(formatted) = formatted
&& let SourceKind::IpyNotebook(unformatted) = unformatted
{
notebook_index.insert(path.to_string(), unformatted.index().clone());
let mut edits = formatted
.cell_offsets()
.ranges()
.zip(unformatted.cell_offsets().ranges())
.filter_map(|(formatted_range, unformatted_range)| {
// Filter out cells that weren't modified. We use `intersect` instead of
// `contains_range` because the full modified range might start or end in
// the middle of a cell:
//
// ```
// | cell 1 | cell 2 | cell 3 |
// |----------------| modified range
// ```
//
// The intersection will be `Some` for all three cells in this case.
if modified_range
.unformatted
.intersect(unformatted_range)
.is_some()
{
let formatted = &formatted.source_code()[formatted_range];
let edit = if formatted.is_empty() {
Edit::range_deletion(unformatted_range)
} else {
Edit::range_replacement(formatted.to_string(), unformatted_range)
};
Some(edit)
} else {
None
}
});
let fix = Fix::safe_edits(
edits
.next()
.expect("Formatted files must have at least one edit"),
edits,
);
let source = formatted.source_code();
let line_count = formatted
.cell_offsets()
.ranges()
.filter_map(|range| {
if modified_range.formatted.contains_range(range) {
Some(source.count_lines(range))
} else {
None
}
})
.max()
.unwrap_or_default();
(fix, line_count)
} else {
let formatted_code = &formatted.source_code()[modified_range.formatted];
let edit = if formatted_code.is_empty() {
Edit::range_deletion(modified_range.unformatted)
} else {
Edit::range_replacement(formatted_code.to_string(), modified_range.unformatted)
};
let fix = Fix::safe_edit(edit);
let line_count = formatted
.source_code()
.count_lines(TextRange::up_to(modified_range.formatted.end()));
(fix, line_count)
};
let source_file = SourceFileBuilder::new(path, unformatted.source_code()).finish();
let span = Span::from(source_file).with_range(modified_range.unformatted);
let mut annotation = Annotation::primary(span);
annotation.hide_snippet(true);
diagnostic.annotate(annotation);
diagnostic.set_fix(fix);
// TODO(brent) this offset is a hack to get the header of the diagnostic message, which
// is rendered by our fork of `annotate-snippets`, to align with our manually-rendered
// diff. `annotate-snippets` computes the alignment of the arrow in the header based on
// the maximum line number width in its rendered snippet. However, we don't have a
// reasonable range to underline in an annotation, so we don't send `annotate-snippets`
// a snippet to measure. If we commit to staying on our fork, a more robust way of
// handling this would be to move the diff rendering in
// `ruff_db::diagnostic::render::full` into `annotate-snippets`, likely as another
// `DisplayLine` variant and update the `lineno_width` calculation in
// `DisplayList::fmt`. That would handle this offset "automatically."
let line_count = (line_count + CONTEXT_LINES).min(
formatted
.source_code()
.count_lines(TextRange::up_to(formatted.source_code().text_len())),
);
let lines = OneIndexed::new(line_count as usize).unwrap_or_default();
diagnostic.set_header_offset(lines.digits().get());
Some(diagnostic)
})
}
}
/// An error that can occur while formatting a set of files.
@@ -639,7 +830,6 @@ pub(crate) enum FormatCommandError {
Read(Option<PathBuf>, SourceError),
Format(Option<PathBuf>, FormatModuleError),
Write(Option<PathBuf>, SourceError),
Diff(Option<PathBuf>, io::Error),
RangeFormatNotebook(Option<PathBuf>),
}
@@ -658,12 +848,65 @@ impl FormatCommandError {
| Self::Read(path, _)
| Self::Format(path, _)
| Self::Write(path, _)
| Self::Diff(path, _)
| Self::RangeFormatNotebook(path) => path.as_deref(),
}
}
}
impl From<&FormatCommandError> for Diagnostic {
fn from(error: &FormatCommandError) -> Self {
let annotation = error.path().map(|path| {
let file = SourceFileBuilder::new(path.to_string_lossy(), "").finish();
let span = Span::from(file);
let mut annotation = Annotation::primary(span);
annotation.hide_snippet(true);
annotation
});
let mut diagnostic = match error {
FormatCommandError::Ignore(error) => {
Diagnostic::new(DiagnosticId::Io, Severity::Error, error)
}
FormatCommandError::Parse(display_parse_error) => Diagnostic::new(
DiagnosticId::InvalidSyntax,
Severity::Error,
&display_parse_error.error().error,
),
FormatCommandError::Panic(path, panic_error) => {
return create_panic_diagnostic(panic_error, path.as_deref());
}
FormatCommandError::Read(_, source_error)
| FormatCommandError::Write(_, source_error) => {
Diagnostic::new(DiagnosticId::Io, Severity::Error, source_error)
}
FormatCommandError::Format(_, format_module_error) => match format_module_error {
FormatModuleError::ParseError(parse_error) => Diagnostic::new(
DiagnosticId::InternalError,
Severity::Error,
&parse_error.error,
),
FormatModuleError::FormatError(format_error) => {
Diagnostic::new(DiagnosticId::InternalError, Severity::Error, format_error)
}
FormatModuleError::PrintError(print_error) => {
Diagnostic::new(DiagnosticId::InternalError, Severity::Error, print_error)
}
},
FormatCommandError::RangeFormatNotebook(_) => Diagnostic::new(
DiagnosticId::InvalidCliOption,
Severity::Error,
"Range formatting isn't supported for notebooks.",
),
};
if let Some(annotation) = annotation {
diagnostic.annotate(annotation);
}
diagnostic
}
}
impl Display for FormatCommandError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
@@ -731,23 +974,6 @@ impl Display for FormatCommandError {
write!(f, "{header} {err}", header = "Failed to format:".bold())
}
}
Self::Diff(path, err) => {
if let Some(path) = path {
write!(
f,
"{}{}{} {err}",
"Failed to generate diff for ".bold(),
fs::relativize_path(path).bold(),
":".bold()
)
} else {
write!(
f,
"{header} {err}",
header = "Failed to generate diff:".bold(),
)
}
}
Self::RangeFormatNotebook(path) => {
if let Some(path) = path {
write!(
@@ -792,6 +1018,54 @@ impl Display for FormatCommandError {
}
}
#[derive(Debug)]
struct ModifiedRange {
unformatted: TextRange,
formatted: TextRange,
}
impl ModifiedRange {
/// Determine the range that differs between `unformatted` and `formatted`.
///
/// If the two inputs are equal, the returned ranges will be empty.
fn new(unformatted: &SourceKind, formatted: &SourceKind) -> Self {
let unformatted = unformatted.source_code();
let formatted = formatted.source_code();
let mut prefix_length = TextSize::ZERO;
for (unformatted, formatted) in unformatted.chars().zip(formatted.chars()) {
if unformatted != formatted {
break;
}
prefix_length += unformatted.text_len();
}
// For the ends of the ranges, track the length of the common suffix and then subtract that
// from each total text length. Unlike for `start`, the character offsets are very unlikely
// to be equal, so they need to be treated separately.
let mut suffix_length = TextSize::ZERO;
for (old, new) in unformatted[prefix_length.to_usize()..]
.chars()
.rev()
.zip(formatted[prefix_length.to_usize()..].chars().rev())
{
if old != new {
break;
}
suffix_length += old.text_len();
}
let unformatted_range =
TextRange::new(prefix_length, unformatted.text_len() - suffix_length);
let formatted_range = TextRange::new(prefix_length, formatted.text_len() - suffix_length);
Self {
unformatted: unformatted_range,
formatted: formatted_range,
}
}
}
pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
// First, collect all rules that are incompatible regardless of the linter-specific settings.
let mut incompatible_rules = FxHashSet::default();
@@ -963,3 +1237,144 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
}
}
}
#[cfg(test)]
mod tests {
use std::io;
use std::ops::Range;
use std::path::PathBuf;
use ignore::Error;
use insta::assert_snapshot;
use ruff_db::panic::catch_unwind;
use ruff_linter::logging::DisplayParseError;
use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_python_formatter::FormatModuleError;
use ruff_python_parser::{ParseError, ParseErrorType};
use ruff_text_size::{TextRange, TextSize};
use test_case::test_case;
use crate::commands::format::{FormatCommandError, FormatMode, FormatResults, ModifiedRange};
#[test]
fn error_diagnostics() -> anyhow::Result<()> {
let path = PathBuf::from("test.py");
let source_kind = SourceKind::Python("1".to_string());
let panic_error = catch_unwind(|| {
panic!("Test panic for FormatCommandError");
})
.unwrap_err();
let errors = [
FormatCommandError::Ignore(Error::WithPath {
path: path.clone(),
err: Box::new(Error::Io(io::Error::new(
io::ErrorKind::PermissionDenied,
"Permission denied",
))),
}),
FormatCommandError::Parse(DisplayParseError::from_source_kind(
ParseError {
error: ParseErrorType::UnexpectedIndentation,
location: TextRange::default(),
},
Some(path.clone()),
&source_kind,
)),
FormatCommandError::Panic(Some(path.clone()), Box::new(panic_error)),
FormatCommandError::Read(
Some(path.clone()),
SourceError::Io(io::Error::new(io::ErrorKind::NotFound, "File not found")),
),
FormatCommandError::Format(
Some(path.clone()),
FormatModuleError::ParseError(ParseError {
error: ParseErrorType::EmptySlice,
location: TextRange::default(),
}),
),
FormatCommandError::Write(
Some(path.clone()),
SourceError::Io(io::Error::new(
io::ErrorKind::PermissionDenied,
"Cannot write to file",
)),
),
FormatCommandError::RangeFormatNotebook(Some(path)),
];
let results = FormatResults::new(&[], FormatMode::Check);
let mut buf = Vec::new();
results.write_changed_preview(
&mut buf,
ruff_linter::settings::types::OutputFormat::Full,
&errors,
)?;
let mut settings = insta::Settings::clone_current();
settings.add_filter(r"(Panicked at) [^:]+:\d+:\d+", "$1 <location>");
let _s = settings.bind_to_scope();
assert_snapshot!(str::from_utf8(&buf)?, @r"
io: test.py: Permission denied
--> test.py:1:1
invalid-syntax: Unexpected indentation
--> test.py:1:1
io: File not found
--> test.py:1:1
internal-error: Expected index or slice expression
--> test.py:1:1
io: Cannot write to file
--> test.py:1:1
invalid-cli-option: Range formatting isn't supported for notebooks.
--> test.py:1:1
panic: Panicked at <location> when checking `test.py`: `Test panic for FormatCommandError`
--> test.py:1:1
info: This indicates a bug in Ruff.
info: If you could open an issue at https://github.com/astral-sh/ruff/issues/new?title=%5Bpanic%5D, we'd be very appreciative!
info: run with `RUST_BACKTRACE=1` environment variable to show the full backtrace information
");
Ok(())
}
#[test_case("abcdef", "abcXYdef", 3..3, 3..5; "insertion")]
#[test_case("abcXYdef", "abcdef", 3..5, 3..3; "deletion")]
#[test_case("abcXdef", "abcYdef", 3..4, 3..4; "modification")]
#[test_case("abc", "abcX", 3..3, 3..4; "strict_prefix")]
#[test_case("", "", 0..0, 0..0; "empty")]
#[test_case("abc", "abc", 3..3, 3..3; "equal")]
fn modified_range(
unformatted: &str,
formatted: &str,
expect_unformatted: Range<u32>,
expect_formatted: Range<u32>,
) {
let mr = ModifiedRange::new(
&SourceKind::Python(unformatted.to_string()),
&SourceKind::Python(formatted.to_string()),
);
assert_eq!(
mr.unformatted,
TextRange::new(
TextSize::new(expect_unformatted.start),
TextSize::new(expect_unformatted.end)
)
);
assert_eq!(
mr.formatted,
TextRange::new(
TextSize::new(expect_formatted.start),
TextSize::new(expect_formatted.end)
)
);
}
}

View File

@@ -4,10 +4,10 @@ use std::path::Path;
use anyhow::Result;
use log::error;
use ruff_linter::source_kind::SourceKind;
use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::FormatterSettings;
use ruff_workspace::resolver::{Resolver, match_exclusion, python_file_at_path};
use ruff_workspace::resolver::{PyprojectConfig, Resolver, match_exclusion, python_file_at_path};
use crate::ExitStatus;
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
@@ -15,17 +15,15 @@ use crate::commands::format::{
FormatCommandError, FormatMode, FormatResult, FormattedSource, format_source,
warn_incompatible_formatter_settings,
};
use crate::resolve::resolve;
use crate::stdin::{parrot_stdin, read_from_stdin};
/// Run the formatter over a single file, read from `stdin`.
pub(crate) fn format_stdin(
cli: &FormatArguments,
config_arguments: &ConfigArguments,
pyproject_config: &PyprojectConfig,
) -> Result<ExitStatus> {
let pyproject_config = resolve(config_arguments, cli.stdin_filename.as_deref())?;
let mut resolver = Resolver::new(&pyproject_config);
let mut resolver = Resolver::new(pyproject_config);
warn_incompatible_formatter_settings(&resolver);
let mode = FormatMode::from_cli(cli);
@@ -124,7 +122,9 @@ fn format_source_code(
"{}",
source_kind.diff(formatted, path).unwrap()
)
.map_err(|err| FormatCommandError::Diff(path.map(Path::to_path_buf), err))?;
.map_err(|err| {
FormatCommandError::Write(path.map(Path::to_path_buf), SourceError::Io(err))
})?;
}
},
FormattedSource::Unchanged => {

View File

@@ -205,12 +205,18 @@ pub fn run(
}
fn format(args: FormatCommand, global_options: GlobalConfigArgs) -> Result<ExitStatus> {
let cli_output_format_set = args.output_format.is_some();
let (cli, config_arguments) = args.partition(global_options)?;
let pyproject_config = resolve::resolve(&config_arguments, cli.stdin_filename.as_deref())?;
if cli_output_format_set && !pyproject_config.settings.formatter.preview.is_enabled() {
warn_user_once!(
"The --output-format flag for the formatter is unstable and requires preview mode to use."
);
}
if is_stdin(&cli.files, cli.stdin_filename.as_deref()) {
commands::format_stdin::format_stdin(&cli, &config_arguments)
commands::format_stdin::format_stdin(&cli, &config_arguments, &pyproject_config)
} else {
commands::format::format(cli, &config_arguments)
commands::format::format(cli, &config_arguments, &pyproject_config)
}
}

View File

@@ -10,12 +10,11 @@ use ruff_linter::linter::FixTable;
use serde::Serialize;
use ruff_db::diagnostic::{
Diagnostic, DiagnosticFormat, DisplayDiagnosticConfig, DisplayDiagnostics,
DisplayGithubDiagnostics, GithubRenderer, SecondaryCode,
Diagnostic, DiagnosticFormat, DisplayDiagnosticConfig, DisplayDiagnostics, SecondaryCode,
};
use ruff_linter::fs::relativize_path;
use ruff_linter::logging::LogLevel;
use ruff_linter::message::{Emitter, EmitterContext, GroupedEmitter, SarifEmitter, TextEmitter};
use ruff_linter::message::{EmitterContext, render_diagnostics};
use ruff_linter::notify_user;
use ruff_linter::settings::flags::{self};
use ruff_linter::settings::types::{OutputFormat, UnsafeFixes};
@@ -225,86 +224,28 @@ impl Printer {
let context = EmitterContext::new(&diagnostics.notebook_indexes);
let fixables = FixableStatistics::try_from(diagnostics, self.unsafe_fixes);
let config = DisplayDiagnosticConfig::default().preview(preview);
let config = DisplayDiagnosticConfig::default()
.preview(preview)
.hide_severity(true)
.color(!cfg!(test) && colored::control::SHOULD_COLORIZE.should_colorize())
.with_show_fix_status(show_fix_status(self.fix_mode, fixables.as_ref()))
.with_fix_applicability(self.unsafe_fixes.required_applicability())
.show_fix_diff(preview);
match self.format {
OutputFormat::Json => {
let config = config.format(DiagnosticFormat::Json);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Rdjson => {
let config = config.format(DiagnosticFormat::Rdjson);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::JsonLines => {
let config = config.format(DiagnosticFormat::JsonLines);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Junit => {
let config = config.format(DiagnosticFormat::Junit);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Concise | OutputFormat::Full => {
TextEmitter::default()
.with_show_fix_status(show_fix_status(self.fix_mode, fixables.as_ref()))
.with_show_fix_diff(self.format == OutputFormat::Full && preview)
.with_show_source(self.format == OutputFormat::Full)
.with_fix_applicability(self.unsafe_fixes.required_applicability())
.with_preview(preview)
.emit(writer, &diagnostics.inner, &context)?;
render_diagnostics(writer, self.format, config, &context, &diagnostics.inner)?;
if self.flags.intersects(Flags::SHOW_FIX_SUMMARY) {
if !diagnostics.fixed.is_empty() {
writeln!(writer)?;
print_fix_summary(writer, &diagnostics.fixed)?;
writeln!(writer)?;
}
if matches!(
self.format,
OutputFormat::Full | OutputFormat::Concise | OutputFormat::Grouped
) {
if self.flags.intersects(Flags::SHOW_FIX_SUMMARY) {
if !diagnostics.fixed.is_empty() {
writeln!(writer)?;
print_fix_summary(writer, &diagnostics.fixed)?;
writeln!(writer)?;
}
self.write_summary_text(writer, diagnostics)?;
}
OutputFormat::Grouped => {
GroupedEmitter::default()
.with_show_fix_status(show_fix_status(self.fix_mode, fixables.as_ref()))
.with_unsafe_fixes(self.unsafe_fixes)
.emit(writer, &diagnostics.inner, &context)?;
if self.flags.intersects(Flags::SHOW_FIX_SUMMARY) {
if !diagnostics.fixed.is_empty() {
writeln!(writer)?;
print_fix_summary(writer, &diagnostics.fixed)?;
writeln!(writer)?;
}
}
self.write_summary_text(writer, diagnostics)?;
}
OutputFormat::Github => {
let renderer = GithubRenderer::new(&context, "Ruff");
let value = DisplayGithubDiagnostics::new(&renderer, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Gitlab => {
let config = config.format(DiagnosticFormat::Gitlab);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Pylint => {
let config = config.format(DiagnosticFormat::Pylint);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Azure => {
let config = config.format(DiagnosticFormat::Azure);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Sarif => {
SarifEmitter.emit(writer, &diagnostics.inner, &context)?;
}
self.write_summary_text(writer, diagnostics)?;
}
writer.flush()?;
@@ -448,11 +389,22 @@ impl Printer {
}
let context = EmitterContext::new(&diagnostics.notebook_indexes);
TextEmitter::default()
let format = if preview {
DiagnosticFormat::Full
} else {
DiagnosticFormat::Concise
};
let config = DisplayDiagnosticConfig::default()
.hide_severity(true)
.color(!cfg!(test) && colored::control::SHOULD_COLORIZE.should_colorize())
.with_show_fix_status(show_fix_status(self.fix_mode, fixables.as_ref()))
.with_show_source(preview)
.with_fix_applicability(self.unsafe_fixes.required_applicability())
.emit(writer, &diagnostics.inner, &context)?;
.format(format)
.with_fix_applicability(self.unsafe_fixes.required_applicability());
write!(
writer,
"{}",
DisplayDiagnostics::new(&context, &config, &diagnostics.inner)
)?;
}
writer.flush()?;
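The refactor above replaces a large per-format `match` over dedicated emitters (`TextEmitter`, `GroupedEmitter`, `SarifEmitter`, ...) with one shared `DisplayDiagnosticConfig` builder and a single render call. A minimal standalone sketch of that consolidation pattern — all type and function names here are illustrative, not the actual ruff API:

```rust
// Illustrative builder-style display config: one shared config value,
// with the output format selected via a method, replaces N emitter structs.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Format {
    Full,
    Concise,
    Json,
}

#[derive(Clone, Debug)]
struct DisplayConfig {
    format: Format,
    hide_severity: bool,
}

impl DisplayConfig {
    fn new() -> Self {
        Self {
            format: Format::Full,
            hide_severity: false,
        }
    }

    // Builder methods take `self` by value and return it, so calls chain.
    fn format(mut self, format: Format) -> Self {
        self.format = format;
        self
    }

    fn hide_severity(mut self, yes: bool) -> Self {
        self.hide_severity = yes;
        self
    }
}

// A single render entry point dispatches on the configured format,
// instead of the caller choosing among emitter types.
fn render(config: &DisplayConfig, message: &str) -> String {
    match config.format {
        Format::Json => format!("{{\"message\":\"{message}\"}}"),
        Format::Concise | Format::Full => message.to_string(),
    }
}

fn main() {
    let config = DisplayConfig::new().hide_severity(true).format(Format::Json);
    assert_eq!(render(&config, "unused import"), "{\"message\":\"unused import\"}");
}
```

The design benefit mirrored from the diff: per-format behavior differences live in one config value, so adding a format touches the enum and the dispatch, not a new emitter type at every call site.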


@@ -12,8 +12,8 @@ use tempfile::TempDir;
const BIN_NAME: &str = "ruff";
fn tempdir_filter(tempdir: &TempDir) -> String {
format!(r"{}\\?/?", escape(tempdir.path().to_str().unwrap()))
fn tempdir_filter(path: impl AsRef<Path>) -> String {
format!(r"{}\\?/?", escape(path.as_ref().to_str().unwrap()))
}
#[test]
@@ -609,6 +609,112 @@ if __name__ == "__main__":
Ok(())
}
#[test_case::test_case("concise")]
#[test_case::test_case("full")]
#[test_case::test_case("json")]
#[test_case::test_case("json-lines")]
#[test_case::test_case("junit")]
#[test_case::test_case("grouped")]
#[test_case::test_case("github")]
#[test_case::test_case("gitlab")]
#[test_case::test_case("pylint")]
#[test_case::test_case("rdjson")]
#[test_case::test_case("azure")]
#[test_case::test_case("sarif")]
fn output_format(output_format: &str) -> Result<()> {
const CONTENT: &str = r#"
from test import say_hy
if __name__ == "__main__":
say_hy("dear Ruff contributor")
"#;
let tempdir = TempDir::new()?;
let input = tempdir.path().join("input.py");
fs::write(&input, CONTENT)?;
let snapshot = format!("output_format_{output_format}");
let project_dir = dunce::canonicalize(tempdir.path())?;
insta::with_settings!({
filters => vec![
(tempdir_filter(&project_dir).as_str(), "[TMP]/"),
(tempdir_filter(&tempdir).as_str(), "[TMP]/"),
(r#""[^"]+\\?/?input.py"#, r#""[TMP]/input.py"#),
(ruff_linter::VERSION, "[VERSION]"),
]
}, {
assert_cmd_snapshot!(
snapshot,
Command::new(get_cargo_bin(BIN_NAME))
.args([
"format",
"--no-cache",
"--output-format",
output_format,
"--preview",
"--check",
"input.py",
])
.current_dir(&tempdir),
);
});
Ok(())
}
#[test]
fn output_format_notebook() {
let args = ["format", "--no-cache", "--isolated", "--preview", "--check"];
let fixtures = Path::new("resources").join("test").join("fixtures");
let path = fixtures.join("unformatted.ipynb");
insta::with_settings!({filters => vec![
// Replace windows paths
(r"\\", "/"),
]}, {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).args(args).arg(path),
@r"
success: false
exit_code: 1
----- stdout -----
unformatted: File would be reformatted
--> resources/test/fixtures/unformatted.ipynb:cell 1:1:1
::: cell 1
1 | import numpy
- maths = (numpy.arange(100)**2).sum()
- stats= numpy.asarray([1,2,3,4]).median()
2 +
3 + maths = (numpy.arange(100) ** 2).sum()
4 + stats = numpy.asarray([1, 2, 3, 4]).median()
::: cell 3
1 | # A cell with IPython escape command
2 | def some_function(foo, bar):
3 | pass
4 +
5 +
6 | %matplotlib inline
::: cell 4
1 | foo = %pwd
- def some_function(foo,bar,):
2 +
3 +
4 + def some_function(
5 + foo,
6 + bar,
7 + ):
8 | # Another cell with IPython escape command
9 | foo = %pwd
10 | print(foo)
1 file would be reformatted
----- stderr -----
");
});
}
#[test]
fn exit_non_zero_on_format() -> Result<()> {
let tempdir = TempDir::new()?;
@@ -2355,3 +2461,21 @@ fn cookiecutter_globbing() -> Result<()> {
Ok(())
}
#[test]
fn stable_output_format_warning() {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--output-format=full", "-"])
.pass_stdin("1"),
@r"
success: true
exit_code: 0
----- stdout -----
1
----- stderr -----
warning: The --output-format flag for the formatter is unstable and requires preview mode to use.
",
);
}


@@ -2445,6 +2445,7 @@ requires-python = ">= 3.11"
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -2758,6 +2759,7 @@ requires-python = ">= 3.11"
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -3070,6 +3072,7 @@ requires-python = ">= 3.11"
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -3434,6 +3437,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -3814,6 +3818,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -4142,6 +4147,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -4470,6 +4476,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -4755,6 +4762,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -5093,6 +5101,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -6190,6 +6199,36 @@ match 42: # invalid-syntax
Ok(())
}
#[test_case::test_case("concise"; "concise_show_fixes")]
#[test_case::test_case("full"; "full_show_fixes")]
#[test_case::test_case("grouped"; "grouped_show_fixes")]
fn output_format_show_fixes(output_format: &str) -> Result<()> {
let tempdir = TempDir::new()?;
let input = tempdir.path().join("input.py");
fs::write(&input, "import os # F401")?;
let snapshot = format!("output_format_show_fixes_{output_format}");
assert_cmd_snapshot!(
snapshot,
Command::new(get_cargo_bin(BIN_NAME))
.args([
"check",
"--no-cache",
"--output-format",
output_format,
"--select",
"F401",
"--fix",
"--show-fixes",
"input.py",
])
.current_dir(&tempdir),
);
Ok(())
}
#[test]
fn up045_nested_optional_flatten_all() {
let contents = "\
@@ -6263,6 +6302,7 @@ fn rule_panic_mixed_results_concise() -> Result<()> {
filters => vec![
(tempdir_filter(&tempdir).as_str(), "[TMP]/"),
(r"\\", r"/"),
(r"(Panicked at) [^:]+:\d+:\d+", "$1 <location>")
]
}, {
assert_cmd_snapshot!(
@@ -6279,7 +6319,7 @@ fn rule_panic_mixed_results_concise() -> Result<()> {
[TMP]/normal.py:1:1: RUF903 Hey this is a stable test rule with a display only fix.
[TMP]/normal.py:1:1: RUF911 Hey this is a preview test rule.
[TMP]/normal.py:1:1: RUF950 Hey this is a test rule that was redirected from another.
[TMP]/panic.py: panic: Fatal error while linting: This is a fake panic for testing.
[TMP]/panic.py: panic: Panicked at <location> when checking `[TMP]/panic.py`: `This is a fake panic for testing.`
Found 7 errors.
[*] 1 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).
@@ -6308,6 +6348,7 @@ fn rule_panic_mixed_results_full() -> Result<()> {
filters => vec![
(tempdir_filter(&tempdir).as_str(), "[TMP]/"),
(r"\\", r"/"),
(r"(Panicked at) [^:]+:\d+:\d+", "$1 <location>"),
]
}, {
assert_cmd_snapshot!(
@@ -6338,12 +6379,11 @@ fn rule_panic_mixed_results_full() -> Result<()> {
RUF950 Hey this is a test rule that was redirected from another.
--> [TMP]/normal.py:1:1
panic: Fatal error while linting: This is a fake panic for testing.
panic: Panicked at <location> when checking `[TMP]/panic.py`: `This is a fake panic for testing.`
--> [TMP]/panic.py:1:1
info: panicked at crates/ruff_linter/src/rules/ruff/rules/test_rules.rs:511:9:
This is a fake panic for testing.
run with `RUST_BACKTRACE=1` environment variable to display a backtrace
info: This indicates a bug in Ruff.
info: If you could open an issue at https://github.com/astral-sh/ruff/issues/new?title=%5Bpanic%5D, we'd be very appreciative!
info: run with `RUST_BACKTRACE=1` environment variable to show the full backtrace information
Found 7 errors.
[*] 1 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).


@@ -0,0 +1,19 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- azure
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=1;columnnumber=1;code=unformatted;]File would be reformatted
----- stderr -----


@@ -0,0 +1,20 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- concise
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
input.py:1:1: unformatted: File would be reformatted
1 file would be reformatted
----- stderr -----


@@ -0,0 +1,26 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- full
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
unformatted: File would be reformatted
--> input.py:1:1
-
1 | from test import say_hy
2 |
3 | if __name__ == "__main__":
1 file would be reformatted
----- stderr -----


@@ -0,0 +1,19 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- github
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
::error title=Ruff (unformatted),file=[TMP]/input.py,line=1,col=1,endLine=2,endColumn=1::input.py:1:1: unformatted: File would be reformatted
----- stderr -----


@@ -0,0 +1,38 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- gitlab
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
[
{
"check_name": "unformatted",
"description": "unformatted: File would be reformatted",
"severity": "major",
"fingerprint": "d868d7da11a65fcf",
"location": {
"path": "input.py",
"positions": {
"begin": {
"line": 1,
"column": 1
},
"end": {
"line": 2,
"column": 1
}
}
}
}
]
----- stderr -----


@@ -0,0 +1,21 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- grouped
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
input.py:
1:1 unformatted: File would be reformatted
1 file would be reformatted
----- stderr -----


@@ -0,0 +1,19 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- json-lines
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
{"cell":null,"code":"unformatted","end_location":{"column":1,"row":2},"filename":"[TMP]/input.py","fix":{"applicability":"safe","edits":[{"content":"","end_location":{"column":1,"row":2},"location":{"column":1,"row":1}}],"message":null},"location":{"column":1,"row":1},"message":"File would be reformatted","noqa_row":null,"url":null}
----- stderr -----


@@ -0,0 +1,52 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- json
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
[
{
"cell": null,
"code": "unformatted",
"end_location": {
"column": 1,
"row": 2
},
"filename": "[TMP]/input.py",
"fix": {
"applicability": "safe",
"edits": [
{
"content": "",
"end_location": {
"column": 1,
"row": 2
},
"location": {
"column": 1,
"row": 1
}
}
],
"message": null
},
"location": {
"column": 1,
"row": 1
},
"message": "File would be reformatted",
"noqa_row": null,
"url": null
}
]
----- stderr -----


@@ -0,0 +1,26 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- junit
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="ruff" tests="1" failures="1" errors="0">
<testsuite name="[TMP]/input.py" tests="1" disabled="0" errors="0" failures="1" package="org.ruff">
<testcase name="org.ruff.unformatted" classname="[TMP]/input" line="1" column="1">
<failure message="File would be reformatted">line 1, col 1, File would be reformatted</failure>
</testcase>
</testsuite>
</testsuites>
----- stderr -----


@@ -0,0 +1,18 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- pylint
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
input.py:1: [unformatted] File would be reformatted
----- stderr -----


@@ -0,0 +1,60 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- rdjson
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
{
"diagnostics": [
{
"code": {
"value": "unformatted"
},
"location": {
"path": "[TMP]/input.py",
"range": {
"end": {
"column": 1,
"line": 2
},
"start": {
"column": 1,
"line": 1
}
}
},
"message": "File would be reformatted",
"suggestions": [
{
"range": {
"end": {
"column": 1,
"line": 2
},
"start": {
"column": 1,
"line": 1
}
},
"text": ""
}
]
}
],
"severity": "WARNING",
"source": {
"name": "ruff",
"url": "https://docs.astral.sh/ruff"
}
}
----- stderr -----


@@ -0,0 +1,81 @@
---
source: crates/ruff/tests/format.rs
info:
program: ruff
args:
- format
- "--no-cache"
- "--output-format"
- sarif
- "--preview"
- "--check"
- input.py
---
success: false
exit_code: 1
----- stdout -----
{
"$schema": "https://json.schemastore.org/sarif-2.1.0.json",
"runs": [
{
"results": [
{
"fixes": [
{
"artifactChanges": [
{
"artifactLocation": {
"uri": "[TMP]/input.py"
},
"replacements": [
{
"deletedRegion": {
"endColumn": 1,
"endLine": 2,
"startColumn": 1,
"startLine": 1
}
}
]
}
],
"description": {
"text": null
}
}
],
"level": "error",
"locations": [
{
"physicalLocation": {
"artifactLocation": {
"uri": "[TMP]/input.py"
},
"region": {
"endColumn": 1,
"endLine": 2,
"startColumn": 1,
"startLine": 1
}
}
}
],
"message": {
"text": "File would be reformatted"
},
"ruleId": "unformatted"
}
],
"tool": {
"driver": {
"informationUri": "https://github.com/astral-sh/ruff",
"name": "ruff",
"rules": [],
"version": "[VERSION]"
}
}
}
],
"version": "2.1.0"
}
----- stderr -----


@@ -44,6 +44,43 @@ import some_module
__all__ = ["some_module"]
```
## Preview
When [preview] is enabled (and certain simplifying assumptions
are met), we analyze all import statements for a given module
when determining whether an import is used, rather than simply
the last of these statements. This can result in both different and
more import statements being marked as unused.
For example, if a module consists of
```python
import a
import a.b
```
then both statements are marked as unused under [preview], whereas
only the second is marked as unused under stable behavior.
As another example, if a module consists of
```python
import a.b
import a
a.b.foo()
```
then a diagnostic will only be emitted for the first line under [preview],
whereas a diagnostic would only be emitted for the second line under
stable behavior.
Note that this behavior is somewhat subjective and is designed
to conform to the developer's intuition rather than Python's actual
execution. To wit, the statement `import a.b` automatically executes
`import a`, so in some sense `import a` is _always_ redundant
in the presence of `import a.b`.
## Fix safety
Fixes to remove unused imports are safe, except in `__init__.py` files.
@@ -96,4 +133,6 @@ else:
- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)
[preview]: https://docs.astral.sh/ruff/preview/
----- stderr -----


@@ -119,7 +119,7 @@ exit_code: 1
"rules": [
{
"fullDescription": {
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. 
Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\nSee [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc)\nfor more details on how Ruff\ndetermines whether an import is first or third-party.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)\n"
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Preview\nWhen [preview] is enabled (and certain simplifying assumptions\nare met), we analyze all import statements for a given module\nwhen determining whether an import is used, rather than simply\nthe last of these statements. This can result in both different and\nmore import statements being marked as unused.\n\nFor example, if a module consists of\n\n```python\nimport a\nimport a.b\n```\n\nthen both statements are marked as unused under [preview], whereas\nonly the second is marked as unused under stable behavior.\n\nAs another example, if a module consists of\n\n```python\nimport a.b\nimport a\n\na.b.foo()\n```\n\nthen a diagnostic will only be emitted for the first line under [preview],\nwhereas a diagnostic would only be emitted for the second line under\nstable behavior.\n\nNote that this behavior is somewhat subjective and is designed\nto conform to the developer's intuition rather than Python's actual\nexecution. 
To wit, the statement `import a.b` automatically executes\n`import a`, so in some sense `import a` is _always_ redundant\nin the presence of `import a.b`.\n\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\nSee [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc)\nfor more details on how Ruff\ndetermines whether an import is first or third-party.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)\n\n[preview]: https://docs.astral.sh/ruff/preview/\n"
},
"help": {
"text": "`{name}` imported but unused; consider using `importlib.util.find_spec` to test for availability"


@@ -0,0 +1,26 @@
---
source: crates/ruff/tests/lint.rs
info:
program: ruff
args:
- check
- "--no-cache"
- "--output-format"
- concise
- "--select"
- F401
- "--fix"
- "--show-fixes"
- input.py
---
success: true
exit_code: 0
----- stdout -----
Fixed 1 error:
- input.py:
1 × F401 (unused-import)
Found 1 error (1 fixed, 0 remaining).
----- stderr -----


@@ -0,0 +1,26 @@
---
source: crates/ruff/tests/lint.rs
info:
program: ruff
args:
- check
- "--no-cache"
- "--output-format"
- full
- "--select"
- F401
- "--fix"
- "--show-fixes"
- input.py
---
success: true
exit_code: 0
----- stdout -----
Fixed 1 error:
- input.py:
1 × F401 (unused-import)
Found 1 error (1 fixed, 0 remaining).
----- stderr -----


@@ -0,0 +1,26 @@
---
source: crates/ruff/tests/lint.rs
info:
program: ruff
args:
- check
- "--no-cache"
- "--output-format"
- grouped
- "--select"
- F401
- "--fix"
- "--show-fixes"
- input.py
---
success: true
exit_code: 0
----- stdout -----
Fixed 1 error:
- input.py:
1 × F401 (unused-import)
Found 1 error (1 fixed, 0 remaining).
----- stderr -----


@@ -371,6 +371,7 @@ linter.pylint.max_branches = 12
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false


@@ -56,6 +56,7 @@ pub(crate) struct DisplayList<'a> {
pub(crate) stylesheet: &'a Stylesheet,
pub(crate) anonymized_line_numbers: bool,
pub(crate) cut_indicator: &'static str,
pub(crate) lineno_offset: usize,
}
impl PartialEq for DisplayList<'_> {
@@ -81,13 +82,14 @@ impl Display for DisplayList<'_> {
_ => max,
})
});
let lineno_width = if lineno_width == 0 {
lineno_width
} else if self.anonymized_line_numbers {
ANONYMIZED_LINE_NUM.len()
} else {
((lineno_width as f64).log10().floor() as usize) + 1
};
let lineno_width = self.lineno_offset
+ if lineno_width == 0 {
lineno_width
} else if self.anonymized_line_numbers {
ANONYMIZED_LINE_NUM.len()
} else {
((lineno_width as f64).log10().floor() as usize) + 1
};
let multiline_depth = self.body.iter().fold(0, |max, set| {
set.display_lines.iter().fold(max, |max2, line| match line {
@@ -124,6 +126,7 @@ impl<'a> DisplayList<'a> {
term_width: usize,
cut_indicator: &'static str,
) -> DisplayList<'a> {
let lineno_offset = message.lineno_offset;
let body = format_message(
message,
term_width,
@@ -137,6 +140,7 @@ impl<'a> DisplayList<'a> {
stylesheet,
anonymized_line_numbers,
cut_indicator,
lineno_offset,
}
}
@@ -1088,6 +1092,7 @@ fn format_message<'m>(
footer,
snippets,
is_fixable,
lineno_offset: _,
} = message;
let mut sets = vec![];
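The `lineno_width` expression in this hunk computes the number of decimal digits needed to display the largest line number, via `floor(log10(n)) + 1`, with a guard for zero (where `log10` is undefined). A minimal standalone sketch of just that digit-count formula — the function name is illustrative, not from the diff:

```rust
/// Number of decimal digits needed to render `max_lineno`,
/// mirroring the `floor(log10(n)) + 1` computation in the diff,
/// with 0 mapping to a zero-width line-number column.
fn digit_width(max_lineno: usize) -> usize {
    if max_lineno == 0 {
        0
    } else {
        ((max_lineno as f64).log10().floor() as usize) + 1
    }
}

fn main() {
    assert_eq!(digit_width(0), 0); // no lines: no line-number column
    assert_eq!(digit_width(9), 1); // single digit
    assert_eq!(digit_width(42), 2); // two digits
    assert_eq!(digit_width(999), 3); // three digits
}
```

The `lineno_offset` added by the diff is then summed onto this width, so a caller (here, the formatter) can widen the column without rendering a snippet itself.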


@@ -23,6 +23,7 @@ pub struct Message<'a> {
pub(crate) snippets: Vec<Snippet<'a>>,
pub(crate) footer: Vec<Message<'a>>,
pub(crate) is_fixable: bool,
pub(crate) lineno_offset: usize,
}
impl<'a> Message<'a> {
@@ -59,6 +60,16 @@ impl<'a> Message<'a> {
self.is_fixable = yes;
self
}
/// Add an offset used for aligning the header sigil (`-->`) with the line number separators.
///
/// For normal diagnostics this is computed automatically based on the lines to be rendered.
/// This is intended only for use in the formatter, where we don't render a snippet directly but
/// still want the header to align with the diff.
pub fn lineno_offset(mut self, offset: usize) -> Self {
self.lineno_offset = offset;
self
}
}
/// Structure containing the slice of text to be annotated and
@@ -144,7 +155,7 @@ impl<'a> Annotation<'a> {
self
}
pub fn is_file_level(mut self, yes: bool) -> Self {
pub fn hide_snippet(mut self, yes: bool) -> Self {
self.is_file_level = yes;
self
}
@@ -173,6 +184,7 @@ impl Level {
snippets: vec![],
footer: vec![],
is_fixable: false,
lineno_offset: 0,
}
}


@@ -444,7 +444,7 @@ fn benchmark_complex_constrained_attributes_2(criterion: &mut Criterion) {
criterion.bench_function("ty_micro[complex_constrained_attributes_2]", |b| {
b.iter_batched_ref(
|| {
// This is is similar to the case above, but now the attributes are actually defined.
// This is similar to the case above, but now the attributes are actually defined.
// https://github.com/astral-sh/ty/issues/711
setup_micro_case(
r#"


@@ -117,7 +117,7 @@ static COLOUR_SCIENCE: std::sync::LazyLock<Benchmark<'static>> = std::sync::Lazy
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY310,
},
477,
500,
)
});

View File

@@ -69,6 +69,7 @@ impl Diagnostic {
parent: None,
noqa_offset: None,
secondary_code: None,
header_offset: 0,
});
Diagnostic { inner }
}
@@ -432,14 +433,23 @@ impl Diagnostic {
/// Returns the URL for the rule documentation, if it exists.
pub fn to_ruff_url(&self) -> Option<String> {
if self.is_invalid_syntax() {
None
} else {
Some(format!(
"{}/rules/{}",
env!("CARGO_PKG_HOMEPAGE"),
self.name()
))
match self.id() {
DiagnosticId::Panic
| DiagnosticId::Io
| DiagnosticId::InvalidSyntax
| DiagnosticId::RevealedType
| DiagnosticId::UnknownRule
| DiagnosticId::InvalidGlob
| DiagnosticId::EmptyInclude
| DiagnosticId::UnnecessaryOverridesSection
| DiagnosticId::UselessOverridesSection
| DiagnosticId::DeprecatedSetting
| DiagnosticId::Unformatted
| DiagnosticId::InvalidCliOption
| DiagnosticId::InternalError => None,
DiagnosticId::Lint(lint_name) => {
Some(format!("{}/rules/{lint_name}", env!("CARGO_PKG_HOMEPAGE")))
}
}
}
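The rewrite replaces the single `is_invalid_syntax()` check with an exhaustive match, so adding a new non-lint `DiagnosticId` variant forces a decision about whether it has a documentation URL. The same pattern, sketched with a reduced enum and a stand-in for `env!("CARGO_PKG_HOMEPAGE")` (variant list abridged):

```rust
#[derive(Debug)]
enum DiagnosticId {
    Panic,
    InvalidSyntax,
    InternalError,
    Lint(&'static str),
}

/// Only lint diagnostics have a rule documentation page; every other
/// variant is listed explicitly so a new variant fails to compile
/// until it is handled here.
fn to_ruff_url(id: &DiagnosticId) -> Option<String> {
    // Stand-in for env!("CARGO_PKG_HOMEPAGE").
    const HOMEPAGE: &str = "https://docs.astral.sh/ruff";
    match id {
        DiagnosticId::Panic
        | DiagnosticId::InvalidSyntax
        | DiagnosticId::InternalError => None,
        DiagnosticId::Lint(lint_name) => Some(format!("{HOMEPAGE}/rules/{lint_name}")),
    }
}
```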
@@ -512,6 +522,11 @@ impl Diagnostic {
a.cmp(&b)
}
/// Add an offset for aligning the header sigil with the line number separators in a diff.
pub fn set_header_offset(&mut self, offset: usize) {
Arc::make_mut(&mut self.inner).header_offset = offset;
}
}
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
@@ -525,6 +540,7 @@ struct DiagnosticInner {
parent: Option<TextSize>,
noqa_offset: Option<TextSize>,
secondary_code: Option<SecondaryCode>,
header_offset: usize,
}
struct RenderingSortKey<'a> {
@@ -742,11 +758,11 @@ pub struct Annotation {
is_primary: bool,
/// The diagnostic tags associated with this annotation.
tags: Vec<DiagnosticTag>,
/// Whether this annotation is a file-level or full-file annotation.
/// Whether the snippet for this annotation should be hidden.
///
/// When set, rendering will only include the file's name and (optional) range. Everything else
/// is omitted, including any file snippet or message.
is_file_level: bool,
hide_snippet: bool,
}
impl Annotation {
@@ -765,7 +781,7 @@ impl Annotation {
message: None,
is_primary: true,
tags: Vec::new(),
is_file_level: false,
hide_snippet: false,
}
}
@@ -782,7 +798,7 @@ impl Annotation {
message: None,
is_primary: false,
tags: Vec::new(),
is_file_level: false,
hide_snippet: false,
}
}
@@ -849,19 +865,20 @@ impl Annotation {
self.tags.push(tag);
}
/// Set whether or not this annotation is file-level.
/// Set whether or not the snippet on this annotation should be suppressed when rendering.
///
/// File-level annotations are only rendered with their file name and range, if available. This
/// is intended for backwards compatibility with Ruff diagnostics, which historically used
/// Such annotations are only rendered with their file name and range, if available. This is
/// intended for backwards compatibility with Ruff diagnostics, which historically used
/// `TextRange::default` to indicate a file-level diagnostic. In the new diagnostic model, a
/// [`Span`] with a range of `None` should be used instead, as mentioned in the `Span`
/// documentation.
///
/// TODO(brent) update this usage in Ruff and remove `is_file_level` entirely. See
/// <https://github.com/astral-sh/ruff/issues/19688>, especially my first comment, for more
/// details.
pub fn set_file_level(&mut self, yes: bool) {
self.is_file_level = yes;
/// details. As of 2025-09-26 we also use this to suppress snippet rendering for formatter
/// diagnostics, which also need to have a range, so we probably can't eliminate this entirely.
pub fn hide_snippet(&mut self, yes: bool) {
self.hide_snippet = yes;
}
}
@@ -1016,6 +1033,17 @@ pub enum DiagnosticId {
/// Use of a deprecated setting.
DeprecatedSetting,
/// The code needs to be formatted.
Unformatted,
/// Use of an invalid command-line option.
InvalidCliOption,
/// An internal assumption was violated.
///
/// This indicates a bug in the program rather than a user error.
InternalError,
}
impl DiagnosticId {
@@ -1055,6 +1083,9 @@ impl DiagnosticId {
DiagnosticId::UnnecessaryOverridesSection => "unnecessary-overrides-section",
DiagnosticId::UselessOverridesSection => "useless-overrides-section",
DiagnosticId::DeprecatedSetting => "deprecated-setting",
DiagnosticId::Unformatted => "unformatted",
DiagnosticId::InvalidCliOption => "invalid-cli-option",
DiagnosticId::InternalError => "internal-error",
}
}
@@ -1353,7 +1384,7 @@ impl DisplayDiagnosticConfig {
}
/// Whether to show a fix's availability or not.
pub fn show_fix_status(self, yes: bool) -> DisplayDiagnosticConfig {
pub fn with_show_fix_status(self, yes: bool) -> DisplayDiagnosticConfig {
DisplayDiagnosticConfig {
show_fix_status: yes,
..self
@@ -1374,12 +1405,20 @@ impl DisplayDiagnosticConfig {
/// availability for unsafe or display-only fixes.
///
/// Note that this option is currently ignored when `hide_severity` is false.
pub fn fix_applicability(self, applicability: Applicability) -> DisplayDiagnosticConfig {
pub fn with_fix_applicability(self, applicability: Applicability) -> DisplayDiagnosticConfig {
DisplayDiagnosticConfig {
fix_applicability: applicability,
..self
}
}
pub fn show_fix_status(&self) -> bool {
self.show_fix_status
}
pub fn fix_applicability(&self) -> Applicability {
self.fix_applicability
}
}
impl Default for DisplayDiagnosticConfig {

View File

@@ -208,6 +208,7 @@ struct ResolvedDiagnostic<'a> {
message: String,
annotations: Vec<ResolvedAnnotation<'a>>,
is_fixable: bool,
header_offset: usize,
}
impl<'a> ResolvedDiagnostic<'a> {
@@ -258,7 +259,8 @@ impl<'a> ResolvedDiagnostic<'a> {
id,
message: diag.inner.message.as_str().to_string(),
annotations,
is_fixable: diag.has_applicable_fix(config),
is_fixable: config.show_fix_status && diag.has_applicable_fix(config),
header_offset: diag.inner.header_offset,
}
}
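The change folds the `show_fix_status` toggle into `is_fixable` at resolution time, so downstream renderers no longer consult the config themselves. Roughly this logic, with simplified stand-in types (the derived variant ordering mirroring `DisplayOnly < Unsafe < Safe` is an assumption of this sketch):

```rust
/// Ordered from least to most applicable, so deriving `Ord` makes
/// `DisplayOnly < Unsafe < Safe`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Applicability {
    DisplayOnly,
    Unsafe,
    Safe,
}

struct Config {
    show_fix_status: bool,
    fix_applicability: Applicability,
}

/// The `[*]` indicator is rendered only when fix status display is enabled
/// *and* the diagnostic's fix meets the configured minimum applicability.
fn is_fixable(fix: Option<Applicability>, config: &Config) -> bool {
    config.show_fix_status
        && fix.is_some_and(|applicability| applicability >= config.fix_applicability)
}
```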
@@ -288,6 +290,7 @@ impl<'a> ResolvedDiagnostic<'a> {
message: diag.inner.message.as_str().to_string(),
annotations,
is_fixable: false,
header_offset: 0,
}
}
@@ -385,6 +388,7 @@ impl<'a> ResolvedDiagnostic<'a> {
message: &self.message,
snippets_by_input,
is_fixable: self.is_fixable,
header_offset: self.header_offset,
}
}
}
@@ -404,7 +408,7 @@ struct ResolvedAnnotation<'a> {
line_end: OneIndexed,
message: Option<&'a str>,
is_primary: bool,
is_file_level: bool,
hide_snippet: bool,
notebook_index: Option<NotebookIndex>,
}
@@ -452,7 +456,7 @@ impl<'a> ResolvedAnnotation<'a> {
line_end,
message: ann.get_message(),
is_primary: ann.is_primary,
is_file_level: ann.is_file_level,
hide_snippet: ann.hide_snippet,
notebook_index: resolver.notebook_index(&ann.span.file),
})
}
@@ -492,6 +496,11 @@ struct RenderableDiagnostic<'r> {
///
/// This is rendered as a `[*]` indicator after the diagnostic ID.
is_fixable: bool,
/// Offset to align the header sigil (`-->`) with the subsequent line number separators.
///
/// This is only needed for formatter diagnostics where we don't render a snippet via
/// `annotate-snippets` and thus the alignment isn't computed automatically.
header_offset: usize,
}
impl RenderableDiagnostic<'_> {
@@ -504,7 +513,11 @@ impl RenderableDiagnostic<'_> {
.iter()
.map(|snippet| snippet.to_annotate(path))
});
let mut message = self.level.title(self.message).is_fixable(self.is_fixable);
let mut message = self
.level
.title(self.message)
.is_fixable(self.is_fixable)
.lineno_offset(self.header_offset);
if let Some(id) = self.id {
message = message.id(id);
}
@@ -709,8 +722,8 @@ struct RenderableAnnotation<'r> {
message: Option<&'r str>,
/// Whether this annotation is considered "primary" or not.
is_primary: bool,
/// Whether this annotation applies to an entire file, rather than a snippet within it.
is_file_level: bool,
/// Whether the snippet for this annotation should be hidden instead of rendered.
hide_snippet: bool,
}
impl<'r> RenderableAnnotation<'r> {
@@ -732,7 +745,7 @@ impl<'r> RenderableAnnotation<'r> {
range,
message: ann.message,
is_primary: ann.is_primary,
is_file_level: ann.is_file_level,
hide_snippet: ann.hide_snippet,
}
}
@@ -758,7 +771,7 @@ impl<'r> RenderableAnnotation<'r> {
if let Some(message) = self.message {
ann = ann.label(message);
}
ann.is_file_level(self.is_file_level)
ann.hide_snippet(self.hide_snippet)
}
}
@@ -2618,7 +2631,7 @@ watermelon
/// Show fix availability when rendering.
pub(super) fn show_fix_status(&mut self, yes: bool) {
let mut config = std::mem::take(&mut self.config);
config = config.show_fix_status(yes);
config = config.with_show_fix_status(yes);
self.config = config;
}
@@ -2632,7 +2645,7 @@ watermelon
/// The lowest fix applicability to show when rendering.
pub(super) fn fix_applicability(&mut self, applicability: Applicability) {
let mut config = std::mem::take(&mut self.config);
config = config.fix_applicability(applicability);
config = config.with_fix_applicability(applicability);
self.config = config;
}

View File

@@ -366,6 +366,7 @@ mod tests {
fn hide_severity_output() {
let (mut env, diagnostics) = create_diagnostics(DiagnosticFormat::Full);
env.hide_severity(true);
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r#"
@@ -572,7 +573,7 @@ print()
let mut diagnostic = env.err().build();
let span = env.path("example.py").with_range(TextRange::default());
let mut annotation = Annotation::primary(span);
annotation.set_file_level(true);
annotation.hide_snippet(true);
diagnostic.annotate(annotation);
insta::assert_snapshot!(env.render(&diagnostic), @r"
@@ -584,7 +585,8 @@ print()
/// Check that ranges in notebooks are remapped relative to the cells.
#[test]
fn notebook_output() {
let (env, diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
let (mut env, diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
env.show_fix_status(true);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[unused-import][*]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
@@ -698,6 +700,7 @@ print()
fn notebook_output_with_diff() {
let (mut env, diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
env.show_fix_diff(true);
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
@@ -752,6 +755,7 @@ print()
fn notebook_output_with_diff_spanning_cells() {
let (mut env, mut diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
env.show_fix_diff(true);
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
// Move all of the edits from the later diagnostics to the first diagnostic to simulate a
@@ -928,6 +932,7 @@ line 10
env.add("example.py", contents);
env.format(DiagnosticFormat::Full);
env.show_fix_diff(true);
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
let mut diagnostic = env.err().primary("example.py", "3", "3", "label").build();

View File

@@ -31,6 +31,29 @@ impl Payload {
}
}
impl PanicError {
pub fn to_diagnostic_message(&self, path: Option<impl std::fmt::Display>) -> String {
use std::fmt::Write;
let mut message = String::new();
message.push_str("Panicked");
if let Some(location) = &self.location {
let _ = write!(&mut message, " at {location}");
}
if let Some(path) = path {
let _ = write!(&mut message, " when checking `{path}`");
}
if let Some(payload) = self.payload.as_str() {
let _ = write!(&mut message, ": `{payload}`");
}
message
}
}
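The pieces of the message compose left to right, each one optional, yielding `Panicked[ at <location>][ when checking `<path>`][: `<payload>`]`. A standalone sketch of the same assembly with the `PanicError` fields flattened into plain `Option` parameters (types simplified from the diff):

```rust
use std::fmt::Write;

/// Stand-in for `PanicError::to_diagnostic_message`: builds the panic
/// headline, appending each optional fragment only when it is present.
fn panic_message(location: Option<&str>, path: Option<&str>, payload: Option<&str>) -> String {
    let mut message = String::from("Panicked");
    if let Some(location) = location {
        let _ = write!(&mut message, " at {location}");
    }
    if let Some(path) = path {
        let _ = write!(&mut message, " when checking `{path}`");
    }
    if let Some(payload) = payload {
        let _ = write!(&mut message, ": `{payload}`");
    }
    message
}
```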
impl std::fmt::Display for PanicError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "panicked at")?;

View File

@@ -38,12 +38,12 @@ impl std::fmt::Display for FormatError {
),
FormatError::InvalidDocument(error) => std::write!(
fmt,
"Invalid document: {error}\n\n This is an internal Rome error. Please report if necessary."
"Invalid document: {error}\n\n This is an internal Ruff error. Please report if necessary."
),
FormatError::PoorLayout => {
std::write!(
fmt,
"Poor layout: The formatter wasn't able to pick a good layout for your document. This is an internal Rome error. Please report if necessary."
"Poor layout: The formatter wasn't able to pick a good layout for your document. This is an internal Ruff error. Please report if necessary."
)
}
}

View File

@@ -0,0 +1,3 @@
import concurrent.futures as futures
1

View File

@@ -0,0 +1,3 @@
import concurrent.futures as futures
1

View File

@@ -0,0 +1 @@
from builtins import str, int

View File

@@ -37,3 +37,9 @@ log(logging.INFO, "Hello %r", repr("World!"))
def str(s): return f"str = {s}"
# Don't flag this
logging.info("Hello %s", str("World!"))
logging.info("Debug info: %r", repr("test\nstring"))
logging.warning("Value: %r", repr(42))
logging.error("Error: %r", repr([1, 2, 3]))
logging.info("Debug info: %s", repr("test\nstring"))
logging.warning("Value: %s", repr(42))

View File

@@ -803,11 +803,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
for alias in names {
if let Some("__future__") = module {
if checker.is_rule_enabled(Rule::FutureFeatureNotDefined) {
pyflakes::rules::future_feature_not_defined(checker, alias);
}
} else if &alias.name == "*" {
if module != Some("__future__") && &alias.name == "*" {
// F403
checker.report_diagnostic_if_enabled(
pyflakes::rules::UndefinedLocalWithImportStar {

View File

@@ -28,7 +28,7 @@ use itertools::Itertools;
use log::debug;
use rustc_hash::{FxHashMap, FxHashSet};
use ruff_db::diagnostic::{Annotation, Diagnostic, IntoDiagnosticMessage, Span};
use ruff_db::diagnostic::{Annotation, Diagnostic, DiagnosticTag, IntoDiagnosticMessage, Span};
use ruff_diagnostics::{Applicability, Fix, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::helpers::{collect_import_from_member, is_docstring_stmt, to_module_path};
@@ -696,6 +696,14 @@ impl SemanticSyntaxContext for Checker<'_> {
self.report_diagnostic(MultipleStarredExpressions, error.range);
}
}
SemanticSyntaxErrorKind::FutureFeatureNotDefined(name) => {
if self.is_rule_enabled(Rule::FutureFeatureNotDefined) {
self.report_diagnostic(
pyflakes::rules::FutureFeatureNotDefined { name },
error.range,
);
}
}
SemanticSyntaxErrorKind::ReboundComprehensionVariable
| SemanticSyntaxErrorKind::DuplicateTypeParameter
| SemanticSyntaxErrorKind::MultipleCaseAssignment(_)
@@ -2352,7 +2360,7 @@ impl<'a> Checker<'a> {
}
}
/// Visit an body of [`Stmt`] nodes within a type-checking block.
/// Visit a body of [`Stmt`] nodes within a type-checking block.
fn visit_type_checking_block(&mut self, body: &'a [Stmt]) {
let snapshot = self.semantic.flags;
self.semantic.flags |= SemanticModelFlags::TYPE_CHECKING_BLOCK;
@@ -3318,6 +3326,56 @@ impl DiagnosticGuard<'_, '_> {
pub(crate) fn defuse(mut self) {
self.diagnostic = None;
}
/// Set the message on the primary annotation for this diagnostic.
///
/// If a message already exists on the primary annotation, then this
/// overwrites the existing message.
///
/// This message is associated with the primary annotation created
/// for every `Diagnostic` that uses the `DiagnosticGuard` API.
/// Specifically, the annotation is derived from the `TextRange` given to
/// the `LintContext::report_diagnostic` API.
///
/// Callers can add additional primary or secondary annotations via the
/// `DerefMut` trait implementation to a `Diagnostic`.
pub(crate) fn set_primary_message(&mut self, message: impl IntoDiagnosticMessage) {
// N.B. It is normally bad juju to define `self` methods
// on types that implement `Deref`. Instead, it's idiomatic
// to do `fn foo(this: &mut LintDiagnosticGuard)`, which in
// turn forces callers to use
// `LintDiagnosticGuard(&mut guard, message)`. But this is
// supremely annoying for what is expected to be a common
// case.
//
// Moreover, most of the downside that comes from these sorts
// of methods is a semver hazard. Because the deref target type
// could also define a method by the same name, and that leads
// to confusion. But we own all the code involved here and
// there is no semver boundary. So... ¯\_(ツ)_/¯ ---AG
// OK because we know the diagnostic was constructed with a single
// primary annotation that will always come before any other annotation
// in the diagnostic. (This relies on the `Diagnostic` API not exposing
// any methods for removing annotations or re-ordering them, which is
// true as of 2025-04-11.)
let ann = self.primary_annotation_mut().unwrap();
ann.set_message(message);
}
/// Adds a tag on the primary annotation for this diagnostic.
///
/// This tag is associated with the primary annotation created
/// for every `Diagnostic` that uses the `DiagnosticGuard` API.
/// Specifically, the annotation is derived from the `TextRange` given to
/// the `LintContext::report_diagnostic` API.
///
/// Callers can add additional primary or secondary annotations via the
/// `DerefMut` trait implementation to a `Diagnostic`.
pub(crate) fn add_primary_tag(&mut self, tag: DiagnosticTag) {
let ann = self.primary_annotation_mut().unwrap();
ann.push_tag(tag);
}
}
impl DiagnosticGuard<'_, '_> {

View File

@@ -223,6 +223,11 @@ impl DisplayParseError {
pub fn path(&self) -> Option<&Path> {
self.path.as_deref()
}
/// Return the underlying [`ParseError`].
pub fn error(&self) -> &ParseError {
&self.error
}
}
impl std::error::Error for DisplayParseError {}

View File

@@ -6,17 +6,25 @@ use std::num::NonZeroUsize;
use colored::Colorize;
use ruff_db::diagnostic::Diagnostic;
use ruff_diagnostics::Applicability;
use ruff_notebook::NotebookIndex;
use ruff_source_file::{LineColumn, OneIndexed};
use crate::fs::relativize_path;
use crate::message::{Emitter, EmitterContext};
use crate::settings::types::UnsafeFixes;
#[derive(Default)]
pub struct GroupedEmitter {
show_fix_status: bool,
unsafe_fixes: UnsafeFixes,
applicability: Applicability,
}
impl Default for GroupedEmitter {
fn default() -> Self {
Self {
show_fix_status: false,
applicability: Applicability::Safe,
}
}
}
impl GroupedEmitter {
@@ -27,8 +35,8 @@ impl GroupedEmitter {
}
#[must_use]
pub fn with_unsafe_fixes(mut self, unsafe_fixes: UnsafeFixes) -> Self {
self.unsafe_fixes = unsafe_fixes;
pub fn with_applicability(mut self, applicability: Applicability) -> Self {
self.applicability = applicability;
self
}
}
@@ -67,7 +75,7 @@ impl Emitter for GroupedEmitter {
notebook_index: context.notebook_index(&message.expect_ruff_filename()),
message,
show_fix_status: self.show_fix_status,
unsafe_fixes: self.unsafe_fixes,
applicability: self.applicability,
row_length,
column_length,
}
@@ -114,7 +122,7 @@ fn group_diagnostics_by_filename(
struct DisplayGroupedMessage<'a> {
message: MessageWithLocation<'a>,
show_fix_status: bool,
unsafe_fixes: UnsafeFixes,
applicability: Applicability,
row_length: NonZeroUsize,
column_length: NonZeroUsize,
notebook_index: Option<&'a NotebookIndex>,
@@ -162,7 +170,7 @@ impl Display for DisplayGroupedMessage<'_> {
code_and_body = RuleCodeAndBody {
message,
show_fix_status: self.show_fix_status,
unsafe_fixes: self.unsafe_fixes
applicability: self.applicability
},
)?;
@@ -173,7 +181,7 @@ impl Display for DisplayGroupedMessage<'_> {
pub(super) struct RuleCodeAndBody<'a> {
pub(crate) message: &'a Diagnostic,
pub(crate) show_fix_status: bool,
pub(crate) unsafe_fixes: UnsafeFixes,
pub(crate) applicability: Applicability,
}
impl Display for RuleCodeAndBody<'_> {
@@ -181,7 +189,7 @@ impl Display for RuleCodeAndBody<'_> {
if self.show_fix_status {
if let Some(fix) = self.message.fix() {
// Do not display an indicator for inapplicable fixes
if fix.applies(self.unsafe_fixes.required_applicability()) {
if fix.applies(self.applicability) {
if let Some(code) = self.message.secondary_code() {
write!(f, "{} ", code.red().bold())?;
}
@@ -217,11 +225,12 @@ impl Display for RuleCodeAndBody<'_> {
mod tests {
use insta::assert_snapshot;
use ruff_diagnostics::Applicability;
use crate::message::GroupedEmitter;
use crate::message::tests::{
capture_emitter_output, create_diagnostics, create_syntax_error_diagnostics,
};
use crate::settings::types::UnsafeFixes;
#[test]
fn default() {
@@ -251,7 +260,7 @@ mod tests {
fn fix_status_unsafe() {
let mut emitter = GroupedEmitter::default()
.with_show_fix_status(true)
.with_unsafe_fixes(UnsafeFixes::Enabled);
.with_applicability(Applicability::Unsafe);
let content = capture_emitter_output(&mut emitter, &create_diagnostics());
assert_snapshot!(content);

View File

@@ -1,27 +1,30 @@
use std::backtrace::BacktraceStatus;
use std::fmt::Display;
use std::io::Write;
use std::path::Path;
use ruff_db::panic::PanicError;
use rustc_hash::FxHashMap;
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticId, FileResolver, Input, LintName, SecondaryCode, Severity,
Span, UnifiedFile,
Annotation, Diagnostic, DiagnosticFormat, DiagnosticId, DisplayDiagnosticConfig,
DisplayDiagnostics, DisplayGithubDiagnostics, FileResolver, GithubRenderer, Input, LintName,
SecondaryCode, Severity, Span, SubDiagnostic, SubDiagnosticSeverity, UnifiedFile,
};
use ruff_db::files::File;
pub use grouped::GroupedEmitter;
use ruff_notebook::NotebookIndex;
use ruff_source_file::SourceFile;
use ruff_source_file::{SourceFile, SourceFileBuilder};
use ruff_text_size::{Ranged, TextRange, TextSize};
pub use sarif::SarifEmitter;
pub use text::TextEmitter;
use crate::Fix;
use crate::registry::Rule;
use crate::settings::types::{OutputFormat, RuffOutputFormat};
mod grouped;
mod sarif;
mod text;
/// Creates a `Diagnostic` from a syntax error, with the format expected by Ruff.
///
@@ -41,6 +44,55 @@ pub fn create_syntax_error_diagnostic(
diag
}
/// Create a `Diagnostic` from a panic.
pub fn create_panic_diagnostic(error: &PanicError, path: Option<&Path>) -> Diagnostic {
let mut diagnostic = Diagnostic::new(
DiagnosticId::Panic,
Severity::Fatal,
error.to_diagnostic_message(path.as_ref().map(|path| path.display())),
);
diagnostic.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
"This indicates a bug in Ruff.",
));
let report_message = "If you could open an issue at \
https://github.com/astral-sh/ruff/issues/new?title=%5Bpanic%5D, \
we'd be very appreciative!";
diagnostic.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
report_message,
));
if let Some(backtrace) = &error.backtrace {
match backtrace.status() {
BacktraceStatus::Disabled => {
diagnostic.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
"run with `RUST_BACKTRACE=1` environment variable to show the full backtrace information",
));
}
BacktraceStatus::Captured => {
diagnostic.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!("Backtrace:\n{backtrace}"),
));
}
_ => {}
}
}
if let Some(path) = path {
let file = SourceFileBuilder::new(path.to_string_lossy(), "").finish();
let span = Span::from(file);
let mut annotation = Annotation::primary(span);
annotation.hide_snippet(true);
diagnostic.annotate(annotation);
}
diagnostic
}
#[expect(clippy::too_many_arguments)]
pub fn create_lint_diagnostic<B, S>(
body: B,
@@ -70,7 +122,7 @@ where
// actually need it, but we need to be able to cache the new diagnostic model first. See
// https://github.com/astral-sh/ruff/issues/19688.
if range == TextRange::default() {
annotation.set_file_level(true);
annotation.hide_snippet(true);
}
diagnostic.annotate(annotation);
@@ -160,14 +212,48 @@ impl<'a> EmitterContext<'a> {
}
}
pub fn render_diagnostics(
writer: &mut dyn Write,
format: OutputFormat,
config: DisplayDiagnosticConfig,
context: &EmitterContext<'_>,
diagnostics: &[Diagnostic],
) -> std::io::Result<()> {
match DiagnosticFormat::try_from(format) {
Ok(format) => {
let config = config.format(format);
let value = DisplayDiagnostics::new(context, &config, diagnostics);
write!(writer, "{value}")?;
}
Err(RuffOutputFormat::Github) => {
let renderer = GithubRenderer::new(context, "Ruff");
let value = DisplayGithubDiagnostics::new(&renderer, diagnostics);
write!(writer, "{value}")?;
}
Err(RuffOutputFormat::Grouped) => {
GroupedEmitter::default()
.with_show_fix_status(config.show_fix_status())
.with_applicability(config.fix_applicability())
.emit(writer, diagnostics, context)
.map_err(std::io::Error::other)?;
}
Err(RuffOutputFormat::Sarif) => {
SarifEmitter
.emit(writer, diagnostics, context)
.map_err(std::io::Error::other)?;
}
}
Ok(())
}
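`render_diagnostics` leans on a `TryFrom` conversion whose error value carries the format that has no counterpart in the shared renderer, so the legacy emitters are dispatched from the `Err` arms. The shape of that conversion, sketched with abridged enums (variant lists here are simplified assumptions, not the full sets from the diff):

```rust
/// Formats the shared diagnostic renderer understands.
enum DiagnosticFormat {
    Full,
    Concise,
}

/// The wider set of output formats; some have legacy emitters only.
enum RuffOutputFormat {
    Full,
    Concise,
    Github,
    Grouped,
    Sarif,
}

/// The error type *is* the unconverted format, so the caller's match on
/// `Err(..)` routes each remaining format straight to its legacy emitter.
impl TryFrom<RuffOutputFormat> for DiagnosticFormat {
    type Error = RuffOutputFormat;

    fn try_from(format: RuffOutputFormat) -> Result<Self, RuffOutputFormat> {
        match format {
            RuffOutputFormat::Full => Ok(DiagnosticFormat::Full),
            RuffOutputFormat::Concise => Ok(DiagnosticFormat::Concise),
            other => Err(other),
        }
    }
}
```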
#[cfg(test)]
mod tests {
use rustc_hash::FxHashMap;
use ruff_db::diagnostic::Diagnostic;
use ruff_notebook::NotebookIndex;
use ruff_python_parser::{Mode, ParseOptions, parse_unchecked};
use ruff_source_file::{OneIndexed, SourceFileBuilder};
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::{TextRange, TextSize};
use crate::codes::Rule;
@@ -257,104 +343,6 @@ def fibonacci(n):
vec![unused_import, unused_variable, undefined_name]
}
pub(super) fn create_notebook_diagnostics()
-> (Vec<Diagnostic>, FxHashMap<String, NotebookIndex>) {
let notebook = r"# cell 1
import os
# cell 2
import math
print('hello world')
# cell 3
def foo():
print()
x = 1
";
let notebook_source = SourceFileBuilder::new("notebook.ipynb", notebook).finish();
let unused_import_os_start = TextSize::from(16);
let unused_import_os = create_lint_diagnostic(
"`os` imported but unused",
Some("Remove unused import: `os`"),
TextRange::new(unused_import_os_start, TextSize::from(18)),
Some(Fix::safe_edit(Edit::range_deletion(TextRange::new(
TextSize::from(9),
TextSize::from(19),
)))),
None,
notebook_source.clone(),
Some(unused_import_os_start),
Rule::UnusedImport,
);
let unused_import_math_start = TextSize::from(35);
let unused_import_math = create_lint_diagnostic(
"`math` imported but unused",
Some("Remove unused import: `math`"),
TextRange::new(unused_import_math_start, TextSize::from(39)),
Some(Fix::safe_edit(Edit::range_deletion(TextRange::new(
TextSize::from(28),
TextSize::from(40),
)))),
None,
notebook_source.clone(),
Some(unused_import_math_start),
Rule::UnusedImport,
);
let unused_variable_start = TextSize::from(98);
let unused_variable = create_lint_diagnostic(
"Local variable `x` is assigned to but never used",
Some("Remove assignment to unused variable `x`"),
TextRange::new(unused_variable_start, TextSize::from(99)),
Some(Fix::unsafe_edit(Edit::deletion(
TextSize::from(94),
TextSize::from(104),
))),
None,
notebook_source,
Some(unused_variable_start),
Rule::UnusedVariable,
);
let mut notebook_indexes = FxHashMap::default();
notebook_indexes.insert(
"notebook.ipynb".to_string(),
NotebookIndex::new(
vec![
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(2),
],
vec![
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(3),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(3),
],
),
);
(
vec![unused_import_os, unused_import_math, unused_variable],
notebook_indexes,
)
}
pub(super) fn capture_emitter_output(
emitter: &mut dyn Emitter,
diagnostics: &[Diagnostic],
@@ -366,16 +354,4 @@ def foo():
String::from_utf8(output).expect("Output to be valid UTF-8")
}
pub(super) fn capture_emitter_notebook_output(
emitter: &mut dyn Emitter,
diagnostics: &[Diagnostic],
notebook_indexes: &FxHashMap<String, NotebookIndex>,
) -> String {
let context = EmitterContext::new(notebook_indexes);
let mut output: Vec<u8> = Vec::new();
emitter.emit(&mut output, diagnostics, &context).unwrap();
String::from_utf8(output).expect("Output to be valid UTF-8")
}
}

View File

@@ -129,7 +129,7 @@ expression: value
"rules": [
{
"fullDescription": {
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. 
Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\nSee [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc)\nfor more details on how Ruff\ndetermines whether an import is first or third-party.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)\n"
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Preview\nWhen [preview] is enabled (and certain simplifying assumptions\nare met), we analyze all import statements for a given module\nwhen determining whether an import is used, rather than simply\nthe last of these statements. This can result in both different and\nmore import statements being marked as unused.\n\nFor example, if a module consists of\n\n```python\nimport a\nimport a.b\n```\n\nthen both statements are marked as unused under [preview], whereas\nonly the second is marked as unused under stable behavior.\n\nAs another example, if a module consists of\n\n```python\nimport a.b\nimport a\n\na.b.foo()\n```\n\nthen a diagnostic will only be emitted for the first line under [preview],\nwhereas a diagnostic would only be emitted for the second line under\nstable behavior.\n\nNote that this behavior is somewhat subjective and is designed\nto conform to the developer's intuition rather than Python's actual\nexecution. To wit, the statement `import a.b` automatically executes\n`import a`, so in some sense `import a` is _always_ redundant\nin the presence of `import a.b`.\n\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\nSee [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc)\nfor more details on how Ruff\ndetermines whether an import is first or third-party.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)\n\n[preview]: https://docs.astral.sh/ruff/preview/\n"
},
"help": {
"text": "`{name}` imported but unused; consider using `importlib.util.find_spec` to test for availability"

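The docstring above notes that `import a.b` automatically executes `import a`, which is why the preview behavior treats the two statements as related. A minimal illustration of that submodule-import semantics, using only standard-library modules:

```python
import sys

# Importing a submodule also imports the parent package and binds its name:
import os.path

# "os" is now usable even though we never wrote "import os".
assert "os" in sys.modules
print(os.sep.join(["usr", "lib"]))  # attributes of the parent module resolve
```

This is the sense in which `import a` is "always redundant" next to `import a.b`: the parent binding exists either way, and only the developer's intent distinguishes the two statements.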
@@ -1,30 +0,0 @@
---
source: crates/ruff_linter/src/message/text.rs
expression: content
---
F401 `os` imported but unused
--> fib.py:1:8
|
1 | import os
| ^^
|
help: Remove unused import: `os`
F841 Local variable `x` is assigned to but never used
--> fib.py:6:5
|
4 | def fibonacci(n):
5 | """Compute the nth number in the Fibonacci sequence."""
6 | x = 1
| ^
7 | if n == 0:
8 | return 0
|
help: Remove assignment to unused variable `x`
F821 Undefined name `a`
--> undef.py:1:4
|
1 | if a == 1: pass
| ^
|

@@ -1,30 +0,0 @@
---
source: crates/ruff_linter/src/message/text.rs
expression: content
---
F401 `os` imported but unused
--> fib.py:1:8
|
1 | import os
| ^^
|
help: Remove unused import: `os`
F841 Local variable `x` is assigned to but never used
--> fib.py:6:5
|
4 | def fibonacci(n):
5 | """Compute the nth number in the Fibonacci sequence."""
6 | x = 1
| ^
7 | if n == 0:
8 | return 0
|
help: Remove assignment to unused variable `x`
F821 Undefined name `a`
--> undef.py:1:4
|
1 | if a == 1: pass
| ^
|

@@ -1,30 +0,0 @@
---
source: crates/ruff_linter/src/message/text.rs
expression: content
---
F401 [*] `os` imported but unused
--> fib.py:1:8
|
1 | import os
| ^^
|
help: Remove unused import: `os`
F841 [*] Local variable `x` is assigned to but never used
--> fib.py:6:5
|
4 | def fibonacci(n):
5 | """Compute the nth number in the Fibonacci sequence."""
6 | x = 1
| ^
7 | if n == 0:
8 | return 0
|
help: Remove assignment to unused variable `x`
F821 Undefined name `a`
--> undef.py:1:4
|
1 | if a == 1: pass
| ^
|

@@ -1,33 +0,0 @@
---
source: crates/ruff_linter/src/message/text.rs
expression: content
---
F401 [*] `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
help: Remove unused import: `os`
F401 [*] `math` imported but unused
--> notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ^^^^
3 |
4 | print('hello world')
|
help: Remove unused import: `math`
F841 [*] Local variable `x` is assigned to but never used
--> notebook.ipynb:cell 3:4:5
|
2 | def foo():
3 | print()
4 | x = 1
| ^
|
help: Remove assignment to unused variable `x`

@@ -1,23 +0,0 @@
---
source: crates/ruff_linter/src/message/text.rs
expression: content
---
invalid-syntax: Expected one or more symbol names after import
--> syntax_errors.py:1:15
|
1 | from os import
| ^
2 |
3 | if call(foo
|
invalid-syntax: Expected ')', found newline
--> syntax_errors.py:3:12
|
1 | from os import
2 |
3 | if call(foo
| ^
4 | def bar():
5 | pass
|

@@ -1,143 +0,0 @@
use std::io::Write;
use ruff_db::diagnostic::{
Diagnostic, DiagnosticFormat, DisplayDiagnosticConfig, DisplayDiagnostics,
};
use ruff_diagnostics::Applicability;
use crate::message::{Emitter, EmitterContext};
pub struct TextEmitter {
config: DisplayDiagnosticConfig,
}
impl Default for TextEmitter {
fn default() -> Self {
Self {
config: DisplayDiagnosticConfig::default()
.format(DiagnosticFormat::Concise)
.hide_severity(true)
.color(!cfg!(test) && colored::control::SHOULD_COLORIZE.should_colorize()),
}
}
}
impl TextEmitter {
#[must_use]
pub fn with_show_fix_status(mut self, show_fix_status: bool) -> Self {
self.config = self.config.show_fix_status(show_fix_status);
self
}
#[must_use]
pub fn with_show_fix_diff(mut self, show_fix_diff: bool) -> Self {
self.config = self.config.show_fix_diff(show_fix_diff);
self
}
#[must_use]
pub fn with_show_source(mut self, show_source: bool) -> Self {
self.config = self.config.format(if show_source {
DiagnosticFormat::Full
} else {
DiagnosticFormat::Concise
});
self
}
#[must_use]
pub fn with_fix_applicability(mut self, applicability: Applicability) -> Self {
self.config = self.config.fix_applicability(applicability);
self
}
#[must_use]
pub fn with_preview(mut self, preview: bool) -> Self {
self.config = self.config.preview(preview);
self
}
#[must_use]
pub fn with_color(mut self, color: bool) -> Self {
self.config = self.config.color(color);
self
}
}
impl Emitter for TextEmitter {
fn emit(
&mut self,
writer: &mut dyn Write,
diagnostics: &[Diagnostic],
context: &EmitterContext,
) -> anyhow::Result<()> {
write!(
writer,
"{}",
DisplayDiagnostics::new(context, &self.config, diagnostics)
)?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use insta::assert_snapshot;
use ruff_diagnostics::Applicability;
use crate::message::TextEmitter;
use crate::message::tests::{
capture_emitter_notebook_output, capture_emitter_output, create_diagnostics,
create_notebook_diagnostics, create_syntax_error_diagnostics,
};
#[test]
fn default() {
let mut emitter = TextEmitter::default().with_show_source(true);
let content = capture_emitter_output(&mut emitter, &create_diagnostics());
assert_snapshot!(content);
}
#[test]
fn fix_status() {
let mut emitter = TextEmitter::default()
.with_show_fix_status(true)
.with_show_source(true);
let content = capture_emitter_output(&mut emitter, &create_diagnostics());
assert_snapshot!(content);
}
#[test]
fn fix_status_unsafe() {
let mut emitter = TextEmitter::default()
.with_show_fix_status(true)
.with_show_source(true)
.with_fix_applicability(Applicability::Unsafe);
let content = capture_emitter_output(&mut emitter, &create_diagnostics());
assert_snapshot!(content);
}
#[test]
fn notebook_output() {
let mut emitter = TextEmitter::default()
.with_show_fix_status(true)
.with_show_source(true)
.with_fix_applicability(Applicability::Unsafe);
let (messages, notebook_indexes) = create_notebook_diagnostics();
let content = capture_emitter_notebook_output(&mut emitter, &messages, &notebook_indexes);
assert_snapshot!(content);
}
#[test]
fn syntax_errors() {
let mut emitter = TextEmitter::default().with_show_source(true);
let content = capture_emitter_output(&mut emitter, &create_syntax_error_diagnostics());
assert_snapshot!(content);
}
}

@@ -235,3 +235,8 @@ pub(crate) const fn is_a003_class_scope_shadowing_expansion_enabled(
) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/20200
pub(crate) const fn is_refined_submodule_import_match_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}

@@ -17,13 +17,13 @@ use ruff_text_size::TextRange;
pub(crate) enum Replacement {
// There's no replacement or suggestion other than removal
None,
// Additional information. Used when there's no direct mapping replacement.
Message(&'static str),
// The attribute name of a class has been changed.
AttrName(&'static str),
// Additional information. Used when there's a replacement, but it's not a direct mapping.
Message(&'static str),
// Symbols updated in Airflow 3 with replacement
// e.g., `airflow.datasets.Dataset` to `airflow.sdk.Asset`
AutoImport {
Rename {
module: &'static str,
name: &'static str,
},
@@ -37,7 +37,7 @@ pub(crate) enum Replacement {
#[derive(Clone, Debug, Eq, PartialEq)]
pub(crate) enum ProviderReplacement {
AutoImport {
Rename {
module: &'static str,
name: &'static str,
provider: &'static str,

@@ -50,7 +50,7 @@ impl Violation for Airflow3MovedToProvider<'_> {
replacement,
} = self;
match replacement {
ProviderReplacement::AutoImport {
ProviderReplacement::Rename {
name: _,
module: _,
provider,
@@ -70,7 +70,7 @@ impl Violation for Airflow3MovedToProvider<'_> {
fn fix_title(&self) -> Option<String> {
let Airflow3MovedToProvider { replacement, .. } = self;
if let Some((module, name, provider, version)) = match &replacement {
ProviderReplacement::AutoImport {
ProviderReplacement::Rename {
module,
name,
provider,
@@ -125,20 +125,18 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "amazon",
version: "1.0.0",
},
["airflow", "operators", "gcs_to_s3", "GCSToS3Operator"] => {
ProviderReplacement::AutoImport {
module: "airflow.providers.amazon.aws.transfers.gcs_to_s3",
name: "GCSToS3Operator",
provider: "amazon",
version: "1.0.0",
}
}
["airflow", "operators", "gcs_to_s3", "GCSToS3Operator"] => ProviderReplacement::Rename {
module: "airflow.providers.amazon.aws.transfers.gcs_to_s3",
name: "GCSToS3Operator",
provider: "amazon",
version: "1.0.0",
},
[
"airflow",
"operators",
"google_api_to_s3_transfer",
"GoogleApiToS3Operator" | "GoogleApiToS3Transfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.amazon.aws.transfers.google_api_to_s3",
name: "GoogleApiToS3Operator",
provider: "amazon",
@@ -149,7 +147,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"redshift_to_s3_operator",
"RedshiftToS3Operator" | "RedshiftToS3Transfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.amazon.aws.transfers.redshift_to_s3",
name: "RedshiftToS3Operator",
provider: "amazon",
@@ -160,7 +158,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"s3_file_transform_operator",
"S3FileTransformOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.amazon.aws.operators.s3",
name: "S3FileTransformOperator",
provider: "amazon",
@@ -171,13 +169,13 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"s3_to_redshift_operator",
"S3ToRedshiftOperator" | "S3ToRedshiftTransfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.amazon.aws.transfers.s3_to_redshift",
name: "S3ToRedshiftOperator",
provider: "amazon",
version: "1.0.0",
},
["airflow", "sensors", "s3_key_sensor", "S3KeySensor"] => ProviderReplacement::AutoImport {
["airflow", "sensors", "s3_key_sensor", "S3KeySensor"] => ProviderReplacement::Rename {
module: "airflow.providers.amazon.aws.sensors.s3",
name: "S3KeySensor",
provider: "amazon",
@@ -190,20 +188,20 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"config_templates",
"default_celery",
"DEFAULT_CELERY_CONFIG",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.celery.executors.default_celery",
name: "DEFAULT_CELERY_CONFIG",
provider: "celery",
version: "3.3.0",
},
["airflow", "executors", "celery_executor", rest] => match *rest {
"app" => ProviderReplacement::AutoImport {
"app" => ProviderReplacement::Rename {
module: "airflow.providers.celery.executors.celery_executor_utils",
name: "app",
provider: "celery",
version: "3.3.0",
},
"CeleryExecutor" => ProviderReplacement::AutoImport {
"CeleryExecutor" => ProviderReplacement::Rename {
module: "airflow.providers.celery.executors.celery_executor",
name: "CeleryExecutor",
provider: "celery",
@@ -216,7 +214,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"executors",
"celery_kubernetes_executor",
"CeleryKubernetesExecutor",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.celery.executors.celery_kubernetes_executor",
name: "CeleryKubernetesExecutor",
provider: "celery",
@@ -235,7 +233,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "common-sql",
version: "1.0.0",
},
["airflow", "hooks", "dbapi_hook", "DbApiHook"] => ProviderReplacement::AutoImport {
["airflow", "hooks", "dbapi_hook", "DbApiHook"] => ProviderReplacement::Rename {
module: "airflow.providers.common.sql.hooks.sql",
name: "DbApiHook",
provider: "common-sql",
@@ -252,7 +250,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"check_operator" | "druid_check_operator" | "presto_check_operator",
"CheckOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLCheckOperator",
provider: "common-sql",
@@ -269,7 +267,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"presto_check_operator",
"PrestoCheckOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLCheckOperator",
provider: "common-sql",
@@ -288,7 +286,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"IntervalCheckOperator",
]
| ["airflow", "operators", "sql", "SQLIntervalCheckOperator"] => {
ProviderReplacement::AutoImport {
ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLIntervalCheckOperator",
provider: "common-sql",
@@ -300,7 +298,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"presto_check_operator",
"PrestoIntervalCheckOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLIntervalCheckOperator",
provider: "common-sql",
@@ -313,7 +311,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"SQLThresholdCheckOperator" | "ThresholdCheckOperator",
]
| ["airflow", "operators", "sql", "SQLThresholdCheckOperator"] => {
ProviderReplacement::AutoImport {
ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLThresholdCheckOperator",
provider: "common-sql",
@@ -332,20 +330,18 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"presto_check_operator",
"ValueCheckOperator",
]
| ["airflow", "operators", "sql", "SQLValueCheckOperator"] => {
ProviderReplacement::AutoImport {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLValueCheckOperator",
provider: "common-sql",
version: "1.1.0",
}
}
| ["airflow", "operators", "sql", "SQLValueCheckOperator"] => ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLValueCheckOperator",
provider: "common-sql",
version: "1.1.0",
},
[
"airflow",
"operators",
"presto_check_operator",
"PrestoValueCheckOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLValueCheckOperator",
provider: "common-sql",
@@ -370,14 +366,12 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
}
_ => return,
},
["airflow", "sensors", "sql" | "sql_sensor", "SqlSensor"] => {
ProviderReplacement::AutoImport {
module: "airflow.providers.common.sql.sensors.sql",
name: "SqlSensor",
provider: "common-sql",
version: "1.0.0",
}
}
["airflow", "sensors", "sql" | "sql_sensor", "SqlSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.common.sql.sensors.sql",
name: "SqlSensor",
provider: "common-sql",
version: "1.0.0",
},
["airflow", "operators", "jdbc_operator", "JdbcOperator"]
| ["airflow", "operators", "mssql_operator", "MsSqlOperator"]
| ["airflow", "operators", "mysql_operator", "MySqlOperator"]
@@ -389,7 +383,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"PostgresOperator",
]
| ["airflow", "operators", "sqlite_operator", "SqliteOperator"] => {
ProviderReplacement::AutoImport {
ProviderReplacement::Rename {
module: "airflow.providers.common.sql.operators.sql",
name: "SQLExecuteQueryOperator",
provider: "common-sql",
@@ -398,24 +392,22 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
}
// apache-airflow-providers-daskexecutor
["airflow", "executors", "dask_executor", "DaskExecutor"] => {
ProviderReplacement::AutoImport {
module: "airflow.providers.daskexecutor.executors.dask_executor",
name: "DaskExecutor",
provider: "daskexecutor",
version: "1.0.0",
}
}
["airflow", "executors", "dask_executor", "DaskExecutor"] => ProviderReplacement::Rename {
module: "airflow.providers.daskexecutor.executors.dask_executor",
name: "DaskExecutor",
provider: "daskexecutor",
version: "1.0.0",
},
// apache-airflow-providers-docker
["airflow", "hooks", "docker_hook", "DockerHook"] => ProviderReplacement::AutoImport {
["airflow", "hooks", "docker_hook", "DockerHook"] => ProviderReplacement::Rename {
module: "airflow.providers.docker.hooks.docker",
name: "DockerHook",
provider: "docker",
version: "1.0.0",
},
["airflow", "operators", "docker_operator", "DockerOperator"] => {
ProviderReplacement::AutoImport {
ProviderReplacement::Rename {
module: "airflow.providers.docker.operators.docker",
name: "DockerOperator",
provider: "docker",
@@ -440,7 +432,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"hive_to_druid",
"HiveToDruidOperator" | "HiveToDruidTransfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.druid.transfers.hive_to_druid",
name: "HiveToDruidOperator",
provider: "apache-druid",
@@ -497,7 +489,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"fab",
"fab_auth_manager",
"FabAuthManager",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.fab.auth_manager.fab_auth_manager",
name: "FabAuthManager",
provider: "fab",
@@ -511,7 +503,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"security_manager",
"override",
"MAX_NUM_DATABASE_USER_SESSIONS",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.fab.auth_manager.security_manager.override",
name: "MAX_NUM_DATABASE_USER_SESSIONS",
provider: "fab",
@@ -531,7 +523,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"www",
"security",
"FabAirflowSecurityManagerOverride",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.fab.auth_manager.security_manager.override",
name: "FabAirflowSecurityManagerOverride",
provider: "fab",
@@ -539,20 +531,18 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-apache-hdfs
["airflow", "hooks", "webhdfs_hook", "WebHDFSHook"] => ProviderReplacement::AutoImport {
["airflow", "hooks", "webhdfs_hook", "WebHDFSHook"] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hdfs.hooks.webhdfs",
name: "WebHDFSHook",
provider: "apache-hdfs",
version: "1.0.0",
},
["airflow", "sensors", "web_hdfs_sensor", "WebHdfsSensor"] => {
ProviderReplacement::AutoImport {
module: "airflow.providers.apache.hdfs.sensors.web_hdfs",
name: "WebHdfsSensor",
provider: "apache-hdfs",
version: "1.0.0",
}
}
["airflow", "sensors", "web_hdfs_sensor", "WebHdfsSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hdfs.sensors.web_hdfs",
name: "WebHdfsSensor",
provider: "apache-hdfs",
version: "1.0.0",
},
// apache-airflow-providers-apache-hive
[
@@ -580,20 +570,18 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "apache-hive",
version: "5.1.0",
},
["airflow", "operators", "hive_operator", "HiveOperator"] => {
ProviderReplacement::AutoImport {
module: "airflow.providers.apache.hive.operators.hive",
name: "HiveOperator",
provider: "apache-hive",
version: "1.0.0",
}
}
["airflow", "operators", "hive_operator", "HiveOperator"] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.operators.hive",
name: "HiveOperator",
provider: "apache-hive",
version: "1.0.0",
},
[
"airflow",
"operators",
"hive_stats_operator",
"HiveStatsCollectionOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.operators.hive_stats",
name: "HiveStatsCollectionOperator",
provider: "apache-hive",
@@ -604,7 +592,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"hive_to_mysql",
"HiveToMySqlOperator" | "HiveToMySqlTransfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.transfers.hive_to_mysql",
name: "HiveToMySqlOperator",
provider: "apache-hive",
@@ -615,7 +603,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"hive_to_samba_operator",
"HiveToSambaOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.transfers.hive_to_samba",
name: "HiveToSambaOperator",
provider: "apache-hive",
@@ -626,7 +614,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"mssql_to_hive",
"MsSqlToHiveOperator" | "MsSqlToHiveTransfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.transfers.mssql_to_hive",
name: "MsSqlToHiveOperator",
provider: "apache-hive",
@@ -637,7 +625,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"mysql_to_hive",
"MySqlToHiveOperator" | "MySqlToHiveTransfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.transfers.mysql_to_hive",
name: "MySqlToHiveOperator",
provider: "apache-hive",
@@ -648,7 +636,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"s3_to_hive_operator",
"S3ToHiveOperator" | "S3ToHiveTransfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.transfers.s3_to_hive",
name: "S3ToHiveOperator",
provider: "apache-hive",
@@ -659,7 +647,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"sensors",
"hive_partition_sensor",
"HivePartitionSensor",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.sensors.hive_partition",
name: "HivePartitionSensor",
provider: "apache-hive",
@@ -670,7 +658,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"sensors",
"metastore_partition_sensor",
"MetastorePartitionSensor",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.sensors.metastore_partition",
name: "MetastorePartitionSensor",
provider: "apache-hive",
@@ -681,7 +669,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"sensors",
"named_hive_partition_sensor",
"NamedHivePartitionSensor",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.apache.hive.sensors.named_hive_partition",
name: "NamedHivePartitionSensor",
provider: "apache-hive",
@@ -689,7 +677,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-http
["airflow", "hooks", "http_hook", "HttpHook"] => ProviderReplacement::AutoImport {
["airflow", "hooks", "http_hook", "HttpHook"] => ProviderReplacement::Rename {
module: "airflow.providers.http.hooks.http",
name: "HttpHook",
provider: "http",
@@ -700,13 +688,13 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"http_operator",
"SimpleHttpOperator",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.http.operators.http",
name: "HttpOperator",
provider: "http",
version: "5.0.0",
},
["airflow", "sensors", "http_sensor", "HttpSensor"] => ProviderReplacement::AutoImport {
["airflow", "sensors", "http_sensor", "HttpSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.http.sensors.http",
name: "HttpSensor",
provider: "http",
@@ -765,7 +753,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"kubernetes",
"kubernetes_helper_functions",
"add_pod_suffix",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.kubernetes_helper_functions",
name: "add_unique_suffix",
provider: "cncf-kubernetes",
@@ -776,7 +764,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"kubernetes",
"kubernetes_helper_functions",
"create_pod_id",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.kubernetes_helper_functions",
name: "create_unique_id",
provider: "cncf-kubernetes",
@@ -797,13 +785,13 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
version: "7.4.0",
},
["airflow", "kubernetes", "pod", rest] => match *rest {
"Port" => ProviderReplacement::AutoImport {
"Port" => ProviderReplacement::Rename {
module: "kubernetes.client.models",
name: "V1ContainerPort",
provider: "cncf-kubernetes",
version: "7.4.0",
},
"Resources" => ProviderReplacement::AutoImport {
"Resources" => ProviderReplacement::Rename {
module: "kubernetes.client.models",
name: "V1ResourceRequirements",
provider: "cncf-kubernetes",
@@ -823,19 +811,19 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "cncf-kubernetes",
version: "7.4.0",
},
"PodDefaults" => ProviderReplacement::AutoImport {
"PodDefaults" => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.utils.xcom_sidecar",
name: "PodDefaults",
provider: "cncf-kubernetes",
version: "7.4.0",
},
"PodGeneratorDeprecated" => ProviderReplacement::AutoImport {
"PodGeneratorDeprecated" => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.pod_generator",
name: "PodGenerator",
provider: "cncf-kubernetes",
version: "7.4.0",
},
"add_pod_suffix" => ProviderReplacement::AutoImport {
"add_pod_suffix" => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.kubernetes_helper_functions",
name: "add_unique_suffix",
provider: "cncf-kubernetes",
@@ -865,7 +853,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"kubernetes",
"pod_generator_deprecated" | "pod_launcher_deprecated",
"PodDefaults",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.utils.xcom_sidecar",
name: "PodDefaults",
provider: "cncf-kubernetes",
@@ -876,7 +864,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"kubernetes",
"pod_launcher_deprecated",
"get_kube_client",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.kube_client",
name: "get_kube_client",
provider: "cncf-kubernetes",
@@ -887,7 +875,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"kubernetes",
"pod_launcher" | "pod_launcher_deprecated",
"PodLauncher",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.utils.pod_manager",
name: "PodManager",
provider: "cncf-kubernetes",
@@ -898,7 +886,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"kubernetes",
"pod_launcher" | "pod_launcher_deprecated",
"PodStatus",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.utils.pod_manager",
name: "PodPhase",
provider: "cncf-kubernetes",
@@ -909,20 +897,20 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"kubernetes",
"pod_runtime_info_env",
"PodRuntimeInfoEnv",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "kubernetes.client.models",
name: "V1EnvVar",
provider: "cncf-kubernetes",
version: "7.4.0",
},
["airflow", "kubernetes", "secret", rest] => match *rest {
"K8SModel" => ProviderReplacement::AutoImport {
"K8SModel" => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.k8s_model",
name: "K8SModel",
provider: "cncf-kubernetes",
version: "7.4.0",
},
"Secret" => ProviderReplacement::AutoImport {
"Secret" => ProviderReplacement::Rename {
module: "airflow.providers.cncf.kubernetes.secret",
name: "Secret",
provider: "cncf-kubernetes",
@@ -930,23 +918,21 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
_ => return,
},
["airflow", "kubernetes", "volume", "Volume"] => ProviderReplacement::AutoImport {
["airflow", "kubernetes", "volume", "Volume"] => ProviderReplacement::Rename {
module: "kubernetes.client.models",
name: "V1Volume",
provider: "cncf-kubernetes",
version: "7.4.0",
},
["airflow", "kubernetes", "volume_mount", "VolumeMount"] => {
ProviderReplacement::AutoImport {
module: "kubernetes.client.models",
name: "V1VolumeMount",
provider: "cncf-kubernetes",
version: "7.4.0",
}
}
["airflow", "kubernetes", "volume_mount", "VolumeMount"] => ProviderReplacement::Rename {
module: "kubernetes.client.models",
name: "V1VolumeMount",
provider: "cncf-kubernetes",
version: "7.4.0",
},
// apache-airflow-providers-microsoft-mssql
["airflow", "hooks", "mssql_hook", "MsSqlHook"] => ProviderReplacement::AutoImport {
["airflow", "hooks", "mssql_hook", "MsSqlHook"] => ProviderReplacement::Rename {
module: "airflow.providers.microsoft.mssql.hooks.mssql",
name: "MsSqlHook",
provider: "microsoft-mssql",
@@ -954,7 +940,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-mysql
["airflow", "hooks", "mysql_hook", "MySqlHook"] => ProviderReplacement::AutoImport {
["airflow", "hooks", "mysql_hook", "MySqlHook"] => ProviderReplacement::Rename {
module: "airflow.providers.mysql.hooks.mysql",
name: "MySqlHook",
provider: "mysql",
@@ -965,7 +951,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"presto_to_mysql",
"PrestoToMySqlOperator" | "PrestoToMySqlTransfer",
] => ProviderReplacement::AutoImport {
] => ProviderReplacement::Rename {
module: "airflow.providers.mysql.transfers.presto_to_mysql",
name: "PrestoToMySqlOperator",
provider: "mysql",
@@ -973,7 +959,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-oracle
-["airflow", "hooks", "oracle_hook", "OracleHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "oracle_hook", "OracleHook"] => ProviderReplacement::Rename {
module: "airflow.providers.oracle.hooks.oracle",
name: "OracleHook",
provider: "oracle",
@@ -986,7 +972,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"papermill_operator",
"PapermillOperator",
-] => ProviderReplacement::AutoImport {
+] => ProviderReplacement::Rename {
module: "airflow.providers.papermill.operators.papermill",
name: "PapermillOperator",
provider: "papermill",
@@ -994,23 +980,21 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-apache-pig
-["airflow", "hooks", "pig_hook", "PigCliHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "pig_hook", "PigCliHook"] => ProviderReplacement::Rename {
module: "airflow.providers.apache.pig.hooks.pig",
name: "PigCliHook",
provider: "apache-pig",
version: "1.0.0",
},
-["airflow", "operators", "pig_operator", "PigOperator"] => {
-ProviderReplacement::AutoImport {
-module: "airflow.providers.apache.pig.operators.pig",
-name: "PigOperator",
-provider: "apache-pig",
-version: "1.0.0",
-}
-}
+["airflow", "operators", "pig_operator", "PigOperator"] => ProviderReplacement::Rename {
+module: "airflow.providers.apache.pig.operators.pig",
+name: "PigOperator",
+provider: "apache-pig",
+version: "1.0.0",
+},
// apache-airflow-providers-postgres
-["airflow", "hooks", "postgres_hook", "PostgresHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "postgres_hook", "PostgresHook"] => ProviderReplacement::Rename {
module: "airflow.providers.postgres.hooks.postgres",
name: "PostgresHook",
provider: "postgres",
@@ -1018,7 +1002,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-presto
-["airflow", "hooks", "presto_hook", "PrestoHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "presto_hook", "PrestoHook"] => ProviderReplacement::Rename {
module: "airflow.providers.presto.hooks.presto",
name: "PrestoHook",
provider: "presto",
@@ -1026,7 +1010,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-samba
-["airflow", "hooks", "samba_hook", "SambaHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "samba_hook", "SambaHook"] => ProviderReplacement::Rename {
module: "airflow.providers.samba.hooks.samba",
name: "SambaHook",
provider: "samba",
@@ -1034,7 +1018,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-slack
-["airflow", "hooks", "slack_hook", "SlackHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "slack_hook", "SlackHook"] => ProviderReplacement::Rename {
module: "airflow.providers.slack.hooks.slack",
name: "SlackHook",
provider: "slack",
@@ -1058,7 +1042,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"email_operator" | "email",
"EmailOperator",
-] => ProviderReplacement::AutoImport {
+] => ProviderReplacement::Rename {
module: "airflow.providers.smtp.operators.smtp",
name: "EmailOperator",
provider: "smtp",
@@ -1066,7 +1050,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-sqlite
-["airflow", "hooks", "sqlite_hook", "SqliteHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "sqlite_hook", "SqliteHook"] => ProviderReplacement::Rename {
module: "airflow.providers.sqlite.hooks.sqlite",
name: "SqliteHook",
provider: "sqlite",
@@ -1074,7 +1058,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
},
// apache-airflow-providers-zendesk
-["airflow", "hooks", "zendesk_hook", "ZendeskHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "zendesk_hook", "ZendeskHook"] => ProviderReplacement::Rename {
module: "airflow.providers.zendesk.hooks.zendesk",
name: "ZendeskHook",
provider: "zendesk",
@@ -1093,14 +1077,12 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "standard",
version: "0.0.3",
},
-["airflow", "operators", "bash_operator", "BashOperator"] => {
-ProviderReplacement::AutoImport {
-module: "airflow.providers.standard.operators.bash",
-name: "BashOperator",
-provider: "standard",
-version: "0.0.1",
-}
-}
+["airflow", "operators", "bash_operator", "BashOperator"] => ProviderReplacement::Rename {
+module: "airflow.providers.standard.operators.bash",
+name: "BashOperator",
+provider: "standard",
+version: "0.0.1",
+},
[
"airflow",
"operators",
@@ -1117,14 +1099,14 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"trigger_dagrun",
"TriggerDagRunLink",
-] => ProviderReplacement::AutoImport {
+] => ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.trigger_dagrun",
name: "TriggerDagRunLink",
provider: "standard",
version: "0.0.2",
},
["airflow", "operators", "datetime", "target_times_as_dates"] => {
-ProviderReplacement::AutoImport {
+ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.datetime",
name: "target_times_as_dates",
provider: "standard",
@@ -1136,7 +1118,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"dummy" | "dummy_operator",
"EmptyOperator" | "DummyOperator",
-] => ProviderReplacement::AutoImport {
+] => ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.empty",
name: "EmptyOperator",
provider: "standard",
@@ -1147,7 +1129,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"latest_only_operator",
"LatestOnlyOperator",
-] => ProviderReplacement::AutoImport {
+] => ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.latest_only",
name: "LatestOnlyOperator",
provider: "standard",
@@ -1172,7 +1154,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"sensors",
"external_task" | "external_task_sensor",
"ExternalTaskSensorLink",
-] => ProviderReplacement::AutoImport {
+] => ProviderReplacement::Rename {
module: "airflow.providers.standard.sensors.external_task",
name: "ExternalDagLink",
provider: "standard",
@@ -1189,7 +1171,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "standard",
version: "0.0.3",
},
-["airflow", "sensors", "time_delta", "WaitSensor"] => ProviderReplacement::AutoImport {
+["airflow", "sensors", "time_delta", "WaitSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.sensors.time_delta",
name: "WaitSensor",
provider: "standard",
@@ -1200,7 +1182,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
};
let (module, name) = match &replacement {
-ProviderReplacement::AutoImport { module, name, .. } => (module, *name),
+ProviderReplacement::Rename { module, name, .. } => (module, *name),
ProviderReplacement::SourceModuleMovedToProvider { module, name, .. } => {
(module, name.as_str())
}

View File

@@ -59,7 +59,7 @@ impl Violation for Airflow3Removal {
Replacement::None
| Replacement::AttrName(_)
| Replacement::Message(_)
-| Replacement::AutoImport { module: _, name: _ }
+| Replacement::Rename { module: _, name: _ }
| Replacement::SourceModuleMoved { module: _, name: _ } => {
format!("`{deprecated}` is removed in Airflow 3.0")
}
@@ -72,7 +72,7 @@ impl Violation for Airflow3Removal {
Replacement::None => None,
Replacement::AttrName(name) => Some(format!("Use `{name}` instead")),
Replacement::Message(message) => Some((*message).to_string()),
-Replacement::AutoImport { module, name } => {
+Replacement::Rename { module, name } => {
Some(format!("Use `{name}` from `{module}` instead."))
}
Replacement::SourceModuleMoved { module, name } => {
@@ -593,7 +593,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"api_connexion",
"security",
"requires_access_dataset",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.api_fastapi.core_api.security",
name: "requires_access_asset",
},
@@ -605,7 +605,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"managers",
"base_auth_manager",
"BaseAuthManager",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.api_fastapi.auth.managers.base_auth_manager",
name: "BaseAuthManager",
},
@@ -616,7 +616,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"models",
"resource_details",
"DatasetDetails",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.api_fastapi.auth.managers.models.resource_details",
name: "AssetDetails",
},
@@ -639,15 +639,15 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
// airflow.datasets.manager
["airflow", "datasets", "manager", rest] => match *rest {
-"DatasetManager" => Replacement::AutoImport {
+"DatasetManager" => Replacement::Rename {
module: "airflow.assets.manager",
name: "AssetManager",
},
-"dataset_manager" => Replacement::AutoImport {
+"dataset_manager" => Replacement::Rename {
module: "airflow.assets.manager",
name: "asset_manager",
},
-"resolve_dataset_manager" => Replacement::AutoImport {
+"resolve_dataset_manager" => Replacement::Rename {
module: "airflow.assets.manager",
name: "resolve_asset_manager",
},
@@ -657,24 +657,24 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
["airflow", "datasets", "DatasetAliasEvent"] => Replacement::None,
// airflow.hooks
-["airflow", "hooks", "base_hook", "BaseHook"] => Replacement::AutoImport {
+["airflow", "hooks", "base_hook", "BaseHook"] => Replacement::Rename {
module: "airflow.hooks.base",
name: "BaseHook",
},
// airflow.lineage.hook
-["airflow", "lineage", "hook", "DatasetLineageInfo"] => Replacement::AutoImport {
+["airflow", "lineage", "hook", "DatasetLineageInfo"] => Replacement::Rename {
module: "airflow.lineage.hook",
name: "AssetLineageInfo",
},
// airflow.listeners.spec
["airflow", "listeners", "spec", "dataset", rest] => match *rest {
-"on_dataset_created" => Replacement::AutoImport {
+"on_dataset_created" => Replacement::Rename {
module: "airflow.listeners.spec.asset",
name: "on_asset_created",
},
-"on_dataset_changed" => Replacement::AutoImport {
+"on_dataset_changed" => Replacement::Rename {
module: "airflow.listeners.spec.asset",
name: "on_asset_changed",
},
@@ -683,11 +683,11 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
// airflow.metrics.validators
["airflow", "metrics", "validators", rest] => match *rest {
-"AllowListValidator" => Replacement::AutoImport {
+"AllowListValidator" => Replacement::Rename {
module: "airflow.metrics.validators",
name: "PatternAllowListValidator",
},
-"BlockListValidator" => Replacement::AutoImport {
+"BlockListValidator" => Replacement::Rename {
module: "airflow.metrics.validators",
name: "PatternBlockListValidator",
},
@@ -695,7 +695,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
},
// airflow.notifications
-["airflow", "notifications", "basenotifier", "BaseNotifier"] => Replacement::AutoImport {
+["airflow", "notifications", "basenotifier", "BaseNotifier"] => Replacement::Rename {
module: "airflow.sdk.bases.notifier",
name: "BaseNotifier",
},
@@ -705,23 +705,23 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
Replacement::Message("The whole `airflow.subdag` module has been removed.")
}
["airflow", "operators", "postgres_operator", "Mapping"] => Replacement::None,
-["airflow", "operators", "python", "get_current_context"] => Replacement::AutoImport {
+["airflow", "operators", "python", "get_current_context"] => Replacement::Rename {
module: "airflow.sdk",
name: "get_current_context",
},
// airflow.secrets
-["airflow", "secrets", "cache", "SecretCache"] => Replacement::AutoImport {
+["airflow", "secrets", "cache", "SecretCache"] => Replacement::Rename {
module: "airflow.sdk",
name: "SecretCache",
},
-["airflow", "secrets", "local_filesystem", "load_connections"] => Replacement::AutoImport {
+["airflow", "secrets", "local_filesystem", "load_connections"] => Replacement::Rename {
module: "airflow.secrets.local_filesystem",
name: "load_connections_dict",
},
// airflow.security
-["airflow", "security", "permissions", "RESOURCE_DATASET"] => Replacement::AutoImport {
+["airflow", "security", "permissions", "RESOURCE_DATASET"] => Replacement::Rename {
module: "airflow.security.permissions",
name: "RESOURCE_ASSET",
},
@@ -732,7 +732,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"sensors",
"base_sensor_operator",
"BaseSensorOperator",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.sdk.bases.sensor",
name: "BaseSensorOperator",
},
@@ -743,7 +743,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"timetables",
"simple",
"DatasetTriggeredTimetable",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.timetables.simple",
name: "AssetTriggeredTimetable",
},
@@ -775,25 +775,25 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
] => Replacement::None,
// airflow.utils.file
-["file", "TemporaryDirectory"] => Replacement::AutoImport {
+["file", "TemporaryDirectory"] => Replacement::Rename {
module: "tempfile",
name: "TemporaryDirectory",
},
["file", "mkdirs"] => Replacement::Message("Use `pathlib.Path({path}).mkdir` instead"),
// airflow.utils.helpers
-["helpers", "chain"] => Replacement::AutoImport {
+["helpers", "chain"] => Replacement::Rename {
module: "airflow.sdk",
name: "chain",
},
-["helpers", "cross_downstream"] => Replacement::AutoImport {
+["helpers", "cross_downstream"] => Replacement::Rename {
module: "airflow.sdk",
name: "cross_downstream",
},
// TODO: update it as SourceModuleMoved
// airflow.utils.log.secrets_masker
-["log", "secrets_masker"] => Replacement::AutoImport {
+["log", "secrets_masker"] => Replacement::Rename {
module: "airflow.sdk.execution_time",
name: "secrets_masker",
},
@@ -834,15 +834,15 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"s3",
rest,
] => match *rest {
-"create_dataset" => Replacement::AutoImport {
+"create_dataset" => Replacement::Rename {
module: "airflow.providers.amazon.aws.assets.s3",
name: "create_asset",
},
-"convert_dataset_to_openlineage" => Replacement::AutoImport {
+"convert_dataset_to_openlineage" => Replacement::Rename {
module: "airflow.providers.amazon.aws.assets.s3",
name: "convert_asset_to_openlineage",
},
-"sanitize_uri" => Replacement::AutoImport {
+"sanitize_uri" => Replacement::Rename {
module: "airflow.providers.amazon.aws.assets.s3",
name: "sanitize_uri",
},
@@ -858,7 +858,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"entities",
"AvpEntities",
"DATASET",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.providers.amazon.aws.auth_manager.avp.entities",
name: "AvpEntities.ASSET",
},
@@ -874,15 +874,15 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"file",
rest,
] => match *rest {
-"create_dataset" => Replacement::AutoImport {
+"create_dataset" => Replacement::Rename {
module: "airflow.providers.common.io.assets.file",
name: "create_asset",
},
-"convert_dataset_to_openlineage" => Replacement::AutoImport {
+"convert_dataset_to_openlineage" => Replacement::Rename {
module: "airflow.providers.common.io.assets.file",
name: "convert_asset_to_openlineage",
},
-"sanitize_uri" => Replacement::AutoImport {
+"sanitize_uri" => Replacement::Rename {
module: "airflow.providers.common.io.assets.file",
name: "sanitize_uri",
},
@@ -892,19 +892,19 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
// airflow.providers.google
// airflow.providers.google.datasets
["airflow", "providers", "google", "datasets", rest @ ..] => match &rest {
-["bigquery", "create_dataset"] => Replacement::AutoImport {
+["bigquery", "create_dataset"] => Replacement::Rename {
module: "airflow.providers.google.assets.bigquery",
name: "create_asset",
},
-["gcs", "create_dataset"] => Replacement::AutoImport {
+["gcs", "create_dataset"] => Replacement::Rename {
module: "airflow.providers.google.assets.gcs",
name: "create_asset",
},
-["gcs", "convert_dataset_to_openlineage"] => Replacement::AutoImport {
+["gcs", "convert_dataset_to_openlineage"] => Replacement::Rename {
module: "airflow.providers.google.assets.gcs",
name: "convert_asset_to_openlineage",
},
-["gcs", "sanitize_uri"] => Replacement::AutoImport {
+["gcs", "sanitize_uri"] => Replacement::Rename {
module: "airflow.providers.google.assets.gcs",
name: "sanitize_uri",
},
@@ -920,7 +920,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"datasets",
"mysql",
"sanitize_uri",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.providers.mysql.assets.mysql",
name: "sanitize_uri",
},
@@ -933,7 +933,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"datasets",
"postgres",
"sanitize_uri",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.providers.postgres.assets.postgres",
name: "sanitize_uri",
},
@@ -948,12 +948,12 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"utils",
rest,
] => match *rest {
-"DatasetInfo" => Replacement::AutoImport {
+"DatasetInfo" => Replacement::Rename {
module: "airflow.providers.openlineage.utils.utils",
name: "AssetInfo",
},
-"translate_airflow_dataset" => Replacement::AutoImport {
+"translate_airflow_dataset" => Replacement::Rename {
module: "airflow.providers.openlineage.utils.utils",
name: "translate_airflow_asset",
},
@@ -968,7 +968,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"datasets",
"trino",
"sanitize_uri",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.providers.trino.assets.trino",
name: "sanitize_uri",
},
@@ -977,7 +977,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
};
let (module, name) = match &replacement {
-Replacement::AutoImport { module, name } => (module, *name),
+Replacement::Rename { module, name } => (module, *name),
Replacement::SourceModuleMoved { module, name } => (module, name.as_str()),
_ => {
checker.report_diagnostic(

View File

@@ -65,7 +65,7 @@ impl Violation for Airflow3SuggestedToMoveToProvider<'_> {
replacement,
} = self;
match replacement {
-ProviderReplacement::AutoImport {
+ProviderReplacement::Rename {
name: _,
module: _,
provider,
@@ -88,7 +88,7 @@ impl Violation for Airflow3SuggestedToMoveToProvider<'_> {
fn fix_title(&self) -> Option<String> {
let Airflow3SuggestedToMoveToProvider { replacement, .. } = self;
match replacement {
-ProviderReplacement::AutoImport {
+ProviderReplacement::Rename {
module,
name,
provider,
@@ -130,34 +130,32 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
let replacement = match qualified_name.segments() {
// apache-airflow-providers-standard
-["airflow", "hooks", "filesystem", "FSHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "filesystem", "FSHook"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.hooks.filesystem",
name: "FSHook",
provider: "standard",
version: "0.0.1",
},
-["airflow", "hooks", "package_index", "PackageIndexHook"] => {
-ProviderReplacement::AutoImport {
-module: "airflow.providers.standard.hooks.package_index",
-name: "PackageIndexHook",
-provider: "standard",
-version: "0.0.1",
-}
-}
-["airflow", "hooks", "subprocess", "SubprocessHook"] => ProviderReplacement::AutoImport {
+["airflow", "hooks", "package_index", "PackageIndexHook"] => ProviderReplacement::Rename {
+module: "airflow.providers.standard.hooks.package_index",
+name: "PackageIndexHook",
+provider: "standard",
+version: "0.0.1",
+},
+["airflow", "hooks", "subprocess", "SubprocessHook"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.hooks.subprocess",
name: "SubprocessHook",
provider: "standard",
version: "0.0.3",
},
-["airflow", "operators", "bash", "BashOperator"] => ProviderReplacement::AutoImport {
+["airflow", "operators", "bash", "BashOperator"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.bash",
name: "BashOperator",
provider: "standard",
version: "0.0.1",
},
["airflow", "operators", "datetime", "BranchDateTimeOperator"] => {
-ProviderReplacement::AutoImport {
+ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.datetime",
name: "BranchDateTimeOperator",
provider: "standard",
@@ -169,20 +167,20 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
"operators",
"trigger_dagrun",
"TriggerDagRunOperator",
-] => ProviderReplacement::AutoImport {
+] => ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.trigger_dagrun",
name: "TriggerDagRunOperator",
provider: "standard",
version: "0.0.2",
},
-["airflow", "operators", "empty", "EmptyOperator"] => ProviderReplacement::AutoImport {
+["airflow", "operators", "empty", "EmptyOperator"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.empty",
name: "EmptyOperator",
provider: "standard",
version: "0.0.2",
},
["airflow", "operators", "latest_only", "LatestOnlyOperator"] => {
-ProviderReplacement::AutoImport {
+ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.latest_only",
name: "LatestOnlyOperator",
provider: "standard",
@@ -204,14 +202,14 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
version: "0.0.1",
},
["airflow", "operators", "weekday", "BranchDayOfWeekOperator"] => {
-ProviderReplacement::AutoImport {
+ProviderReplacement::Rename {
module: "airflow.providers.standard.operators.weekday",
name: "BranchDayOfWeekOperator",
provider: "standard",
version: "0.0.1",
}
}
-["airflow", "sensors", "bash", "BashSensor"] => ProviderReplacement::AutoImport {
+["airflow", "sensors", "bash", "BashSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.sensor.bash",
name: "BashSensor",
provider: "standard",
@@ -239,13 +237,13 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "standard",
version: "0.0.3",
},
-["airflow", "sensors", "filesystem", "FileSensor"] => ProviderReplacement::AutoImport {
+["airflow", "sensors", "filesystem", "FileSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.sensors.filesystem",
name: "FileSensor",
provider: "standard",
version: "0.0.2",
},
-["airflow", "sensors", "python", "PythonSensor"] => ProviderReplacement::AutoImport {
+["airflow", "sensors", "python", "PythonSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.sensors.python",
name: "PythonSensor",
provider: "standard",
@@ -273,7 +271,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "standard",
version: "0.0.1",
},
-["airflow", "sensors", "weekday", "DayOfWeekSensor"] => ProviderReplacement::AutoImport {
+["airflow", "sensors", "weekday", "DayOfWeekSensor"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.sensors.weekday",
name: "DayOfWeekSensor",
provider: "standard",
@@ -290,7 +288,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
provider: "standard",
version: "0.0.3",
},
-["airflow", "triggers", "file", "FileTrigger"] => ProviderReplacement::AutoImport {
+["airflow", "triggers", "file", "FileTrigger"] => ProviderReplacement::Rename {
module: "airflow.providers.standard.triggers.file",
name: "FileTrigger",
provider: "standard",
@@ -311,7 +309,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
};
let (module, name) = match &replacement {
-ProviderReplacement::AutoImport { module, name, .. } => (module, *name),
+ProviderReplacement::Rename { module, name, .. } => (module, *name),
ProviderReplacement::SourceModuleMovedToProvider { module, name, .. } => {
(module, name.as_str())
}

View File

@@ -55,7 +55,7 @@ impl Violation for Airflow3SuggestedUpdate {
Replacement::None
| Replacement::AttrName(_)
| Replacement::Message(_)
-| Replacement::AutoImport { module: _, name: _ }
+| Replacement::Rename { module: _, name: _ }
| Replacement::SourceModuleMoved { module: _, name: _ } => {
format!(
"`{deprecated}` is removed in Airflow 3.0; \
@@ -71,7 +71,7 @@ impl Violation for Airflow3SuggestedUpdate {
Replacement::None => None,
Replacement::AttrName(name) => Some(format!("Use `{name}` instead")),
Replacement::Message(message) => Some((*message).to_string()),
-Replacement::AutoImport { module, name } => {
+Replacement::Rename { module, name } => {
Some(format!("Use `{name}` from `{module}` instead."))
}
Replacement::SourceModuleMoved { module, name } => {
@@ -191,30 +191,30 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
let replacement = match qualified_name.segments() {
// airflow.datasets.metadata
-["airflow", "datasets", "metadata", "Metadata"] => Replacement::AutoImport {
+["airflow", "datasets", "metadata", "Metadata"] => Replacement::Rename {
module: "airflow.sdk",
name: "Metadata",
},
// airflow.datasets
-["airflow", "Dataset"] | ["airflow", "datasets", "Dataset"] => Replacement::AutoImport {
+["airflow", "Dataset"] | ["airflow", "datasets", "Dataset"] => Replacement::Rename {
module: "airflow.sdk",
name: "Asset",
},
["airflow", "datasets", rest] => match *rest {
"DatasetAliasEvent" => Replacement::None,
-"DatasetAlias" => Replacement::AutoImport {
+"DatasetAlias" => Replacement::Rename {
module: "airflow.sdk",
name: "AssetAlias",
},
-"DatasetAll" => Replacement::AutoImport {
+"DatasetAll" => Replacement::Rename {
module: "airflow.sdk",
name: "AssetAll",
},
-"DatasetAny" => Replacement::AutoImport {
+"DatasetAny" => Replacement::Rename {
module: "airflow.sdk",
name: "AssetAny",
},
-"expand_alias_to_datasets" => Replacement::AutoImport {
+"expand_alias_to_datasets" => Replacement::Rename {
module: "airflow.models.asset",
name: "expand_alias_to_assets",
},
@@ -261,7 +261,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
name: (*rest).to_string(),
}
}
-["airflow", "models", "Param"] => Replacement::AutoImport {
+["airflow", "models", "Param"] => Replacement::Rename {
module: "airflow.sdk.definitions.param",
name: "Param",
},
@@ -276,7 +276,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
module: "airflow.sdk",
name: (*rest).to_string(),
},
-["airflow", "models", "baseoperatorlink", "BaseOperatorLink"] => Replacement::AutoImport {
+["airflow", "models", "baseoperatorlink", "BaseOperatorLink"] => Replacement::Rename {
module: "airflow.sdk",
name: "BaseOperatorLink",
},
@@ -299,7 +299,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
},
// airflow.timetables
-["airflow", "timetables", "datasets", "DatasetOrTimeSchedule"] => Replacement::AutoImport {
+["airflow", "timetables", "datasets", "DatasetOrTimeSchedule"] => Replacement::Rename {
module: "airflow.timetables.assets",
name: "AssetOrTimeSchedule",
},
@@ -310,7 +310,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
"utils",
"dag_parsing_context",
"get_parsing_context",
-] => Replacement::AutoImport {
+] => Replacement::Rename {
module: "airflow.sdk",
name: "get_parsing_context",
},
@@ -319,7 +319,7 @@ fn check_name(checker: &Checker, expr: &Expr, range: TextRange) {
};
let (module, name) = match &replacement {
-Replacement::AutoImport { module, name } => (module, *name),
+Replacement::Rename { module, name } => (module, *name),
Replacement::SourceModuleMoved { module, name } => (module, name.as_str()),
_ => {
checker.report_diagnostic(

View File

@@ -58,7 +58,9 @@ impl Violation for SuppressibleException {
fn fix_title(&self) -> Option<String> {
let SuppressibleException { exception } = self;
-Some(format!("Replace with `contextlib.suppress({exception})`"))
+Some(format!(
+"Replace `try`-`except`-`pass` with `with contextlib.suppress({exception}): ...`"
+))
}
}
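The rewrite the updated fix title spells out can be sketched in Python. This is a minimal illustration of the SIM105 transformation, not taken from the repository's test fixtures:

```python
import contextlib
import os

# Before: the pattern SIM105 flags.
try:
    os.remove("no-such-file.txt")
except FileNotFoundError:
    pass

# After: the equivalent form the fix title suggests.
with contextlib.suppress(FileNotFoundError):
    os.remove("no-such-file.txt")
```

Both forms swallow only the listed exception types; anything else still propagates.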

View File

@@ -11,7 +11,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
9 | | pass
| |________^
|
-help: Replace with `contextlib.suppress(ValueError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -40,7 +40,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError, OSError)` instead of `try`-`exce
17 |
18 | # SIM105
|
-help: Replace with `contextlib.suppress(ValueError, OSError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError, OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -71,7 +71,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError, OSError)` instead of `try`-`exce
23 |
24 | # SIM105
|
-help: Replace with `contextlib.suppress(ValueError, OSError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError, OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -102,7 +102,7 @@ SIM105 [*] Use `contextlib.suppress(BaseException)` instead of `try`-`except`-`p
29 |
30 | # SIM105
|
-help: Replace with `contextlib.suppress(BaseException)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(BaseException): ...`
1 + import contextlib
2 + import builtins
3 | def foo():
@@ -134,7 +134,7 @@ SIM105 [*] Use `contextlib.suppress(a.Error, b.Error)` instead of `try`-`except`
35 |
36 | # OK
|
-help: Replace with `contextlib.suppress(a.Error, b.Error)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(a.Error, b.Error): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -164,7 +164,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
88 | | ...
| |___________^
|
-help: Replace with `contextlib.suppress(ValueError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -195,7 +195,7 @@ SIM105 Use `contextlib.suppress(ValueError, OSError)` instead of `try`-`except`-
104 |
105 | try:
|
-help: Replace with `contextlib.suppress(ValueError, OSError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError, OSError): ...`
SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
--> SIM105_0.py:117:5
@@ -210,7 +210,7 @@ SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
121 |
122 | try: os.makedirs(model_dir);
|
-help: Replace with `contextlib.suppress(OSError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -241,7 +241,7 @@ SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
125 |
126 | try: os.makedirs(model_dir);
|
-help: Replace with `contextlib.suppress(OSError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -271,7 +271,7 @@ SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
129 | \
130 | #
|
-help: Replace with `contextlib.suppress(OSError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -299,7 +299,7 @@ SIM105 [*] Use `contextlib.suppress()` instead of `try`-`except`-`pass`
136 | | pass
| |________^
|
-help: Replace with `contextlib.suppress()`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -328,7 +328,7 @@ SIM105 [*] Use `contextlib.suppress(BaseException)` instead of `try`-`except`-`p
143 | | pass
| |________^
|
-help: Replace with `contextlib.suppress(BaseException)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(BaseException): ...`
1 + import contextlib
2 | def foo():
3 | pass

View File

@@ -11,7 +11,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
8 | | pass
| |________^
|
-help: Replace with `contextlib.suppress(ValueError)`
+help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
1 | """Case: There's a random import, so it should add `contextlib` after it."""
2 | import math
3 + import contextlib

View File

@@ -11,7 +11,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
13 | | pass
| |________^
|
help: Replace with `contextlib.suppress(ValueError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
7 |
8 |
9 | # SIM105

View File

@@ -12,4 +12,4 @@ SIM105 Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass`
13 | | pass
| |____________^
|
help: Replace with `contextlib.suppress(ValueError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`

View File

@@ -10,7 +10,7 @@ SIM105 [*] Use `contextlib.suppress(ImportError)` instead of `try`-`except`-`pas
4 | | except ImportError: pass
| |___________________________^
|
help: Replace with `contextlib.suppress(ImportError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ImportError): ...`
1 | #!/usr/bin/env python
- try:
2 + import contextlib

View File

@@ -991,6 +991,29 @@ mod tests {
Ok(())
}
#[test_case(Path::new("plr0402_skip.py"))]
fn plr0402_skips_required_imports(path: &Path) -> Result<()> {
let snapshot = format!("plr0402_skips_required_imports_{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("isort/required_imports").join(path).as_path(),
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter([NameImport::Import(
ModuleNameImport::alias(
"concurrent.futures".to_string(),
"futures".to_string(),
),
)]),
..super::settings::Settings::default()
},
..LinterSettings::for_rules([Rule::MissingRequiredImport, Rule::ManualFromImport])
},
)?;
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
#[test_case(Path::new("from_first.py"))]
fn from_first(path: &Path) -> Result<()> {
let snapshot = format!("from_first_{}", path.to_string_lossy());

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---

View File

@@ -29,7 +29,7 @@ mod tests {
use crate::settings::{LinterSettings, flags};
use crate::source_kind::SourceKind;
use crate::test::{test_contents, test_path, test_snippet};
use crate::{Locator, assert_diagnostics, directives};
use crate::{Locator, assert_diagnostics, assert_diagnostics_diff, directives};
#[test_case(Rule::UnusedImport, Path::new("F401_0.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_1.py"))]
@@ -392,6 +392,154 @@ mod tests {
Ok(())
}
#[test_case(Rule::UnusedImport, Path::new("F401_0.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_1.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_2.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_3.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_4.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_5.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_6.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_7.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_8.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_9.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_10.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_11.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_12.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_13.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_14.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_15.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_16.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_17.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_18.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_19.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_20.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_21.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_22.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_23.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_32.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_34.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_35.py"))]
fn f401_preview_refined_submodule_handling_diffs(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("preview_diff__{}", path.to_string_lossy());
assert_diagnostics_diff!(
snapshot,
Path::new("pyflakes").join(path).as_path(),
&LinterSettings::for_rule(rule_code),
&LinterSettings {
preview: PreviewMode::Enabled,
..LinterSettings::for_rule(rule_code)
}
);
Ok(())
}
#[test_case(
r"
import a
import a.b
import a.c",
"f401_multiple_unused_submodules"
)]
#[test_case(
r"
import a
import a.b
a.foo()",
"f401_use_top_member"
)]
#[test_case(
r"
import a
import a.b
a.foo()
a.bar()",
"f401_use_top_member_twice"
)]
#[test_case(
r"
# reverts to stable behavior - used between imports
import a
a.foo()
import a.b",
"f401_use_top_member_before_second_import"
)]
#[test_case(
r"
# reverts to stable behavior - used between imports
import a
a.foo()
a = 1
import a.b",
"f401_use_top_member_and_redefine_before_second_import"
)]
#[test_case(
r"
# reverts to stable behavior - used between imports
import a
a.foo()
import a.b
a = 1",
"f401_use_top_member_then_import_then_redefine"
)]
#[test_case(
r#"
import a
import a.b
__all__ = ["a"]"#,
"f401_use_in_dunder_all"
)]
#[test_case(
r"
import a.c
import a.b
a.foo()",
"f401_import_submodules_but_use_top_level"
)]
#[test_case(
r"
import a.c
import a.b.d
a.foo()",
"f401_import_submodules_different_lengths_but_use_top_level"
)]
#[test_case(
r"
# refined logic only applied _within_ scope
import a
def foo():
import a.b
a.foo()",
"f401_import_submodules_in_function_scope"
)]
#[test_case(
r"
# reverts to stable behavior - used between bindings
import a
a.b
import a.b",
"f401_use_in_between_imports"
)]
#[test_case(
r"
# reverts to stable behavior - used between bindings
import a.b
a
import a",
"f401_use_in_between_imports"
)]
fn f401_preview_refined_submodule_handling(contents: &str, snapshot: &str) {
let diagnostics = test_contents(
&SourceKind::Python(dedent(contents).to_string()),
Path::new("f401_preview_submodule.py"),
&LinterSettings {
preview: PreviewMode::Enabled,
..LinterSettings::for_rule(Rule::UnusedImport)
},
)
.0;
assert_diagnostics!(snapshot, diagnostics);
}
#[test]
fn f841_dummy_variable_rgx() -> Result<()> {
let diagnostics = test_path(

View File

@@ -1,11 +1,6 @@
use ruff_python_ast::Alias;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_stdlib::future::is_feature_name;
use ruff_text_size::Ranged;
use crate::Violation;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for `__future__` imports that are not defined in the current Python
@@ -19,7 +14,7 @@ use crate::checkers::ast::Checker;
/// - [Python documentation: `__future__`](https://docs.python.org/3/library/__future__.html)
#[derive(ViolationMetadata)]
pub(crate) struct FutureFeatureNotDefined {
name: String,
pub name: String,
}
impl Violation for FutureFeatureNotDefined {
@@ -29,17 +24,3 @@ impl Violation for FutureFeatureNotDefined {
format!("Future feature `{name}` is not defined")
}
}
/// F407
pub(crate) fn future_feature_not_defined(checker: &Checker, alias: &Alias) {
if is_feature_name(&alias.name) {
return;
}
checker.report_diagnostic(
FutureFeatureNotDefined {
name: alias.name.to_string(),
},
alias.range(),
);
}

View File

@@ -197,9 +197,7 @@ pub(crate) fn redefined_while_unused(checker: &Checker, scope_id: ScopeId, scope
shadowed,
);
if let Some(ann) = diagnostic.primary_annotation_mut() {
ann.set_message(format_args!("`{name}` redefined here"));
}
diagnostic.set_primary_message(format_args!("`{name}` redefined here"));
if let Some(range) = binding.parent_range(checker.semantic()) {
diagnostic.set_parent(range.start());

View File

@@ -5,19 +5,22 @@ use anyhow::{Result, anyhow, bail};
use std::collections::BTreeMap;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::name::{QualifiedName, QualifiedNameBuilder};
use ruff_python_ast::{self as ast, Stmt};
use ruff_python_semantic::{
AnyImport, BindingKind, Exceptions, Imported, NodeId, Scope, ScopeId, SemanticModel,
SubmoduleImport,
AnyImport, Binding, BindingFlags, BindingId, BindingKind, Exceptions, Imported, NodeId, Scope,
ScopeId, SemanticModel, SubmoduleImport,
};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::fix;
use crate::preview::is_dunder_init_fix_unused_import_enabled;
use crate::preview::{
is_dunder_init_fix_unused_import_enabled, is_refined_submodule_import_match_enabled,
};
use crate::registry::Rule;
use crate::rules::{isort, isort::ImportSection, isort::ImportType};
use crate::settings::LinterSettings;
use crate::{Applicability, Fix, FixAvailability, Violation};
/// ## What it does
@@ -49,6 +52,43 @@ use crate::{Applicability, Fix, FixAvailability, Violation};
/// __all__ = ["some_module"]
/// ```
///
/// ## Preview
/// When [preview] is enabled (and certain simplifying assumptions
/// are met), we analyze all import statements for a given module
/// when determining whether an import is used, rather than only
/// the last such statement. This can result in both different and
/// additional import statements being marked as unused.
///
/// For example, if a module consists of
///
/// ```python
/// import a
/// import a.b
/// ```
///
/// then both statements are marked as unused under [preview], whereas
/// only the second is marked as unused under stable behavior.
///
/// As another example, if a module consists of
///
/// ```python
/// import a.b
/// import a
///
/// a.b.foo()
/// ```
///
/// then a diagnostic will only be emitted for the first line under [preview],
/// whereas a diagnostic would only be emitted for the second line under
/// stable behavior.
///
/// Note that this behavior is somewhat subjective and is designed
/// to conform to the developer's intuition rather than Python's actual
/// execution. To wit, the statement `import a.b` automatically executes
/// `import a`, so in some sense `import a` is _always_ redundant
/// in the presence of `import a.b`.
///
/// ## Fix safety
///
/// Fixes to remove unused imports are safe, except in `__init__.py` files.
@@ -100,6 +140,8 @@ use crate::{Applicability, Fix, FixAvailability, Violation};
/// - [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
/// - [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
/// - [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)
///
/// [preview]: https://docs.astral.sh/ruff/preview/
#[derive(ViolationMetadata)]
pub(crate) struct UnusedImport {
/// Qualified name of the import
@@ -284,17 +326,7 @@ pub(crate) fn unused_import(checker: &Checker, scope: &Scope) {
let mut unused: BTreeMap<(NodeId, Exceptions), Vec<ImportBinding>> = BTreeMap::default();
let mut ignored: BTreeMap<(NodeId, Exceptions), Vec<ImportBinding>> = BTreeMap::default();
for binding_id in scope.binding_ids() {
let binding = checker.semantic().binding(binding_id);
if binding.is_used()
|| binding.is_explicit_export()
|| binding.is_nonlocal()
|| binding.is_global()
{
continue;
}
for binding in unused_imports_in_scope(checker.semantic(), scope, checker.settings()) {
let Some(import) = binding.as_any_import() else {
continue;
};
@@ -435,6 +467,8 @@ pub(crate) fn unused_import(checker: &Checker, scope: &Scope) {
diagnostic.set_fix(fix.clone());
}
}
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
}
}
@@ -455,6 +489,8 @@ pub(crate) fn unused_import(checker: &Checker, scope: &Scope) {
if let Some(range) = binding.parent_range {
diagnostic.set_parent(range.start());
}
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
}
}
@@ -582,3 +618,302 @@ fn fix_by_reexporting<'a>(
let isolation = Checker::isolation(checker.semantic().parent_statement_id(node_id));
Ok(Fix::safe_edits(head, tail).isolate(isolation))
}
/// Returns an iterator over bindings to import statements that appear unused.
///
/// The stable behavior is to return those bindings to imports
/// satisfying the following properties:
///
/// - they are not shadowed
/// - they are not `global`, not `nonlocal`, and not explicit exports (i.e. `import foo as foo`)
/// - they have no references, according to the semantic model
///
/// Under preview, a more refined analysis is performed
/// when all bindings shadowed by a given import
/// binding (including the binding itself) are of a simple form:
/// un-aliased imports or submodule imports.
///
/// This alternative analysis is described in the documentation for
/// [`unused_imports_from_binding`].
fn unused_imports_in_scope<'a, 'b>(
semantic: &'a SemanticModel<'b>,
scope: &'a Scope,
settings: &'a LinterSettings,
) -> impl Iterator<Item = &'a Binding<'b>> {
scope
.binding_ids()
.map(|id| (id, semantic.binding(id)))
.filter(|(_, bdg)| {
matches!(
bdg.kind,
BindingKind::Import(_)
| BindingKind::FromImport(_)
| BindingKind::SubmoduleImport(_)
)
})
.filter(|(_, bdg)| !bdg.is_global() && !bdg.is_nonlocal() && !bdg.is_explicit_export())
.flat_map(|(id, bdg)| {
if is_refined_submodule_import_match_enabled(settings)
// No need to apply refined logic if there is only a single binding
&& scope.shadowed_bindings(id).nth(1).is_some()
// Only apply the new logic in certain situations to avoid
// complexity, false positives, and intersection with
// `redefined-while-unused` (`F811`).
&& has_simple_shadowed_bindings(scope, id, semantic)
{
unused_imports_from_binding(semantic, id, scope)
} else if bdg.is_used() {
vec![]
} else {
vec![bdg]
}
})
}
/// Returns a `Vec` of bindings to unused import statements that
/// are shadowed by a given binding.
///
/// This is best explained by example. So suppose we have:
///
/// ```python
/// import a
/// import a.b
/// import a.b.c
///
/// __all__ = ["a"]
///
/// a.b.foo()
/// ```
///
/// As of 2025-09-25, Ruff's semantic model, upon visiting
/// the whole module, will have a single live binding for
/// the symbol `a` that points to the line `import a.b.c`,
/// and the remaining two import bindings are considered shadowed
/// by the last.
///
/// This function expects to receive the `id`
/// for the live binding and will begin by collecting
/// all bindings shadowed by the given one - i.e. all
/// the different import statements binding the symbol `a`.
/// We iterate over references to this
/// module and decide (somewhat subjectively) which
/// import statement the user "intends" to reference. To that end,
/// to each reference we attempt to build a [`QualifiedName`]
/// corresponding to an iterated attribute access (e.g. `a.b.foo`).
/// We then determine the closest matching import statement to that
/// qualified name, and mark it as used.
///
/// In the present example, the qualified name associated to the
/// reference from the dunder all export is `"a"` and the qualified
/// name associated to the reference in the last line is `"a.b.foo"`.
/// The closest matches are `import a` and `import a.b`, respectively,
/// leaving `import a.b.c` unused.
///
/// For a precise definition of "closest match" see [`best_match`]
/// and [`rank_matches`].
///
/// Note: if any reference comes from something other than
/// a `Name` or a dunder all expression, then we return just
/// the original binding, thus reverting to the stable behavior.
fn unused_imports_from_binding<'a, 'b>(
semantic: &'a SemanticModel<'b>,
id: BindingId,
scope: &'a Scope,
) -> Vec<&'a Binding<'b>> {
let mut marked = MarkedBindings::from_binding_id(semantic, id, scope);
let binding = semantic.binding(id);
    // Ensure the `__all__` handling below runs only once.
let mut marked_dunder_all = false;
for ref_id in binding.references() {
let resolved_reference = semantic.reference(ref_id);
if !marked_dunder_all && resolved_reference.in_dunder_all_definition() {
let first = *binding
.as_any_import()
.expect("binding to be import binding since current function called after restricting to these in `unused_imports_in_scope`")
.qualified_name()
.segments().first().expect("import binding to have nonempty qualified name");
mark_uses_of_qualified_name(&mut marked, &QualifiedName::user_defined(first));
marked_dunder_all = true;
continue;
}
let Some(expr_id) = resolved_reference.expression_id() else {
// If there is some other kind of reference, abandon
// the refined approach for the usual one
return vec![binding];
};
let Some(prototype) = expand_to_qualified_name_attribute(semantic, expr_id) else {
return vec![binding];
};
mark_uses_of_qualified_name(&mut marked, &prototype);
}
marked.into_unused()
}
#[derive(Debug)]
struct MarkedBindings<'a, 'b> {
bindings: Vec<&'a Binding<'b>>,
used: Vec<bool>,
}
impl<'a, 'b> MarkedBindings<'a, 'b> {
fn from_binding_id(semantic: &'a SemanticModel<'b>, id: BindingId, scope: &'a Scope) -> Self {
let bindings: Vec<_> = scope
.shadowed_bindings(id)
.map(|id| semantic.binding(id))
.collect();
Self {
used: vec![false; bindings.len()],
bindings,
}
}
fn into_unused(self) -> Vec<&'a Binding<'b>> {
self.bindings
.into_iter()
.zip(self.used)
.filter_map(|(bdg, is_used)| (!is_used).then_some(bdg))
.collect()
}
fn iter_mut(&mut self) -> impl Iterator<Item = (&'a Binding<'b>, &mut bool)> {
self.bindings.iter().copied().zip(self.used.iter_mut())
}
}
/// Returns `Some` [`QualifiedName`] describing the dotted path of the
/// maximal [`ExprName`] or [`ExprAttribute`] containing the expression
/// associated with the given [`NodeId`], or `None` otherwise.
///
/// For example, if the `expr_id` points to `a` in `a.b.c.foo()`
/// then the qualified name would have segments [`a`, `b`, `c`, `foo`].
fn expand_to_qualified_name_attribute<'b>(
semantic: &SemanticModel<'b>,
expr_id: NodeId,
) -> Option<QualifiedName<'b>> {
let mut builder = QualifiedNameBuilder::with_capacity(16);
let mut expr_id = expr_id;
let expr = semantic.expression(expr_id)?;
let name = expr.as_name_expr()?;
builder.push(&name.id);
while let Some(node_id) = semantic.parent_expression_id(expr_id) {
let Some(expr) = semantic.expression(node_id) else {
break;
};
let Some(expr_attr) = expr.as_attribute_expr() else {
break;
};
builder.push(expr_attr.attr.as_str());
expr_id = node_id;
}
Some(builder.build())
}
fn mark_uses_of_qualified_name(marked: &mut MarkedBindings, prototype: &QualifiedName) {
let Some(best) = best_match(&marked.bindings, prototype) else {
return;
};
let Some(best_import) = best.as_any_import() else {
return;
};
let best_name = best_import.qualified_name();
// We loop through all bindings in case there are repeated instances
// of the `best_name`. For example, if we have
//
// ```python
// import a
// import a
//
// a.foo()
// ```
//
// then we want to mark both import statements as used. It
// is the job of `redefined-while-unused` (`F811`) to catch
// the repeated binding in this case.
for (binding, is_used) in marked.iter_mut() {
if *is_used {
continue;
}
if binding
.as_any_import()
.is_some_and(|imp| imp.qualified_name() == best_name)
{
*is_used = true;
}
}
}
/// Returns a pair whose first component is the length of the longest
/// shared prefix between the qualified name of the import binding
/// and the `prototype`, and whose second component is the length of
/// the qualified name of the import binding (i.e. the number of path
/// segments). The second component is ordered in reverse, so that
/// shorter qualified names rank higher.
///
/// For example, if the binding corresponds to `import a.b.c`
/// and the prototype to `a.b.foo`, then the function returns
/// `(2, std::cmp::Reverse(3))`.
fn rank_matches(binding: &Binding, prototype: &QualifiedName) -> (usize, std::cmp::Reverse<usize>) {
let Some(import) = binding.as_any_import() else {
unreachable!()
};
let qname = import.qualified_name();
let left = qname
.segments()
.iter()
.zip(prototype.segments())
.take_while(|(x, y)| x == y)
.count();
(left, std::cmp::Reverse(qname.segments().len()))
}
/// Returns the import binding that shares the longest prefix
/// with the `prototype` and is of minimal length amongst these.
///
/// See also [`rank_matches`].
fn best_match<'a, 'b>(
bindings: &Vec<&'a Binding<'b>>,
prototype: &QualifiedName,
) -> Option<&'a Binding<'b>> {
bindings
.iter()
.copied()
.max_by_key(|binding| rank_matches(binding, prototype))
}
#[inline]
fn has_simple_shadowed_bindings(scope: &Scope, id: BindingId, semantic: &SemanticModel) -> bool {
scope.shadowed_bindings(id).enumerate().all(|(i, shadow)| {
let shadowed_binding = semantic.binding(shadow);
// Bail if one of the shadowed bindings is
// used before the last live binding. This is
// to avoid situations like this:
//
// ```
// import a
// a.b
// import a.b
// ```
if i > 0 && shadowed_binding.is_used() {
return false;
}
matches!(
shadowed_binding.kind,
BindingKind::Import(_) | BindingKind::SubmoduleImport(_)
) && !shadowed_binding.flags.contains(BindingFlags::ALIAS)
})
}
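
The "closest match" ranking implemented by `rank_matches` and `best_match` above can be modeled in isolation. The sketch below is a hypothetical standalone reconstruction of that logic (with `-len(...)` standing in for `std::cmp::Reverse`), not the Rust implementation itself:

```python
def rank(import_segments: list[str], prototype: list[str]) -> tuple[int, int]:
    # First key: length of the longest shared prefix between the import's
    # qualified name and the reference's dotted path.
    shared = 0
    for a, b in zip(import_segments, prototype):
        if a != b:
            break
        shared += 1
    # Second key: prefer the *shortest* qualified name among equal
    # prefixes (negation models std::cmp::Reverse in the Rust code).
    return (shared, -len(import_segments))

def best_match(imports: list[list[str]], prototype: list[str]) -> list[str]:
    # The import ranked highest is the one the reference "intends".
    return max(imports, key=lambda seg: rank(seg, prototype))

imports = [["a"], ["a", "b"], ["a", "b", "c"]]
# A reference `a.b.foo` most closely matches `import a.b` ...
print(best_match(imports, ["a", "b", "foo"]))  # ['a', 'b']
# ... while a bare `a` in `__all__` matches `import a`.
print(best_match(imports, ["a"]))  # ['a']
```

Under this ranking, `import a.b.c` in the module from the doc comment is matched by neither reference and is therefore the one left marked unused.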

View File

@@ -272,4 +272,6 @@ pub(crate) fn unused_variable(checker: &Checker, name: &str, binding: &Binding)
if let Some(fix) = remove_unused_variable(binding, checker) {
diagnostic.set_fix(fix);
}
// Add Unnecessary tag for unused variables
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
}

View File

@@ -0,0 +1,16 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F401 [*] `a.b` imported but unused
--> f401_preview_submodule.py:3:8
|
2 | import a.c
3 | import a.b
| ^^^
4 | a.foo()
|
help: Remove unused import: `a.b`
1 |
2 | import a.c
- import a.b
3 | a.foo()

View File

@@ -0,0 +1,16 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F401 [*] `a.b.d` imported but unused
--> f401_preview_submodule.py:3:8
|
2 | import a.c
3 | import a.b.d
| ^^^^^
4 | a.foo()
|
help: Remove unused import: `a.b.d`
1 |
2 | import a.c
- import a.b.d
3 | a.foo()

View File

@@ -0,0 +1,19 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F401 [*] `a` imported but unused
--> f401_preview_submodule.py:3:8
|
2 | # refined logic only applied _within_ scope
3 | import a
| ^
4 | def foo():
5 | import a.b
|
help: Remove unused import: `a`
1 |
2 | # refined logic only applied _within_ scope
- import a
3 | def foo():
4 | import a.b
5 | a.foo()

View File

@@ -0,0 +1,44 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F401 [*] `a` imported but unused
--> f401_preview_submodule.py:2:8
|
2 | import a
| ^
3 | import a.b
4 | import a.c
|
help: Remove unused import: `a`
1 |
- import a
2 | import a.b
3 | import a.c
F401 [*] `a.b` imported but unused
--> f401_preview_submodule.py:3:8
|
2 | import a
3 | import a.b
| ^^^
4 | import a.c
|
help: Remove unused import: `a.b`
1 |
2 | import a
- import a.b
3 | import a.c
F401 [*] `a.c` imported but unused
--> f401_preview_submodule.py:4:8
|
2 | import a
3 | import a.b
4 | import a.c
| ^^^
|
help: Remove unused import: `a.c`
1 |
2 | import a
3 | import a.b
- import a.c

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---

View File

@@ -0,0 +1,16 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F401 [*] `a.b` imported but unused
--> f401_preview_submodule.py:3:8
|
2 | import a
3 | import a.b
| ^^^
4 | __all__ = ["a"]
|
help: Remove unused import: `a.b`
1 |
2 | import a
- import a.b
3 | __all__ = ["a"]

View File

@@ -0,0 +1,16 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F401 [*] `a.b` imported but unused
--> f401_preview_submodule.py:3:8
|
2 | import a
3 | import a.b
| ^^^
4 | a.foo()
|
help: Remove unused import: `a.b`
1 |
2 | import a
- import a.b
3 | a.foo()

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---

View File

@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F401 [*] `a.b` imported but unused
--> f401_preview_submodule.py:3:8
|
2 | import a
3 | import a.b
| ^^^
4 | a.foo()
5 | a.bar()
|
help: Remove unused import: `a.b`
1 |
2 | import a
- import a.b
3 | a.foo()
4 | a.bar()

View File

@@ -0,0 +1,50 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 2
--- Added ---
F401 [*] `multiprocessing.process` imported but unused
--> F401_0.py:10:8
|
8 | )
9 | import multiprocessing.pool
10 | import multiprocessing.process
| ^^^^^^^^^^^^^^^^^^^^^^^
11 | import logging.config
12 | import logging.handlers
|
help: Remove unused import: `multiprocessing.process`
7 | namedtuple,
8 | )
9 | import multiprocessing.pool
- import multiprocessing.process
10 | import logging.config
11 | import logging.handlers
12 | from typing import (
F401 [*] `logging.config` imported but unused
--> F401_0.py:11:8
|
9 | import multiprocessing.pool
10 | import multiprocessing.process
11 | import logging.config
| ^^^^^^^^^^^^^^
12 | import logging.handlers
13 | from typing import (
|
help: Remove unused import: `logging.config`
8 | )
9 | import multiprocessing.pool
10 | import multiprocessing.process
- import logging.config
11 | import logging.handlers
12 | from typing import (
13 | TYPE_CHECKING,

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 0

Some files were not shown because too many files have changed in this diff.