Compare commits

...

49 Commits

Author SHA1 Message Date
Zanie
37f6a5ef3c Use indent-width and lint.tab-size when checking indentation in E rules 2023-11-15 17:17:57 -06:00
konsti
a783b14e7d Add --skip-magic-trailing-comma to formatter dev comment (#8689)
While testing compatibility with the future stable Black style, I realized
the `ruff_python_formatter` dev `main` was lacking the
`--skip-magic-trailing-comma` option. This does not affect `ruff
format`.

Usage:
```shell
cargo run --bin ruff_python_formatter -p ruff_python_formatter -- --skip-magic-trailing-comma --emit stdout scratch.py
```
2023-11-15 09:23:46 +00:00
Jelmer Vernooij
9d76e4e0b9 isort: Support disabling sections with `no-sections = true` (#8657)
## Summary

This adds a ``no-sections`` option for isort in the linter, similar to
the ``no_sections`` option that exists in upstream isort
(https://pycqa.github.io/isort/docs/configuration/options.html#no-sections)

This option puts all imports except for ``__future__`` into the same
section, and is mostly used by monorepos.
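
A rough sketch of the intended effect (hedged; the module names are
placeholders, and the exact ordering inside the merged section depends on the
other isort settings):

```python
from __future__ import annotations

# With `no-sections` enabled, stdlib, third-party, and first-party imports all
# land in a single combined section instead of three separate groups:
import numpy as np
import os
import sys
from mypackage import helper
```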

I've taken a bit of a leap in assuming that ruff wants to support the
exact same option; more than happy to refactor if you'd prefer a
different way of setting this up.

Fixes #8653

## Test Plan

I've added a test and have run it on a large Python codebase that uses
isort with --no-sections. The option is disabled by default.
2023-11-14 21:45:51 +00:00
bluthej
561277925f [isort] Simplify code structure for ordering imports (#8685)
While fixing #8661 I noticed that the code structure for sorting imports
could be simplified.

## Summary

- Move the logic for `force_sort_within_sections` from `isort/mod.rs` to
`isort/ordering.rs` => now there is just one line in `isort/mod.rs`:
`let imports = order_imports(import_block, settings);` which yields the
sorted imports
- Change the function signature of `order_imports` to directly return a
`Vec<EitherImport<'a>>` => no need for `OrderedImportBlock`

I think this is a bit of an improvement because the code is simpler and
there should be a bit of a speedup when setting
`force-sort-within-sections` to true. Indeed, when it's set to true
we're now directly ordering all the imports, whereas before we would
first order the straight imports, then the from imports, combine them
and finally sort the combination a second time (this is probably not
noticeable in practice though).

## Test Plan

No tests added, this is a simple refactor.
2023-11-14 16:43:46 -05:00
doolio
1074324c52 Add starlette as a user of ruff (#8672)
Mentioned [here on
discord](https://discord.com/channels/1039017663004942429/1039017663512449056/1173726569852837958).
Used their GitHub URL, as that seems to be the most common URL type (though
not for Litestar).
2023-11-14 08:38:12 -06:00
Dhruv Manilawala
4099b9610f F-strings doesn't contain bytes literal for PLW0129 (#8675)
For the `PLW0129` rule, the f-string case shouldn't match against bytes
literals, as f-strings cannot contain them. F-strings are made up of
either string literals or formatted expressions.
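
For context, a rough sketch of what the rule flags (hedged example, not taken
from the test suite):

```python
# PLW0129 flags asserts whose first argument is a truthy literal, since the
# assertion can never fail:
assert "config loaded"        # flagged: string literal
assert b"config loaded"       # flagged: bytes literal
assert f"config {'loaded'}"   # flagged as a string literal; an f-string can
                              # never be a bytes literal, which is the case
                              # this change stops matching against
```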
2023-11-14 18:56:18 +05:30
Charlie Marsh
f7d249ae06 Remove repeated and erroneous scoped settings headers in docs (#8670)
Closes https://github.com/astral-sh/ruff/issues/8505.
2023-11-14 05:44:30 +00:00
Charlie Marsh
bf2cc3f520 Add autotyping-like return type inference for annotation rules (#8643)
## Summary

This PR adds (unsafe) fixes to the flake8-annotations rules that enforce
missing return types, offering to automatically insert type annotations
for functions with literal return values. The logic is smart enough to
generate simplified unions (e.g., `float` instead of `int | float`) and
deal with implicit returns (`return` without a value).
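
A hedged sketch of the kind of fix this enables (the function names are made
up):

```python
# Before: flake8-annotations flags the missing return type, and the new
# unsafe fix can infer one from the literal return values.
def ratio(x: int):
    if x > 0:
        return 1      # int
    return 2.5        # float


# After applying the fix, `int | float` is simplified to `float`; a bare
# `return` (or falling off the end) would have contributed `None` instead.
def ratio_fixed(x: int) -> float:
    if x > 0:
        return 1
    return 2.5
```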

Closes https://github.com/astral-sh/ruff/issues/1640 (though we could
open a separate issue for referring parameter types).

Closes https://github.com/astral-sh/ruff/issues/8213.

## Test Plan

`cargo test`
2023-11-13 23:34:15 -05:00
bluthej
23c819b4b3 Fix ordering for force-sort-within-sections (#8665)
Fixes #8661 

## Summary

Imports like `from x import y` don't have an "asname" for the module, so
they were placed before imports like `import x as w` since `None` <
`Some(s)` for any string s.
The fix is to first sort by `first_alias`, since it's `None` for `import
x as w`, and then by `asname`.

## Test Plan

I included the example from the issue to avoid future regressions.
2023-11-13 18:27:56 -05:00
Adrian
16060670b8 Add new rule to check for useless quote escapes (#8630)
When using the autofixer for `Q000`, it does not remove the backslashes
from quotes that no longer need escaping.

This new rule checks for such backslashes (regardless of whether they come
from the autofixer or not) and can remove them.
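
A hedged sketch of what the new rule flags and fixes:

```python
# The backslashes are unnecessary because the string is single-quoted:
greeting = 'She said \"hi\"'    # flagged; the fix strips the backslashes
greeting = 'She said "hi"'      # result after the fix
```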

fixes #8617
2023-11-13 21:59:37 +00:00
Charlie Marsh
534fc34f11 Extend unnecessary-pass (PIE790) to include ellipses in preview (#8641)
## Summary

This PR extends `unnecessary-pass` (`PIE790`) to flag unnecessary
ellipsis expressions in addition to `pass` statements. A `pass` is
equivalent to a standalone `...`, so it feels correct to me that a
single rule should cover both cases.
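
A hedged sketch of the preview behavior:

```python
def handler():
    """Handle the event."""
    ...   # flagged in preview: the docstring already forms a valid body,
          # so this `...` is as redundant as a trailing `pass` would be
```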

When we look to v0.2.0, we should also consider deprecating `PYI013`,
which flags ellipses only for classes.

Closes https://github.com/astral-sh/ruff/issues/8602.
2023-11-13 19:28:16 +00:00
Charlie Marsh
df9ade7fd9 Use AST transformer for relocate (#8660) 2023-11-13 13:24:27 -05:00
Charlie Marsh
345e1401cf Treat class C: ... and class C(): ... equivalently (#8659)
## Summary

These should be seen as identical from the `ComparableAst` perspective.
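
In other words (hedged illustration):

```python
# These two definitions now compare as equal under `ComparableAst`, so any
# logic that relies on AST comparison treats them interchangeably:
class C: ...


class C(): ...
```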
2023-11-13 18:03:04 +00:00
Charlie Marsh
a8e0d4ab4f Fix lingering generated reference for MkDocs (#8658)
Missed this in #8652.
2023-11-13 18:00:01 +00:00
Alan Du
6f23bdb78f Generalize PIE807 to handle dict literals (#8608)
## Summary

PIE807 will rewrite `lambda: []` to `list` -- AFAICT though, the same
rationale also applies to dicts, so I've modified the code to also
rewrite `lambda: {}` to `dict`.
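
A hedged sketch of the extended fix (the field names are illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class Config:
    tags: list[str] = field(default_factory=lambda: [])        # PIE807, fixed to `list`
    extra: dict[str, int] = field(default_factory=lambda: {})  # now also PIE807,
                                                                # fixed to `dict`
```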

Two things I'm not sure about:
* Should this go to a new rule? This no longer actually matches the
behavior of flake8-pie, and while I think thematically it makes sense to
be part of the same rule, we could make it a standalone rule (but if so,
where should I put it and what error code should I use)?
* If we want a single rule, are there backwards compatibility concerns
with the rule name change (from `reimplemented_list_builtin` to
`reimplemented_container_builtin`)?

## Test Plan

Added snapshot tests of the functionality.
2023-11-13 17:55:17 +00:00
Charlie Marsh
d574fcd1ac Compare formatted and unformatted ASTs during formatter tests (#8624)
## Summary

This PR implements validation in the formatter tests to ensure that we
don't modify the AST during formatting. Black has similar logic.

In implementing this, I learned that Black actually _does_ modify the
AST, and their test infrastructure normalizes the AST to wipe away those
differences. Specifically, Black changes the indentation of docstrings,
which _does_ modify the AST; and it also inserts parentheses in `del`
statements, which changes the AST too.

Ruff also does both these things, so we _also_ implement the same
normalization using a new visitor that allows for modifying the AST.
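
A hedged illustration of why both normalizations touch the AST:

```python
def f():
    """A docstring whose
        continuation line is re-indented by the formatter, changing the value
        of the string constant (and hence the AST node)."""


a, b = 1, 2
del (a, b)   # parenthesized: a single tuple target in the AST, whereas
             # `del a, b` has two separate name targets
```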

Closes https://github.com/astral-sh/ruff/issues/8184.

## Test Plan

`cargo test`
2023-11-13 17:43:27 +00:00
Charlie Marsh
3592f44ade Allow whitespace around colon in slices for whitespace-before-punctuation (E203) (#8654)
## Summary

This PR makes `whitespace-before-punctuation` (`E203`) compatible with
the formatter by relaxing the rule a bit, as compared to the pycodestyle
implementation. It's also more consistent with PEP 8, which says:

> However, in a slice the colon acts like a binary operator, and should
have equal amounts on either side (treating it as the operator with the
lowest priority).
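
A hedged sketch of the relaxed behavior:

```python
ham = list(range(10))
lower, upper, offset = 1, 5, 2

sliced = ham[lower + offset : upper + offset]   # no longer flagged: balanced
                                                # whitespace around a slice colon
pair = (1 , 2)                                  # still E203: whitespace before a comma
```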

Closes https://github.com/astral-sh/ruff/issues/7259.
Closes https://github.com/astral-sh/ruff/issues/8642.

## Test Plan

`cargo test`
2023-11-13 12:16:13 -05:00
Andrew Gallant
8984072df2 ruff_python_formatter: copy and inline shared traits (#8656)
It seems as though using `include!(...)` to avoid the source code copy
breaks rust-analyzer. Namely, it treats the included file as unlinked,
and so any part of analysis (e.g., goto-definition) that needs that file
to reason about the code ends up failing.

2023-11-13 12:16:04 -05:00
Charlie Marsh
6a6de53722 Omit Insiders-only plugin when building docs on CI (#8652) 2023-11-13 10:24:58 -05:00
dependabot[bot]
5ba852a878 Bump annotate-snippets from 0.9.1 to 0.9.2 (#8646) 2023-11-13 14:55:15 +00:00
dependabot[bot]
c4fc2b8584 Bump pyproject-toml from 0.8.0 to 0.8.1 (#8648) 2023-11-13 14:53:47 +00:00
dependabot[bot]
1c5f2288ba Bump fs-err from 2.9.0 to 2.10.0 (#8649) 2023-11-13 09:38:44 -05:00
dependabot[bot]
62f1830898 Bump quick-junit from 0.3.3 to 0.3.5 (#8645) 2023-11-13 09:38:31 -05:00
dependabot[bot]
abca0a86ea Bump smallvec from 1.11.1 to 1.11.2 (#8647) 2023-11-13 09:38:11 -05:00
Charlie Marsh
213d315373 Avoid recommending Self usages in metaclasses (#8639)
PEP 673 forbids the use of `typing(_extensions).Self` in metaclasses, so
we want to avoid flagging `PYI034` on metaclasses. This is based on an
analogous change in `flake8-pyi`:
https://github.com/PyCQA/flake8-pyi/pull/436.
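
A hedged sketch, mirroring the new fixture cases further down in this diff:

```python
class Meta(type):
    def __new__(cls) -> "Meta": ...     # no longer flagged: suggesting `Self`
    def __enter__(self) -> "Meta": ...  # on a metaclass would violate PEP 673
```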

Closes https://github.com/astral-sh/ruff/issues/8353.
2023-11-12 19:47:48 -05:00
Charlie Marsh
7fd95e15d9 Document conventions in the FAQ (#8638)
Enumerates all rules defined in each convention in the FAQ. These lists
mirror
[pydocstyle](https://www.pydocstyle.org/en/latest/error_codes.html#default-conventions).

Closes https://github.com/astral-sh/ruff/issues/8573.
2023-11-12 22:56:39 +00:00
Charlie Marsh
02946e7b0c Redirect from rule codes to rule pages in docs (#8636)
## Summary

This adds redirects from, e.g., `https://docs.astral.sh/ruff/rules/F401`
to `https://docs.astral.sh/ruff/rules/unused-import`, which are
generated automatically when creating the documentation. Though we want
to move towards human-readable names eventually, I think this is a nice
and user-friendly change (and doesn't require any fancy infrastructure,
since the redirects are handled via a plugin and added client-side).

Closes #4710.
2023-11-12 17:47:10 -05:00
Charlie Marsh
cbd9157bbf Use function range for no-self-use (#8637)
Previously, this rule used the range of the `self` annotation, but it's
a lot more natural to use the range of the function name (since it also
means the `# noqa` is associated with the method rather than its first
argument).
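
For example (hedged sketch, assuming `PLR6301` as the no-self-use rule code):

```python
class Greeter:
    def greet(  # noqa: PLR6301  -- the suppression now attaches to the
        self,   # function name rather than to `self` down here
        name,
    ):
        return f"hi {name}"
```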

Closes https://github.com/astral-sh/ruff/issues/8635.
2023-11-12 16:37:52 -05:00
Charlie Marsh
70f491d31e Omit unrolled augmented assignments in PIE794 (#8634)
Closes https://github.com/astral-sh/ruff/issues/8497.
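
A hedged sketch, based on the new fixture cases below:

```python
class Person:
    name = "Foo"
    name = name + " Bar"   # an unrolled augmented assignment: no longer flagged
    name = "Bar"           # a plain redefinition: still PIE794
```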
2023-11-12 20:40:33 +00:00
Jonathan Plasse
776eb8724f Fix FBT001 false negative with unions and optional (#7501)
## Summary

- Close #7487

In the spirit of `flake8-boolean-trap`, any positional argument that can
accept a boolean should raise `FBT001`.
Raise `FBT001` for all annotations that accept booleans (e.g.
`Optional[bool]`, `Union[int, bool]`).
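
A hedged sketch of the newly covered annotations:

```python
from typing import Optional, Union


def render(flag: bool): ...                 # already flagged
def render_a(flag: bool | None): ...        # now flagged
def render_b(flag: Optional[bool]): ...     # now flagged
def render_c(flag: Union[int, bool]): ...   # now flagged
```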

## Test Plan

Add a fixture, with an annotation using `|`, `Optional`, and `Union`,
and containing a boolean.
2023-11-12 15:09:23 -05:00
Charlie Wilson
5f78580775 Remove unecessary commentary in PD901 message (#8625)
## Summary

Removes the unnecessary commentary from the PD901 message. This does make it
different from pandas-vet, but it improves consistency with the rest of the
messages.

Current Message:

> `df` is a bad variable name. Be kinder to your future self.


New Message

> `df` is a bad variable name.


## Test Plan

The relevant snapshot has been updated with the new message.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-11-12 17:20:05 +00:00
Bodo Graumann
4d301f6dcc Improve docs for RUF001, RUF002 and RUF003 (#8628)
I got an error from RUF001 and wanted to override it, but how to do that was
not quite obvious. In the process I have tried to improve the
documentation for the rule and its siblings.
2023-11-12 17:19:25 +00:00
Charlie Marsh
96b265ccec Implement autofix for multiple-spaces-after-operator and multiple-spaces-before-operator (#8623) 2023-11-11 23:46:16 +00:00
Charlie Marsh
e0a0ddcf7d Implement autofix for multiple-spaces-after-keyword and multiple-spaces-before-keyword (#8622)
Closes https://github.com/astral-sh/ruff/issues/8312.
2023-11-11 23:41:12 +00:00
Charlie Marsh
9724dfd939 Implement autofix for unnecessary-lambda (PLW0108) (#8621)
Closes https://github.com/astral-sh/ruff/issues/8618.
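
A hedged sketch of the new fix:

```python
items = ["b", "a"]

sorted(items, key=lambda x: str.upper(x))   # PLW0108: the lambda just forwards
sorted(items, key=str.upper)                # its argument, so the fix inlines it
```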
2023-11-11 18:34:02 -05:00
Steven DeMartini
d7144d6d8e Fix docs typo for ruff format preview configuration (#8611)
## Summary

The ruff configuration section is called "format", rather than
"preview". Using the configuration as it was written in the docs gives
an error:

```
$ ruff format --check .
ruff failed
  Cause: TOML parse error at line 143, column 1
    |
143 | [tool.ruff.preview]
    | ^^^^^^^^^^^^^^^^^^^
invalid type: map, expected a boolean
```

## Test Plan

Tested running `ruff format` with the following in my `pyproject.toml`:

```toml
[tool.ruff.format]
preview = true
```

and it worked properly (using preview rules for formatting).
2023-11-11 03:07:32 +00:00
Jesse Serrao
39728a1198 Add check for is comparison with mutable initialisers to rule F632 (#8607)
## Summary

Adds an extra check to F632 for any `is` comparisons to
mutable initialisers.
Implements #8589.

Example:
```Python
named_var = {}
if named_var is {}:  # F632 (fix)
    pass
```
The `if` condition will always evaluate to `False` because the comparison is
by identity, and an expression can never share an identity with a hard-coded
list/set/dict initializer.

## Test Plan

Multiple test cases were added to ensure that the rule works, that it doesn't
flag false positives, and that the fix works correctly.
2023-11-11 00:29:23 +00:00
Shantanu
8207d6df82 Fix unnecessary parentheses in UP007 fix (#8610)
Fixes #8609
2023-11-10 19:15:09 -05:00
Jake Park
c8edac9d2b [pylint] Implement redefined-argument-from-local (R1704) (#8159)
## Summary

It implements Pylint rule R1704: redefined-argument-from-local

Problematic code:
```python
def show(host_id=10.11):
    # +1: [redefined-argument-from-local]
    for host_id, host in [[12.13, "Venus"], [14.15, "Mars"]]:
        print(host_id, host)
```

Correct code:
```python
def show(host_id=10.11):
    for inner_host_id, host in [[12.13, "Venus"], [14.15, "Mars"]]:
        print(host_id, inner_host_id, host)
```

References:
[Pylint
documentation](https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/redefined-argument-from-local.html)
[Related Issue](https://github.com/astral-sh/ruff/issues/970)

## Test Plan

`cargo test`
2023-11-10 14:13:07 -05:00
Alan Du
5a1a8bebca Allow overriding pydocstyle convention rules (#8586)
## Summary

This fixes #2606 by moving where we apply the convention ignores: instead
of applying them at the very end, we now track which rules have been
specifically enabled (via `Specificity::Rule`). If they have, then we do
*not* apply the docstring overrides at the end.

## Test Plan

Added unit tests to `ruff_workspace` and an integration test to
`ruff_cli`
2023-11-10 18:47:37 +00:00
Dhruv Manilawala
3e00ddce38 Preserve trailing semicolon for Notebooks (#8590)
## Summary

This PR updates the formatter to preserve trailing semicolon for Jupyter
Notebooks.

The motivation behind the change is that semicolons in notebooks are
typically used to hide the output, for example when plotting. This is
highlighted in the linked issue.

The conditions under which the trailing semicolon is preserved are (see the
sketch after this list):
1. It should be a top-level statement which is the last in the module.
2. For statements, it can be an assignment, annotated assignment, or
augmented assignment whose target is a single identifier (multiple
assignments and tuple unpacking aren't considered).
3. For expressions, any expression qualifies.
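
A hedged sketch of a formatted cell:

```python
x = 1    # a trailing semicolon here would be removed as usual
x + 1;   # kept: the last top-level statement, where the semicolon hides the output
```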

## Test Plan

Add a new integration test in `ruff_cli`. The test notebook basically
acts as a document as to which trailing semicolons are to be preserved.

fixes: #8254
2023-11-10 21:53:35 +05:30
Andrew Gallant
a7dbe9d670 refine pyupgrade's TimeoutErrorAlias lint (UP041) to remove false positives (#8587)
Previously, this lint had its alias detection logic a little
backwards. That is, for Python 3.11+, it would *only* detect
asyncio.TimeoutError as an alias, but it should have also detected
socket.timeout as an alias. And in Python <3.11, it would falsely
detect asyncio.TimeoutError as an alias where it should have only
detected socket.timeout as an alias.

We fix it so that both asyncio.TimeoutError and socket.timeout are
detected as aliases in Python 3.11+, and only socket.timeout is
detected as an alias in Python 3.10.
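
A hedged sketch of the corrected behavior:

```python
import asyncio
import socket

try:
    ...
except asyncio.TimeoutError:   # flagged only on 3.11+, where it aliases TimeoutError
    ...
except socket.timeout:         # flagged on 3.10 and on 3.11+
    ...
```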

Fixes #8565

## Test Plan

I tested this by updating the existing snapshot test, which had erroneously
asserted that socket.timeout should not be replaced with TimeoutError in
Python 3.11+. I also added a new regression test that targets Python 3.10
and ensures that the suggestion to replace asyncio.TimeoutError with
TimeoutError does not occur.
2023-11-10 10:15:33 -05:00
Charlie Marsh
036b6bc0bd Document context manager breaking deviation vs. Black (#8597)
Closes https://github.com/astral-sh/ruff/issues/8180.
Closes https://github.com/astral-sh/ruff/issues/8580.
Closes https://github.com/astral-sh/ruff/issues/7441.
2023-11-10 04:32:29 +00:00
Dhruv Manilawala
d5606b7705 Consider the new f-string tokens for flake8-commas (#8582)
## Summary

This fixes the bug where the `flake8-commas` rules weren't taking the
new f-string tokens into account.
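
A minimal sketch, mirroring one of the new fixture cases:

```python
value = "comma"
kwargs = {"remove": None}

# The trailing comma after the f-string argument is now visible to the
# flake8-commas rules, despite the intervening f-string tokens:
kwargs.pop("remove", f"this {value}",)
```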

## Test Plan

Add new test cases around f-strings for all of `flake8-commas`'s rules.

fixes: #8556
2023-11-10 09:49:14 +05:30
Charlie Marsh
7968e190dd Write unchanged, excluded files to stdout when read via stdin (#8596)
## Summary

When you run Ruff via stdin, and pass `format` or `check --fix`, we
typically write the changed or unchanged contents to stdout. It turns
out we forgot to do this when the file is _excluded_, so if you run
`ruff format /path/to/excluded/file.py`, we don't write _anything_ to
`stdout`. This led to a bug in the LSP whereby we deleted file contents
for third-party files.

The right thing to do here is write back the unchanged contents, as it
should always be safe to write the output of stdout back to a file.
2023-11-09 23:15:01 -05:00
Charlie Marsh
346a828db2 Add a BindingKind for WithItem variables (#8594) 2023-11-09 22:44:49 -05:00
Charlie Marsh
0ac124acef Make unpacked assignment a flag rather than a BindingKind (#8595)
## Summary

An assignment can be _both_ (e.g.) a loop variable _and_ assigned via
unpacking. In other words, unpacking is a quality of an assignment, not
a _kind_.
2023-11-09 21:41:30 -05:00
Adrian
4ebd0bd31e Support local and dynamic class- and static-method decorators (#8592)
## Summary

This brings Ruff's behavior in line with what `pep8-naming` already does
and thus closes #8397.

I had initially implemented this to look at the last segment of a dotted
path only when the entry in the `*-decorators` setting started with a
`.`, but in the end I thought it's better to remain consistent with
`pep8-naming` and match against the last segment of the
decorator name in any case.

If you prefer to diverge from this in favor of less ambiguity in the
configuration, let me know and I'll change it so you would need to put
e.g. `.expression` in the `classmethod-decorators` list.

## Test Plan

Tested against the file in the issue linked below, plus the new testcase
added in this PR.
2023-11-10 02:04:25 +00:00
Zanie Blue
565ddebb15 Improve detection of TYPE_CHECKING blocks imported from typing_extensions or _typeshed (#8429)
~Improves detection of types imported from `typing_extensions`. Removes
the hard-coded list of supported types in `typing_extensions`; instead
assuming all types could be imported from `typing`, `_typeshed`, or
`typing_extensions`.~

~The typing extensions package appears to re-export types even if they
do not need modification.~


Adds detection of `if typing_extensions.TYPE_CHECKING` blocks. Avoids
inserting a new `if TYPE_CHECKING` block and `from typing import
TYPE_CHECKING` if `typing_extensions.TYPE_CHECKING` is used (closes
https://github.com/astral-sh/ruff/issues/8427)
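
A hedged sketch of what is now recognized (assuming `typing_extensions` is
installed):

```python
import typing_extensions

if typing_extensions.TYPE_CHECKING:
    # This block is now treated as a type-checking block, so the TCH fixes
    # will move imports here instead of inserting a separate
    # `from typing import TYPE_CHECKING` block.
    import collections.abc
```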

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-11-09 12:21:03 -06:00
205 changed files with 8603 additions and 1164 deletions

View File

@@ -392,7 +392,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: mkdocs build --strict -f mkdocs.generated.yml
run: mkdocs build --strict -f mkdocs.public.yml
check-formatter-instability-and-black-similarity:
name: "formatter instabilities and black similarity"

View File

@@ -44,7 +44,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: mkdocs build --strict -f mkdocs.generated.yml
run: mkdocs build --strict -f mkdocs.public.yml
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.3.2

90
Cargo.lock generated
View File

@@ -64,9 +64,9 @@ checksum = "c7021ce4924a3f25f802b2cccd1af585e39ea1a363a1aa2e72afe54b67a3a7a7"
[[package]]
name = "annotate-snippets"
version = "0.9.1"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3b9d411ecbaf79885c6df4d75fff75858d5995ff25385657a28af47e82f9c36"
checksum = "ccaf7e9dfbb6ab22c82e473cd1a8a7bd313c19a5b7e40970f3d89ef5a5c9e81e"
dependencies = [
"unicode-width",
"yansi-term",
@@ -278,9 +278,7 @@ checksum = "7f2c685bad3eb3d45a01354cedb7d5faa66194d1d58ba6e267a8de788f79db38"
dependencies = [
"android-tzdata",
"iana-time-zone",
"js-sys",
"num-traits",
"wasm-bindgen",
"windows-targets 0.48.5",
]
@@ -829,7 +827,7 @@ dependencies = [
"serde_json",
"strum",
"strum_macros",
"toml",
"toml 0.7.8",
]
[[package]]
@@ -859,9 +857,12 @@ dependencies = [
[[package]]
name = "fs-err"
version = "2.9.0"
version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0845fa252299212f0389d64ba26f34fa32cfe41588355f21ed507c59a0f64541"
checksum = "fb5fd9bcbe8b1087cbd395b51498c01bc997cef73e778a80b77a811af5e2d29f"
dependencies = [
"autocfg",
]
[[package]]
name = "fsevent-sys"
@@ -927,9 +928,9 @@ checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888"
[[package]]
name = "hashbrown"
version = "0.14.0"
version = "0.14.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c6201b9ff9fd90a5a3bac2e56a830d0caa509576f0e503818ee82c181b3437a"
checksum = "f93e7192158dbcda357bdec5fb5788eebf8bbac027f3f33e719d29135ae84156"
[[package]]
name = "heck"
@@ -1033,12 +1034,12 @@ dependencies = [
[[package]]
name = "indexmap"
version = "2.0.0"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d5477fe2230a79769d8dc68e0eabf5437907c0457a5614a9e8dddb67f65eb65d"
checksum = "d530e1a18b1cb4c484e6e34556a0d948706958449fca0cab753d649f2bce3d1f"
dependencies = [
"equivalent",
"hashbrown 0.14.0",
"hashbrown 0.14.2",
"serde",
]
@@ -1801,36 +1802,37 @@ dependencies = [
[[package]]
name = "pyproject-toml"
version = "0.8.0"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0774c13ff0b8b7ebb4791c050c497aefcfe3f6a222c0829c7017161ed38391ff"
checksum = "46d4a5e69187f23a29f8aa0ea57491d104ba541bc55f76552c2a74962aa20e04"
dependencies = [
"indexmap",
"pep440_rs",
"pep508_rs",
"serde",
"toml",
"toml 0.8.2",
]
[[package]]
name = "quick-junit"
version = "0.3.3"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6bf780b59d590c25f8c59b44c124166a2a93587868b619fb8f5b47fb15e9ed6d"
checksum = "1b9599bffc2cd7511355996e0cfd979266b2cfa3f3ff5247d07a3a6e1ded6158"
dependencies = [
"chrono",
"indexmap",
"nextest-workspace-hack",
"quick-xml",
"strip-ansi-escapes",
"thiserror",
"uuid",
]
[[package]]
name = "quick-xml"
version = "0.29.0"
version = "0.31.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81b9228215d82c7b61490fec1de287136b5de6f5700f6e58ea9ad61a7964ca51"
checksum = "1004a344b30a54e2ee58d66a71b32d2db2feb0a31f9a2d302bf0536f15de2a33"
dependencies = [
"memchr",
]
@@ -2062,7 +2064,7 @@ dependencies = [
name = "ruff_cli"
version = "0.1.5"
dependencies = [
"annotate-snippets 0.9.1",
"annotate-snippets 0.9.2",
"anyhow",
"argfile",
"assert_cmd",
@@ -2152,7 +2154,7 @@ dependencies = [
"strum",
"strum_macros",
"tempfile",
"toml",
"toml 0.7.8",
"tracing",
"tracing-indicatif",
"tracing-subscriber",
@@ -2199,7 +2201,7 @@ name = "ruff_linter"
version = "0.1.5"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.1",
"annotate-snippets 0.9.2",
"anyhow",
"bitflags 2.4.1",
"chrono",
@@ -2252,7 +2254,7 @@ dependencies = [
"tempfile",
"test-case",
"thiserror",
"toml",
"toml 0.7.8",
"typed-arena",
"unicode-width",
"unicode_names2",
@@ -2538,7 +2540,7 @@ dependencies = [
"shellexpand",
"strum",
"tempfile",
"toml",
"toml 0.7.8",
]
[[package]]
@@ -2802,9 +2804,9 @@ checksum = "38b58827f4464d87d377d175e90bf58eb00fd8716ff0a62f80356b5e61555d0d"
[[package]]
name = "smallvec"
version = "1.11.1"
version = "1.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "942b4a808e05215192e39f4ab80813e599068285906cc91aa64f923db842bd5a"
checksum = "4dccd0940a2dcdf68d092b8cbab7dc0ad8fa938bf95787e1b916b0e3d0e8e970"
[[package]]
name = "spin"
@@ -2831,6 +2833,15 @@ dependencies = [
"precomputed-hash",
]
[[package]]
name = "strip-ansi-escapes"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55ff8ef943b384c414f54aefa961dd2bd853add74ec75e7ac74cf91dba62bcfa"
dependencies = [
"vte",
]
[[package]]
name = "strsim"
version = "0.10.0"
@@ -3086,7 +3097,19 @@ dependencies = [
"serde",
"serde_spanned",
"toml_datetime",
"toml_edit",
"toml_edit 0.19.15",
]
[[package]]
name = "toml"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "185d8ab0dfbb35cf1399a6344d8484209c088f75f8f68230da55d48d95d43e3d"
dependencies = [
"serde",
"serde_spanned",
"toml_datetime",
"toml_edit 0.20.2",
]
[[package]]
@@ -3111,6 +3134,19 @@ dependencies = [
"winnow",
]
[[package]]
name = "toml_edit"
version = "0.20.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "396e4d48bbb2b7554c944bde63101b5ae446cff6ec4a24227428f15eb72ef338"
dependencies = [
"indexmap",
"serde",
"serde_spanned",
"toml_datetime",
"winnow",
]
[[package]]
name = "tracing"
version = "0.1.40"

View File

@@ -38,7 +38,7 @@ serde = { version = "1.0.190", features = ["derive"] }
serde_json = { version = "1.0.108" }
shellexpand = { version = "3.0.0" }
similar = { version = "2.3.0", features = ["inline"] }
smallvec = { version = "1.11.1" }
smallvec = { version = "1.11.2" }
static_assertions = "1.1.0"
strum = { version = "0.25.0", features = ["strum_macros"] }
strum_macros = { version = "0.25.3" }

View File

@@ -54,7 +54,7 @@ Ruff is extremely actively developed and used in major open-source projects like
- [Pandas](https://github.com/pandas-dev/pandas)
- [SciPy](https://github.com/scipy/scipy)
...and many more.
...and [many more](#whos-using-ruff).
Ruff is backed by [Astral](https://astral.sh). Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff),
or the original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
@@ -377,8 +377,8 @@ Ruff is used by a number of major open-source projects and companies, including:
- Anthropic ([Python SDK](https://github.com/anthropics/anthropic-sdk-python))
- [Apache Airflow](https://github.com/apache/airflow)
- AstraZeneca ([Magnus](https://github.com/AstraZeneca/magnus-core))
- Benchling ([Refac](https://github.com/benchling/refac))
- [Babel](https://github.com/python-babel/babel)
- Benchling ([Refac](https://github.com/benchling/refac))
- [Bokeh](https://github.com/bokeh/bokeh)
- [Cryptography (PyCA)](https://github.com/pyca/cryptography)
- [DVC](https://github.com/iterative/dvc)
@@ -389,15 +389,16 @@ Ruff is used by a number of major open-source projects and companies, including:
- [Gradio](https://github.com/gradio-app/gradio)
- [Great Expectations](https://github.com/great-expectations/great_expectations)
- [HTTPX](https://github.com/encode/httpx)
- [Hatch](https://github.com/pypa/hatch)
- [Home Assistant](https://github.com/home-assistant/core)
- Hugging Face ([Transformers](https://github.com/huggingface/transformers),
[Datasets](https://github.com/huggingface/datasets),
[Diffusers](https://github.com/huggingface/diffusers))
- [Hatch](https://github.com/pypa/hatch)
- [Home Assistant](https://github.com/home-assistant/core)
- ING Bank ([popmon](https://github.com/ing-bank/popmon), [probatus](https://github.com/ing-bank/probatus))
- [Ibis](https://github.com/ibis-project/ibis)
- [Jupyter](https://github.com/jupyter-server/jupyter_server)
- [LangChain](https://github.com/hwchase17/langchain)
- [Litestar](https://litestar.dev/)
- [LlamaIndex](https://github.com/jerryjliu/llama_index)
- Matrix ([Synapse](https://github.com/matrix-org/synapse))
- [MegaLinter](https://github.com/oxsecurity/megalinter)
@@ -422,20 +423,20 @@ Ruff is used by a number of major open-source projects and companies, including:
- [PostHog](https://github.com/PostHog/posthog)
- Prefect ([Python SDK](https://github.com/PrefectHQ/prefect), [Marvin](https://github.com/PrefectHQ/marvin))
- [PyInstaller](https://github.com/pyinstaller/pyinstaller)
- [PyMC-Marketing](https://github.com/pymc-labs/pymc-marketing)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Pydantic](https://github.com/pydantic/pydantic)
- [Pylint](https://github.com/PyCQA/pylint)
- [PyMC-Marketing](https://github.com/pymc-labs/pymc-marketing)
- [Reflex](https://github.com/reflex-dev/reflex)
- [Rippling](https://rippling.com)
- [Robyn](https://github.com/sansyrox/robyn)
- Scale AI ([Launch SDK](https://github.com/scaleapi/launch-python-client))
- Snowflake ([SnowCLI](https://github.com/Snowflake-Labs/snowcli))
- [Saleor](https://github.com/saleor/saleor)
- Scale AI ([Launch SDK](https://github.com/scaleapi/launch-python-client))
- [SciPy](https://github.com/scipy/scipy)
- Snowflake ([SnowCLI](https://github.com/Snowflake-Labs/snowcli))
- [Sphinx](https://github.com/sphinx-doc/sphinx)
- [Stable Baselines3](https://github.com/DLR-RM/stable-baselines3)
- [Litestar](https://litestar.dev/)
- [Starlette](https://github.com/encode/starlette)
- [The Algorithms](https://github.com/TheAlgorithms/Python)
- [Vega-Altair](https://github.com/altair-viz/altair)
- WordPress ([Openverse](https://github.com/WordPress/openverse))

View File

@@ -28,7 +28,7 @@ ruff_python_trivia = { path = "../ruff_python_trivia" }
ruff_workspace = { path = "../ruff_workspace" }
ruff_text_size = { path = "../ruff_text_size" }
annotate-snippets = { version = "0.9.1", features = ["color"] }
annotate-snippets = { version = "0.9.2", features = ["color"] }
anyhow = { workspace = true }
argfile = { version = "0.1.6" }
bincode = { version = "1.3.3" }

View File

@@ -0,0 +1,413 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "4f8ce941-1492-4d4e-8ab5-70d733fe891a",
"metadata": {},
"outputs": [],
"source": [
"%config ZMQInteractiveShell.ast_node_interactivity=\"last_expr_or_assign\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "721ec705-0c65-4bfb-9809-7ed8bc534186",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment statement without a semicolon\n",
"x = 1"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "de50e495-17e5-41cc-94bd-565757555d7e",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement with a semicolon\n",
"x = 1;\n",
"x = 1;"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "39e31201-23da-44eb-8684-41bba3663991",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6b73d3dd-c73a-4697-9e97-e109a6c1fbab",
"metadata": {},
"outputs": [],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1;\n",
"x += 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2a3e5b86-aa5b-46ba-b9c6-0386d876f58c",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment without a semicolon\n",
"x = y = 1"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "07f89e51-9357-4cfb-8fc5-76fb75e35949",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment with a semicolon\n",
"x = y = 1;\n",
"x = y = 1;"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c22b539d-473e-48f8-a236-625e58c47a00",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking without a semicolon\n",
"x, y = 1, 2"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "12c87940-a0d5-403b-a81c-7507eb06dc7e",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking with a semicolon (irrelevant)\n",
"x, y = 1, 2;\n",
"x, y = 1, 2; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "5a768c76-6bc4-470c-b37e-8cc14bc6caf4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "21bfda82-1a9a-4ba1-9078-74ac480804b5",
"metadata": {},
"outputs": [],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1;\n",
"x: int = 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "09929999-ff29-4d10-ad2b-e665af15812d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment expression without a semicolon\n",
"(x := 1)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "32a83217-1bad-4f61-855e-ffcdb119c763",
"metadata": {},
"outputs": [],
"source": [
"# Assignment expression with a semicolon\n",
"(x := 1);\n",
"(x := 1); # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "61b81865-277e-4964-b03e-eb78f1f318eb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"# Expression without a semicolon\n",
"x"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "974c29be-67e1-4000-95fa-6ca118a63bad",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"# Expression with a semicolon\n",
"x;"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "cfeb1757-46d6-4f13-969f-a283b6d0304f",
"metadata": {},
"outputs": [],
"source": [
"class Point:\n",
" def __init__(self, x, y):\n",
" self.x = x\n",
" self.y = y\n",
"\n",
"\n",
"p = Point(0, 0);"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "2ee7f1a5-ccfe-4004-bfa4-ef834a58da97",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement where the left is an attribute access doesn't\n",
"# print the value.\n",
"p.x = 1;"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "3e49370a-048b-474d-aa0a-3d1d4a73ad37",
"metadata": {},
"outputs": [],
"source": [
"data = {}\n",
"\n",
"# Neither does the subscript node\n",
"data[\"foo\"] = 1;"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d594bdd3-eaa9-41ef-8cda-cf01bc273b2d",
"metadata": {},
"outputs": [],
"source": [
"if (x := 1):\n",
" # It should be the top level statement\n",
" x"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e532f0cf-80c7-42b7-8226-6002fcf74fb6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
") # comment"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "473c5d62-871b-46ed-8a34-27095243f462",
"metadata": {},
"outputs": [],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
"); # comment"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "8c3c2361-f49f-45fe-bbe3-7e27410a8a86",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello world!'"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"Hello world!\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "23dbe9b5-3f68-4890-ab2d-ab0dbfd0712a",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"Hello world!\"\"\"; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "3ce33108-d95d-4c70-83d1-0d4fd36a2951",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'x = 1'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"f\"x = {x}\""
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "654a4a67-de43-4684-824a-9451c67db48f",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"f\"x = {x}\";\n",
"f\"x = {x}\"; # comment\n",
"# comment"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (ruff-playground)",
"language": "python",
"name": "ruff-playground"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -8,7 +8,7 @@ use ruff_workspace::resolver::{match_exclusion, python_file_at_path, PyprojectCo
use crate::args::CliOverrides;
use crate::diagnostics::{lint_stdin, Diagnostics};
use crate::stdin::read_from_stdin;
use crate::stdin::{parrot_stdin, read_from_stdin};
/// Run the linter over a single file, read from `stdin`.
pub(crate) fn check_stdin(
@@ -21,6 +21,9 @@ pub(crate) fn check_stdin(
if pyproject_config.settings.file_resolver.force_exclude {
if let Some(filename) = filename {
if !python_file_at_path(filename, pyproject_config, overrides)? {
if fix_mode.is_apply() {
parrot_stdin()?;
}
return Ok(Diagnostics::default());
}
@@ -29,14 +32,17 @@ pub(crate) fn check_stdin(
.file_name()
.is_some_and(|name| match_exclusion(filename, name, &lint_settings.exclude))
{
if fix_mode.is_apply() {
parrot_stdin()?;
}
return Ok(Diagnostics::default());
}
}
}
let stdin = read_from_stdin()?;
let package_root = filename.and_then(Path::parent).and_then(|path| {
packaging::detect_package_root(path, &pyproject_config.settings.linter.namespace_packages)
});
let stdin = read_from_stdin()?;
let mut diagnostics = lint_stdin(
filename,
package_root,

View File

@@ -15,7 +15,7 @@ use crate::commands::format::{
FormatResult, FormattedSource,
};
use crate::resolve::resolve;
use crate::stdin::read_from_stdin;
use crate::stdin::{parrot_stdin, read_from_stdin};
use crate::ExitStatus;
/// Run the formatter over a single file, read from `stdin`.
@@ -34,6 +34,9 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
if pyproject_config.settings.file_resolver.force_exclude {
if let Some(filename) = cli.stdin_filename.as_deref() {
if !python_file_at_path(filename, &pyproject_config, overrides)? {
if mode.is_write() {
parrot_stdin()?;
}
return Ok(ExitStatus::Success);
}
@@ -42,6 +45,9 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
.file_name()
.is_some_and(|name| match_exclusion(filename, name, &format_settings.exclude))
{
if mode.is_write() {
parrot_stdin()?;
}
return Ok(ExitStatus::Success);
}
}
@@ -50,6 +56,9 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
let path = cli.stdin_filename.as_deref();
let SourceType::Python(source_type) = path.map(SourceType::from).unwrap_or_default() else {
if mode.is_write() {
parrot_stdin()?;
}
return Ok(ExitStatus::Success);
};

View File

@@ -1,5 +1,5 @@
use std::io;
use std::io::Read;
use std::io::{Read, Write};
/// Read a string from `stdin`.
pub(crate) fn read_from_stdin() -> Result<String, io::Error> {
@@ -7,3 +7,11 @@ pub(crate) fn read_from_stdin() -> Result<String, io::Error> {
io::stdin().lock().read_to_string(&mut buffer)?;
Ok(buffer)
}
/// Read bytes from `stdin` and write them to `stdout`.
pub(crate) fn parrot_stdin() -> Result<(), io::Error> {
let mut buffer = String::new();
io::stdin().lock().read_to_string(&mut buffer)?;
io::stdout().write_all(buffer.as_bytes())?;
Ok(())
}

View File

@@ -320,6 +320,11 @@ if __name__ == '__main__':
exit_code: 0
----- stdout -----
from test import say_hy
if __name__ == '__main__':
say_hy("dear Ruff contributor")
----- stderr -----
"###);
Ok(())
@@ -813,3 +818,432 @@ fn test_diff_stdin_formatted() {
----- stderr -----
"###);
}
#[test]
fn test_notebook_trailing_semicolon() {
let fixtures = Path::new("resources").join("test").join("fixtures");
let unformatted = fs::read(fixtures.join("trailing_semicolon.ipynb")).unwrap();
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.ipynb"])
.arg("-")
.pass_stdin(unformatted), @r###"
success: true
exit_code: 0
----- stdout -----
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "4f8ce941-1492-4d4e-8ab5-70d733fe891a",
"metadata": {},
"outputs": [],
"source": [
"%config ZMQInteractiveShell.ast_node_interactivity=\"last_expr_or_assign\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "721ec705-0c65-4bfb-9809-7ed8bc534186",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment statement without a semicolon\n",
"x = 1"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "de50e495-17e5-41cc-94bd-565757555d7e",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement with a semicolon\n",
"x = 1\n",
"x = 1;"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "39e31201-23da-44eb-8684-41bba3663991",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6b73d3dd-c73a-4697-9e97-e109a6c1fbab",
"metadata": {},
"outputs": [],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1\n",
"x += 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2a3e5b86-aa5b-46ba-b9c6-0386d876f58c",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment without a semicolon\n",
"x = y = 1"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "07f89e51-9357-4cfb-8fc5-76fb75e35949",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment with a semicolon\n",
"x = y = 1\n",
"x = y = 1"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c22b539d-473e-48f8-a236-625e58c47a00",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking without a semicolon\n",
"x, y = 1, 2"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "12c87940-a0d5-403b-a81c-7507eb06dc7e",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking with a semicolon (irrelevant)\n",
"x, y = 1, 2\n",
"x, y = 1, 2 # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "5a768c76-6bc4-470c-b37e-8cc14bc6caf4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "21bfda82-1a9a-4ba1-9078-74ac480804b5",
"metadata": {},
"outputs": [],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1\n",
"x: int = 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "09929999-ff29-4d10-ad2b-e665af15812d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment expression without a semicolon\n",
"(x := 1)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "32a83217-1bad-4f61-855e-ffcdb119c763",
"metadata": {},
"outputs": [],
"source": [
"# Assignment expression with a semicolon\n",
"(x := 1)\n",
"(x := 1); # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "61b81865-277e-4964-b03e-eb78f1f318eb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"# Expression without a semicolon\n",
"x"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "974c29be-67e1-4000-95fa-6ca118a63bad",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"# Expression with a semicolon\n",
"x;"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "cfeb1757-46d6-4f13-969f-a283b6d0304f",
"metadata": {},
"outputs": [],
"source": [
"class Point:\n",
" def __init__(self, x, y):\n",
" self.x = x\n",
" self.y = y\n",
"\n",
"\n",
"p = Point(0, 0);"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "2ee7f1a5-ccfe-4004-bfa4-ef834a58da97",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement where the left is an attribute access doesn't\n",
"# print the value.\n",
"p.x = 1"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "3e49370a-048b-474d-aa0a-3d1d4a73ad37",
"metadata": {},
"outputs": [],
"source": [
"data = {}\n",
"\n",
"# Neither does the subscript node\n",
"data[\"foo\"] = 1"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d594bdd3-eaa9-41ef-8cda-cf01bc273b2d",
"metadata": {},
"outputs": [],
"source": [
"if x := 1:\n",
" # It should be the top level statement\n",
" x"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e532f0cf-80c7-42b7-8226-6002fcf74fb6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
") # comment"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "473c5d62-871b-46ed-8a34-27095243f462",
"metadata": {},
"outputs": [],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
"); # comment"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "8c3c2361-f49f-45fe-bbe3-7e27410a8a86",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello world!'"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"Hello world!\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "23dbe9b5-3f68-4890-ab2d-ab0dbfd0712a",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"Hello world!\"\"\"; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "3ce33108-d95d-4c70-83d1-0d4fd36a2951",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'x = 1'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"f\"x = {x}\""
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "654a4a67-de43-4684-824a-9451c67db48f",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"f\"x = {x}\"\n",
"f\"x = {x}\"; # comment\n",
"# comment"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (ruff-playground)",
"language": "python",
"name": "ruff-playground"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
----- stderr -----
"###);
}

View File

@@ -1564,3 +1564,62 @@ extend-safe-fixes = ["UP03"]
Ok(())
}
#[test]
fn check_docstring_conventions_overrides() -> Result<()> {
// But if we explicitly select it, we override the convention
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.pydocstyle]
convention = "numpy"
"#,
)?;
let stdin = r#"
def log(x, base) -> float:
"""Calculate natural log of a value
Parameters
----------
x :
Hello
"""
return math.log(x)
"#;
// If we only select the prefix, then everything passes
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "-", "--config"])
.arg(&ruff_toml)
.args(["--output-format", "text", "--no-cache", "--select", "D41"])
.pass_stdin(stdin),
@r###"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
"###
);
// But if we select the exact code, we get an error
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "-", "--config"])
.arg(&ruff_toml)
.args(["--output-format", "text", "--no-cache", "--select", "D417"])
.pass_stdin(stdin),
@r###"
success: false
exit_code: 1
----- stdout -----
-:2:5: D417 Missing argument description in the docstring for `log`: `base`
Found 1 error.
----- stderr -----
"###
);
Ok(())
}

View File

@@ -129,12 +129,12 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parent_set:
output.push_str("**Example usage**:\n\n");
output.push_str(&format_tab(
"pyproject.toml",
&format_header(parent_set, ConfigurationFile::PyprojectToml),
&format_header(field.scope, parent_set, ConfigurationFile::PyprojectToml),
field.example,
));
output.push_str(&format_tab(
"ruff.toml",
&format_header(parent_set, ConfigurationFile::RuffToml),
&format_header(field.scope, parent_set, ConfigurationFile::RuffToml),
field.example,
));
output.push('\n');
@@ -149,23 +149,53 @@ fn format_tab(tab_name: &str, header: &str, content: &str) -> String {
)
}
fn format_header(parent_set: &Set, configuration: ConfigurationFile) -> String {
let fmt = if let Some(set_name) = parent_set.name() {
if set_name == "format" {
String::from(".format")
} else {
format!(".lint.{set_name}")
}
} else {
String::new()
};
/// Format the TOML header for the example usage for a given option.
///
/// For example: `[tool.ruff.format]` or `[tool.ruff.lint.isort]`.
fn format_header(
scope: Option<&str>,
parent_set: &Set,
configuration: ConfigurationFile,
) -> String {
match configuration {
ConfigurationFile::PyprojectToml => format!("[tool.ruff{fmt}]"),
ConfigurationFile::PyprojectToml => {
let mut header = if let Some(set_name) = parent_set.name() {
if set_name == "format" {
String::from("tool.ruff.format")
} else {
format!("tool.ruff.lint.{set_name}")
}
} else {
"tool.ruff".to_string()
};
if let Some(scope) = scope {
if !header.is_empty() {
header.push('.');
}
header.push_str(scope);
}
format!("[{header}]")
}
ConfigurationFile::RuffToml => {
if fmt.is_empty() {
let mut header = if let Some(set_name) = parent_set.name() {
if set_name == "format" {
String::from("format")
} else {
format!("lint.{set_name}")
}
} else {
String::new()
};
if let Some(scope) = scope {
if !header.is_empty() {
header.push('.');
}
header.push_str(scope);
}
if header.is_empty() {
String::new()
} else {
format!("[{}]", fmt.strip_prefix('.').unwrap())
format!("[{header}]")
}
}
}

View File

@@ -30,7 +30,7 @@ ruff_source_file = { path = "../ruff_source_file", features = ["serde"] }
ruff_text_size = { path = "../ruff_text_size" }
aho-corasick = { version = "1.1.2" }
annotate-snippets = { version = "0.9.1", features = ["color"] }
annotate-snippets = { version = "0.9.2", features = ["color"] }
anyhow = { workspace = true }
bitflags = { workspace = true }
chrono = { workspace = true }
@@ -53,8 +53,8 @@ path-absolutize = { workspace = true, features = [
] }
pathdiff = { version = "0.2.1" }
pep440_rs = { version = "0.3.12", features = ["serde"] }
pyproject-toml = { version = "0.8.0" }
quick-junit = { version = "0.3.2" }
pyproject-toml = { version = "0.8.1" }
quick-junit = { version = "0.3.5" }
regex = { workspace = true }
result-like = { version = "0.4.6" }
rustc-hash = { workspace = true }

View File

@@ -0,0 +1,32 @@
def func():
return 1
def func():
return 1.5
def func(x: int):
if x > 0:
return 1
else:
return 1.5
def func():
return True
def func(x: int):
if x > 0:
return None
else:
return
def func(x: int):
return 1 or 2.5 if x > 0 else 1.5 or "str"
def func(x: int):
return 1 + 2.5 if x > 0 else 1.5 or "str"

View File

@@ -91,3 +91,18 @@ class Registry:
def foo(self) -> None:
object.__setattr__(self, "flag", True)
from typing import Optional, Union
def func(x: Union[list, Optional[int | str | float | bool]]):
pass
def func(x: bool | str):
pass
def func(x: int | str):
pass

View File

@@ -639,3 +639,18 @@ foo = namedtuple(
:20
],
)
# F-strings
kwargs.pop("remove", f"this {trailing_comma}",)
raise Exception(
"first", extra=f"Add trailing comma here ->"
)
assert False, f"<- This is not a trailing comma"
f"""This is a test. {
"Another sentence."
if True else
"Don't add a trailing comma here ->"
}"""

View File

@@ -148,3 +148,32 @@ for i in range(10):
for i in range(10):
pass # comment
pass
def foo():
print("foo")
...
def foo():
"""A docstring."""
print("foo")
...
for i in range(10):
...
...
for i in range(10):
...
...
for i in range(10):
... # comment
...
for i in range(10):
...
pass

View File

@@ -38,3 +38,15 @@ class User:
foo: bool = BooleanField()
# ...
bar = StringField() # PIE794
class Person:
name = "Foo"
name = name + " Bar"
name = "Bar" # PIE794
class Person:
name: str = "Foo"
name: str = name + " Bar"
name: str = "Bar" # PIE794

View File

@@ -1,27 +1,37 @@
@dataclass
class Foo:
foo: List[str] = field(default_factory=lambda: []) # PIE807
bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
class FooTable(BaseTable):
bar = fields.ListField(default=lambda: []) # PIE807
foo = fields.ListField(default=lambda: []) # PIE807
bar = fields.ListField(default=lambda: {}) # PIE807
class FooTable(BaseTable):
bar = fields.ListField(lambda: []) # PIE807
foo = fields.ListField(lambda: []) # PIE807
bar = fields.ListField(default=lambda: {}) # PIE807
@dataclass
class Foo:
foo: List[str] = field(default_factory=list)
bar: Dict[str, int] = field(default_factory=dict)
class FooTable(BaseTable):
bar = fields.ListField(list)
foo = fields.ListField(list)
bar = fields.ListField(dict)
lambda *args, **kwargs: []
lambda *args, **kwargs: {}
lambda *args: []
lambda *args: {}
lambda **kwargs: []
lambda **kwargs: {}
lambda: {**unwrap}

View File

@@ -3,9 +3,11 @@
import abc
import builtins
import collections.abc
import enum
import typing
from abc import abstractmethod
from abc import ABCMeta, abstractmethod
from collections.abc import AsyncIterable, AsyncIterator, Iterable, Iterator
from enum import EnumMeta
from typing import Any, overload
import typing_extensions
@@ -199,6 +201,31 @@ class AsyncIteratorReturningAsyncIterable:
... # Y045 "__aiter__" methods should return an AsyncIterator, not an AsyncIterable
class MetaclassInWhichSelfCannotBeUsed(type):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed) -> MetaclassInWhichSelfCannotBeUsed: ...
class MetaclassInWhichSelfCannotBeUsed2(EnumMeta):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed2: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed2) -> MetaclassInWhichSelfCannotBeUsed2: ...
class MetaclassInWhichSelfCannotBeUsed3(enum.EnumType):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed3: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed3) -> MetaclassInWhichSelfCannotBeUsed3: ...
class MetaclassInWhichSelfCannotBeUsed4(ABCMeta):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed4: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed4) -> MetaclassInWhichSelfCannotBeUsed4: ...
class Abstract(Iterator[str]):
    @abstractmethod
    def __iter__(self) -> Iterator[str]:

View File

@@ -3,9 +3,11 @@
import abc
import builtins
import collections.abc
import enum
import typing
from abc import abstractmethod
from abc import ABCMeta, abstractmethod
from collections.abc import AsyncIterable, AsyncIterator, Iterable, Iterator
from enum import EnumMeta
from typing import Any, overload
import typing_extensions
@@ -152,6 +154,30 @@ class AsyncIteratorReturningAsyncIterable:
str
]: ... # Y045 "__aiter__" methods should return an AsyncIterator, not an AsyncIterable
class MetaclassInWhichSelfCannotBeUsed(type):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed) -> MetaclassInWhichSelfCannotBeUsed: ...
class MetaclassInWhichSelfCannotBeUsed2(EnumMeta):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed2: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed2) -> MetaclassInWhichSelfCannotBeUsed2: ...
class MetaclassInWhichSelfCannotBeUsed3(enum.EnumType):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed3: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed3) -> MetaclassInWhichSelfCannotBeUsed3: ...
class MetaclassInWhichSelfCannotBeUsed4(ABCMeta):
    def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed4: ...
    def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
    async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
    def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed4) -> MetaclassInWhichSelfCannotBeUsed4: ...
class Abstract(Iterator[str]):
    @abstractmethod
    def __iter__(self) -> Iterator[str]: ...

View File

@@ -0,0 +1,45 @@
this_should_raise_Q004 = 'This is a \"string\"'
this_should_raise_Q004 = 'This is \\ a \\\"string\"'
this_is_fine = '"This" is a \"string\"'
this_is_fine = "This is a 'string'"
this_is_fine = "\"This\" is a 'string'"
this_is_fine = r'This is a \"string\"'
this_is_fine = R'This is a \"string\"'
this_should_raise_Q004 = (
'This is a'
'\"string\"'
)
# Same as above, but with f-strings
f'This is a \"string\"' # Q004
f'This is \\ a \\\"string\"' # Q004
f'"This" is a \"string\"'
f"This is a 'string'"
f"\"This\" is a 'string'"
fr'This is a \"string\"'
fR'This is a \"string\"'
this_should_raise_Q004 = (
f'This is a'
f'\"string\"' # Q004
)
# Nested f-strings (Python 3.12+)
#
# The first one is interesting because the fix for it is valid pre 3.12:
#
# f"'foo' {'nested'}"
#
# but as the actual string itself is invalid pre 3.12, we don't catch it.
f'\"foo\" {'nested'}' # Q004
f'\"foo\" {f'nested'}' # Q004
f'\"foo\" {f'\"nested\"'} \"\"' # Q004
f'normal {f'nested'} normal'
f'\"normal\" {f'nested'} normal' # Q004
f'\"normal\" {f'nested'} "double quotes"'
f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
# Make sure we do not unescape quotes
this_is_fine = 'This is an \\"escaped\\" quote'
this_should_raise_Q004 = 'This is an \\\"escaped\\\" quote with an extra backslash'

View File

@@ -0,0 +1,43 @@
this_should_raise_Q004 = "This is a \'string\'"
this_should_raise_Q004 = "'This' is a \'string\'"
this_is_fine = 'This is a "string"'
this_is_fine = '\'This\' is a "string"'
this_is_fine = r"This is a \'string\'"
this_is_fine = R"This is a \'string\'"
this_should_raise_Q004 = (
"This is a"
"\'string\'"
)
# Same as above, but with f-strings
f"This is a \'string\'" # Q004
f"'This' is a \'string\'" # Q004
f'This is a "string"'
f'\'This\' is a "string"'
fr"This is a \'string\'"
fR"This is a \'string\'"
this_should_raise_Q004 = (
f"This is a"
f"\'string\'" # Q004
)
# Nested f-strings (Python 3.12+)
#
# The first one is interesting because the fix for it is valid pre 3.12:
#
# f'"foo" {"nested"}'
#
# but as the actual string itself is invalid pre 3.12, we don't catch it.
f"\'foo\' {"foo"}" # Q004
f"\'foo\' {f"foo"}" # Q004
f"\'foo\' {f"\'foo\'"} \'\'" # Q004
f"normal {f"nested"} normal"
f"\'normal\' {f"nested"} normal" # Q004
f"\'normal\' {f"nested"} 'single quotes'"
f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
# Make sure we do not unescape quotes
this_is_fine = "This is an \\'escaped\\' quote"
this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"

View File

@@ -172,3 +172,14 @@ def f():
    from module import Member
    x: Member = 1
def f():
    from typing_extensions import TYPE_CHECKING
    from pandas import y
    if TYPE_CHECKING:
        _type = x
    elif True:
        _type = y

View File

@@ -0,0 +1,10 @@
from __future__ import annotations
from typing_extensions import TYPE_CHECKING
if TYPE_CHECKING:
    from pandas import DataFrame
def example() -> DataFrame:
    pass

View File

@@ -0,0 +1,10 @@
from __future__ import annotations
from typing_extensions import TYPE_CHECKING
if TYPE_CHECKING:
    from pandas import DataFrame
def example() -> DataFrame:
    x = DataFrame()

View File

@@ -37,3 +37,9 @@ if False:
if 0:
    x: List
from typing_extensions import TYPE_CHECKING
if TYPE_CHECKING:
    pass  # TCH005

View File

@@ -0,0 +1,9 @@
from __future__ import annotations
from typing_extensions import Self
def func():
    from pandas import DataFrame
    df: DataFrame

View File

@@ -0,0 +1,9 @@
from __future__ import annotations
import typing_extensions
def func():
    from pandas import DataFrame
    df: DataFrame

View File

@@ -0,0 +1,5 @@
import encodings
from datetime import timezone as tz
from datetime import timedelta
import datetime as dt
import datetime

View File

@@ -0,0 +1,8 @@
from __future__ import annotations
import django.settings
import os
import pytz
import sys
from . import local
from library import foo

View File

@@ -63,3 +63,33 @@ class PosOnlyClass:
    def bad_method_pos_only(this, blah, /, self, something: str):
        pass
class ModelClass:
    @hybrid_property
    def bad(cls):
        pass
    @bad.expression
    def bad(self):
        pass
    @bad.wtf
    def bad(cls):
        pass
    @hybrid_property
    def good(self):
        pass
    @good.expression
    def good(cls):
        pass
    @good.wtf
    def good(self):
        pass
    @foobar.thisisstatic
    def badstatic(foo):
        pass

View File

@@ -89,3 +89,49 @@ x = [ #
f"{ {'a': 1} }"
f"{[ { {'a': 1} } ]}"
f"normal { {f"{ { [1, 2] } }" } } normal"
#: Okay
ham[lower + offset : upper + offset]
#: Okay
ham[(lower + offset) : upper + offset]
#: E203:1:19
ham{lower + offset : upper + offset}
#: E203:1:19
ham[lower + offset : upper + offset]
#: Okay
release_lines = history_file_lines[history_file_lines.index('## Unreleased') + 1: -1]
#: Okay
release_lines = history_file_lines[history_file_lines.index('## Unreleased') + 1 : -1]
#: Okay
ham[1:9], ham[1:9:3], ham[:9:3], ham[1::3], ham[1:9:]
ham[lower:upper], ham[lower:upper:], ham[lower::step]
ham[lower+offset : upper+offset]
ham[: upper_fn(x) : step_fn(x)], ham[:: step_fn(x)]
ham[lower + offset : upper + offset]
#: E201:1:5
ham[ : upper]
#: Okay
ham[lower + offset :: upper + offset]
#: Okay
ham[(lower + offset) :: upper + offset]
#: Okay
ham[lower + offset::upper + offset]
#: E203:1:21
ham[lower + offset : : upper + offset]
#: E203:1:20
ham[lower + offset: :upper + offset]
#: E203:1:20
ham[{lower + offset : upper + offset} : upper + offset]

View File

@@ -16,6 +16,48 @@ if False == None: # E711, E712 (fix)
if None == False: # E711, E712 (fix)
pass
named_var = []
if [] is []:  # F632 (fix)
    pass
if named_var is []:  # F632 (fix)
    pass
if [] is named_var:  # F632 (fix)
    pass
if named_var is [1]:  # F632 (fix)
    pass
if [1] is named_var:  # F632 (fix)
    pass
if named_var is [i for i in [1]]:  # F632 (fix)
    pass
named_var = {}
if {} is {}:  # F632 (fix)
    pass
if named_var is {}:  # F632 (fix)
    pass
if {} is named_var:  # F632 (fix)
    pass
if named_var is {1}:  # F632 (fix)
    pass
if {1} is named_var:  # F632 (fix)
    pass
if named_var is {i for i in [1]}:  # F632 (fix)
    pass
named_var = {1: 1}
if {1: 1} is {1: 1}:  # F632 (fix)
    pass
if named_var is {1: 1}:  # F632 (fix)
    pass
if {1: 1} is named_var:  # F632 (fix)
    pass
if named_var is {1: 1}:  # F632 (fix)
    pass
if {1: 1} is named_var:  # F632 (fix)
    pass
if named_var is {i: 1 for i in [1]}:  # F632 (fix)
    pass
###
# Non-errors
###
@@ -33,3 +75,45 @@ if False is None:
    pass
if None is False:
    pass
named_var = []
if [] == []:
    pass
if named_var == []:
    pass
if [] == named_var:
    pass
if named_var == [1]:
    pass
if [1] == named_var:
    pass
if named_var == [i for i in [1]]:
    pass
named_var = {}
if {} == {}:
    pass
if named_var == {}:
    pass
if {} == named_var:
    pass
if named_var == {1}:
    pass
if {1} == named_var:
    pass
if named_var == {i for i in [1]}:
    pass
named_var = {1: 1}
if {1: 1} == {1: 1}:
    pass
if named_var == {1: 1}:
    pass
if {1: 1} == named_var:
    pass
if named_var == {1: 1}:
    pass
if {1: 1} == named_var:
    pass
if named_var == {i: 1 for i in [1]}:
    pass

View File

@@ -0,0 +1,79 @@
# No Errors
def func(a):
    for b in range(1):
        ...
def func(a):
    try:
        ...
    except ValueError:
        ...
    except KeyError:
        ...
if True:
    def func(a):
        ...
else:
    for a in range(1):
        print(a)
# Errors
def func(a):
    for a in range(1):
        ...
def func(i):
    for i in range(10):
        print(i)
def func(e):
    try:
        ...
    except Exception as e:
        print(e)
def func(f):
    with open('', ) as f:
        print(f)
def func(a, b):
    with context() as (a, b, c):
        print(a, b, c)
def func(a, b):
    with context() as [a, b, c]:
        print(a, b, c)
def func(a):
    with open('foo.py', ) as f, open('bar.py') as a:
        ...
def func(a):
    def bar(b):
        for a in range(1):
            print(a)
def func(a):
    def bar(b):
        for b in range(1):
            print(b)
def func(a=1):
    def bar(b=2):
        for a in range(1):
            print(a)
        for b in range(1):
            print(b)

View File

@@ -114,3 +114,8 @@ class ServiceRefOrValue:
class Collection(Protocol[*_B0]):
    def __iter__(self) -> Iterator[Union[*_B0]]:
        ...
# Regression test for: https://github.com/astral-sh/ruff/issues/8609
def f(x: Union[int, str, bytes]) -> None:
    ...

View File

@@ -18,8 +18,8 @@ pub(crate) fn deferred_lambdas(checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryLambda) {
pylint::rules::unnecessary_lambda(checker, lambda);
}
if checker.enabled(Rule::ReimplementedListBuiltin) {
flake8_pie::rules::reimplemented_list_builtin(checker, lambda);
if checker.enabled(Rule::ReimplementedContainerBuiltin) {
flake8_pie::rules::reimplemented_container_builtin(checker, lambda);
}
}
}

View File

@@ -12,6 +12,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
if !checker.any_enabled(&[
Rule::GlobalVariableNotAssigned,
Rule::ImportShadowedByLoopVar,
Rule::RedefinedArgumentFromLocal,
Rule::RedefinedWhileUnused,
Rule::RuntimeImportInTypeCheckingBlock,
Rule::TypingOnlyFirstPartyImport,
@@ -89,6 +90,32 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
}
if checker.enabled(Rule::RedefinedArgumentFromLocal) {
for (name, binding_id) in scope.bindings() {
for shadow in checker.semantic.shadowed_bindings(scope_id, binding_id) {
let binding = &checker.semantic.bindings[shadow.binding_id()];
if !matches!(
binding.kind,
BindingKind::LoopVar
| BindingKind::BoundException
| BindingKind::WithItemVar
) {
continue;
}
let shadowed = &checker.semantic.bindings[shadow.shadowed_id()];
if !shadowed.kind.is_argument() {
continue;
}
checker.diagnostics.push(Diagnostic::new(
pylint::rules::RedefinedArgumentFromLocal {
name: name.to_string(),
},
binding.range(),
));
}
}
}
if checker.enabled(Rule::ImportShadowedByLoopVar) {
for (name, binding_id) in scope.bindings() {
for shadow in checker.semantic.shadowed_bindings(scope_id, binding_id) {

View File

@@ -6,7 +6,7 @@ use crate::rules::flake8_pie;
/// Run lint rules over a suite of [`Stmt`] syntax nodes.
pub(crate) fn suite(suite: &[Stmt], checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryPass) {
flake8_pie::rules::no_unnecessary_pass(checker, suite);
if checker.enabled(Rule::UnnecessaryPlaceholder) {
flake8_pie::rules::unnecessary_placeholder(checker, suite);
}
}

View File

@@ -1615,38 +1615,28 @@ impl<'a> Checker<'a> {
fn handle_node_store(&mut self, id: &'a str, expr: &Expr) {
let parent = self.semantic.current_statement();
let mut flags = BindingFlags::empty();
if helpers::is_unpacking_assignment(parent, expr) {
flags.insert(BindingFlags::UNPACKED_ASSIGNMENT);
}
// Match the left-hand side of an annotated assignment, like `x` in `x: int`.
if matches!(
parent,
Stmt::AnnAssign(ast::StmtAnnAssign { value: None, .. })
) && !self.semantic.in_annotation()
{
self.add_binding(
id,
expr.range(),
BindingKind::Annotation,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::Annotation, flags);
return;
}
if parent.is_for_stmt() {
self.add_binding(
id,
expr.range(),
BindingKind::LoopVar,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::LoopVar, flags);
return;
}
if helpers::is_unpacking_assignment(parent, expr) {
self.add_binding(
id,
expr.range(),
BindingKind::UnpackedAssignment,
BindingFlags::empty(),
);
if parent.is_with_stmt() {
self.add_binding(id, expr.range(), BindingKind::WithItemVar, flags);
return;
}
@@ -1681,7 +1671,6 @@ impl<'a> Checker<'a> {
let (all_names, all_flags) =
extract_all_names(parent, |name| self.semantic.is_builtin(name));
let mut flags = BindingFlags::empty();
if all_flags.intersects(DunderAllFlags::INVALID_OBJECT) {
flags |= BindingFlags::INVALID_ALL_OBJECT;
}
@@ -1705,21 +1694,11 @@ impl<'a> Checker<'a> {
.current_expressions()
.any(Expr::is_named_expr_expr)
{
self.add_binding(
id,
expr.range(),
BindingKind::NamedExprAssignment,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::NamedExprAssignment, flags);
return;
}
self.add_binding(
id,
expr.range(),
BindingKind::Assignment,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::Assignment, flags);
}
fn handle_node_delete(&mut self, expr: &'a Expr) {

View File

@@ -86,7 +86,7 @@ pub(crate) fn check_logical_lines(
let indent_level = expand_indent(locator.slice(range));
let indent_size = 4;
let indent_size = settings.tab_size.as_usize();
for kind in indentation(
&line,

View File

@@ -115,6 +115,10 @@ pub(crate) fn check_tokens(
flake8_quotes::rules::avoidable_escaped_quote(&mut diagnostics, tokens, locator, settings);
}
if settings.rules.enabled(Rule::UnnecessaryEscapedQuote) {
flake8_quotes::rules::unnecessary_escaped_quote(&mut diagnostics, tokens, locator);
}
if settings.rules.any_enabled(&[
Rule::BadQuotesInlineString,
Rule::BadQuotesMultilineString,
@@ -141,7 +145,7 @@ pub(crate) fn check_tokens(
Rule::TrailingCommaOnBareTuple,
Rule::ProhibitedTrailingComma,
]) {
flake8_commas::rules::trailing_commas(&mut diagnostics, tokens, locator);
flake8_commas::rules::trailing_commas(&mut diagnostics, tokens, locator, indexer);
}
if settings.rules.enabled(Rule::ExtraneousParentheses) {

View File

@@ -252,6 +252,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R0915") => (RuleGroup::Stable, rules::pylint::rules::TooManyStatements),
(Pylint, "R0916") => (RuleGroup::Preview, rules::pylint::rules::TooManyBooleanExpressions),
(Pylint, "R1701") => (RuleGroup::Stable, rules::pylint::rules::RepeatedIsinstanceCalls),
(Pylint, "R1704") => (RuleGroup::Preview, rules::pylint::rules::RedefinedArgumentFromLocal),
(Pylint, "R1711") => (RuleGroup::Stable, rules::pylint::rules::UselessReturn),
(Pylint, "R1714") => (RuleGroup::Stable, rules::pylint::rules::RepeatedEqualityComparison),
(Pylint, "R1706") => (RuleGroup::Preview, rules::pylint::rules::AndOrTernary),
@@ -402,6 +403,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Quotes, "001") => (RuleGroup::Stable, rules::flake8_quotes::rules::BadQuotesMultilineString),
(Flake8Quotes, "002") => (RuleGroup::Stable, rules::flake8_quotes::rules::BadQuotesDocstring),
(Flake8Quotes, "003") => (RuleGroup::Stable, rules::flake8_quotes::rules::AvoidableEscapedQuote),
(Flake8Quotes, "004") => (RuleGroup::Preview, rules::flake8_quotes::rules::UnnecessaryEscapedQuote),
// flake8-annotations
(Flake8Annotations, "001") => (RuleGroup::Stable, rules::flake8_annotations::rules::MissingTypeFunctionArgument),
@@ -767,12 +769,12 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8PytestStyle, "027") => (RuleGroup::Stable, rules::flake8_pytest_style::rules::PytestUnittestRaisesAssertion),
// flake8-pie
(Flake8Pie, "790") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryPass),
(Flake8Pie, "790") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryPlaceholder),
(Flake8Pie, "794") => (RuleGroup::Stable, rules::flake8_pie::rules::DuplicateClassFieldDefinition),
(Flake8Pie, "796") => (RuleGroup::Stable, rules::flake8_pie::rules::NonUniqueEnums),
(Flake8Pie, "800") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessarySpread),
(Flake8Pie, "804") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryDictKwargs),
(Flake8Pie, "807") => (RuleGroup::Stable, rules::flake8_pie::rules::ReimplementedListBuiltin),
(Flake8Pie, "807") => (RuleGroup::Stable, rules::flake8_pie::rules::ReimplementedContainerBuiltin),
(Flake8Pie, "808") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryRangeStart),
(Flake8Pie, "810") => (RuleGroup::Stable, rules::flake8_pie::rules::MultipleStartsEndsWith),

View File

@@ -132,11 +132,7 @@ impl<'a> Importer<'a> {
)?;
// Import the `TYPE_CHECKING` symbol from the typing module.
let (type_checking_edit, type_checking) = self.get_or_import_symbol(
&ImportRequest::import_from("typing", "TYPE_CHECKING"),
at,
semantic,
)?;
let (type_checking_edit, type_checking) = self.get_or_import_type_checking(at, semantic)?;
// Add the import to a `TYPE_CHECKING` block.
let add_import_edit = if let Some(block) = self.preceding_type_checking_block(at) {
@@ -161,6 +157,30 @@ impl<'a> Importer<'a> {
})
}
/// Generate an [`Edit`] to reference `typing.TYPE_CHECKING`. Returns the [`Edit`] necessary to
/// make the symbol available in the current scope along with the bound name of the symbol.
fn get_or_import_type_checking(
&self,
at: TextSize,
semantic: &SemanticModel,
) -> Result<(Edit, String), ResolutionError> {
for module in semantic.typing_modules() {
if let Some((edit, name)) = self.get_symbol(
&ImportRequest::import_from(module, "TYPE_CHECKING"),
at,
semantic,
)? {
return Ok((edit, name));
}
}
self.import_symbol(
&ImportRequest::import_from("typing", "TYPE_CHECKING"),
at,
semantic,
)
}
/// Generate an [`Edit`] to reference the given symbol. Returns the [`Edit`] necessary to make
/// the symbol available in the current scope along with the bound name of the symbol.
///
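To make the intent of `get_or_import_type_checking` concrete, here is a hedged before/after sketch (illustrative only, not taken from the PR's fixtures): when a file already binds `TYPE_CHECKING` from a configured typing module such as `typing_extensions`, the fix reuses that binding rather than adding a second `from typing import TYPE_CHECKING`.

```python
from __future__ import annotations

from typing_extensions import TYPE_CHECKING

# Before the fix, `import pandas` sat at module level even though `pandas` is only
# used in the annotation below. The fix moves it under the existing guard instead of
# introducing a new `from typing import TYPE_CHECKING` import.
if TYPE_CHECKING:
    import pandas


def example() -> pandas.DataFrame:
    ...
```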

View File

@@ -245,10 +245,10 @@ impl Renamer {
| BindingKind::Argument
| BindingKind::TypeParam
| BindingKind::NamedExprAssignment
| BindingKind::UnpackedAssignment
| BindingKind::Assignment
| BindingKind::BoundException
| BindingKind::LoopVar
| BindingKind::WithItemVar
| BindingKind::Global
| BindingKind::Nonlocal(_)
| BindingKind::ClassDefinition(_)

View File

@@ -1,5 +1,14 @@
use itertools::Itertools;
use ruff_python_ast::helpers::{pep_604_union, ReturnStatementVisitor};
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{self as ast, Expr, ExprContext};
use ruff_python_semantic::analyze::type_inference::{NumberLike, PythonType, ResolvedPythonType};
use ruff_python_semantic::analyze::visibility;
use ruff_python_semantic::{Definition, SemanticModel};
use ruff_text_size::TextRange;
use crate::settings::types::PythonVersion;
/// Return the name of the function, if it's overloaded.
pub(crate) fn overloaded_name(definition: &Definition, semantic: &SemanticModel) -> Option<String> {
@@ -27,3 +36,81 @@ pub(crate) fn is_overload_impl(
function.name.as_str() == overloaded_name
}
}
/// Given a function, guess its return type.
pub(crate) fn auto_return_type(
function: &ast::StmtFunctionDef,
target_version: PythonVersion,
) -> Option<Expr> {
// Collect all the `return` statements.
let returns = {
let mut visitor = ReturnStatementVisitor::default();
visitor.visit_body(&function.body);
if visitor.is_generator {
return None;
}
visitor.returns
};
// Determine the return type of the first `return` statement.
let (return_statement, returns) = returns.split_first()?;
let mut return_type = return_statement.value.as_deref().map_or(
ResolvedPythonType::Atom(PythonType::None),
ResolvedPythonType::from,
);
// Merge the return types of the remaining `return` statements.
for return_statement in returns {
return_type = return_type.union(return_statement.value.as_deref().map_or(
ResolvedPythonType::Atom(PythonType::None),
ResolvedPythonType::from,
));
}
match return_type {
ResolvedPythonType::Atom(python_type) => type_expr(python_type),
ResolvedPythonType::Union(python_types) if target_version >= PythonVersion::Py310 => {
// Aggregate all the individual types (e.g., `int`, `float`).
let names = python_types
.iter()
.sorted_unstable()
.filter_map(|python_type| type_expr(*python_type))
.collect::<Vec<_>>();
// Wrap in a bitwise union (e.g., `int | float`).
Some(pep_604_union(&names))
}
ResolvedPythonType::Union(_) => None,
ResolvedPythonType::Unknown => None,
ResolvedPythonType::TypeError => None,
}
}
/// Given a [`PythonType`], return an [`Expr`] that resolves to that type.
fn type_expr(python_type: PythonType) -> Option<Expr> {
fn name(name: &str) -> Expr {
Expr::Name(ast::ExprName {
id: name.into(),
range: TextRange::default(),
ctx: ExprContext::Load,
})
}
match python_type {
PythonType::String => Some(name("str")),
PythonType::Bytes => Some(name("bytes")),
PythonType::Number(number) => match number {
NumberLike::Integer => Some(name("int")),
NumberLike::Float => Some(name("float")),
NumberLike::Complex => Some(name("complex")),
NumberLike::Bool => Some(name("bool")),
},
PythonType::None => Some(name("None")),
PythonType::Ellipsis => None,
PythonType::Dict => None,
PythonType::List => None,
PythonType::Set => None,
PythonType::Tuple => None,
PythonType::Generator => None,
}
}
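For readers following along in Python rather than Rust, a minimal sketch of the same inference idea is below. The helper name `guess_return_type` is hypothetical; unlike the rule, this sketch only understands literal return values, does not skip nested functions or generators, and ignores the fact that the real implementation only emits `X | Y` unions when targeting Python 3.10+.

```python
import ast

# Map exact literal types to the builtin names an annotation would use.
_LITERAL_TYPES = {
    bool: "bool", int: "int", float: "float", complex: "complex",
    str: "str", bytes: "bytes", type(None): "None",
}


def guess_return_type(source: str) -> str | None:
    """Guess a return annotation from literal `return` values, else None."""
    func = ast.parse(source).body[0]
    assert isinstance(func, ast.FunctionDef)
    seen: set[str] = set()
    for node in ast.walk(func):
        if isinstance(node, ast.Return):
            if node.value is None:
                seen.add("None")  # bare `return`
            elif isinstance(node.value, ast.Constant):
                seen.add(_LITERAL_TYPES.get(type(node.value.value), "?"))
            else:
                return None  # non-literal return value: give up
    if not seen or "?" in seen:
        return None
    return " | ".join(sorted(seen))


# Prints "float | int": both literal returns are merged into a union.
print(guess_return_type("def f(x):\n    if x:\n        return 1\n    return 2.5"))
```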

View File

@@ -110,6 +110,24 @@ mod tests {
Ok(())
}
#[test]
fn auto_return_type() -> Result<()> {
let diagnostics = test_path(
Path::new("flake8_annotations/auto_return_type.py"),
&LinterSettings {
..LinterSettings::for_rules(vec![
Rule::MissingReturnTypeUndocumentedPublicFunction,
Rule::MissingReturnTypePrivateFunction,
Rule::MissingReturnTypeSpecialMethod,
Rule::MissingReturnTypeStaticMethod,
Rule::MissingReturnTypeClassMethod,
])
},
)?;
assert_messages!(diagnostics);
Ok(())
}
#[test]
fn suppress_none_returning() -> Result<()> {
let diagnostics = test_path(

View File

@@ -1,8 +1,8 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::ReturnStatementVisitor;
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{self as ast, Expr, ParameterWithDefault, Stmt};
use ruff_python_parser::typing::parse_type_annotation;
use ruff_python_semantic::analyze::visibility;
@@ -12,6 +12,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::registry::Rule;
use crate::rules::flake8_annotations::helpers::auto_return_type;
use crate::rules::ruff::typing::type_hint_resolves_to_any;
/// ## What it does
@@ -41,7 +42,7 @@ pub struct MissingTypeFunctionArgument {
impl Violation for MissingTypeFunctionArgument {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeFunctionArgument { name } = self;
let Self { name } = self;
format!("Missing type annotation for function argument `{name}`")
}
}
@@ -73,7 +74,7 @@ pub struct MissingTypeArgs {
impl Violation for MissingTypeArgs {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeArgs { name } = self;
let Self { name } = self;
format!("Missing type annotation for `*{name}`")
}
}
@@ -105,7 +106,7 @@ pub struct MissingTypeKwargs {
impl Violation for MissingTypeKwargs {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeKwargs { name } = self;
let Self { name } = self;
format!("Missing type annotation for `**{name}`")
}
}
@@ -142,7 +143,7 @@ pub struct MissingTypeSelf {
impl Violation for MissingTypeSelf {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeSelf { name } = self;
let Self { name } = self;
format!("Missing type annotation for `{name}` in method")
}
}
@@ -181,7 +182,7 @@ pub struct MissingTypeCls {
impl Violation for MissingTypeCls {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeCls { name } = self;
let Self { name } = self;
format!("Missing type annotation for `{name}` in classmethod")
}
}
@@ -208,14 +209,26 @@ impl Violation for MissingTypeCls {
#[violation]
pub struct MissingReturnTypeUndocumentedPublicFunction {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypeUndocumentedPublicFunction {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeUndocumentedPublicFunction { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for public function `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -240,14 +253,26 @@ impl Violation for MissingReturnTypeUndocumentedPublicFunction {
#[violation]
pub struct MissingReturnTypePrivateFunction {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypePrivateFunction {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypePrivateFunction { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for private function `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -285,17 +310,25 @@ impl Violation for MissingReturnTypePrivateFunction {
#[violation]
pub struct MissingReturnTypeSpecialMethod {
name: String,
annotation: Option<String>,
}
impl AlwaysFixableViolation for MissingReturnTypeSpecialMethod {
impl Violation for MissingReturnTypeSpecialMethod {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeSpecialMethod { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for special method `{name}`")
}
fn fix_title(&self) -> String {
"Add `None` return type".to_string()
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
@@ -325,14 +358,26 @@ impl AlwaysFixableViolation for MissingReturnTypeSpecialMethod {
#[violation]
pub struct MissingReturnTypeStaticMethod {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypeStaticMethod {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeStaticMethod { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for staticmethod `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -361,14 +406,26 @@ impl Violation for MissingReturnTypeStaticMethod {
#[violation]
pub struct MissingReturnTypeClassMethod {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypeClassMethod {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeClassMethod { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for classmethod `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -421,7 +478,7 @@ pub struct AnyType {
impl Violation for AnyType {
#[derive_message_formats]
fn message(&self) -> String {
let AnyType { name } = self;
let Self { name } = self;
format!("Dynamically typed expressions (typing.Any) are disallowed in `{name}`")
}
}
@@ -673,21 +730,41 @@ pub(crate) fn definition(
) {
if is_method && visibility::is_classmethod(decorator_list, checker.semantic()) {
if checker.enabled(Rule::MissingReturnTypeClassMethod) {
diagnostics.push(Diagnostic::new(
let return_type = auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypeClassMethod {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
} else if is_method && visibility::is_staticmethod(decorator_list, checker.semantic()) {
if checker.enabled(Rule::MissingReturnTypeStaticMethod) {
diagnostics.push(Diagnostic::new(
let return_type = auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypeStaticMethod {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
} else if is_method && visibility::is_init(name) {
// Allow omission of return annotation in `__init__` functions, as long as at
@@ -697,6 +774,7 @@ pub(crate) fn definition(
let mut diagnostic = Diagnostic::new(
MissingReturnTypeSpecialMethod {
name: name.to_string(),
annotation: Some("None".to_string()),
},
function.identifier(),
);
@@ -709,13 +787,15 @@ pub(crate) fn definition(
}
} else if is_method && visibility::is_magic(name) {
if checker.enabled(Rule::MissingReturnTypeSpecialMethod) {
let return_type = simple_magic_return_type(name);
let mut diagnostic = Diagnostic::new(
MissingReturnTypeSpecialMethod {
name: name.to_string(),
annotation: return_type.map(ToString::to_string),
},
function.identifier(),
);
if let Some(return_type) = simple_magic_return_type(name) {
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
@@ -727,22 +807,44 @@ pub(crate) fn definition(
match visibility {
visibility::Visibility::Public => {
if checker.enabled(Rule::MissingReturnTypeUndocumentedPublicFunction) {
diagnostics.push(Diagnostic::new(
let return_type =
auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypeUndocumentedPublicFunction {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
}
visibility::Visibility::Private => {
if checker.enabled(Rule::MissingReturnTypePrivateFunction) {
diagnostics.push(Diagnostic::new(
let return_type =
auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypePrivateFunction {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
}
}

View File

@@ -8,5 +8,6 @@ allow_overload.py:29:9: ANN201 Missing return type annotation for public functio
| ^^^ ANN201
30 | return i
|
= help: Add return type annotation

View File

@@ -0,0 +1,127 @@
---
source: crates/ruff_linter/src/rules/flake8_annotations/mod.rs
---
auto_return_type.py:1:5: ANN201 [*] Missing return type annotation for public function `func`
|
1 | def func():
| ^^^^ ANN201
2 | return 1
|
= help: Add return type annotation: `int`
Unsafe fix
1 |-def func():
1 |+def func() -> int:
2 2 | return 1
3 3 |
4 4 |
auto_return_type.py:5:5: ANN201 [*] Missing return type annotation for public function `func`
|
5 | def func():
| ^^^^ ANN201
6 | return 1.5
|
= help: Add return type annotation: `float`
Unsafe fix
2 2 | return 1
3 3 |
4 4 |
5 |-def func():
5 |+def func() -> float:
6 6 | return 1.5
7 7 |
8 8 |
auto_return_type.py:9:5: ANN201 [*] Missing return type annotation for public function `func`
|
9 | def func(x: int):
| ^^^^ ANN201
10 | if x > 0:
11 | return 1
|
= help: Add return type annotation: `float`
Unsafe fix
6 6 | return 1.5
7 7 |
8 8 |
9 |-def func(x: int):
9 |+def func(x: int) -> float:
10 10 | if x > 0:
11 11 | return 1
12 12 | else:
auto_return_type.py:16:5: ANN201 [*] Missing return type annotation for public function `func`
|
16 | def func():
| ^^^^ ANN201
17 | return True
|
= help: Add return type annotation: `bool`
Unsafe fix
13 13 | return 1.5
14 14 |
15 15 |
16 |-def func():
16 |+def func() -> bool:
17 17 | return True
18 18 |
19 19 |
auto_return_type.py:20:5: ANN201 [*] Missing return type annotation for public function `func`
|
20 | def func(x: int):
| ^^^^ ANN201
21 | if x > 0:
22 | return None
|
= help: Add return type annotation: `None`
Unsafe fix
17 17 | return True
18 18 |
19 19 |
20 |-def func(x: int):
20 |+def func(x: int) -> None:
21 21 | if x > 0:
22 22 | return None
23 23 | else:
auto_return_type.py:27:5: ANN201 [*] Missing return type annotation for public function `func`
|
27 | def func(x: int):
| ^^^^ ANN201
28 | return 1 or 2.5 if x > 0 else 1.5 or "str"
|
= help: Add return type annotation: `str | float`
Unsafe fix
24 24 | return
25 25 |
26 26 |
27 |-def func(x: int):
27 |+def func(x: int) -> str | float:
28 28 | return 1 or 2.5 if x > 0 else 1.5 or "str"
29 29 |
30 30 |
auto_return_type.py:31:5: ANN201 [*] Missing return type annotation for public function `func`
|
31 | def func(x: int):
| ^^^^ ANN201
32 | return 1 + 2.5 if x > 0 else 1.5 or "str"
|
= help: Add return type annotation: `str | float`
Unsafe fix
28 28 | return 1 or 2.5 if x > 0 else 1.5 or "str"
29 29 |
30 30 |
31 |-def func(x: int):
31 |+def func(x: int) -> str | float:
32 32 | return 1 + 2.5 if x > 0 else 1.5 or "str"

View File

@@ -8,6 +8,7 @@ annotation_presence.py:5:5: ANN201 Missing return type annotation for public fun
| ^^^ ANN201
6 | pass
|
= help: Add return type annotation
annotation_presence.py:5:9: ANN001 Missing type annotation for function argument `a`
|
@@ -32,6 +33,7 @@ annotation_presence.py:10:5: ANN201 Missing return type annotation for public fu
| ^^^ ANN201
11 | pass
|
= help: Add return type annotation
annotation_presence.py:10:17: ANN001 Missing type annotation for function argument `b`
|
@@ -56,6 +58,7 @@ annotation_presence.py:20:5: ANN201 Missing return type annotation for public fu
| ^^^ ANN201
21 | pass
|
= help: Add return type annotation
annotation_presence.py:25:5: ANN201 Missing return type annotation for public function `foo`
|
@@ -64,6 +67,7 @@ annotation_presence.py:25:5: ANN201 Missing return type annotation for public fu
| ^^^ ANN201
26 | pass
|
= help: Add return type annotation
annotation_presence.py:45:12: ANN401 Dynamically typed expressions (typing.Any) are disallowed in `a`
|
@@ -250,7 +254,7 @@ annotation_presence.py:159:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
160 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
156 156 |
@@ -270,7 +274,7 @@ annotation_presence.py:165:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
166 | print(f"{self.attr=}")
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
162 162 |

View File

@@ -7,6 +7,7 @@ ignore_fully_untyped.py:24:5: ANN201 Missing return type annotation for public f
| ^^^^^^^^^^^^^^^^^^^^^^^ ANN201
25 | pass
|
= help: Add return type annotation
ignore_fully_untyped.py:24:37: ANN001 Missing type annotation for function argument `b`
|
@@ -28,6 +29,7 @@ ignore_fully_untyped.py:32:5: ANN201 Missing return type annotation for public f
| ^^^^^^^^^^^^^^^^^^^^^^^ ANN201
33 | pass
|
= help: Add return type annotation
ignore_fully_untyped.py:43:9: ANN201 Missing return type annotation for public function `error_typed_self`
|
@@ -37,5 +39,6 @@ ignore_fully_untyped.py:43:9: ANN201 Missing return type annotation for public f
| ^^^^^^^^^^^^^^^^ ANN201
44 | pass
|
= help: Add return type annotation

View File

@@ -9,7 +9,7 @@ mypy_init_return.py:5:9: ANN204 [*] Missing return type annotation for special m
| ^^^^^^^^ ANN204
6 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
2 2 |
@@ -29,7 +29,7 @@ mypy_init_return.py:11:9: ANN204 [*] Missing return type annotation for special
| ^^^^^^^^ ANN204
12 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
8 8 |
@@ -48,6 +48,7 @@ mypy_init_return.py:40:5: ANN202 Missing return type annotation for private func
| ^^^^^^^^ ANN202
41 | ...
|
= help: Add return type annotation
mypy_init_return.py:47:9: ANN204 [*] Missing return type annotation for special method `__init__`
|
@@ -57,7 +58,7 @@ mypy_init_return.py:47:9: ANN204 [*] Missing return type annotation for special
| ^^^^^^^^ ANN204
48 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
44 44 | # Error used to be ok for a moment since the mere presence

View File

@@ -8,7 +8,7 @@ simple_magic_methods.py:2:9: ANN204 [*] Missing return type annotation for speci
| ^^^^^^^ ANN204
3 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `str`
Unsafe fix
1 1 | class Foo:
@@ -26,7 +26,7 @@ simple_magic_methods.py:5:9: ANN204 [*] Missing return type annotation for speci
| ^^^^^^^^ ANN204
6 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `str`
Unsafe fix
2 2 | def __str__(self):
@@ -46,7 +46,7 @@ simple_magic_methods.py:8:9: ANN204 [*] Missing return type annotation for speci
| ^^^^^^^ ANN204
9 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
5 5 | def __repr__(self):
@@ -66,7 +66,7 @@ simple_magic_methods.py:11:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^^^^^^ ANN204
12 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
8 8 | def __len__(self):
@@ -86,7 +86,7 @@ simple_magic_methods.py:14:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
15 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
11 11 | def __length_hint__(self):
@@ -106,7 +106,7 @@ simple_magic_methods.py:17:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^ ANN204
18 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
14 14 | def __init__(self):
@@ -126,7 +126,7 @@ simple_magic_methods.py:20:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
21 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `bool`
Unsafe fix
17 17 | def __del__(self):
@@ -146,7 +146,7 @@ simple_magic_methods.py:23:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^ ANN204
24 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `bytes`
Unsafe fix
20 20 | def __bool__(self):
@@ -166,7 +166,7 @@ simple_magic_methods.py:26:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^ ANN204
27 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `str`
Unsafe fix
23 23 | def __bytes__(self):
@@ -186,7 +186,7 @@ simple_magic_methods.py:29:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^^^ ANN204
30 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `bool`
Unsafe fix
26 26 | def __format__(self, format_spec):
@@ -206,7 +206,7 @@ simple_magic_methods.py:32:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^^ ANN204
33 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `complex`
Unsafe fix
29 29 | def __contains__(self, item):
@@ -226,7 +226,7 @@ simple_magic_methods.py:35:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^ ANN204
36 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
32 32 | def __complex__(self):
@@ -246,7 +246,7 @@ simple_magic_methods.py:38:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^ ANN204
39 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `float`
Unsafe fix
35 35 | def __int__(self):
@@ -266,7 +266,7 @@ simple_magic_methods.py:41:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^ ANN204
42 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
38 38 | def __float__(self):

View File

@@ -1,15 +1,26 @@
---
source: crates/ruff_linter/src/rules/flake8_annotations/mod.rs
---
suppress_none_returning.py:45:5: ANN201 Missing return type annotation for public function `foo`
suppress_none_returning.py:45:5: ANN201 [*] Missing return type annotation for public function `foo`
|
44 | # Error
45 | def foo():
| ^^^ ANN201
46 | return True
|
= help: Add return type annotation: `bool`
suppress_none_returning.py:50:5: ANN201 Missing return type annotation for public function `foo`
Unsafe fix
42 42 |
43 43 |
44 44 | # Error
45 |-def foo():
45 |+def foo() -> bool:
46 46 | return True
47 47 |
48 48 |
suppress_none_returning.py:50:5: ANN201 [*] Missing return type annotation for public function `foo`
|
49 | # Error
50 | def foo():
@@ -17,6 +28,17 @@ suppress_none_returning.py:50:5: ANN201 Missing return type annotation for publi
51 | a = 2 + 2
52 | if a == 4:
|
= help: Add return type annotation: `bool | None`
Unsafe fix
47 47 |
48 48 |
49 49 | # Error
50 |-def foo():
50 |+def foo() -> bool | None:
51 51 | a = 2 + 2
52 52 | if a == 4:
53 53 | return True
suppress_none_returning.py:59:9: ANN001 Missing type annotation for function argument `a`
|

View File

@@ -10,6 +10,7 @@ mod tests {
use test_case::test_case;
use crate::registry::Rule;
use crate::settings::types::PreviewMode;
use crate::test::test_path;
use crate::{assert_messages, settings};
@@ -25,4 +26,22 @@ mod tests {
assert_messages!(snapshot, diagnostics);
Ok(())
}
#[test_case(Rule::BooleanTypeHintPositionalArgument, Path::new("FBT.py"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",
rule_code.noqa_code(),
path.to_string_lossy()
);
let diagnostics = test_path(
Path::new("flake8_boolean_trap").join(path).as_path(),
&settings::LinterSettings {
preview: PreviewMode::Enabled,
..settings::LinterSettings::for_rule(rule_code)
},
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
}

View File

@@ -4,6 +4,7 @@ use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::collect_call_path;
use ruff_python_semantic::SemanticModel;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -26,6 +27,9 @@ use crate::rules::flake8_boolean_trap::helpers::is_allowed_func_def;
/// keyword-only argument, to force callers to be explicit when providing
/// the argument.
///
/// In [preview], this rule will also flag annotations that include boolean
/// variants, like `bool | int`.
///
/// ## Example
/// ```python
/// from math import ceil, floor
@@ -86,6 +90,8 @@ use crate::rules::flake8_boolean_trap::helpers::is_allowed_func_def;
/// ## References
/// - [Python documentation: Calls](https://docs.python.org/3/reference/expressions.html#calls)
/// - [_How to Avoid “The Boolean Trap”_ by Adam Johnson](https://adamj.eu/tech/2021/07/10/python-type-hints-how-to-avoid-the-boolean-trap/)
///
/// [preview]: https://docs.astral.sh/ruff/preview/
#[violation]
pub struct BooleanTypeHintPositionalArgument;
@@ -96,6 +102,7 @@ impl Violation for BooleanTypeHintPositionalArgument {
}
}
/// FBT001
pub(crate) fn boolean_type_hint_positional_argument(
checker: &mut Checker,
name: &str,
@@ -122,15 +129,17 @@ pub(crate) fn boolean_type_hint_positional_argument(
let Some(annotation) = parameter.annotation.as_ref() else {
continue;
};
// check for both bool (python class) and 'bool' (string annotation)
let hint = match annotation.as_ref() {
Expr::Name(name) => &name.id == "bool",
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value == "bool",
_ => false,
};
if !hint || !checker.semantic().is_builtin("bool") {
continue;
if checker.settings.preview.is_enabled() {
if !match_annotation_to_complex_bool(annotation, checker.semantic()) {
continue;
}
} else {
if !match_annotation_to_literal_bool(annotation) {
continue;
}
}
if !checker.semantic().is_builtin("bool") {
return;
}
checker.diagnostics.push(Diagnostic::new(
BooleanTypeHintPositionalArgument,
@@ -138,3 +147,52 @@ pub(crate) fn boolean_type_hint_positional_argument(
));
}
}
/// Returns `true` if the annotation is a boolean type hint (e.g., `bool`).
fn match_annotation_to_literal_bool(annotation: &Expr) -> bool {
match annotation {
// Ex) `True`
Expr::Name(name) => &name.id == "bool",
// Ex) `"True"`
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value == "bool",
_ => false,
}
}
/// Returns `true` if the annotation is a boolean type hint (e.g., `bool`), or a type hint that
/// includes boolean as a variant (e.g., `bool | int`).
fn match_annotation_to_complex_bool(annotation: &Expr, semantic: &SemanticModel) -> bool {
match annotation {
// Ex) `bool`
Expr::Name(name) => &name.id == "bool",
// Ex) `"bool"`
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value == "bool",
// Ex) `bool | int`
Expr::BinOp(ast::ExprBinOp {
left,
op: ast::Operator::BitOr,
right,
..
}) => {
match_annotation_to_complex_bool(left, semantic)
|| match_annotation_to_complex_bool(right, semantic)
}
// Ex) `typing.Union[bool, int]`
Expr::Subscript(ast::ExprSubscript { value, slice, .. }) => {
if semantic.match_typing_expr(value, "Union") {
if let Expr::Tuple(ast::ExprTuple { elts, .. }) = slice.as_ref() {
elts.iter()
.any(|elt| match_annotation_to_complex_bool(elt, semantic))
} else {
// Union with a single type is an invalid type annotation
false
}
} else if semantic.match_typing_expr(value, "Optional") {
match_annotation_to_complex_bool(slice, semantic)
} else {
false
}
}
_ => false,
}
}
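As a rough pure-Python illustration of the preview behaviour (not code from the PR): it matches `Union`/`Optional` by name rather than resolving them through the semantic model, and under the stable behaviour only the first two branches would apply.

```python
import ast


def annotation_includes_bool(annotation: str) -> bool:
    """Approximate check: does a type annotation include a `bool` variant?"""

    def check(expr: ast.expr) -> bool:
        if isinstance(expr, ast.Name):  # bool
            return expr.id == "bool"
        if isinstance(expr, ast.Constant):  # "bool" (string annotation)
            return expr.value == "bool"
        if isinstance(expr, ast.BinOp) and isinstance(expr.op, ast.BitOr):
            return check(expr.left) or check(expr.right)  # bool | int
        if isinstance(expr, ast.Subscript) and isinstance(expr.value, ast.Name):
            if expr.value.id == "Union" and isinstance(expr.slice, ast.Tuple):
                return any(check(elt) for elt in expr.slice.elts)  # Union[bool, int]
            if expr.value.id == "Optional":
                return check(expr.slice)  # Optional[bool]
        return False

    return check(ast.parse(annotation, mode="eval").body)


assert annotation_includes_bool("bool | int")
assert annotation_includes_bool("Optional[bool]")
assert not annotation_includes_bool("int | str")
```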

View File

@@ -0,0 +1,106 @@
---
source: crates/ruff_linter/src/rules/flake8_boolean_trap/mod.rs
---
FBT.py:4:5: FBT001 Boolean-typed positional argument in function definition
|
2 | posonly_nohint,
3 | posonly_nonboolhint: int,
4 | posonly_boolhint: bool,
| ^^^^^^^^^^^^^^^^ FBT001
5 | posonly_boolstrhint: "bool",
6 | /,
|
FBT.py:5:5: FBT001 Boolean-typed positional argument in function definition
|
3 | posonly_nonboolhint: int,
4 | posonly_boolhint: bool,
5 | posonly_boolstrhint: "bool",
| ^^^^^^^^^^^^^^^^^^^ FBT001
6 | /,
7 | offset,
|
FBT.py:10:5: FBT001 Boolean-typed positional argument in function definition
|
8 | posorkw_nonvalued_nohint,
9 | posorkw_nonvalued_nonboolhint: int,
10 | posorkw_nonvalued_boolhint: bool,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
11 | posorkw_nonvalued_boolstrhint: "bool",
12 | posorkw_boolvalued_nohint=True,
|
FBT.py:11:5: FBT001 Boolean-typed positional argument in function definition
|
9 | posorkw_nonvalued_nonboolhint: int,
10 | posorkw_nonvalued_boolhint: bool,
11 | posorkw_nonvalued_boolstrhint: "bool",
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
12 | posorkw_boolvalued_nohint=True,
13 | posorkw_boolvalued_nonboolhint: int = True,
|
FBT.py:14:5: FBT001 Boolean-typed positional argument in function definition
|
12 | posorkw_boolvalued_nohint=True,
13 | posorkw_boolvalued_nonboolhint: int = True,
14 | posorkw_boolvalued_boolhint: bool = True,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
15 | posorkw_boolvalued_boolstrhint: "bool" = True,
16 | posorkw_nonboolvalued_nohint=1,
|
FBT.py:15:5: FBT001 Boolean-typed positional argument in function definition
|
13 | posorkw_boolvalued_nonboolhint: int = True,
14 | posorkw_boolvalued_boolhint: bool = True,
15 | posorkw_boolvalued_boolstrhint: "bool" = True,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
16 | posorkw_nonboolvalued_nohint=1,
17 | posorkw_nonboolvalued_nonboolhint: int = 2,
|
FBT.py:18:5: FBT001 Boolean-typed positional argument in function definition
|
16 | posorkw_nonboolvalued_nohint=1,
17 | posorkw_nonboolvalued_nonboolhint: int = 2,
18 | posorkw_nonboolvalued_boolhint: bool = 3,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
19 | posorkw_nonboolvalued_boolstrhint: "bool" = 4,
20 | *,
|
FBT.py:19:5: FBT001 Boolean-typed positional argument in function definition
|
17 | posorkw_nonboolvalued_nonboolhint: int = 2,
18 | posorkw_nonboolvalued_boolhint: bool = 3,
19 | posorkw_nonboolvalued_boolstrhint: "bool" = 4,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
20 | *,
21 | kwonly_nonvalued_nohint,
|
FBT.py:89:19: FBT001 Boolean-typed positional argument in function definition
|
88 | # FBT001: Boolean positional arg in function definition
89 | def foo(self, value: bool) -> None:
| ^^^^^ FBT001
90 | pass
|
FBT.py:99:10: FBT001 Boolean-typed positional argument in function definition
|
99 | def func(x: Union[list, Optional[int | str | float | bool]]):
| ^ FBT001
100 | pass
|
FBT.py:103:10: FBT001 Boolean-typed positional argument in function definition
|
103 | def func(x: bool | str):
| ^ FBT001
104 | pass
|

View File

@@ -3,6 +3,7 @@ use itertools::Itertools;
use ruff_diagnostics::{AlwaysFixableViolation, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_parser::lexer::{LexResult, Spanned};
use ruff_python_parser::Tok;
use ruff_source_file::Locator;
@@ -29,22 +30,33 @@ enum TokenType {
/// Simplified token specialized for the task.
#[derive(Copy, Clone)]
struct Token<'tok> {
type_: TokenType,
// Underlying token.
spanned: Option<&'tok Spanned>,
struct Token {
r#type: TokenType,
range: TextRange,
}
impl<'tok> Token<'tok> {
const fn irrelevant() -> Token<'static> {
Token {
type_: TokenType::Irrelevant,
spanned: None,
}
impl Ranged for Token {
fn range(&self) -> TextRange {
self.range
}
}
impl Token {
fn new(r#type: TokenType, range: TextRange) -> Self {
Self { r#type, range }
}
const fn from_spanned(spanned: &'tok Spanned) -> Token<'tok> {
let type_ = match &spanned.0 {
fn irrelevant() -> Token {
Token {
r#type: TokenType::Irrelevant,
range: TextRange::default(),
}
}
}
impl From<&Spanned> for Token {
fn from(spanned: &Spanned) -> Self {
let r#type = match &spanned.0 {
Tok::NonLogicalNewline => TokenType::NonLogicalNewline,
Tok::Newline => TokenType::Newline,
Tok::For => TokenType::For,
@@ -63,8 +75,8 @@ impl<'tok> Token<'tok> {
_ => TokenType::Irrelevant,
};
Self {
spanned: Some(spanned),
type_,
range: spanned.1,
r#type,
}
}
}
@@ -92,14 +104,14 @@ enum ContextType {
/// Comma context - described a comma-delimited "situation".
#[derive(Copy, Clone)]
struct Context {
type_: ContextType,
r#type: ContextType,
num_commas: u32,
}
impl Context {
const fn new(type_: ContextType) -> Self {
const fn new(r#type: ContextType) -> Self {
Self {
type_,
r#type,
num_commas: 0,
}
}
@@ -222,21 +234,49 @@ pub(crate) fn trailing_commas(
diagnostics: &mut Vec<Diagnostic>,
tokens: &[LexResult],
locator: &Locator,
indexer: &Indexer,
) {
let mut fstrings = 0u32;
let tokens = tokens
.iter()
.flatten()
// Completely ignore comments -- they just interfere with the logic.
.filter(|&r| !matches!(r, (Tok::Comment(_), _)))
.map(Token::from_spanned);
.filter_map(|spanned @ (tok, tok_range)| match tok {
// Completely ignore comments -- they just interfere with the logic.
Tok::Comment(_) => None,
// F-strings are handled as `String` token type with the complete range
// of the outermost f-string. This means that the expression inside the
// f-string is not checked for trailing commas.
Tok::FStringStart => {
fstrings = fstrings.saturating_add(1);
None
}
Tok::FStringEnd => {
fstrings = fstrings.saturating_sub(1);
if fstrings == 0 {
indexer
.fstring_ranges()
.outermost(tok_range.start())
.map(|range| Token::new(TokenType::String, range))
} else {
None
}
}
_ => {
if fstrings == 0 {
Some(Token::from(spanned))
} else {
None
}
}
});
let tokens = [Token::irrelevant(), Token::irrelevant()]
.into_iter()
.chain(tokens);
// Collapse consecutive newlines to the first one -- trailing commas are
// added before the first newline.
let tokens = tokens.coalesce(|previous, current| {
if previous.type_ == TokenType::NonLogicalNewline
&& current.type_ == TokenType::NonLogicalNewline
if previous.r#type == TokenType::NonLogicalNewline
&& current.r#type == TokenType::NonLogicalNewline
{
Ok(previous)
} else {
@@ -249,8 +289,8 @@ pub(crate) fn trailing_commas(
for (prev_prev, prev, token) in tokens.tuple_windows() {
// Update the comma context stack.
match token.type_ {
TokenType::OpeningBracket => match (prev.type_, prev_prev.type_) {
match token.r#type {
TokenType::OpeningBracket => match (prev.r#type, prev_prev.r#type) {
(TokenType::Named, TokenType::Def) => {
stack.push(Context::new(ContextType::FunctionParameters));
}
@@ -261,7 +301,7 @@ pub(crate) fn trailing_commas(
stack.push(Context::new(ContextType::Tuple));
}
},
TokenType::OpeningSquareBracket => match prev.type_ {
TokenType::OpeningSquareBracket => match prev.r#type {
TokenType::ClosingBracket | TokenType::Named | TokenType::String => {
stack.push(Context::new(ContextType::Subscript));
}
@@ -288,8 +328,8 @@ pub(crate) fn trailing_commas(
let context = &stack[stack.len() - 1];
// Is it allowed to have a trailing comma before this token?
let comma_allowed = token.type_ == TokenType::ClosingBracket
&& match context.type_ {
let comma_allowed = token.r#type == TokenType::ClosingBracket
&& match context.r#type {
ContextType::No => false,
ContextType::FunctionParameters => true,
ContextType::CallArguments => true,
@@ -304,22 +344,21 @@ pub(crate) fn trailing_commas(
};
// Is prev a prohibited trailing comma?
let comma_prohibited = prev.type_ == TokenType::Comma && {
let comma_prohibited = prev.r#type == TokenType::Comma && {
// Is `(1,)` or `x[1,]`?
let is_singleton_tuplish =
matches!(context.type_, ContextType::Subscript | ContextType::Tuple)
matches!(context.r#type, ContextType::Subscript | ContextType::Tuple)
&& context.num_commas <= 1;
// There was no non-logical newline, so prohibit (except in `(1,)` or `x[1,]`).
if comma_allowed && !is_singleton_tuplish {
true
// Lambdas not handled by comma_allowed so handle it specially.
} else {
context.type_ == ContextType::LambdaParameters && token.type_ == TokenType::Colon
context.r#type == ContextType::LambdaParameters && token.r#type == TokenType::Colon
}
};
if comma_prohibited {
let comma = prev.spanned.unwrap();
let mut diagnostic = Diagnostic::new(ProhibitedTrailingComma, comma.1);
let mut diagnostic = Diagnostic::new(ProhibitedTrailingComma, prev.range());
diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(diagnostic.range())));
diagnostics.push(diagnostic);
}
@@ -327,10 +366,9 @@ pub(crate) fn trailing_commas(
// Is prev a prohibited trailing comma on a bare tuple?
// Approximation: any comma followed by a statement-ending newline.
let bare_comma_prohibited =
prev.type_ == TokenType::Comma && token.type_ == TokenType::Newline;
prev.r#type == TokenType::Comma && token.r#type == TokenType::Newline;
if bare_comma_prohibited {
let comma = prev.spanned.unwrap();
diagnostics.push(Diagnostic::new(TrailingCommaOnBareTuple, comma.1));
diagnostics.push(Diagnostic::new(TrailingCommaOnBareTuple, prev.range()));
}
// Comma is required if:
@@ -339,40 +377,37 @@ pub(crate) fn trailing_commas(
// - Not already present,
// - Not on an empty (), {}, [].
let comma_required = comma_allowed
&& prev.type_ == TokenType::NonLogicalNewline
&& prev.r#type == TokenType::NonLogicalNewline
&& !matches!(
prev_prev.type_,
prev_prev.r#type,
TokenType::Comma
| TokenType::OpeningBracket
| TokenType::OpeningSquareBracket
| TokenType::OpeningCurlyBracket
);
if comma_required {
let missing_comma = prev_prev.spanned.unwrap();
let mut diagnostic = Diagnostic::new(
MissingTrailingComma,
TextRange::empty(missing_comma.1.end()),
);
let mut diagnostic =
Diagnostic::new(MissingTrailingComma, TextRange::empty(prev_prev.end()));
// Create a replacement that includes the final bracket (or other token),
// rather than just inserting a comma at the end. This prevents the UP034 fix
// removing any brackets in the same linter pass - doing both at the same time could
// lead to a syntax error.
let contents = locator.slice(missing_comma.1);
let contents = locator.slice(prev_prev.range());
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
format!("{contents},"),
missing_comma.1,
prev_prev.range(),
)));
diagnostics.push(diagnostic);
}
// Pop the current context if the current token ended it.
// The top context is never popped (if unbalanced closing brackets).
let pop_context = match context.type_ {
let pop_context = match context.r#type {
// Lambda terminated by `:`.
ContextType::LambdaParameters => token.type_ == TokenType::Colon,
ContextType::LambdaParameters => token.r#type == TokenType::Colon,
// All others terminated by a closing bracket.
// flake8-commas doesn't verify that it matches the opening...
_ => token.type_ == TokenType::ClosingBracket,
_ => token.r#type == TokenType::ClosingBracket,
};
if pop_context && stack.len() > 1 {
stack.pop();
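
For orientation, here is a minimal Python sketch (illustrative only, not taken from the ruff fixtures) of the comma contexts this pass distinguishes; the COM812/COM819 codes are the same ones shown in the snapshots below:

```python
def f(a, b): ...
x = {(1,): "one"}

t = (1,)            # OK: singleton tuple, the trailing comma is exempt
s = x[1,]           # OK: singleton subscript, likewise exempt
f(1, 2,)            # COM819: trailing comma with no newline before the `)`
f(
    1,
    2               # COM812: a trailing comma is expected before the newline and `)`
)
g = lambda y,: y    # COM819: lambda parameters end at `:`, handled by the special case above
```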

View File

@@ -939,4 +939,43 @@ COM81.py:632:42: COM812 [*] Trailing comma missing
634 634 |
635 635 | foo = namedtuple(
COM81.py:644:46: COM819 [*] Trailing comma prohibited
|
643 | # F-strings
644 | kwargs.pop("remove", f"this {trailing_comma}",)
| ^ COM819
645 |
646 | raise Exception(
|
= help: Remove trailing comma
Safe fix
641 641 | )
642 642 |
643 643 | # F-strings
644 |-kwargs.pop("remove", f"this {trailing_comma}",)
644 |+kwargs.pop("remove", f"this {trailing_comma}")
645 645 |
646 646 | raise Exception(
647 647 | "first", extra=f"Add trailing comma here ->"
COM81.py:647:49: COM812 [*] Trailing comma missing
|
646 | raise Exception(
647 | "first", extra=f"Add trailing comma here ->"
| COM812
648 | )
|
= help: Add trailing comma
Safe fix
644 644 | kwargs.pop("remove", f"this {trailing_comma}",)
645 645 |
646 646 | raise Exception(
647 |- "first", extra=f"Add trailing comma here ->"
647 |+ "first", extra=f"Add trailing comma here ->",
648 648 | )
649 649 |
650 650 | assert False, f"<- This is not a trailing comma"

View File

@@ -8,7 +8,7 @@ LOG002.py:11:11: LOG002 [*] Use `__name__` with `logging.getLogger()`
| ^^^^^^^^ LOG002
12 | logging.getLogger(name=__file__)
|
= help: Replace with `name`
= help: Replace with `__name__`
Unsafe fix
8 8 | logging.getLogger(name="custom")
@@ -29,7 +29,7 @@ LOG002.py:12:24: LOG002 [*] Use `__name__` with `logging.getLogger()`
13 |
14 | logging.getLogger(__cached__)
|
= help: Replace with `name`
= help: Replace with `__name__`
Unsafe fix
9 9 |
@@ -49,7 +49,7 @@ LOG002.py:14:19: LOG002 [*] Use `__name__` with `logging.getLogger()`
| ^^^^^^^^^^ LOG002
15 | getLogger(name=__cached__)
|
= help: Replace with `name`
= help: Replace with `__name__`
Unsafe fix
11 11 | getLogger(__file__)
@@ -67,7 +67,7 @@ LOG002.py:15:16: LOG002 [*] Use `__name__` with `logging.getLogger()`
15 | getLogger(name=__cached__)
| ^^^^^^^^^^ LOG002
|
= help: Replace with `name`
= help: Replace with `__name__`
Unsafe fix
12 12 | logging.getLogger(name=__file__)
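
The snapshot change above only touches the help text, but for context this is the pattern LOG002 targets and the rewrite its (unsafe) fix applies, sketched here rather than quoted from the fixtures:

```python
import logging

logger = logging.getLogger(__file__)   # LOG002: a module dunder other than __name__
logger = logging.getLogger(__name__)   # what the fix rewrites it to
```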

View File

@@ -9,6 +9,7 @@ mod tests {
use test_case::test_case;
use crate::registry::Rule;
use crate::settings::types::PreviewMode;
use crate::test::test_path;
use crate::{assert_messages, settings};
@@ -16,9 +17,9 @@ mod tests {
#[test_case(Rule::UnnecessaryDictKwargs, Path::new("PIE804.py"))]
#[test_case(Rule::MultipleStartsEndsWith, Path::new("PIE810.py"))]
#[test_case(Rule::UnnecessaryRangeStart, Path::new("PIE808.py"))]
#[test_case(Rule::UnnecessaryPass, Path::new("PIE790.py"))]
#[test_case(Rule::UnnecessaryPlaceholder, Path::new("PIE790.py"))]
#[test_case(Rule::UnnecessarySpread, Path::new("PIE800.py"))]
#[test_case(Rule::ReimplementedListBuiltin, Path::new("PIE807.py"))]
#[test_case(Rule::ReimplementedContainerBuiltin, Path::new("PIE807.py"))]
#[test_case(Rule::NonUniqueEnums, Path::new("PIE796.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
@@ -29,4 +30,23 @@ mod tests {
assert_messages!(snapshot, diagnostics);
Ok(())
}
#[test_case(Rule::UnnecessaryPlaceholder, Path::new("PIE790.py"))]
#[test_case(Rule::ReimplementedContainerBuiltin, Path::new("PIE807.py"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",
rule_code.noqa_code(),
path.to_string_lossy()
);
let diagnostics = test_path(
Path::new("flake8_pie").join(path).as_path(),
&settings::LinterSettings {
preview: PreviewMode::Enabled,
..settings::LinterSettings::for_rule(rule_code)
},
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
}

View File

@@ -3,6 +3,7 @@ use rustc_hash::FxHashSet;
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::{AlwaysFixableViolation, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::any_over_expr;
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_text_size::Ranged;
@@ -55,14 +56,14 @@ pub(crate) fn duplicate_class_field_definition(checker: &mut Checker, body: &[St
// Extract the property name from the assignment statement.
let target = match stmt {
Stmt::Assign(ast::StmtAssign { targets, .. }) => {
if let [Expr::Name(ast::ExprName { id, .. })] = targets.as_slice() {
if let [Expr::Name(id)] = targets.as_slice() {
id
} else {
continue;
}
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, .. }) => {
if let Expr::Name(ast::ExprName { id, .. }) = target.as_ref() {
if let Expr::Name(id) = target.as_ref() {
id
} else {
continue;
@@ -71,10 +72,31 @@ pub(crate) fn duplicate_class_field_definition(checker: &mut Checker, body: &[St
_ => continue,
};
if !seen_targets.insert(target) {
// If this is an unrolled augmented assignment (e.g., `x = x + 1`), skip it.
match stmt {
Stmt::Assign(ast::StmtAssign { value, .. }) => {
if any_over_expr(value.as_ref(), &|expr| {
expr.as_name_expr().is_some_and(|name| name.id == target.id)
}) {
continue;
}
}
Stmt::AnnAssign(ast::StmtAnnAssign {
value: Some(value), ..
}) => {
if any_over_expr(value.as_ref(), &|expr| {
expr.as_name_expr().is_some_and(|name| name.id == target.id)
}) {
continue;
}
}
_ => continue,
}
if !seen_targets.insert(target.id.as_str()) {
let mut diagnostic = Diagnostic::new(
DuplicateClassFieldDefinition {
name: target.to_string(),
name: target.id.to_string(),
},
stmt.range(),
);
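
Sketched in Python, the intent of the new skip is roughly the following (identifiers mirror the new PIE794.py fixtures shown in the snapshot further down):

```python
class Person:
    name = "Foo"
    name = name + " Bar"   # skipped: the value reads the previous `name`, so it is kept
    name = "Bar"           # PIE794: duplicate class field definition
```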

View File

@@ -1,17 +1,17 @@
pub(crate) use duplicate_class_field_definition::*;
pub(crate) use multiple_starts_ends_with::*;
pub(crate) use non_unique_enums::*;
pub(crate) use reimplemented_list_builtin::*;
pub(crate) use reimplemented_container_builtin::*;
pub(crate) use unnecessary_dict_kwargs::*;
pub(crate) use unnecessary_pass::*;
pub(crate) use unnecessary_placeholder::*;
pub(crate) use unnecessary_range_start::*;
pub(crate) use unnecessary_spread::*;
mod duplicate_class_field_definition;
mod multiple_starts_ends_with;
mod non_unique_enums;
mod reimplemented_list_builtin;
mod reimplemented_container_builtin;
mod unnecessary_dict_kwargs;
mod unnecessary_pass;
mod unnecessary_placeholder;
mod unnecessary_range_start;
mod unnecessary_spread;

View File

@@ -0,0 +1,119 @@
use ruff_python_ast::{self as ast, Expr, ExprLambda};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_diagnostics::{FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for lambdas that can be replaced with the `list` builtin.
///
/// In [preview], this rule will also flag lambdas that can be replaced with
/// the `dict` builtin.
///
/// ## Why is this bad?
/// Using a container builtin is more succinct and idiomatic than wrapping
/// the literal in a lambda.
///
/// ## Example
/// ```python
/// from dataclasses import dataclass, field
///
///
/// @dataclass
/// class Foo:
/// bar: list[int] = field(default_factory=lambda: [])
/// ```
///
/// Use instead:
/// ```python
/// from dataclasses import dataclass, field
///
///
/// @dataclass
/// class Foo:
/// bar: list[int] = field(default_factory=list)
/// baz: dict[str, int] = field(default_factory=dict)
/// ```
///
/// ## References
/// - [Python documentation: `list`](https://docs.python.org/3/library/functions.html#func-list)
///
/// [preview]: https://docs.astral.sh/ruff/preview/
#[violation]
pub struct ReimplementedContainerBuiltin {
container: Container,
}
impl Violation for ReimplementedContainerBuiltin {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let Self { container } = self;
format!("Prefer `{container}` over useless lambda")
}
fn fix_title(&self) -> Option<String> {
let Self { container } = self;
Some(format!("Replace with `lambda` with `{container}`"))
}
}
/// PIE807
pub(crate) fn reimplemented_container_builtin(checker: &mut Checker, expr: &ExprLambda) {
let ExprLambda {
parameters,
body,
range: _,
} = expr;
if parameters.is_none() {
let container = match body.as_ref() {
Expr::List(ast::ExprList { elts, .. }) if elts.is_empty() => Some(Container::List),
Expr::Dict(ast::ExprDict { values, .. })
if values.is_empty() & checker.settings.preview.is_enabled() =>
{
Some(Container::Dict)
}
_ => None,
};
if let Some(container) = container {
let mut diagnostic =
Diagnostic::new(ReimplementedContainerBuiltin { container }, expr.range());
if checker.semantic().is_builtin(container.as_str()) {
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
container.to_string(),
expr.range(),
)));
}
checker.diagnostics.push(diagnostic);
}
}
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum Container {
List,
Dict,
}
impl Container {
fn as_str(self) -> &'static str {
match self {
Container::List => "list",
Container::Dict => "dict",
}
}
}
impl std::fmt::Display for Container {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
match self {
Container::List => fmt.write_str("list"),
Container::Dict => fmt.write_str("dict"),
}
}
}
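
One detail worth noting in the code above: the diagnostic is always reported, but the fix is only attached when `list`/`dict` still resolve to the builtins. A hedged sketch inferred from the `is_builtin` guard (names are illustrative, not from the fixtures):

```python
from dataclasses import dataclass, field

list = [1, 2, 3]  # shadows the builtin


@dataclass
class Foo:
    bar: "list[int]" = field(default_factory=lambda: [])  # PIE807 reported, but no auto-fix
```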

View File

@@ -1,76 +0,0 @@
use ruff_python_ast::{self as ast, Expr, ExprLambda};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_diagnostics::{FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for lambdas that can be replaced with the `list` builtin.
///
/// ## Why is this bad?
/// Using `list` builtin is more readable.
///
/// ## Example
/// ```python
/// from dataclasses import dataclass, field
///
///
/// @dataclass
/// class Foo:
/// bar: list[int] = field(default_factory=lambda: [])
/// ```
///
/// Use instead:
/// ```python
/// from dataclasses import dataclass, field
///
///
/// @dataclass
/// class Foo:
/// bar: list[int] = field(default_factory=list)
/// ```
///
/// ## References
/// - [Python documentation: `list`](https://docs.python.org/3/library/functions.html#func-list)
#[violation]
pub struct ReimplementedListBuiltin;
impl Violation for ReimplementedListBuiltin {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
format!("Prefer `list` over useless lambda")
}
fn fix_title(&self) -> Option<String> {
Some("Replace with `list`".to_string())
}
}
/// PIE807
pub(crate) fn reimplemented_list_builtin(checker: &mut Checker, expr: &ExprLambda) {
let ExprLambda {
parameters,
body,
range: _,
} = expr;
if parameters.is_none() {
if let Expr::List(ast::ExprList { elts, .. }) = body.as_ref() {
if elts.is_empty() {
let mut diagnostic = Diagnostic::new(ReimplementedListBuiltin, expr.range());
if checker.semantic().is_builtin("list") {
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
"list".to_string(),
expr.range(),
)));
}
checker.diagnostics.push(diagnostic);
}
}
}
}

View File

@@ -1,75 +0,0 @@
use ruff_python_ast::Stmt;
use ruff_diagnostics::AlwaysFixableViolation;
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::whitespace::trailing_comment_start_offset;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::fix;
/// ## What it does
/// Checks for unnecessary `pass` statements in functions, classes, and other
/// blocks.
///
/// ## Why is this bad?
/// In Python, the `pass` statement serves as a placeholder, allowing for
/// syntactically correct empty code blocks. The primary purpose of the `pass`
/// statement is to avoid syntax errors in situations where a statement is
/// syntactically required, but no code needs to be executed.
///
/// If a `pass` statement is present in a code block that includes at least
/// one other statement (even, e.g., a docstring), it is unnecessary and should
/// be removed.
///
/// ## Example
/// ```python
/// def func():
/// """Placeholder docstring."""
/// pass
/// ```
///
/// Use instead:
/// ```python
/// def func():
/// """Placeholder docstring."""
/// ```
///
/// ## References
/// - [Python documentation: The `pass` statement](https://docs.python.org/3/reference/simple_stmts.html#the-pass-statement)
#[violation]
pub struct UnnecessaryPass;
impl AlwaysFixableViolation for UnnecessaryPass {
#[derive_message_formats]
fn message(&self) -> String {
format!("Unnecessary `pass` statement")
}
fn fix_title(&self) -> String {
"Remove unnecessary `pass`".to_string()
}
}
/// PIE790
pub(crate) fn no_unnecessary_pass(checker: &mut Checker, body: &[Stmt]) {
if body.len() < 2 {
return;
}
body.iter()
.filter(|stmt| stmt.is_pass_stmt())
.for_each(|stmt| {
let mut diagnostic = Diagnostic::new(UnnecessaryPass, stmt.range());
let edit = if let Some(index) = trailing_comment_start_offset(stmt, checker.locator()) {
Edit::range_deletion(stmt.range().add_end(index))
} else {
fix::edits::delete_stmt(stmt, None, checker.locator(), checker.indexer())
};
diagnostic.set_fix(Fix::safe_edit(edit).isolate(Checker::isolation(
checker.semantic().current_statement_id(),
)));
checker.diagnostics.push(diagnostic);
});
}

View File

@@ -0,0 +1,127 @@
use ruff_diagnostics::AlwaysFixableViolation;
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::whitespace::trailing_comment_start_offset;
use ruff_python_ast::Stmt;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::fix;
/// ## What it does
/// Checks for unnecessary `pass` statements in functions, classes, and other
/// blocks.
///
/// In [preview], this rule also checks for unnecessary ellipsis (`...`)
/// literals.
///
/// ## Why is this bad?
/// In Python, the `pass` statement and ellipsis (`...`) literal serve as
/// placeholders, allowing for syntactically correct empty code blocks. The
/// primary purpose of these nodes is to avoid syntax errors in situations
/// where a statement or expression is syntactically required, but no code
/// needs to be executed.
///
/// If a `pass` or ellipsis is present in a code block that includes at least
/// one other statement (even, e.g., a docstring), it is unnecessary and should
/// be removed.
///
/// ## Example
/// ```python
/// def func():
/// """Placeholder docstring."""
/// pass
/// ```
///
/// Use instead:
/// ```python
/// def func():
/// """Placeholder docstring."""
/// ```
///
/// In [preview]:
/// ```python
/// def func():
/// """Placeholder docstring."""
/// ...
/// ```
///
/// Use instead:
/// ```python
/// def func():
/// """Placeholder docstring."""
/// ```
///
/// ## References
/// - [Python documentation: The `pass` statement](https://docs.python.org/3/reference/simple_stmts.html#the-pass-statement)
///
/// [preview]: https://docs.astral.sh/ruff/preview/
#[violation]
pub struct UnnecessaryPlaceholder {
kind: Placeholder,
}
impl AlwaysFixableViolation for UnnecessaryPlaceholder {
#[derive_message_formats]
fn message(&self) -> String {
let Self { kind } = self;
match kind {
Placeholder::Pass => format!("Unnecessary `pass` statement"),
Placeholder::Ellipsis => format!("Unnecessary `...` literal"),
}
}
fn fix_title(&self) -> String {
let Self { kind } = self;
match kind {
Placeholder::Pass => format!("Remove unnecessary `pass`"),
Placeholder::Ellipsis => format!("Remove unnecessary `...`"),
}
}
}
/// PIE790
pub(crate) fn unnecessary_placeholder(checker: &mut Checker, body: &[Stmt]) {
if body.len() < 2 {
return;
}
for stmt in body {
let kind = match stmt {
Stmt::Pass(_) => Placeholder::Pass,
Stmt::Expr(expr)
if expr.value.is_ellipsis_literal_expr()
&& checker.settings.preview.is_enabled() =>
{
Placeholder::Ellipsis
}
_ => continue,
};
let mut diagnostic = Diagnostic::new(UnnecessaryPlaceholder { kind }, stmt.range());
let edit = if let Some(index) = trailing_comment_start_offset(stmt, checker.locator()) {
Edit::range_deletion(stmt.range().add_end(index))
} else {
fix::edits::delete_stmt(stmt, None, checker.locator(), checker.indexer())
};
diagnostic.set_fix(Fix::safe_edit(edit).isolate(Checker::isolation(
checker.semantic().current_statement_id(),
)));
checker.diagnostics.push(diagnostic);
}
}
#[derive(Debug, PartialEq, Eq)]
enum Placeholder {
Pass,
Ellipsis,
}
impl std::fmt::Display for Placeholder {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
match self {
Self::Pass => fmt.write_str("pass"),
Self::Ellipsis => fmt.write_str("..."),
}
}
}
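
A short sketch of the preview behaviour this adds (PIE790 on redundant `...` literals); the `body.len() < 2` guard above means a lone ellipsis body is still left alone:

```python
def handler():
    """Handle the event."""
    ...          # PIE790 in preview: the docstring already makes the body non-empty


class Stub:
    ...          # not flagged: the ellipsis is the only statement in the body
```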

View File

@@ -446,6 +446,8 @@ PIE790.py:149:5: PIE790 [*] Unnecessary `pass` statement
149 |- pass # comment
149 |+ # comment
150 150 | pass
151 151 |
152 152 |
PIE790.py:150:5: PIE790 [*] Unnecessary `pass` statement
|
@@ -461,5 +463,23 @@ PIE790.py:150:5: PIE790 [*] Unnecessary `pass` statement
148 148 | for i in range(10):
149 149 | pass # comment
150 |- pass
151 150 |
152 151 |
153 152 | def foo():
PIE790.py:179:5: PIE790 [*] Unnecessary `pass` statement
|
177 | for i in range(10):
178 | ...
179 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
176 176 |
177 177 | for i in range(10):
178 178 | ...
179 |- pass

View File

@@ -73,5 +73,41 @@ PIE794.py:40:5: PIE794 [*] Class field `bar` is defined multiple times
38 38 | foo: bool = BooleanField()
39 39 | # ...
40 |- bar = StringField() # PIE794
41 40 |
42 41 |
43 42 | class Person:
PIE794.py:46:5: PIE794 [*] Class field `name` is defined multiple times
|
44 | name = "Foo"
45 | name = name + " Bar"
46 | name = "Bar" # PIE794
| ^^^^^^^^^^^^ PIE794
|
= help: Remove duplicate field definition for `name`
Unsafe fix
43 43 | class Person:
44 44 | name = "Foo"
45 45 | name = name + " Bar"
46 |- name = "Bar" # PIE794
47 46 |
48 47 |
49 48 | class Person:
PIE794.py:52:5: PIE794 [*] Class field `name` is defined multiple times
|
50 | name: str = "Foo"
51 | name: str = name + " Bar"
52 | name: str = "Bar" # PIE794
| ^^^^^^^^^^^^^^^^^ PIE794
|
= help: Remove duplicate field definition for `name`
Unsafe fix
49 49 | class Person:
50 50 | name: str = "Foo"
51 51 | name: str = name + " Bar"
52 |- name: str = "Bar" # PIE794

View File

@@ -7,52 +7,55 @@ PIE807.py:3:44: PIE807 [*] Prefer `list` over useless lambda
2 | class Foo:
3 | foo: List[str] = field(default_factory=lambda: []) # PIE807
| ^^^^^^^^^^ PIE807
4 | bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
|
= help: Replace with `list`
= help: Replace `lambda` with `list`
Safe fix
1 1 | @dataclass
2 2 | class Foo:
3 |- foo: List[str] = field(default_factory=lambda: []) # PIE807
3 |+ foo: List[str] = field(default_factory=list) # PIE807
4 4 |
4 4 | bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
5 5 |
6 6 | class FooTable(BaseTable):
6 6 |
PIE807.py:7:36: PIE807 [*] Prefer `list` over useless lambda
PIE807.py:8:36: PIE807 [*] Prefer `list` over useless lambda
|
6 | class FooTable(BaseTable):
7 | bar = fields.ListField(default=lambda: []) # PIE807
7 | class FooTable(BaseTable):
8 | foo = fields.ListField(default=lambda: []) # PIE807
| ^^^^^^^^^^ PIE807
9 | bar = fields.ListField(default=lambda: {}) # PIE807
|
= help: Replace with `list`
= help: Replace `lambda` with `list`
Safe fix
4 4 |
5 5 |
6 6 | class FooTable(BaseTable):
7 |- bar = fields.ListField(default=lambda: []) # PIE807
7 |+ bar = fields.ListField(default=list) # PIE807
8 8 |
9 9 |
10 10 | class FooTable(BaseTable):
6 6 |
7 7 | class FooTable(BaseTable):
8 |- foo = fields.ListField(default=lambda: []) # PIE807
8 |+ foo = fields.ListField(default=list) # PIE807
9 9 | bar = fields.ListField(default=lambda: {}) # PIE807
10 10 |
11 11 |
PIE807.py:11:28: PIE807 [*] Prefer `list` over useless lambda
PIE807.py:13:28: PIE807 [*] Prefer `list` over useless lambda
|
10 | class FooTable(BaseTable):
11 | bar = fields.ListField(lambda: []) # PIE807
12 | class FooTable(BaseTable):
13 | foo = fields.ListField(lambda: []) # PIE807
| ^^^^^^^^^^ PIE807
14 | bar = fields.ListField(default=lambda: {}) # PIE807
|
= help: Replace with `list`
= help: Replace `lambda` with `list`
Safe fix
8 8 |
9 9 |
10 10 | class FooTable(BaseTable):
11 |- bar = fields.ListField(lambda: []) # PIE807
11 |+ bar = fields.ListField(list) # PIE807
12 12 |
13 13 |
14 14 | @dataclass
10 10 |
11 11 |
12 12 | class FooTable(BaseTable):
13 |- foo = fields.ListField(lambda: []) # PIE807
13 |+ foo = fields.ListField(list) # PIE807
14 14 | bar = fields.ListField(default=lambda: {}) # PIE807
15 15 |
16 16 |

View File

@@ -0,0 +1,653 @@
---
source: crates/ruff_linter/src/rules/flake8_pie/mod.rs
---
PIE790.py:4:5: PIE790 [*] Unnecessary `pass` statement
|
2 | """buzz"""
3 |
4 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
1 1 | class Foo:
2 2 | """buzz"""
3 3 |
4 |- pass
5 4 |
6 5 |
7 6 | if foo:
PIE790.py:9:5: PIE790 [*] Unnecessary `pass` statement
|
7 | if foo:
8 | """foo"""
9 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
6 6 |
7 7 | if foo:
8 8 | """foo"""
9 |- pass
10 9 |
11 10 |
12 11 | def multi_statement() -> None:
PIE790.py:14:5: PIE790 [*] Unnecessary `pass` statement
|
12 | def multi_statement() -> None:
13 | """This is a function."""
14 | pass; print("hello")
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
11 11 |
12 12 | def multi_statement() -> None:
13 13 | """This is a function."""
14 |- pass; print("hello")
14 |+ print("hello")
15 15 |
16 16 |
17 17 | if foo:
PIE790.py:21:5: PIE790 [*] Unnecessary `pass` statement
|
19 | else:
20 | """bar"""
21 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
18 18 | pass
19 19 | else:
20 20 | """bar"""
21 |- pass
22 21 |
23 22 |
24 23 | while True:
PIE790.py:28:5: PIE790 [*] Unnecessary `pass` statement
|
26 | else:
27 | """bar"""
28 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
25 25 | pass
26 26 | else:
27 27 | """bar"""
28 |- pass
29 28 |
30 29 |
31 30 | for _ in range(10):
PIE790.py:35:5: PIE790 [*] Unnecessary `pass` statement
|
33 | else:
34 | """bar"""
35 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
32 32 | pass
33 33 | else:
34 34 | """bar"""
35 |- pass
36 35 |
37 36 |
38 37 | async for _ in range(10):
PIE790.py:42:5: PIE790 [*] Unnecessary `pass` statement
|
40 | else:
41 | """bar"""
42 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
39 39 | pass
40 40 | else:
41 41 | """bar"""
42 |- pass
43 42 |
44 43 |
45 44 | def foo() -> None:
PIE790.py:50:5: PIE790 [*] Unnecessary `pass` statement
|
48 | """
49 |
50 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
47 47 | buzz
48 48 | """
49 49 |
50 |- pass
51 50 |
52 51 |
53 52 | async def foo():
PIE790.py:58:5: PIE790 [*] Unnecessary `pass` statement
|
56 | """
57 |
58 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
55 55 | buzz
56 56 | """
57 57 |
58 |- pass
59 58 |
60 59 |
61 60 | try:
PIE790.py:65:5: PIE790 [*] Unnecessary `pass` statement
|
63 | buzz
64 | """
65 | pass
| ^^^^ PIE790
66 | except ValueError:
67 | pass
|
= help: Remove unnecessary `pass`
Safe fix
62 62 | """
63 63 | buzz
64 64 | """
65 |- pass
66 65 | except ValueError:
67 66 | pass
68 67 |
PIE790.py:74:5: PIE790 [*] Unnecessary `pass` statement
|
72 | except ValueError:
73 | """bar"""
74 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
71 71 | bar()
72 72 | except ValueError:
73 73 | """bar"""
74 |- pass
75 74 |
76 75 |
77 76 | for _ in range(10):
PIE790.py:79:5: PIE790 [*] Unnecessary `pass` statement
|
77 | for _ in range(10):
78 | """buzz"""
79 | pass
| ^^^^ PIE790
80 |
81 | async for _ in range(10):
|
= help: Remove unnecessary `pass`
Safe fix
76 76 |
77 77 | for _ in range(10):
78 78 | """buzz"""
79 |- pass
80 79 |
81 80 | async for _ in range(10):
82 81 | """buzz"""
PIE790.py:83:5: PIE790 [*] Unnecessary `pass` statement
|
81 | async for _ in range(10):
82 | """buzz"""
83 | pass
| ^^^^ PIE790
84 |
85 | while cond:
|
= help: Remove unnecessary `pass`
Safe fix
80 80 |
81 81 | async for _ in range(10):
82 82 | """buzz"""
83 |- pass
84 83 |
85 84 | while cond:
86 85 | """buzz"""
PIE790.py:87:5: PIE790 [*] Unnecessary `pass` statement
|
85 | while cond:
86 | """buzz"""
87 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
84 84 |
85 85 | while cond:
86 86 | """buzz"""
87 |- pass
88 87 |
89 88 |
90 89 | with bar:
PIE790.py:92:5: PIE790 [*] Unnecessary `pass` statement
|
90 | with bar:
91 | """buzz"""
92 | pass
| ^^^^ PIE790
93 |
94 | async with bar:
|
= help: Remove unnecessary `pass`
Safe fix
89 89 |
90 90 | with bar:
91 91 | """buzz"""
92 |- pass
93 92 |
94 93 | async with bar:
95 94 | """buzz"""
PIE790.py:96:5: PIE790 [*] Unnecessary `pass` statement
|
94 | async with bar:
95 | """buzz"""
96 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
93 93 |
94 94 | async with bar:
95 95 | """buzz"""
96 |- pass
97 96 |
98 97 |
99 98 | def foo() -> None:
PIE790.py:101:5: PIE790 [*] Unnecessary `pass` statement
|
99 | def foo() -> None:
100 | """buzz"""
101 | pass # bar
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
98 98 |
99 99 | def foo() -> None:
100 100 | """buzz"""
101 |- pass # bar
101 |+ # bar
102 102 |
103 103 |
104 104 | class Foo:
PIE790.py:130:5: PIE790 [*] Unnecessary `pass` statement
|
128 | def foo():
129 | print("foo")
130 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
127 127 |
128 128 | def foo():
129 129 | print("foo")
130 |- pass
131 130 |
132 131 |
133 132 | def foo():
PIE790.py:136:5: PIE790 [*] Unnecessary `pass` statement
|
134 | """A docstring."""
135 | print("foo")
136 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
133 133 | def foo():
134 134 | """A docstring."""
135 135 | print("foo")
136 |- pass
137 136 |
138 137 |
139 138 | for i in range(10):
PIE790.py:140:5: PIE790 [*] Unnecessary `pass` statement
|
139 | for i in range(10):
140 | pass
| ^^^^ PIE790
141 | pass
|
= help: Remove unnecessary `pass`
Safe fix
138 138 |
139 139 | for i in range(10):
140 140 | pass
141 |- pass
142 141 |
143 142 | for i in range(10):
144 143 | pass
PIE790.py:141:5: PIE790 [*] Unnecessary `pass` statement
|
139 | for i in range(10):
140 | pass
141 | pass
| ^^^^ PIE790
142 |
143 | for i in range(10):
|
= help: Remove unnecessary `pass`
Safe fix
138 138 |
139 139 | for i in range(10):
140 140 | pass
141 |- pass
142 141 |
143 142 | for i in range(10):
144 143 | pass
PIE790.py:144:5: PIE790 [*] Unnecessary `pass` statement
|
143 | for i in range(10):
144 | pass
| ^^^^ PIE790
145 |
146 | pass
|
= help: Remove unnecessary `pass`
Safe fix
141 141 | pass
142 142 |
143 143 | for i in range(10):
144 |- pass
145 144 |
146 145 | pass
147 146 |
PIE790.py:146:5: PIE790 [*] Unnecessary `pass` statement
|
144 | pass
145 |
146 | pass
| ^^^^ PIE790
147 |
148 | for i in range(10):
|
= help: Remove unnecessary `pass`
Safe fix
143 143 | for i in range(10):
144 144 | pass
145 145 |
146 |- pass
147 146 |
148 147 | for i in range(10):
149 148 | pass # comment
PIE790.py:149:5: PIE790 [*] Unnecessary `pass` statement
|
148 | for i in range(10):
149 | pass # comment
| ^^^^ PIE790
150 | pass
|
= help: Remove unnecessary `pass`
Safe fix
146 146 | pass
147 147 |
148 148 | for i in range(10):
149 |- pass # comment
149 |+ # comment
150 150 | pass
151 151 |
152 152 |
PIE790.py:150:5: PIE790 [*] Unnecessary `pass` statement
|
148 | for i in range(10):
149 | pass # comment
150 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
147 147 |
148 148 | for i in range(10):
149 149 | pass # comment
150 |- pass
151 150 |
152 151 |
153 152 | def foo():
PIE790.py:155:5: PIE790 [*] Unnecessary `...` literal
|
153 | def foo():
154 | print("foo")
155 | ...
| ^^^ PIE790
|
= help: Remove unnecessary `...`
Safe fix
152 152 |
153 153 | def foo():
154 154 | print("foo")
155 |- ...
156 155 |
157 156 |
158 157 | def foo():
PIE790.py:161:5: PIE790 [*] Unnecessary `...` literal
|
159 | """A docstring."""
160 | print("foo")
161 | ...
| ^^^ PIE790
|
= help: Remove unnecessary `...`
Safe fix
158 158 | def foo():
159 159 | """A docstring."""
160 160 | print("foo")
161 |- ...
162 161 |
163 162 |
164 163 | for i in range(10):
PIE790.py:165:5: PIE790 [*] Unnecessary `...` literal
|
164 | for i in range(10):
165 | ...
| ^^^ PIE790
166 | ...
|
= help: Remove unnecessary `...`
Safe fix
163 163 |
164 164 | for i in range(10):
165 165 | ...
166 |- ...
167 166 |
168 167 | for i in range(10):
169 168 | ...
PIE790.py:166:5: PIE790 [*] Unnecessary `...` literal
|
164 | for i in range(10):
165 | ...
166 | ...
| ^^^ PIE790
167 |
168 | for i in range(10):
|
= help: Remove unnecessary `...`
Safe fix
163 163 |
164 164 | for i in range(10):
165 165 | ...
166 |- ...
167 166 |
168 167 | for i in range(10):
169 168 | ...
PIE790.py:169:5: PIE790 [*] Unnecessary `...` literal
|
168 | for i in range(10):
169 | ...
| ^^^ PIE790
170 |
171 | ...
|
= help: Remove unnecessary `...`
Safe fix
166 166 | ...
167 167 |
168 168 | for i in range(10):
169 |- ...
170 169 |
171 170 | ...
172 171 |
PIE790.py:171:5: PIE790 [*] Unnecessary `...` literal
|
169 | ...
170 |
171 | ...
| ^^^ PIE790
172 |
173 | for i in range(10):
|
= help: Remove unnecessary `...`
Safe fix
168 168 | for i in range(10):
169 169 | ...
170 170 |
171 |- ...
172 171 |
173 172 | for i in range(10):
174 173 | ... # comment
PIE790.py:174:5: PIE790 [*] Unnecessary `...` literal
|
173 | for i in range(10):
174 | ... # comment
| ^^^ PIE790
175 | ...
|
= help: Remove unnecessary `...`
Safe fix
171 171 | ...
172 172 |
173 173 | for i in range(10):
174 |- ... # comment
174 |+ # comment
175 175 | ...
176 176 |
177 177 | for i in range(10):
PIE790.py:175:5: PIE790 [*] Unnecessary `...` literal
|
173 | for i in range(10):
174 | ... # comment
175 | ...
| ^^^ PIE790
176 |
177 | for i in range(10):
|
= help: Remove unnecessary `...`
Safe fix
172 172 |
173 173 | for i in range(10):
174 174 | ... # comment
175 |- ...
176 175 |
177 176 | for i in range(10):
178 177 | ...
PIE790.py:178:5: PIE790 [*] Unnecessary `...` literal
|
177 | for i in range(10):
178 | ...
| ^^^ PIE790
179 | pass
|
= help: Remove unnecessary `...`
Safe fix
175 175 | ...
176 176 |
177 177 | for i in range(10):
178 |- ...
179 178 | pass
PIE790.py:179:5: PIE790 [*] Unnecessary `pass` statement
|
177 | for i in range(10):
178 | ...
179 | pass
| ^^^^ PIE790
|
= help: Remove unnecessary `pass`
Safe fix
176 176 |
177 177 | for i in range(10):
178 178 | ...
179 |- pass

View File

@@ -0,0 +1,118 @@
---
source: crates/ruff_linter/src/rules/flake8_pie/mod.rs
---
PIE807.py:3:44: PIE807 [*] Prefer `list` over useless lambda
|
1 | @dataclass
2 | class Foo:
3 | foo: List[str] = field(default_factory=lambda: []) # PIE807
| ^^^^^^^^^^ PIE807
4 | bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
|
= help: Replace `lambda` with `list`
Safe fix
1 1 | @dataclass
2 2 | class Foo:
3 |- foo: List[str] = field(default_factory=lambda: []) # PIE807
3 |+ foo: List[str] = field(default_factory=list) # PIE807
4 4 | bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
5 5 |
6 6 |
PIE807.py:4:49: PIE807 [*] Prefer `dict` over useless lambda
|
2 | class Foo:
3 | foo: List[str] = field(default_factory=lambda: []) # PIE807
4 | bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
| ^^^^^^^^^^ PIE807
|
= help: Replace `lambda` with `dict`
Safe fix
1 1 | @dataclass
2 2 | class Foo:
3 3 | foo: List[str] = field(default_factory=lambda: []) # PIE807
4 |- bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
4 |+ bar: Dict[str, int] = field(default_factory=dict) # PIE807
5 5 |
6 6 |
7 7 | class FooTable(BaseTable):
PIE807.py:8:36: PIE807 [*] Prefer `list` over useless lambda
|
7 | class FooTable(BaseTable):
8 | foo = fields.ListField(default=lambda: []) # PIE807
| ^^^^^^^^^^ PIE807
9 | bar = fields.ListField(default=lambda: {}) # PIE807
|
= help: Replace `lambda` with `list`
Safe fix
5 5 |
6 6 |
7 7 | class FooTable(BaseTable):
8 |- foo = fields.ListField(default=lambda: []) # PIE807
8 |+ foo = fields.ListField(default=list) # PIE807
9 9 | bar = fields.ListField(default=lambda: {}) # PIE807
10 10 |
11 11 |
PIE807.py:9:36: PIE807 [*] Prefer `dict` over useless lambda
|
7 | class FooTable(BaseTable):
8 | foo = fields.ListField(default=lambda: []) # PIE807
9 | bar = fields.ListField(default=lambda: {}) # PIE807
| ^^^^^^^^^^ PIE807
|
= help: Replace `lambda` with `dict`
Safe fix
6 6 |
7 7 | class FooTable(BaseTable):
8 8 | foo = fields.ListField(default=lambda: []) # PIE807
9 |- bar = fields.ListField(default=lambda: {}) # PIE807
9 |+ bar = fields.ListField(default=dict) # PIE807
10 10 |
11 11 |
12 12 | class FooTable(BaseTable):
PIE807.py:13:28: PIE807 [*] Prefer `list` over useless lambda
|
12 | class FooTable(BaseTable):
13 | foo = fields.ListField(lambda: []) # PIE807
| ^^^^^^^^^^ PIE807
14 | bar = fields.ListField(default=lambda: {}) # PIE807
|
= help: Replace `lambda` with `list`
Safe fix
10 10 |
11 11 |
12 12 | class FooTable(BaseTable):
13 |- foo = fields.ListField(lambda: []) # PIE807
13 |+ foo = fields.ListField(list) # PIE807
14 14 | bar = fields.ListField(default=lambda: {}) # PIE807
15 15 |
16 16 |
PIE807.py:14:36: PIE807 [*] Prefer `dict` over useless lambda
|
12 | class FooTable(BaseTable):
13 | foo = fields.ListField(lambda: []) # PIE807
14 | bar = fields.ListField(default=lambda: {}) # PIE807
| ^^^^^^^^^^ PIE807
|
= help: Replace `lambda` with `dict`
Safe fix
11 11 |
12 12 | class FooTable(BaseTable):
13 13 | foo = fields.ListField(lambda: []) # PIE807
14 |- bar = fields.ListField(default=lambda: {}) # PIE807
14 |+ bar = fields.ListField(default=dict) # PIE807
15 15 |
16 16 |
17 17 | @dataclass

View File

@@ -131,6 +131,11 @@ pub(crate) fn non_self_return_type(
return;
};
// PEP 673 forbids the use of `typing(_extensions).Self` in metaclasses.
if is_metaclass(class_def, checker.semantic()) {
return;
}
// Skip any abstract or overloaded methods.
if is_abstract(decorator_list, checker.semantic())
|| is_overload(decorator_list, checker.semantic())
@@ -214,6 +219,26 @@ pub(crate) fn non_self_return_type(
}
}
/// Returns `true` if the given class is a metaclass.
fn is_metaclass(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
class_def.arguments.as_ref().is_some_and(|arguments| {
arguments
.args
.iter()
.any(|expr| is_metaclass_base(expr, semantic))
})
}
/// Returns `true` if the given expression resolves to a metaclass.
fn is_metaclass_base(base: &Expr, semantic: &SemanticModel) -> bool {
semantic.resolve_call_path(base).is_some_and(|call_path| {
matches!(
call_path.as_slice(),
["" | "builtins", "type"] | ["abc", "ABCMeta"] | ["enum", "EnumMeta" | "EnumType"]
)
})
}
/// Returns `true` if the method is an in-place binary operator.
fn is_inplace_bin_op(name: &str) -> bool {
matches!(

View File
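
The `is_metaclass` exemption above means PYI034 stops suggesting `typing_extensions.Self` for methods defined on metaclasses, per PEP 673. An illustrative stub (not taken from the fixtures):

```python
import enum


class MyEnumMeta(enum.EnumMeta):
    # Previously flagged for returning the class instead of `Self`; now skipped
    # because the base resolves to `enum.EnumMeta`.
    def __new__(metacls, cls, bases, classdict) -> "MyEnumMeta": ...
```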

@@ -9,7 +9,7 @@ PYI029.pyi:10:9: PYI029 [*] Defining `__str__` in a stub is almost always redund
11 |
12 | class ShouldRemove:
|
= help: Remove definition of `str`
= help: Remove definition of `__str__`
Safe fix
7 7 | def __repr__(self, *, foo) -> str: ...
@@ -28,7 +28,7 @@ PYI029.pyi:13:9: PYI029 [*] Defining `__repr__` in a stub is almost always redun
| ^^^^^^^^ PYI029
14 | def __str__(self) -> builtins.str: ... # Error: PYI029
|
= help: Remove definition of `repr`
= help: Remove definition of `__repr__`
Safe fix
10 10 | def __str__(self) -> builtins.str: ... # Error: PYI029
@@ -48,7 +48,7 @@ PYI029.pyi:14:9: PYI029 [*] Defining `__str__` in a stub is almost always redund
15 |
16 | class NoReturnSpecified:
|
= help: Remove definition of `str`
= help: Remove definition of `__str__`
Safe fix
11 11 |

View File

@@ -1,91 +1,91 @@
---
source: crates/ruff_linter/src/rules/flake8_pyi/mod.rs
---
PYI034.py:19:9: PYI034 `__new__` methods in classes like `Bad` usually return `self` at runtime
PYI034.py:21:9: PYI034 `__new__` methods in classes like `Bad` usually return `self` at runtime
|
17 | object
18 | ): # Y040 Do not inherit from "object" explicitly, as it is redundant in Python 3
19 | def __new__(cls, *args: Any, **kwargs: Any) -> Bad:
19 | object
20 | ): # Y040 Do not inherit from "object" explicitly, as it is redundant in Python 3
21 | def __new__(cls, *args: Any, **kwargs: Any) -> Bad:
| ^^^^^^^ PYI034
20 | ... # Y034 "__new__" methods usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__new__", e.g. "def __new__(cls, *args: Any, **kwargs: Any) -> Self: ..."
22 | ... # Y034 "__new__" methods usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__new__", e.g. "def __new__(cls, *args: Any, **kwargs: Any) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:34:9: PYI034 `__enter__` methods in classes like `Bad` usually return `self` at runtime
PYI034.py:36:9: PYI034 `__enter__` methods in classes like `Bad` usually return `self` at runtime
|
32 | ... # Y032 Prefer "object" to "Any" for the second parameter in "__ne__" methods
33 |
34 | def __enter__(self) -> Bad:
34 | ... # Y032 Prefer "object" to "Any" for the second parameter in "__ne__" methods
35 |
36 | def __enter__(self) -> Bad:
| ^^^^^^^^^ PYI034
35 | ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
37 | ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:37:15: PYI034 `__aenter__` methods in classes like `Bad` usually return `self` at runtime
PYI034.py:39:15: PYI034 `__aenter__` methods in classes like `Bad` usually return `self` at runtime
|
35 | ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
36 |
37 | async def __aenter__(self) -> Bad:
37 | ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
38 |
39 | async def __aenter__(self) -> Bad:
| ^^^^^^^^^^ PYI034
38 | ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
40 | ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:40:9: PYI034 `__iadd__` methods in classes like `Bad` usually return `self` at runtime
PYI034.py:42:9: PYI034 `__iadd__` methods in classes like `Bad` usually return `self` at runtime
|
38 | ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
39 |
40 | def __iadd__(self, other: Bad) -> Bad:
40 | ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
41 |
42 | def __iadd__(self, other: Bad) -> Bad:
| ^^^^^^^^ PYI034
41 | ... # Y034 "__iadd__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__iadd__", e.g. "def __iadd__(self, other: Bad) -> Self: ..."
43 | ... # Y034 "__iadd__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__iadd__", e.g. "def __iadd__(self, other: Bad) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:163:9: PYI034 `__iter__` methods in classes like `BadIterator1` usually return `self` at runtime
PYI034.py:165:9: PYI034 `__iter__` methods in classes like `BadIterator1` usually return `self` at runtime
|
162 | class BadIterator1(Iterator[int]):
163 | def __iter__(self) -> Iterator[int]:
164 | class BadIterator1(Iterator[int]):
165 | def __iter__(self) -> Iterator[int]:
| ^^^^^^^^ PYI034
164 | ... # Y034 "__iter__" methods in classes like "BadIterator1" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator1.__iter__", e.g. "def __iter__(self) -> Self: ..."
166 | ... # Y034 "__iter__" methods in classes like "BadIterator1" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator1.__iter__", e.g. "def __iter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:170:9: PYI034 `__iter__` methods in classes like `BadIterator2` usually return `self` at runtime
PYI034.py:172:9: PYI034 `__iter__` methods in classes like `BadIterator2` usually return `self` at runtime
|
168 | typing.Iterator[int]
169 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
170 | def __iter__(self) -> Iterator[int]:
170 | typing.Iterator[int]
171 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
172 | def __iter__(self) -> Iterator[int]:
| ^^^^^^^^ PYI034
171 | ... # Y034 "__iter__" methods in classes like "BadIterator2" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator2.__iter__", e.g. "def __iter__(self) -> Self: ..."
173 | ... # Y034 "__iter__" methods in classes like "BadIterator2" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator2.__iter__", e.g. "def __iter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:177:9: PYI034 `__iter__` methods in classes like `BadIterator3` usually return `self` at runtime
PYI034.py:179:9: PYI034 `__iter__` methods in classes like `BadIterator3` usually return `self` at runtime
|
175 | typing.Iterator[int]
176 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
177 | def __iter__(self) -> collections.abc.Iterator[int]:
177 | typing.Iterator[int]
178 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
179 | def __iter__(self) -> collections.abc.Iterator[int]:
| ^^^^^^^^ PYI034
178 | ... # Y034 "__iter__" methods in classes like "BadIterator3" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator3.__iter__", e.g. "def __iter__(self) -> Self: ..."
180 | ... # Y034 "__iter__" methods in classes like "BadIterator3" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator3.__iter__", e.g. "def __iter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:183:9: PYI034 `__iter__` methods in classes like `BadIterator4` usually return `self` at runtime
PYI034.py:185:9: PYI034 `__iter__` methods in classes like `BadIterator4` usually return `self` at runtime
|
181 | class BadIterator4(Iterator[int]):
182 | # Note: *Iterable*, not *Iterator*, returned!
183 | def __iter__(self) -> Iterable[int]:
183 | class BadIterator4(Iterator[int]):
184 | # Note: *Iterable*, not *Iterator*, returned!
185 | def __iter__(self) -> Iterable[int]:
| ^^^^^^^^ PYI034
184 | ... # Y034 "__iter__" methods in classes like "BadIterator4" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator4.__iter__", e.g. "def __iter__(self) -> Self: ..."
186 | ... # Y034 "__iter__" methods in classes like "BadIterator4" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadIterator4.__iter__", e.g. "def __iter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:193:9: PYI034 `__aiter__` methods in classes like `BadAsyncIterator` usually return `self` at runtime
PYI034.py:195:9: PYI034 `__aiter__` methods in classes like `BadAsyncIterator` usually return `self` at runtime
|
192 | class BadAsyncIterator(collections.abc.AsyncIterator[str]):
193 | def __aiter__(self) -> typing.AsyncIterator[str]:
194 | class BadAsyncIterator(collections.abc.AsyncIterator[str]):
195 | def __aiter__(self) -> typing.AsyncIterator[str]:
| ^^^^^^^^^ PYI034
194 | ... # Y034 "__aiter__" methods in classes like "BadAsyncIterator" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadAsyncIterator.__aiter__", e.g. "def __aiter__(self) -> Self: ..." # Y022 Use "collections.abc.AsyncIterator[T]" instead of "typing.AsyncIterator[T]" (PEP 585 syntax)
196 | ... # Y034 "__aiter__" methods in classes like "BadAsyncIterator" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadAsyncIterator.__aiter__", e.g. "def __aiter__(self) -> Self: ..." # Y022 Use "collections.abc.AsyncIterator[T]" instead of "typing.AsyncIterator[T]" (PEP 585 syntax)
|
= help: Consider using `typing_extensions.Self` as return type

View File

@@ -1,100 +1,100 @@
---
source: crates/ruff_linter/src/rules/flake8_pyi/mod.rs
---
PYI034.pyi:18:9: PYI034 `__new__` methods in classes like `Bad` usually return `self` at runtime
PYI034.pyi:20:9: PYI034 `__new__` methods in classes like `Bad` usually return `self` at runtime
|
16 | object
17 | ): # Y040 Do not inherit from "object" explicitly, as it is redundant in Python 3
18 | def __new__(
18 | object
19 | ): # Y040 Do not inherit from "object" explicitly, as it is redundant in Python 3
20 | def __new__(
| ^^^^^^^ PYI034
19 | cls, *args: Any, **kwargs: Any
20 | ) -> Bad: ... # Y034 "__new__" methods usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__new__", e.g. "def __new__(cls, *args: Any, **kwargs: Any) -> Self: ..."
21 | cls, *args: Any, **kwargs: Any
22 | ) -> Bad: ... # Y034 "__new__" methods usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__new__", e.g. "def __new__(cls, *args: Any, **kwargs: Any) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:33:9: PYI034 `__enter__` methods in classes like `Bad` usually return `self` at runtime
PYI034.pyi:35:9: PYI034 `__enter__` methods in classes like `Bad` usually return `self` at runtime
|
31 | self, other: typing.Any
32 | ) -> typing.Any: ... # Y032 Prefer "object" to "Any" for the second parameter in "__ne__" methods
33 | def __enter__(
33 | self, other: typing.Any
34 | ) -> typing.Any: ... # Y032 Prefer "object" to "Any" for the second parameter in "__ne__" methods
35 | def __enter__(
| ^^^^^^^^^ PYI034
34 | self,
35 | ) -> Bad: ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
36 | self,
37 | ) -> Bad: ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:36:15: PYI034 `__aenter__` methods in classes like `Bad` usually return `self` at runtime
PYI034.pyi:38:15: PYI034 `__aenter__` methods in classes like `Bad` usually return `self` at runtime
|
34 | self,
35 | ) -> Bad: ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
36 | async def __aenter__(
36 | self,
37 | ) -> Bad: ... # Y034 "__enter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__enter__", e.g. "def __enter__(self) -> Self: ..."
38 | async def __aenter__(
| ^^^^^^^^^^ PYI034
37 | self,
38 | ) -> Bad: ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
39 | self,
40 | ) -> Bad: ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:39:9: PYI034 `__iadd__` methods in classes like `Bad` usually return `self` at runtime
PYI034.pyi:41:9: PYI034 `__iadd__` methods in classes like `Bad` usually return `self` at runtime
|
37 | self,
38 | ) -> Bad: ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
39 | def __iadd__(
39 | self,
40 | ) -> Bad: ... # Y034 "__aenter__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__aenter__", e.g. "async def __aenter__(self) -> Self: ..."
41 | def __iadd__(
| ^^^^^^^^ PYI034
40 | self, other: Bad
41 | ) -> Bad: ... # Y034 "__iadd__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__iadd__", e.g. "def __iadd__(self, other: Bad) -> Self: ..."
42 | self, other: Bad
43 | ) -> Bad: ... # Y034 "__iadd__" methods in classes like "Bad" usually return "self" at runtime. Consider using "typing_extensions.Self" in "Bad.__iadd__", e.g. "def __iadd__(self, other: Bad) -> Self: ..."
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:102:9: PYI034 `__iter__` methods in classes like `BadIterator1` usually return `self` at runtime
PYI034.pyi:104:9: PYI034 `__iter__` methods in classes like `BadIterator1` usually return `self` at runtime
|
101 | class BadIterator1(Iterator[int]):
102 | def __iter__(
103 | class BadIterator1(Iterator[int]):
104 | def __iter__(
| ^^^^^^^^ PYI034
103 | self,
104 | ) -> Iterator[
105 | self,
106 | ) -> Iterator[
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:111:9: PYI034 `__iter__` methods in classes like `BadIterator2` usually return `self` at runtime
PYI034.pyi:113:9: PYI034 `__iter__` methods in classes like `BadIterator2` usually return `self` at runtime
|
109 | typing.Iterator[int]
110 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
111 | def __iter__(
111 | typing.Iterator[int]
112 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
113 | def __iter__(
| ^^^^^^^^ PYI034
112 | self,
113 | ) -> Iterator[
114 | self,
115 | ) -> Iterator[
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:120:9: PYI034 `__iter__` methods in classes like `BadIterator3` usually return `self` at runtime
PYI034.pyi:122:9: PYI034 `__iter__` methods in classes like `BadIterator3` usually return `self` at runtime
|
118 | typing.Iterator[int]
119 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
120 | def __iter__(
120 | typing.Iterator[int]
121 | ): # Y022 Use "collections.abc.Iterator[T]" instead of "typing.Iterator[T]" (PEP 585 syntax)
122 | def __iter__(
| ^^^^^^^^ PYI034
121 | self,
122 | ) -> collections.abc.Iterator[
123 | self,
124 | ) -> collections.abc.Iterator[
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:128:9: PYI034 `__iter__` methods in classes like `BadIterator4` usually return `self` at runtime
PYI034.pyi:130:9: PYI034 `__iter__` methods in classes like `BadIterator4` usually return `self` at runtime
|
126 | class BadIterator4(Iterator[int]):
127 | # Note: *Iterable*, not *Iterator*, returned!
128 | def __iter__(
128 | class BadIterator4(Iterator[int]):
129 | # Note: *Iterable*, not *Iterator*, returned!
130 | def __iter__(
| ^^^^^^^^ PYI034
129 | self,
130 | ) -> Iterable[
131 | self,
132 | ) -> Iterable[
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.pyi:142:9: PYI034 `__aiter__` methods in classes like `BadAsyncIterator` usually return `self` at runtime
PYI034.pyi:144:9: PYI034 `__aiter__` methods in classes like `BadAsyncIterator` usually return `self` at runtime
|
141 | class BadAsyncIterator(collections.abc.AsyncIterator[str]):
142 | def __aiter__(
143 | class BadAsyncIterator(collections.abc.AsyncIterator[str]):
144 | def __aiter__(
| ^^^^^^^^^ PYI034
143 | self,
144 | ) -> typing.AsyncIterator[
145 | self,
146 | ) -> typing.AsyncIterator[
|
= help: Consider using `typing_extensions.Self` as return type

View File

@@ -19,6 +19,7 @@ mod tests {
#[test_case(Path::new("doubles.py"))]
#[test_case(Path::new("doubles_escaped.py"))]
#[test_case(Path::new("doubles_escaped_unnecessary.py"))]
#[test_case(Path::new("doubles_implicit.py"))]
#[test_case(Path::new("doubles_multiline_string.py"))]
#[test_case(Path::new("doubles_noqa.py"))]
@@ -39,6 +40,7 @@ mod tests {
Rule::BadQuotesMultilineString,
Rule::BadQuotesDocstring,
Rule::AvoidableEscapedQuote,
Rule::UnnecessaryEscapedQuote,
])
},
)?;
@@ -86,6 +88,7 @@ mod tests {
#[test_case(Path::new("singles.py"))]
#[test_case(Path::new("singles_escaped.py"))]
#[test_case(Path::new("singles_escaped_unnecessary.py"))]
#[test_case(Path::new("singles_implicit.py"))]
#[test_case(Path::new("singles_multiline_string.py"))]
#[test_case(Path::new("singles_noqa.py"))]
@@ -106,6 +109,7 @@ mod tests {
Rule::BadQuotesMultilineString,
Rule::BadQuotesDocstring,
Rule::AvoidableEscapedQuote,
Rule::UnnecessaryEscapedQuote,
])
},
)?;
@@ -139,6 +143,7 @@ mod tests {
Rule::BadQuotesMultilineString,
Rule::BadQuotesDocstring,
Rule::AvoidableEscapedQuote,
Rule::UnnecessaryEscapedQuote,
])
},
)?;
@@ -172,6 +177,7 @@ mod tests {
Rule::BadQuotesMultilineString,
Rule::BadQuotesDocstring,
Rule::AvoidableEscapedQuote,
Rule::UnnecessaryEscapedQuote,
])
},
)?;

View File

@@ -7,9 +7,10 @@ use ruff_source_file::Locator;
use ruff_text_size::TextRange;
use crate::lex::docstring_detection::StateMachine;
use crate::settings::LinterSettings;
use super::super::settings::Quote;
/// ## What it does
/// Checks for strings that include escaped quotes, and suggests changing
/// the quote style to avoid the need to escape them.
@@ -48,6 +49,43 @@ impl AlwaysFixableViolation for AvoidableEscapedQuote {
}
}
/// ## What it does
/// Checks for strings that include unnecessarily escaped quotes.
///
/// ## Why is this bad?
/// If a string contains an escaped quote that doesn't match the quote
/// character used for the string, the escape is unnecessary and can be removed.
///
/// ## Example
/// ```python
/// foo = "bar\'s"
/// ```
///
/// Use instead:
/// ```python
/// foo = "bar's"
/// ```
///
/// ## Formatter compatibility
/// We recommend against using this rule alongside the [formatter]. The
/// formatter automatically removes unnecessary escapes, making the rule
/// redundant.
///
/// [formatter]: https://docs.astral.sh/ruff/formatter
#[violation]
pub struct UnnecessaryEscapedQuote;
impl AlwaysFixableViolation for UnnecessaryEscapedQuote {
#[derive_message_formats]
fn message(&self) -> String {
format!("Unnecessary escape on inner quote character")
}
fn fix_title(&self) -> String {
"Remove backslash".to_string()
}
}
struct FStringContext {
/// Whether to check for escaped quotes in the f-string.
check_for_escaped_quote: bool,
@@ -55,14 +93,21 @@ struct FStringContext {
start_range: TextRange,
/// The ranges of the f-string middle tokens containing escaped quotes.
middle_ranges_with_escapes: Vec<TextRange>,
/// The quote style used for the f-string
quote_style: Quote,
}
impl FStringContext {
fn new(check_for_escaped_quote: bool, fstring_start_range: TextRange) -> Self {
fn new(
check_for_escaped_quote: bool,
fstring_start_range: TextRange,
quote_style: Quote,
) -> Self {
Self {
check_for_escaped_quote,
start_range: fstring_start_range,
middle_ranges_with_escapes: vec![],
quote_style,
}
}
@@ -132,21 +177,27 @@ pub(crate) fn avoidable_escaped_quote(
}
// Check if we're using the preferred quotation style.
if !leading_quote(locator.slice(tok_range))
.is_some_and(|text| text.contains(quotes_settings.inline_quotes.as_char()))
{
if !leading_quote(locator.slice(tok_range)).is_some_and(|text| {
contains_quote(text, quotes_settings.inline_quotes.as_char())
}) {
continue;
}
if string_contents.contains(quotes_settings.inline_quotes.as_char())
&& !string_contents.contains(quotes_settings.inline_quotes.opposite().as_char())
if contains_escaped_quote(string_contents, quotes_settings.inline_quotes.as_char())
&& !contains_quote(
string_contents,
quotes_settings.inline_quotes.opposite().as_char(),
)
{
let mut diagnostic = Diagnostic::new(AvoidableEscapedQuote, tok_range);
let fixed_contents = format!(
"{prefix}{quote}{value}{quote}",
prefix = kind.as_str(),
quote = quotes_settings.inline_quotes.opposite().as_char(),
value = unescape_string(string_contents)
value = unescape_string(
string_contents,
quotes_settings.inline_quotes.as_char()
)
);
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
fixed_contents,
@@ -159,10 +210,13 @@ pub(crate) fn avoidable_escaped_quote(
let text = locator.slice(tok_range);
// Check for escaped quote only if we're using the preferred quotation
// style and it isn't a triple-quoted f-string.
let check_for_escaped_quote = text
.contains(quotes_settings.inline_quotes.as_char())
&& !is_triple_quote(text);
fstrings.push(FStringContext::new(check_for_escaped_quote, tok_range));
let check_for_escaped_quote = !is_triple_quote(text)
&& contains_quote(text, quotes_settings.inline_quotes.as_char());
fstrings.push(FStringContext::new(
check_for_escaped_quote,
tok_range,
quotes_settings.inline_quotes,
));
}
Tok::FStringMiddle {
value: string_contents,
@@ -176,11 +230,15 @@ pub(crate) fn avoidable_escaped_quote(
}
// If any part of the f-string contains the opposite quote,
// we can't change the quote style in the entire f-string.
if string_contents.contains(quotes_settings.inline_quotes.opposite().as_char()) {
if contains_quote(
string_contents,
quotes_settings.inline_quotes.opposite().as_char(),
) {
context.ignore_escaped_quotes();
continue;
}
if string_contents.contains(quotes_settings.inline_quotes.as_char()) {
if contains_escaped_quote(string_contents, quotes_settings.inline_quotes.as_char())
{
context.push_fstring_middle_range(tok_range);
}
}
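A hedged example of the condition described in the comment above, assuming `inline-quotes = "single"` and a hypothetical variable `name`:

```python
# Can be rewritten with double outer quotes: no part of the f-string contains a double quote.
f'say \'hi\' to {name}'  # -> f"say 'hi' to {name}"

# Not rewritten: one literal part already contains the opposite (double) quote, so the
# quote style of the whole f-string can't be changed and the escapes are left alone.
f'say \'hi\' to "{name}"'
```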
@@ -207,7 +265,13 @@ pub(crate) fn avoidable_escaped_quote(
.middle_ranges_with_escapes
.iter()
.map(|&range| {
Edit::range_replacement(unescape_string(locator.slice(range)), range)
Edit::range_replacement(
unescape_string(
locator.slice(range),
quotes_settings.inline_quotes.as_char(),
),
range,
)
})
.chain(std::iter::once(
// `FStringEnd` edit
@@ -231,13 +295,152 @@ pub(crate) fn avoidable_escaped_quote(
}
}
fn unescape_string(value: &str) -> String {
let mut fixed_contents = String::with_capacity(value.len());
/// Q004
pub(crate) fn unnecessary_escaped_quote(
diagnostics: &mut Vec<Diagnostic>,
lxr: &[LexResult],
locator: &Locator,
) {
let mut fstrings: Vec<FStringContext> = Vec::new();
let mut state_machine = StateMachine::default();
let mut chars = value.chars().peekable();
for &(ref tok, tok_range) in lxr.iter().flatten() {
let is_docstring = state_machine.consume(tok);
if is_docstring {
continue;
}
match tok {
Tok::String {
value: string_contents,
kind,
triple_quoted,
} => {
if kind.is_raw() || *triple_quoted {
continue;
}
let leading = match leading_quote(locator.slice(tok_range)) {
Some("\"") => Quote::Double,
Some("'") => Quote::Single,
_ => continue,
};
if !contains_escaped_quote(string_contents, leading.opposite().as_char()) {
continue;
}
let mut diagnostic = Diagnostic::new(UnnecessaryEscapedQuote, tok_range);
let fixed_contents = format!(
"{prefix}{quote}{value}{quote}",
prefix = kind.as_str(),
quote = leading.as_char(),
value = unescape_string(string_contents, leading.opposite().as_char())
);
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
fixed_contents,
tok_range,
)));
diagnostics.push(diagnostic);
}
Tok::FStringStart => {
let text = locator.slice(tok_range);
// Check for escaped quotes only if the f-string isn't triple-quoted; the quote
// style itself is derived from the `FStringStart` token below.
let check_for_escaped_quote = !is_triple_quote(text);
let quote_style = if contains_quote(text, Quote::Single.as_char()) {
Quote::Single
} else {
Quote::Double
};
fstrings.push(FStringContext::new(
check_for_escaped_quote,
tok_range,
quote_style,
));
}
Tok::FStringMiddle {
value: string_contents,
is_raw,
} if !is_raw => {
let Some(context) = fstrings.last_mut() else {
continue;
};
if !context.check_for_escaped_quote {
continue;
}
if contains_escaped_quote(string_contents, context.quote_style.opposite().as_char())
{
context.push_fstring_middle_range(tok_range);
}
}
Tok::FStringEnd => {
let Some(context) = fstrings.pop() else {
continue;
};
let [first, rest @ ..] = context.middle_ranges_with_escapes.as_slice() else {
continue;
};
let mut diagnostic = Diagnostic::new(
UnnecessaryEscapedQuote,
TextRange::new(context.start_range.start(), tok_range.end()),
);
let first_edit = Edit::range_replacement(
unescape_string(
locator.slice(first),
context.quote_style.opposite().as_char(),
),
*first,
);
let rest_edits = rest.iter().map(|&range| {
Edit::range_replacement(
unescape_string(
locator.slice(range),
context.quote_style.opposite().as_char(),
),
range,
)
});
diagnostic.set_fix(Fix::safe_edits(first_edit, rest_edits));
diagnostics.push(diagnostic);
}
_ => {}
}
}
}
/// Return `true` if the haystack contains the quote.
fn contains_quote(haystack: &str, quote: char) -> bool {
memchr::memchr(quote as u8, haystack.as_bytes()).is_some()
}
/// Return `true` if the haystack contains an escaped quote.
fn contains_escaped_quote(haystack: &str, quote: char) -> bool {
for index in memchr::memchr_iter(quote as u8, haystack.as_bytes()) {
// If the quote is preceded by an odd number of backslashes, the quote itself is escaped.
if haystack.as_bytes()[..index]
.iter()
.rev()
.take_while(|&&c| c == b'\\')
.count()
% 2
!= 0
{
return true;
}
}
false
}
/// Return a modified version of the string with all quote escapes removed.
fn unescape_string(haystack: &str, quote: char) -> String {
let mut fixed_contents = String::with_capacity(haystack.len());
let mut chars = haystack.chars().peekable();
let mut backslashes = 0;
while let Some(char) = chars.next() {
if char != '\\' {
fixed_contents.push(char);
backslashes = 0;
continue;
}
// If we're at the end of the line
@@ -246,9 +449,11 @@ fn unescape_string(value: &str) -> String {
continue;
};
// Remove quote escape
if matches!(*next_char, '\'' | '"') {
if *next_char == quote && backslashes % 2 == 0 {
backslashes = 0;
continue;
}
backslashes += 1;
fixed_contents.push(char);
}
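The Python below is a loose paraphrase of the two helpers above, written to illustrate the backslash-parity logic (a quote is escaped only when preceded by an odd number of backslashes, and only such escapes are removed). It is a sketch, not an exact port of the Rust code:

```python
def contains_escaped_quote(haystack: str, quote: str) -> bool:
    """Return True if `quote` appears preceded by an odd number of backslashes."""
    backslashes = 0
    for char in haystack:
        if char == "\\":
            backslashes += 1
            continue
        if char == quote and backslashes % 2 == 1:
            return True
        backslashes = 0
    return False


def unescape_string(haystack: str, quote: str) -> str:
    """Drop the backslash from every escaped occurrence of `quote`."""
    out = []
    backslashes = 0
    i = 0
    while i < len(haystack):
        char = haystack[i]
        is_quote_escape = (
            char == "\\"
            and i + 1 < len(haystack)
            and haystack[i + 1] == quote
            and backslashes % 2 == 0
        )
        if is_quote_escape:
            # Skip the backslash; the quote itself is copied on the next iteration.
            backslashes = 0
            i += 1
            continue
        backslashes = backslashes + 1 if char == "\\" else 0
        out.append(char)
        i += 1
    return "".join(out)


assert contains_escaped_quote("bar\\'s", "'")
assert not contains_escaped_quote("bar\\\\'s", "'")  # the backslash is escaped, not the quote
assert unescape_string("This is a \\'string\\'", "'") == "This is a 'string'"
```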

View File

@@ -0,0 +1,341 @@
---
source: crates/ruff_linter/src/rules/flake8_quotes/mod.rs
---
singles_escaped_unnecessary.py:1:26: Q004 [*] Unnecessary escape on inner quote character
|
1 | this_should_raise_Q004 = "This is a \'string\'"
| ^^^^^^^^^^^^^^^^^^^^^^ Q004
2 | this_should_raise_Q004 = "'This' is a \'string\'"
3 | this_is_fine = 'This is a "string"'
|
= help: Remove backslash
Safe fix
1 |-this_should_raise_Q004 = "This is a \'string\'"
1 |+this_should_raise_Q004 = "This is a 'string'"
2 2 | this_should_raise_Q004 = "'This' is a \'string\'"
3 3 | this_is_fine = 'This is a "string"'
4 4 | this_is_fine = '\'This\' is a "string"'
singles_escaped_unnecessary.py:2:26: Q004 [*] Unnecessary escape on inner quote character
|
1 | this_should_raise_Q004 = "This is a \'string\'"
2 | this_should_raise_Q004 = "'This' is a \'string\'"
| ^^^^^^^^^^^^^^^^^^^^^^^^ Q004
3 | this_is_fine = 'This is a "string"'
4 | this_is_fine = '\'This\' is a "string"'
|
= help: Remove backslash
Safe fix
1 1 | this_should_raise_Q004 = "This is a \'string\'"
2 |-this_should_raise_Q004 = "'This' is a \'string\'"
2 |+this_should_raise_Q004 = "'This' is a 'string'"
3 3 | this_is_fine = 'This is a "string"'
4 4 | this_is_fine = '\'This\' is a "string"'
5 5 | this_is_fine = r"This is a \'string\'"
singles_escaped_unnecessary.py:9:5: Q004 [*] Unnecessary escape on inner quote character
|
7 | this_should_raise_Q004 = (
8 | "This is a"
9 | "\'string\'"
| ^^^^^^^^^^^^ Q004
10 | )
|
= help: Remove backslash
Safe fix
6 6 | this_is_fine = R"This is a \'string\'"
7 7 | this_should_raise_Q004 = (
8 8 | "This is a"
9 |- "\'string\'"
9 |+ "'string'"
10 10 | )
11 11 |
12 12 | # Same as above, but with f-strings
singles_escaped_unnecessary.py:13:1: Q004 [*] Unnecessary escape on inner quote character
|
12 | # Same as above, but with f-strings
13 | f"This is a \'string\'" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^ Q004
14 | f"'This' is a \'string\'" # Q004
15 | f'This is a "string"'
|
= help: Remove backslash
Safe fix
10 10 | )
11 11 |
12 12 | # Same as above, but with f-strings
13 |-f"This is a \'string\'" # Q004
13 |+f"This is a 'string'" # Q004
14 14 | f"'This' is a \'string\'" # Q004
15 15 | f'This is a "string"'
16 16 | f'\'This\' is a "string"'
singles_escaped_unnecessary.py:14:1: Q004 [*] Unnecessary escape on inner quote character
|
12 | # Same as above, but with f-strings
13 | f"This is a \'string\'" # Q004
14 | f"'This' is a \'string\'" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
15 | f'This is a "string"'
16 | f'\'This\' is a "string"'
|
= help: Remove backslash
Safe fix
11 11 |
12 12 | # Same as above, but with f-strings
13 13 | f"This is a \'string\'" # Q004
14 |-f"'This' is a \'string\'" # Q004
14 |+f"'This' is a 'string'" # Q004
15 15 | f'This is a "string"'
16 16 | f'\'This\' is a "string"'
17 17 | fr"This is a \'string\'"
singles_escaped_unnecessary.py:21:5: Q004 [*] Unnecessary escape on inner quote character
|
19 | this_should_raise_Q004 = (
20 | f"This is a"
21 | f"\'string\'" # Q004
| ^^^^^^^^^^^^^ Q004
22 | )
|
= help: Remove backslash
Safe fix
18 18 | fR"This is a \'string\'"
19 19 | this_should_raise_Q004 = (
20 20 | f"This is a"
21 |- f"\'string\'" # Q004
21 |+ f"'string'" # Q004
22 22 | )
23 23 |
24 24 | # Nested f-strings (Python 3.12+)
singles_escaped_unnecessary.py:31:1: Q004 [*] Unnecessary escape on inner quote character
|
29 | #
30 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
31 | f"\'foo\' {"foo"}" # Q004
| ^^^^^^^^^^^^^^^^^^ Q004
32 | f"\'foo\' {f"foo"}" # Q004
33 | f"\'foo\' {f"\'foo\'"} \'\'" # Q004
|
= help: Remove backslash
Safe fix
28 28 | # f'"foo" {"nested"}'
29 29 | #
30 30 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
31 |-f"\'foo\' {"foo"}" # Q004
31 |+f"'foo' {"foo"}" # Q004
32 32 | f"\'foo\' {f"foo"}" # Q004
33 33 | f"\'foo\' {f"\'foo\'"} \'\'" # Q004
34 34 |
singles_escaped_unnecessary.py:32:1: Q004 [*] Unnecessary escape on inner quote character
|
30 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
31 | f"\'foo\' {"foo"}" # Q004
32 | f"\'foo\' {f"foo"}" # Q004
| ^^^^^^^^^^^^^^^^^^^ Q004
33 | f"\'foo\' {f"\'foo\'"} \'\'" # Q004
|
= help: Remove backslash
Safe fix
29 29 | #
30 30 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
31 31 | f"\'foo\' {"foo"}" # Q004
32 |-f"\'foo\' {f"foo"}" # Q004
32 |+f"'foo' {f"foo"}" # Q004
33 33 | f"\'foo\' {f"\'foo\'"} \'\'" # Q004
34 34 |
35 35 | f"normal {f"nested"} normal"
singles_escaped_unnecessary.py:33:1: Q004 [*] Unnecessary escape on inner quote character
|
31 | f"\'foo\' {"foo"}" # Q004
32 | f"\'foo\' {f"foo"}" # Q004
33 | f"\'foo\' {f"\'foo\'"} \'\'" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
34 |
35 | f"normal {f"nested"} normal"
|
= help: Remove backslash
Safe fix
30 30 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
31 31 | f"\'foo\' {"foo"}" # Q004
32 32 | f"\'foo\' {f"foo"}" # Q004
33 |-f"\'foo\' {f"\'foo\'"} \'\'" # Q004
33 |+f"'foo' {f"\'foo\'"} ''" # Q004
34 34 |
35 35 | f"normal {f"nested"} normal"
36 36 | f"\'normal\' {f"nested"} normal" # Q004
singles_escaped_unnecessary.py:33:12: Q004 [*] Unnecessary escape on inner quote character
|
31 | f"\'foo\' {"foo"}" # Q004
32 | f"\'foo\' {f"foo"}" # Q004
33 | f"\'foo\' {f"\'foo\'"} \'\'" # Q004
| ^^^^^^^^^^ Q004
34 |
35 | f"normal {f"nested"} normal"
|
= help: Remove backslash
Safe fix
30 30 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
31 31 | f"\'foo\' {"foo"}" # Q004
32 32 | f"\'foo\' {f"foo"}" # Q004
33 |-f"\'foo\' {f"\'foo\'"} \'\'" # Q004
33 |+f"\'foo\' {f"'foo'"} \'\'" # Q004
34 34 |
35 35 | f"normal {f"nested"} normal"
36 36 | f"\'normal\' {f"nested"} normal" # Q004
singles_escaped_unnecessary.py:36:1: Q004 [*] Unnecessary escape on inner quote character
|
35 | f"normal {f"nested"} normal"
36 | f"\'normal\' {f"nested"} normal" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
37 | f"\'normal\' {f"nested"} 'single quotes'"
38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
|
= help: Remove backslash
Safe fix
33 33 | f"\'foo\' {f"\'foo\'"} \'\'" # Q004
34 34 |
35 35 | f"normal {f"nested"} normal"
36 |-f"\'normal\' {f"nested"} normal" # Q004
36 |+f"'normal' {f"nested"} normal" # Q004
37 37 | f"\'normal\' {f"nested"} 'single quotes'"
38 38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
singles_escaped_unnecessary.py:37:1: Q004 [*] Unnecessary escape on inner quote character
|
35 | f"normal {f"nested"} normal"
36 | f"\'normal\' {f"nested"} normal" # Q004
37 | f"\'normal\' {f"nested"} 'single quotes'"
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
|
= help: Remove backslash
Safe fix
34 34 |
35 35 | f"normal {f"nested"} normal"
36 36 | f"\'normal\' {f"nested"} normal" # Q004
37 |-f"\'normal\' {f"nested"} 'single quotes'"
37 |+f"'normal' {f"nested"} 'single quotes'"
38 38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
40 40 |
singles_escaped_unnecessary.py:38:1: Q004 [*] Unnecessary escape on inner quote character
|
36 | f"\'normal\' {f"nested"} normal" # Q004
37 | f"\'normal\' {f"nested"} 'single quotes'"
38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
|
= help: Remove backslash
Safe fix
35 35 | f"normal {f"nested"} normal"
36 36 | f"\'normal\' {f"nested"} normal" # Q004
37 37 | f"\'normal\' {f"nested"} 'single quotes'"
38 |-f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
38 |+f"'normal' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
40 40 |
41 41 | # Make sure we do not unescape quotes
singles_escaped_unnecessary.py:38:15: Q004 [*] Unnecessary escape on inner quote character
|
36 | f"\'normal\' {f"nested"} normal" # Q004
37 | f"\'normal\' {f"nested"} 'single quotes'"
38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
|
= help: Remove backslash
Safe fix
35 35 | f"normal {f"nested"} normal"
36 36 | f"\'normal\' {f"nested"} normal" # Q004
37 37 | f"\'normal\' {f"nested"} 'single quotes'"
38 |-f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
38 |+f"\'normal\' {f"'nested' {"other"} normal"} 'single quotes'" # Q004
39 39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
40 40 |
41 41 | # Make sure we do not unescape quotes
singles_escaped_unnecessary.py:39:1: Q004 [*] Unnecessary escape on inner quote character
|
37 | f"\'normal\' {f"nested"} 'single quotes'"
38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
40 |
41 | # Make sure we do not unescape quotes
|
= help: Remove backslash
Safe fix
36 36 | f"\'normal\' {f"nested"} normal" # Q004
37 37 | f"\'normal\' {f"nested"} 'single quotes'"
38 38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 |-f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
39 |+f"'normal' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
40 40 |
41 41 | # Make sure we do not unescape quotes
42 42 | this_is_fine = "This is an \\'escaped\\' quote"
singles_escaped_unnecessary.py:39:15: Q004 [*] Unnecessary escape on inner quote character
|
37 | f"\'normal\' {f"nested"} 'single quotes'"
38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 | f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
40 |
41 | # Make sure we do not unescape quotes
|
= help: Remove backslash
Safe fix
36 36 | f"\'normal\' {f"nested"} normal" # Q004
37 37 | f"\'normal\' {f"nested"} 'single quotes'"
38 38 | f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
39 |-f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
39 |+f"\'normal\' {f"'nested' {"other"} 'single quotes'"} normal" # Q004
40 40 |
41 41 | # Make sure we do not unescape quotes
42 42 | this_is_fine = "This is an \\'escaped\\' quote"
singles_escaped_unnecessary.py:43:26: Q004 [*] Unnecessary escape on inner quote character
|
41 | # Make sure we do not unescape quotes
42 | this_is_fine = "This is an \\'escaped\\' quote"
43 | this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
|
= help: Remove backslash
Safe fix
40 40 |
41 41 | # Make sure we do not unescape quotes
42 42 | this_is_fine = "This is an \\'escaped\\' quote"
43 |-this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"
43 |+this_should_raise_Q004 = "This is an \\'escaped\\' quote with an extra backslash"

View File

@@ -0,0 +1,382 @@
---
source: crates/ruff_linter/src/rules/flake8_quotes/mod.rs
---
doubles_escaped_unnecessary.py:1:26: Q004 [*] Unnecessary escape on inner quote character
|
1 | this_should_raise_Q004 = 'This is a \"string\"'
| ^^^^^^^^^^^^^^^^^^^^^^ Q004
2 | this_should_raise_Q004 = 'This is \\ a \\\"string\"'
3 | this_is_fine = '"This" is a \"string\"'
|
= help: Remove backslash
Safe fix
1 |-this_should_raise_Q004 = 'This is a \"string\"'
1 |+this_should_raise_Q004 = 'This is a "string"'
2 2 | this_should_raise_Q004 = 'This is \\ a \\\"string\"'
3 3 | this_is_fine = '"This" is a \"string\"'
4 4 | this_is_fine = "This is a 'string'"
doubles_escaped_unnecessary.py:2:26: Q004 [*] Unnecessary escape on inner quote character
|
1 | this_should_raise_Q004 = 'This is a \"string\"'
2 | this_should_raise_Q004 = 'This is \\ a \\\"string\"'
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
3 | this_is_fine = '"This" is a \"string\"'
4 | this_is_fine = "This is a 'string'"
|
= help: Remove backslash
Safe fix
1 1 | this_should_raise_Q004 = 'This is a \"string\"'
2 |-this_should_raise_Q004 = 'This is \\ a \\\"string\"'
2 |+this_should_raise_Q004 = 'This is \\ a \\"string"'
3 3 | this_is_fine = '"This" is a \"string\"'
4 4 | this_is_fine = "This is a 'string'"
5 5 | this_is_fine = "\"This\" is a 'string'"
doubles_escaped_unnecessary.py:3:16: Q004 [*] Unnecessary escape on inner quote character
|
1 | this_should_raise_Q004 = 'This is a \"string\"'
2 | this_should_raise_Q004 = 'This is \\ a \\\"string\"'
3 | this_is_fine = '"This" is a \"string\"'
| ^^^^^^^^^^^^^^^^^^^^^^^^ Q004
4 | this_is_fine = "This is a 'string'"
5 | this_is_fine = "\"This\" is a 'string'"
|
= help: Remove backslash
Safe fix
1 1 | this_should_raise_Q004 = 'This is a \"string\"'
2 2 | this_should_raise_Q004 = 'This is \\ a \\\"string\"'
3 |-this_is_fine = '"This" is a \"string\"'
3 |+this_is_fine = '"This" is a "string"'
4 4 | this_is_fine = "This is a 'string'"
5 5 | this_is_fine = "\"This\" is a 'string'"
6 6 | this_is_fine = r'This is a \"string\"'
doubles_escaped_unnecessary.py:10:5: Q004 [*] Unnecessary escape on inner quote character
|
8 | this_should_raise_Q004 = (
9 | 'This is a'
10 | '\"string\"'
| ^^^^^^^^^^^^ Q004
11 | )
|
= help: Remove backslash
Safe fix
7 7 | this_is_fine = R'This is a \"string\"'
8 8 | this_should_raise_Q004 = (
9 9 | 'This is a'
10 |- '\"string\"'
10 |+ '"string"'
11 11 | )
12 12 |
13 13 | # Same as above, but with f-strings
doubles_escaped_unnecessary.py:14:1: Q004 [*] Unnecessary escape on inner quote character
|
13 | # Same as above, but with f-strings
14 | f'This is a \"string\"' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^ Q004
15 | f'This is \\ a \\\"string\"' # Q004
16 | f'"This" is a \"string\"'
|
= help: Remove backslash
Safe fix
11 11 | )
12 12 |
13 13 | # Same as above, but with f-strings
14 |-f'This is a \"string\"' # Q004
14 |+f'This is a "string"' # Q004
15 15 | f'This is \\ a \\\"string\"' # Q004
16 16 | f'"This" is a \"string\"'
17 17 | f"This is a 'string'"
doubles_escaped_unnecessary.py:15:1: Q004 [*] Unnecessary escape on inner quote character
|
13 | # Same as above, but with f-strings
14 | f'This is a \"string\"' # Q004
15 | f'This is \\ a \\\"string\"' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
16 | f'"This" is a \"string\"'
17 | f"This is a 'string'"
|
= help: Remove backslash
Safe fix
12 12 |
13 13 | # Same as above, but with f-strings
14 14 | f'This is a \"string\"' # Q004
15 |-f'This is \\ a \\\"string\"' # Q004
15 |+f'This is \\ a \\"string"' # Q004
16 16 | f'"This" is a \"string\"'
17 17 | f"This is a 'string'"
18 18 | f"\"This\" is a 'string'"
doubles_escaped_unnecessary.py:16:1: Q004 [*] Unnecessary escape on inner quote character
|
14 | f'This is a \"string\"' # Q004
15 | f'This is \\ a \\\"string\"' # Q004
16 | f'"This" is a \"string\"'
| ^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
17 | f"This is a 'string'"
18 | f"\"This\" is a 'string'"
|
= help: Remove backslash
Safe fix
13 13 | # Same as above, but with f-strings
14 14 | f'This is a \"string\"' # Q004
15 15 | f'This is \\ a \\\"string\"' # Q004
16 |-f'"This" is a \"string\"'
16 |+f'"This" is a "string"'
17 17 | f"This is a 'string'"
18 18 | f"\"This\" is a 'string'"
19 19 | fr'This is a \"string\"'
doubles_escaped_unnecessary.py:23:5: Q004 [*] Unnecessary escape on inner quote character
|
21 | this_should_raise_Q004 = (
22 | f'This is a'
23 | f'\"string\"' # Q004
| ^^^^^^^^^^^^^ Q004
24 | )
|
= help: Remove backslash
Safe fix
20 20 | fR'This is a \"string\"'
21 21 | this_should_raise_Q004 = (
22 22 | f'This is a'
23 |- f'\"string\"' # Q004
23 |+ f'"string"' # Q004
24 24 | )
25 25 |
26 26 | # Nested f-strings (Python 3.12+)
doubles_escaped_unnecessary.py:33:1: Q004 [*] Unnecessary escape on inner quote character
|
31 | #
32 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
33 | f'\"foo\" {'nested'}' # Q004
| ^^^^^^^^^^^^^^^^^^^^^ Q004
34 | f'\"foo\" {f'nested'}' # Q004
35 | f'\"foo\" {f'\"nested\"'} \"\"' # Q004
|
= help: Remove backslash
Safe fix
30 30 | # f"'foo' {'nested'}"
31 31 | #
32 32 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
33 |-f'\"foo\" {'nested'}' # Q004
33 |+f'"foo" {'nested'}' # Q004
34 34 | f'\"foo\" {f'nested'}' # Q004
35 35 | f'\"foo\" {f'\"nested\"'} \"\"' # Q004
36 36 |
doubles_escaped_unnecessary.py:34:1: Q004 [*] Unnecessary escape on inner quote character
|
32 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
33 | f'\"foo\" {'nested'}' # Q004
34 | f'\"foo\" {f'nested'}' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^ Q004
35 | f'\"foo\" {f'\"nested\"'} \"\"' # Q004
|
= help: Remove backslash
Safe fix
31 31 | #
32 32 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
33 33 | f'\"foo\" {'nested'}' # Q004
34 |-f'\"foo\" {f'nested'}' # Q004
34 |+f'"foo" {f'nested'}' # Q004
35 35 | f'\"foo\" {f'\"nested\"'} \"\"' # Q004
36 36 |
37 37 | f'normal {f'nested'} normal'
doubles_escaped_unnecessary.py:35:1: Q004 [*] Unnecessary escape on inner quote character
|
33 | f'\"foo\" {'nested'}' # Q004
34 | f'\"foo\" {f'nested'}' # Q004
35 | f'\"foo\" {f'\"nested\"'} \"\"' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
36 |
37 | f'normal {f'nested'} normal'
|
= help: Remove backslash
Safe fix
32 32 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
33 33 | f'\"foo\" {'nested'}' # Q004
34 34 | f'\"foo\" {f'nested'}' # Q004
35 |-f'\"foo\" {f'\"nested\"'} \"\"' # Q004
35 |+f'"foo" {f'\"nested\"'} ""' # Q004
36 36 |
37 37 | f'normal {f'nested'} normal'
38 38 | f'\"normal\" {f'nested'} normal' # Q004
doubles_escaped_unnecessary.py:35:12: Q004 [*] Unnecessary escape on inner quote character
|
33 | f'\"foo\" {'nested'}' # Q004
34 | f'\"foo\" {f'nested'}' # Q004
35 | f'\"foo\" {f'\"nested\"'} \"\"' # Q004
| ^^^^^^^^^^^^^ Q004
36 |
37 | f'normal {f'nested'} normal'
|
= help: Remove backslash
Safe fix
32 32 | # but as the actual string itself is invalid pre 3.12, we don't catch it.
33 33 | f'\"foo\" {'nested'}' # Q004
34 34 | f'\"foo\" {f'nested'}' # Q004
35 |-f'\"foo\" {f'\"nested\"'} \"\"' # Q004
35 |+f'\"foo\" {f'"nested"'} \"\"' # Q004
36 36 |
37 37 | f'normal {f'nested'} normal'
38 38 | f'\"normal\" {f'nested'} normal' # Q004
doubles_escaped_unnecessary.py:38:1: Q004 [*] Unnecessary escape on inner quote character
|
37 | f'normal {f'nested'} normal'
38 | f'\"normal\" {f'nested'} normal' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
39 | f'\"normal\" {f'nested'} "double quotes"'
40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
|
= help: Remove backslash
Safe fix
35 35 | f'\"foo\" {f'\"nested\"'} \"\"' # Q004
36 36 |
37 37 | f'normal {f'nested'} normal'
38 |-f'\"normal\" {f'nested'} normal' # Q004
38 |+f'"normal" {f'nested'} normal' # Q004
39 39 | f'\"normal\" {f'nested'} "double quotes"'
40 40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
doubles_escaped_unnecessary.py:39:1: Q004 [*] Unnecessary escape on inner quote character
|
37 | f'normal {f'nested'} normal'
38 | f'\"normal\" {f'nested'} normal' # Q004
39 | f'\"normal\" {f'nested'} "double quotes"'
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
|
= help: Remove backslash
Safe fix
36 36 |
37 37 | f'normal {f'nested'} normal'
38 38 | f'\"normal\" {f'nested'} normal' # Q004
39 |-f'\"normal\" {f'nested'} "double quotes"'
39 |+f'"normal" {f'nested'} "double quotes"'
40 40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
42 42 |
doubles_escaped_unnecessary.py:40:1: Q004 [*] Unnecessary escape on inner quote character
|
38 | f'\"normal\" {f'nested'} normal' # Q004
39 | f'\"normal\" {f'nested'} "double quotes"'
40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
|
= help: Remove backslash
Safe fix
37 37 | f'normal {f'nested'} normal'
38 38 | f'\"normal\" {f'nested'} normal' # Q004
39 39 | f'\"normal\" {f'nested'} "double quotes"'
40 |-f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
40 |+f'"normal" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
42 42 |
43 43 | # Make sure we do not unescape quotes
doubles_escaped_unnecessary.py:40:15: Q004 [*] Unnecessary escape on inner quote character
|
38 | f'\"normal\" {f'nested'} normal' # Q004
39 | f'\"normal\" {f'nested'} "double quotes"'
40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
|
= help: Remove backslash
Safe fix
37 37 | f'normal {f'nested'} normal'
38 38 | f'\"normal\" {f'nested'} normal' # Q004
39 39 | f'\"normal\" {f'nested'} "double quotes"'
40 |-f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
40 |+f'\"normal\" {f'"nested" {'other'} normal'} "double quotes"' # Q004
41 41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
42 42 |
43 43 | # Make sure we do not unescape quotes
doubles_escaped_unnecessary.py:41:1: Q004 [*] Unnecessary escape on inner quote character
|
39 | f'\"normal\" {f'nested'} "double quotes"'
40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
42 |
43 | # Make sure we do not unescape quotes
|
= help: Remove backslash
Safe fix
38 38 | f'\"normal\" {f'nested'} normal' # Q004
39 39 | f'\"normal\" {f'nested'} "double quotes"'
40 40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 |-f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
41 |+f'"normal" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
42 42 |
43 43 | # Make sure we do not unescape quotes
44 44 | this_is_fine = 'This is an \\"escaped\\" quote'
doubles_escaped_unnecessary.py:41:15: Q004 [*] Unnecessary escape on inner quote character
|
39 | f'\"normal\" {f'nested'} "double quotes"'
40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 | f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
42 |
43 | # Make sure we do not unescape quotes
|
= help: Remove backslash
Safe fix
38 38 | f'\"normal\" {f'nested'} normal' # Q004
39 39 | f'\"normal\" {f'nested'} "double quotes"'
40 40 | f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
41 |-f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
41 |+f'\"normal\" {f'"nested" {'other'} "double quotes"'} normal' # Q004
42 42 |
43 43 | # Make sure we do not unescape quotes
44 44 | this_is_fine = 'This is an \\"escaped\\" quote'
doubles_escaped_unnecessary.py:45:26: Q004 [*] Unnecessary escape on inner quote character
|
43 | # Make sure we do not unescape quotes
44 | this_is_fine = 'This is an \\"escaped\\" quote'
45 | this_should_raise_Q004 = 'This is an \\\"escaped\\\" quote with an extra backslash'
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
|
= help: Remove backslash
Safe fix
42 42 |
43 43 | # Make sure we do not unescape quotes
44 44 | this_is_fine = 'This is an \\"escaped\\" quote'
45 |-this_should_raise_Q004 = 'This is an \\\"escaped\\\" quote with an extra backslash'
45 |+this_should_raise_Q004 = 'This is an \\"escaped\\" quote with an extra backslash'

View File

@@ -23,6 +23,8 @@ mod tests {
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_13.py"))]
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_14.pyi"))]
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_15.py"))]
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_16.py"))]
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_17.py"))]
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_2.py"))]
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_3.py"))]
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("TCH004_4.py"))]
@@ -36,6 +38,8 @@ mod tests {
#[test_case(Rule::TypingOnlyStandardLibraryImport, Path::new("snapshot.py"))]
#[test_case(Rule::TypingOnlyThirdPartyImport, Path::new("TCH002.py"))]
#[test_case(Rule::TypingOnlyThirdPartyImport, Path::new("strict.py"))]
#[test_case(Rule::TypingOnlyThirdPartyImport, Path::new("typing_modules_1.py"))]
#[test_case(Rule::TypingOnlyThirdPartyImport, Path::new("typing_modules_2.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -285,6 +285,7 @@ pub(crate) fn typing_only_runtime_import(
checker.settings.isort.detect_same_package,
&checker.settings.isort.known_modules,
checker.settings.target_version,
checker.settings.isort.no_sections,
) {
ImportSection::Known(ImportType::LocalFolder | ImportType::FirstParty) => {
ImportType::FirstParty

View File

@@ -96,4 +96,19 @@ TCH005.py:22:9: TCH005 [*] Found empty type-checking block
24 22 |
25 23 |
TCH005.py:45:5: TCH005 [*] Found empty type-checking block
|
44 | if TYPE_CHECKING:
45 | pass # TCH005
| ^^^^ TCH005
|
= help: Delete empty type-checking block
Safe fix
41 41 |
42 42 | from typing_extensions import TYPE_CHECKING
43 43 |
44 |-if TYPE_CHECKING:
45 |- pass # TCH005

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---

View File

@@ -0,0 +1,25 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
TCH004_17.py:6:24: TCH004 [*] Move import `pandas.DataFrame` out of type-checking block. Import is used for more than type hinting.
|
5 | if TYPE_CHECKING:
6 | from pandas import DataFrame
| ^^^^^^^^^ TCH004
|
= help: Move out of type-checking block
Unsafe fix
1 1 | from __future__ import annotations
2 2 |
3 3 | from typing_extensions import TYPE_CHECKING
4 |+from pandas import DataFrame
4 5 |
5 6 | if TYPE_CHECKING:
6 |- from pandas import DataFrame
7 |+ pass
7 8 |
8 9 |
9 10 | def example() -> DataFrame:

View File

@@ -248,5 +248,6 @@ TCH002.py:172:24: TCH002 [*] Move third-party import `module.Member` into a type
172 |- from module import Member
173 176 |
174 177 | x: Member = 1
175 178 |

View File

@@ -0,0 +1,29 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
typing_modules_1.py:7:24: TCH002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
|
6 | def func():
7 | from pandas import DataFrame
| ^^^^^^^^^ TCH002
8 |
9 | df: DataFrame
|
= help: Move into type-checking block
Unsafe fix
1 1 | from __future__ import annotations
2 2 |
3 3 | from typing_extensions import Self
4 |+from typing import TYPE_CHECKING
5 |+
6 |+if TYPE_CHECKING:
7 |+ from pandas import DataFrame
4 8 |
5 9 |
6 10 | def func():
7 |- from pandas import DataFrame
8 11 |
9 12 | df: DataFrame

View File

@@ -0,0 +1,27 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
typing_modules_2.py:7:24: TCH002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
|
6 | def func():
7 | from pandas import DataFrame
| ^^^^^^^^^ TCH002
8 |
9 | df: DataFrame
|
= help: Move into type-checking block
Unsafe fix
2 2 |
3 3 | import typing_extensions
4 4 |
5 |+if typing_extensions.TYPE_CHECKING:
6 |+ from pandas import DataFrame
7 |+
5 8 |
6 9 | def func():
7 |- from pandas import DataFrame
8 10 |
9 11 | df: DataFrame

View File

@@ -61,6 +61,7 @@ enum Reason<'a> {
SourceMatch(&'a Path),
NoMatch,
UserDefinedSection,
NoSections,
}
#[allow(clippy::too_many_arguments)]
@@ -72,16 +73,22 @@ pub(crate) fn categorize<'a>(
detect_same_package: bool,
known_modules: &'a KnownModules,
target_version: PythonVersion,
no_sections: bool,
) -> &'a ImportSection {
let module_base = module_name.split('.').next().unwrap();
let (import_type, reason) = {
if level.is_some_and(|level| level > 0) {
if matches!(level, None | Some(0)) && module_base == "__future__" {
(&ImportSection::Known(ImportType::Future), Reason::Future)
} else if no_sections {
(
&ImportSection::Known(ImportType::FirstParty),
Reason::NoSections,
)
} else if level.is_some_and(|level| level > 0) {
(
&ImportSection::Known(ImportType::LocalFolder),
Reason::NonZeroLevel,
)
} else if module_base == "__future__" {
(&ImportSection::Known(ImportType::Future), Reason::Future)
} else if let Some((import_type, reason)) = known_modules.categorize(module_name) {
(import_type, reason)
} else if is_known_standard_library(target_version.minor(), module_base) {
@@ -141,6 +148,7 @@ pub(crate) fn categorize_imports<'a>(
detect_same_package: bool,
known_modules: &'a KnownModules,
target_version: PythonVersion,
no_sections: bool,
) -> BTreeMap<&'a ImportSection, ImportBlock<'a>> {
let mut block_by_type: BTreeMap<&ImportSection, ImportBlock> = BTreeMap::default();
// Categorize `Stmt::Import`.
@@ -153,6 +161,7 @@ pub(crate) fn categorize_imports<'a>(
detect_same_package,
known_modules,
target_version,
no_sections,
);
block_by_type
.entry(import_type)
@@ -170,6 +179,7 @@ pub(crate) fn categorize_imports<'a>(
detect_same_package,
known_modules,
target_version,
no_sections,
);
block_by_type
.entry(classification)
@@ -187,6 +197,7 @@ pub(crate) fn categorize_imports<'a>(
detect_same_package,
known_modules,
target_version,
no_sections,
);
block_by_type
.entry(classification)
@@ -204,6 +215,7 @@ pub(crate) fn categorize_imports<'a>(
detect_same_package,
known_modules,
target_version,
no_sections,
);
block_by_type
.entry(classification)

View File

@@ -2,8 +2,6 @@
use std::path::{Path, PathBuf};
use itertools::Itertools;
use annotate::annotate_imports;
use block::{Block, Trailer};
pub(crate) use categorize::categorize;
@@ -16,9 +14,8 @@ use ruff_python_ast::PySourceType;
use ruff_python_codegen::Stylist;
use ruff_source_file::Locator;
use settings::Settings;
use sorting::ModuleKey;
use types::EitherImport::{Import, ImportFrom};
use types::{AliasData, EitherImport, ImportBlock, TrailingComma};
use types::{AliasData, ImportBlock, TrailingComma};
use crate::line_width::{LineLength, LineWidthBuilder};
use crate::settings::types::PythonVersion;
@@ -156,6 +153,7 @@ fn format_import_block(
settings.detect_same_package,
&settings.known_modules,
target_version,
settings.no_sections,
);
let mut output = String::new();
@@ -175,30 +173,6 @@ fn format_import_block(
let imports = order_imports(import_block, settings);
let imports = imports
.import
.into_iter()
.map(Import)
.chain(imports.import_from.into_iter().map(ImportFrom));
let imports: Vec<EitherImport> = if settings.force_sort_within_sections {
imports
.sorted_by_cached_key(|import| match import {
Import((alias, _)) => {
ModuleKey::from_module(Some(alias.name), alias.asname, None, None, settings)
}
ImportFrom((import_from, _, _, aliases)) => ModuleKey::from_module(
import_from.module,
None,
import_from.level,
aliases.first().map(|(alias, _)| (alias.name, alias.asname)),
settings,
),
})
.collect()
} else {
imports.collect()
};
// Add a blank line between every section.
if is_first_block {
is_first_block = false;
@@ -697,6 +671,7 @@ mod tests {
}
#[test_case(Path::new("force_sort_within_sections.py"))]
#[test_case(Path::new("force_sort_within_sections_with_as_names.py"))]
fn force_sort_within_sections(path: &Path) -> Result<()> {
let snapshot = format!("force_sort_within_sections_{}", path.to_string_lossy());
let mut diagnostics = test_path(
@@ -862,6 +837,24 @@ mod tests {
Ok(())
}
#[test_case(Path::new("no_sections.py"))]
fn no_sections(path: &Path) -> Result<()> {
let snapshot = format!("no_sections_{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("isort").join(path).as_path(),
&LinterSettings {
isort: super::settings::Settings {
no_sections: true,
..super::settings::Settings::default()
},
src: vec![test_resource_path("fixtures/isort")],
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
#[test_case(Path::new("no_lines_before.py"))]
fn no_lines_before(path: &Path) -> Result<()> {
let snapshot = format!("no_lines_before.py_{}", path.to_string_lossy());

View File

@@ -2,23 +2,16 @@ use itertools::Itertools;
use super::settings::Settings;
use super::sorting::{MemberKey, ModuleKey};
use super::types::{AliasData, CommentSet, ImportBlock, ImportFromStatement, OrderedImportBlock};
use super::types::EitherImport::{self, Import, ImportFrom};
use super::types::{AliasData, CommentSet, ImportBlock, ImportFromStatement};
pub(crate) fn order_imports<'a>(
block: ImportBlock<'a>,
settings: &Settings,
) -> OrderedImportBlock<'a> {
let mut ordered = OrderedImportBlock::default();
) -> Vec<EitherImport<'a>> {
let straight_imports = block.import.into_iter();
// Sort `Stmt::Import`.
ordered
.import
.extend(block.import.into_iter().sorted_by_cached_key(|(alias, _)| {
ModuleKey::from_module(Some(alias.name), alias.asname, None, None, settings)
}));
// Sort `Stmt::ImportFrom`.
ordered.import_from.extend(
let from_imports =
// Include all non-re-exports.
block
.import_from
@@ -56,8 +49,31 @@ pub(crate) fn order_imports<'a>(
.collect::<Vec<(AliasData, CommentSet)>>(),
)
},
)
.sorted_by_cached_key(|(import_from, _, _, aliases)| {
);
let ordered_imports = if settings.force_sort_within_sections {
straight_imports
.map(Import)
.chain(from_imports.map(ImportFrom))
.sorted_by_cached_key(|import| match import {
Import((alias, _)) => {
ModuleKey::from_module(Some(alias.name), alias.asname, None, None, settings)
}
ImportFrom((import_from, _, _, aliases)) => ModuleKey::from_module(
import_from.module,
None,
import_from.level,
aliases.first().map(|(alias, _)| (alias.name, alias.asname)),
settings,
),
})
.collect()
} else {
let ordered_straight_imports = straight_imports.sorted_by_cached_key(|(alias, _)| {
ModuleKey::from_module(Some(alias.name), alias.asname, None, None, settings)
});
let ordered_from_imports =
from_imports.sorted_by_cached_key(|(import_from, _, _, aliases)| {
ModuleKey::from_module(
import_from.module,
None,
@@ -65,7 +81,13 @@ pub(crate) fn order_imports<'a>(
aliases.first().map(|(alias, _)| (alias.name, alias.asname)),
settings,
)
}),
);
ordered
});
ordered_straight_imports
.into_iter()
.map(Import)
.chain(ordered_from_imports.into_iter().map(ImportFrom))
.collect()
};
ordered_imports
}
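For reference, the output this refactor preserves: the first block below matches the `force_sort_within_sections_with_as_names.py` snapshot further down, while the second is a hedged sketch of the default behavior.

```python
# force-sort-within-sections = true: `import` and `from` statements are ordered
# together, keyed by module name.
import datetime
import datetime as dt
from datetime import timedelta
from datetime import timezone as tz
import encodings

# force-sort-within-sections = false (default): straight imports are sorted first,
# then the `from` imports.
import datetime
import datetime as dt
import encodings
from datetime import timedelta
from datetime import timezone as tz
```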

View File

@@ -56,6 +56,7 @@ pub struct Settings {
pub lines_between_types: usize,
pub forced_separate: Vec<String>,
pub section_order: Vec<ImportSection>,
pub no_sections: bool,
}
impl Default for Settings {
@@ -82,6 +83,7 @@ impl Default for Settings {
lines_between_types: 0,
forced_separate: Vec::new(),
section_order: ImportType::iter().map(ImportSection::Known).collect(),
no_sections: false,
}
}
}

View File

@@ -8,7 +8,7 @@ docstring.py:1:1: I002 [*] Missing required import: `from __future__ import anno
2 |
3 | x = 1
|
= help: Insert required import: `from future import annotations`
= help: Insert required import: `from __future__ import annotations`
Safe fix
1 1 | """Hello, world!"""
@@ -23,7 +23,7 @@ docstring.py:1:1: I002 [*] Missing required import: `from __future__ import gene
2 |
3 | x = 1
|
= help: Insert required import: `from future import generator_stop`
= help: Insert required import: `from __future__ import generator_stop`
Safe fix
1 1 | """Hello, world!"""

View File

@@ -0,0 +1,25 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---
force_sort_within_sections_with_as_names.py:1:1: I001 [*] Import block is un-sorted or un-formatted
|
1 | / import encodings
2 | | from datetime import timezone as tz
3 | | from datetime import timedelta
4 | | import datetime as dt
5 | | import datetime
|
= help: Organize imports
Safe fix
1 |+import datetime
2 |+import datetime as dt
3 |+from datetime import timedelta
4 |+from datetime import timezone as tz
1 5 | import encodings
2 |-from datetime import timezone as tz
3 |-from datetime import timedelta
4 |-import datetime as dt
5 |-import datetime

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---

View File

@@ -8,7 +8,7 @@ comment.py:1:1: I002 [*] Missing required import: `from __future__ import annota
2 |
3 | x = 1
|
= help: Insert required import: `from future import annotations`
= help: Insert required import: `from __future__ import annotations`
Safe fix
1 1 | #!/usr/bin/env python3

View File

@@ -7,7 +7,7 @@ comments_and_newlines.py:1:1: I002 [*] Missing required import: `from __future__
| I002
2 | # A copyright notice could go here
|
= help: Insert required import: `from future import annotations`
= help: Insert required import: `from __future__ import annotations`
Safe fix
2 2 | # A copyright notice could go here

View File

@@ -8,7 +8,7 @@ docstring.py:1:1: I002 [*] Missing required import: `from __future__ import anno
2 |
3 | x = 1
|
= help: Insert required import: `from future import annotations`
= help: Insert required import: `from __future__ import annotations`
Safe fix
1 1 | """Hello, world!"""

Some files were not shown because too many files have changed in this diff.