Compare commits

..

43 Commits

Author SHA1 Message Date
Charlie Marsh
8fbec8e6a2 Update docs to match updated logo and color palette 2023-06-21 20:48:29 -04:00
Charlie Marsh
ac146e11f0 Allow typing.Final for mutable-class-default annotations (RUF012) (#5274)
## Summary

See: https://github.com/astral-sh/ruff/issues/5243.
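A minimal sketch of the new behavior (class and attribute names are hypothetical; the rule's treatment of each annotation is paraphrased from the linked issue):

```python
from typing import ClassVar, Final

class Settings:
    # Now exempt from RUF012: `Final` marks a class-level constant.
    DEFAULTS: Final = ["a", "b"]
    # `ClassVar` annotations were already exempt.
    registry: ClassVar[dict] = {}
    # Still flagged: a mutable default with neither annotation.
    items = []
```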
2023-06-22 00:24:53 +00:00
Charlie Marsh
1229600e1d Ignore Pydantic classes when evaluating mutable-class-default (RUF012) (#5273)
Closes https://github.com/astral-sh/ruff/issues/5272.
2023-06-21 23:59:44 +00:00
Micha Reiser
ccf34aae8c Format Attribute Expression (#5259) 2023-06-21 21:33:53 +00:00
Tom Kuson
341b12d918 Complete documentation for Ruff-specific rules (#5262)
## Summary

Completes the documentation for the Ruff-specific ruleset. Related to
#2646.

## Test Plan

`python scripts/check_docs_formatted.py`
2023-06-21 21:30:44 +00:00
Micha Reiser
3d7411bfaf Use trait for labels instead of TypeId (#5270) 2023-06-21 22:26:09 +01:00
David Szotten
1eccbbb60e Format StmtFor (#5163)
<!--
Thank you for contributing to Ruff! To help us out with reviewing,
please consider the following:

- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->

## Summary

Format StmtFor.

Still trying to learn how to help out with the formatter; trying
something slightly more advanced than [break](#5158).

Mostly copied from StmtWhile.


## Test Plan

snapshots
2023-06-21 23:00:31 +02:00
Charlie Marsh
e71f044f0d Avoid including nursery rules in linter-level selectors (#5268)
## Summary

Ensures that `--select PL` and `--select PLC` don't include `PLC1901`.
Previously, `--select PL` _did_, because it's a "linter-level selector"
(`--select PLC` is viewed as selecting the `C` prefix from `PL`), and we
were missing this filtering path.
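A hedged sketch of that filtering path in Python (the rule codes are an illustrative subset and the function name is made up; the real logic lives in the selector expansion in Rust):

```python
# Nursery rules must be selected exactly; prefix selectors skip them.
NURSERY = {"PLC1901"}
PYLINT_RULES = {"PLC0414", "PLC1901", "PLR0911"}  # illustrative subset

def select(prefix: str) -> set[str]:
    matched = {code for code in PYLINT_RULES if code.startswith(prefix)}
    # `PL` and `PLC` are prefix selectors, so they exclude nursery rules;
    # only an exact selector opts in.
    return {c for c in matched if c not in NURSERY or c == prefix}

print(sorted(select("PL")))       # ['PLC0414', 'PLR0911']
print(sorted(select("PLC1901")))  # ['PLC1901']
```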
2023-06-21 20:11:40 +00:00
James Berry
f194572be8 Remove visit_arg_with_default (#5265)
## Summary

This is a follow-up to #5221. Turns out it was easy to restructure the
visitor to get the right order; I'm just dumb 🤷‍♂️. I've
removed `visit_arg_with_default` entirely from the `Visitor`, although
it still exists as part of `preorder::Visitor`.
2023-06-21 16:00:24 -04:00
Charlie Marsh
62e2c46f98 Move compare-to-empty-string to nursery (#5264)
## Summary

This rule has too many false positives. It has parity with the Pylint
version, but the Pylint version is part of an
[extension](https://pylint.readthedocs.io/en/stable/user_guide/messages/convention/compare-to-empty-string.html),
and so requires explicit opt-in.

I'm moving this rule to the nursery to require explicit opt-in, as with
Pylint.

Closes #4282.
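To illustrate why these comparisons are hard to rewrite safely (a sketch; the rule's suggested rewrite is paraphrased):

```python
# compare-to-empty-string suggests replacing `x == ""` with `not x`,
# but the two are only equivalent when `x` is known to be a string:
x = None
print(x == "")  # False
print(not x)    # True -- the "simplified" form changes behavior
```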
2023-06-21 19:47:02 +00:00
konstin
9419d3f9c8 Special ExprTuple formatting option for for-loops (#5175)
## Motivation

While Black keeps parentheses nearly everywhere, the notable exception
is in the body of for loops:
```python
for (a, b) in x:
    pass
```
becomes
```python
for a, b in x:
    pass
```

This currently blocks #5163, which this PR should unblock.

## Solution

This changes the `ExprTuple` formatting option to include one additional
option that removes the parentheses when not using magic trailing comma
and not breaking. It is supposed to be used through
```rust
#[derive(Debug)]
struct ExprTupleWithoutParentheses<'a>(&'a Expr);

impl Format<PyFormatContext<'_>> for ExprTupleWithoutParentheses<'_> {
    fn fmt(&self, f: &mut Formatter<PyFormatContext<'_>>) -> FormatResult<()> {
        match self.0 {
            Expr::Tuple(expr_tuple) => expr_tuple
                .format()
                .with_options(TupleParentheses::StripInsideForLoop)
                .fmt(f),
            other => other.format().with_options(Parenthesize::IfBreaks).fmt(f),
        }
    }
}
```


## Testing

The for-loop formatting isn't merged yet because it depends on this change
(and I didn't want to create more git weirdness across two people), but I've
confirmed that when applying this to while loops instead of for loops, then
```rust
        write!(
            f,
            [
                text("while"),
                space(),
                ExprTupleWithoutParentheses(test.as_ref()),
                text(":"),
                trailing_comments(trailing_condition_comments),
                block_indent(&body.format())
            ]
        )?;
```
makes
```python
while (a, b):
    pass

while (
    ajssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssa,
    b,
):
    pass

while (a,b,):
    pass
```
formatted as
```python
while a, b:
    pass

while (
    ajssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssa,
    b,
):
    pass

while (
    a,
    b,
):
    pass
```
2023-06-21 21:17:47 +02:00
James Berry
9b5fb8f38f Fix AST visitor traversal order (#5221)
## Summary

According to the AST visitor documentation, the AST visitor "visits all
nodes in the AST recursively in evaluation-order". However, the current
traversal fails to meet this specification in a few places.

### Function traversal

```python
order = []
@(order.append("decorator") or (lambda x: x))
def f(
    posonly: order.append("posonly annotation") = order.append("posonly default"),
    /,
    arg: order.append("arg annotation") = order.append("arg default"),
    *args: order.append("vararg annotation"),
    kwarg: order.append("kwarg annotation") = order.append("kwarg default"),
    **kwargs: order.append("kwarg annotation")
) -> order.append("return annotation"):
    pass
print(order)
```

Executing the above snippet using CPython 3.10.6 prints the following
result (formatted for readability):
```python
[
    'decorator',
    'posonly default',
    'arg default',
    'kwarg default',
    'arg annotation',
    'posonly annotation',
    'vararg annotation',
    'kwarg annotation',
    'kwarg annotation',
    'return annotation',
]
```

Here we can see that decorators are evaluated first, followed by
argument defaults, and annotations are last. The current traversal of a
function's AST does not align with this order.

### Annotated assignment traversal
```python
order = []
x: order.append("annotation") = order.append("expression")
print(order)
```

Executing the above snippet using CPython 3.10.6 prints the following
result:
```python
['expression', 'annotation']
```

Here we can see that an annotated assignment's annotation is evaluated
after the assignment's value expression. The current traversal of an
annotated assignment's AST does not align with this order.

## Why?

I'm slowly working on #3946 and porting over some of the logic and tests
from ssort. ssort is very sensitive to AST traversal order, so ensuring
the utmost correctness here is important.

## Test Plan

There doesn't seem to be existing tests for the AST visitor, so I didn't
bother adding tests for these very subtle changes. However, this
behavior will be captured in the tests for the PR which addresses #3946.
2023-06-21 14:40:58 -04:00
konstin
d7c7484618 Format function argument separator comments (#5211)
## Summary

This is a complete rewrite of the handling of `/` and `*` comments
in function signatures. The key problem is that slash and star
don't have a node. We now parse out the positions of slash and star and
their respective preceding and following nodes. I've left code comments
for each possible case of function signature structure and comment
placement.

## Test Plan

I extended the function statement fixtures with cases that I found. If
you have more weird edge cases, your input would be appreciated.
2023-06-21 17:56:47 +00:00
konstin
bc63cc9b3c Fix remaining CPython formatter errors except for function argument separator comments (#5210)
## Summary

This fixes two problems discovered when trying to format the cpython
repo with `cargo run --bin ruff_dev -- check-formatter-stability
projects/cpython`:

The first is to ignore try/except trailing comments for now since they
lead to unstable formatting on the dummy.

The second is to avoid dropping trailing if comments through placement:
the placement now keeps a comment trailing an if-elif or if-elif-else
chain as a trailing comment on the entire if statement. Previously, the
last comment would have been lost.
```python
if "first if":
    pass
elif "first elif":
    pass
# a comment trailing the entire if statement
```

The last remaining problem in cpython so far is function signature
argument separator comment placement which is its own PR on top of this.

## Test Plan

I added test fixtures of minimized examples with links back to the
original cpython location
2023-06-21 19:45:53 +02:00
Charlie Marsh
bf1a94ee54 Initialize caches for packages and standalone files (#5237)
## Summary

While fixing https://github.com/astral-sh/ruff/pull/5233, I noticed that
in FastAPI, 343 out of 823 files weren't hitting the cache. It turns out
these are standalone files in the documentation that lack a "package
root". Later, when looking up the cache entries, we fall back to the
package directory.

This PR ensures that we initialize the cache for both kinds of files:
those that are in a package, and those that aren't.

The total size of the FastAPI cache for me is now 388K. I also suspect
that this approach is much faster than the initial version, since
before, we were probably initializing one cache per _directory_.

## Test Plan

Ran `cargo run -p ruff_cli -- check ../fastapi --verbose`; verified
that, on second execution, there were no "Checking" entries in the logs.
2023-06-21 17:29:09 +00:00
Dhruv Manilawala
c792c10eaa Add support for nested quoted annotations in RUF013 (#5254)
## Summary

This is a follow up on #5235 to add support for nested quoted
annotations for RUF013.

## Test Plan

`cargo test`
2023-06-21 17:25:27 +00:00
Evan Rittenhouse
f9ffb3d50d Add Applicability to pylint (#5251) 2023-06-21 17:22:01 +00:00
Evan Rittenhouse
2b76d88bd3 Add Applicability to pandas_vet (#5252) 2023-06-21 17:12:47 +00:00
Evan Rittenhouse
41ef17b007 Add Applicability to pyflakes (#5253) 2023-06-21 17:04:55 +00:00
Charlie Marsh
0aa21277c6 Improve documentation for overlong-line rules (#5260)
Closes https://github.com/astral-sh/ruff/issues/5248.
2023-06-21 17:02:20 +00:00
Charlie Marsh
ecf61d49fa Restore existing bindings when unbinding caught exceptions (#5256)
## Summary

In the latest release, we made some improvements to the semantic model,
but our modifications to exception-unbinding are causing some
false-positives. For example:

```py
try:
    v = 3
except ImportError as v:
    print(v)
else:
    print(v)
```

In the latest release, we started unbinding `v` after the `except`
handler. (We used to restore the existing binding, the `v = 3`, but this
was quite complicated.) Because we don't have full branch analysis, we
can't then know that `v` is still bound in the `else` branch.

The solution here modifies `resolve_read` to skip lookup when hitting
unbound exceptions: when we store the "unbind" for `except ImportError
as v`, we save the binding that it shadowed (`v = 3`) and skip to it.

Closes #5249.

Closes #5250.
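The unbinding itself mirrors CPython's semantics: the interpreter implicitly deletes the exception name when the handler exits, so even a pre-existing binding is gone afterwards. A minimal demonstration:

```python
v = 3
try:
    raise ImportError
except ImportError as v:
    pass  # CPython runs an implicit `del v` when the handler exits

try:
    v
except NameError:
    print("v is unbound")  # the earlier `v = 3` binding is gone too
```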
2023-06-21 12:53:58 -04:00
Charlie Marsh
d99b3bf661 Add some projects to the ecosystem CI check (#5258) 2023-06-21 12:42:58 -04:00
Micha Reiser
e47aa468d5 Format Identifier (#5255) 2023-06-21 17:35:37 +02:00
konstin
6155fd647d Format Slice Expressions (#5047)
This formats slice expressions and subscript expressions.

Spaces around the colons follow the same rules as Black
(https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#slices):
```python
e00 = "e"[:]
e01 = "e"[:1]
e02 = "e"[: a()]
e10 = "e"[1:]
e11 = "e"[1:1]
e12 = "e"[1 : a()]
e20 = "e"[a() :]
e21 = "e"[a() : 1]
e22 = "e"[a() : a()]
e200 = "e"[a() : :]
e201 = "e"[a() :: 1]
e202 = "e"[a() :: a()]
e210 = "e"[a() : 1 :]
```

Comment placement is different due to our very different infrastructure.
If we have explicit bounds (e.g. `x[1:2]`) all comments get assigned as
leading or trailing to the bound expression. If a bound is missing
`[:]`, comments get marked as dangling and placed in the same section as
they were originally in:
```python
x = "x"[ # a
      # b
    :  # c
      # d
]
```
to
```python
x = "x"[
    # a
    # b
    :
    # c
    # d
]
```
Except for the potential trailing end-of-line comments, all comments get
formatted on their own line. This can be improved by keeping end-of-line
comments after the opening bracket or after a colon as such but the
changes were already complex enough.

I added tests for comment placement and spaces.
2023-06-21 15:09:39 +00:00
Charlie Marsh
4634560c80 Ensure release tagging has access to repo clone (#5240)
## Summary

The [release
failed](https://github.com/astral-sh/ruff/actions/runs/5329733171/jobs/9656004063),
but late enough that I was able to do the remaining steps manually. The
issue here is that the tagging step requires that we clone the repo. I
split the upload (to PyPI), tagging (in Git), and publishing (to GitHub
Releases) phases into their own steps, since they need different
resources + permissions anyway.
2023-06-21 10:24:33 -04:00
Charlie Marsh
10885d09a1 Add support for top-level quoted annotations in RUF013 (#5235)
## Summary

This PR adds support for autofixing annotations like:

```python
def f(x: "int" = None):
    ...
```

However, we don't yet support nested quotes, like:

```python
def f(x: Union["int", "str"] = None):
    ...
```

Closes #5231.
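For reference, the supported autofix rewrites the quoted annotation to an explicit optional; the exact quoting in the generated fix is an assumption here:

```python
from typing import Optional

# Before: a quoted annotation hiding an implicit Optional.
def f(x: "int" = None):
    ...

# After the RUF013 fix: the None default is stated explicitly.
def f_fixed(x: "Optional[int]" = None):
    ...
```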
2023-06-21 10:23:37 -04:00
konstin
44156f6962 Improve debuggability of place_comment (#5209)
## Summary

I found it hard to figure out which function decides placement for a
specific comment. An explicit loop makes this easier to debug.

## Test Plan

There should be no functional changes, no changes to the formatting of
the fixtures.
2023-06-21 09:52:13 +00:00
konstin
f551c9aad2 Unify benchmarking and profiling docs (#5145)
This moves all docs about benchmarking and profiling into
CONTRIBUTING.md by moving the readme of `ruff_benchmark` and adding more
information on profiling.

We need to somehow consolidate that documentation, but I'm not convinced
that this is the best way (I tried subpages in mkdocs, but that didn't
seem good either), so I'm happy to take suggestions.
2023-06-21 09:39:56 +00:00
Micha Reiser
653dbb6d17 Format BoolOp (#4986) 2023-06-21 09:27:57 +00:00
konstin
db301c14bd Consistently name comment own line/end-of-line line_position() (#5215)
## Summary

Previously, `DecoratedComment` used `text_position()` and
`SourceComment` used `position()`. This PR unifies this to
`line_position` everywhere.

## Test Plan

This is a rename refactoring.
2023-06-21 11:04:56 +02:00
Micha Reiser
1336ca601b Format UnaryExpr

## Summary

This PR adds basic formatting for unary expressions.

## Test Plan

I added a new `unary.py` with custom test cases
2023-06-21 10:09:47 +02:00
Micha Reiser
3973836420 Correctly handle left/right breaking of binary expression

## Summary
Black supports four layouts when it comes to breaking binary expressions:

```rust
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
enum BinaryLayout {
    /// Put each operand on their own line if either side expands
    Default,

    /// Try to expand the left to make it fit. Add parentheses if the left or right don't fit.
    ///
    ///```python
    /// [
    ///     a,
    ///     b
    /// ] & c
    ///```
    ExpandLeft,

    /// Try to expand the right to make it fit. Add parentheses if the left or right don't fit.
    ///
    /// ```python
    /// a & [
    ///     b,
    ///     c
    /// ]
    /// ```
    ExpandRight,

    /// Both the left and right side can be expanded. Try in the following order:
    /// * expand the right side
    /// * expand the left side
    /// * expand both sides
    ///
    /// to make the expression fit
    ///
    /// ```python
    /// [
    ///     a,
    ///     b
    /// ] & [
    ///     c,
    ///     d
    /// ]
    /// ```
    ExpandRightThenLeft,
}
```

Our current implementation only handles `ExpandRight` and `Default` correctly. This PR adds support for `ExpandRightThenLeft` and `ExpandLeft`. 

## Test Plan

I added tests that play through all 4 binary expression layouts.
2023-06-21 09:40:05 +02:00
Charlie Marsh
a332f078db Checkout repo to support release tag validation (#5238)
## Summary

The
[release](https://github.com/astral-sh/ruff/actions/runs/5329340068/jobs/9655224008)
failed due to an inability to find `pyproject.toml`. This PR moves that
validation into its own step (so we can fail fast) and ensures we clone
the repo.
2023-06-21 03:16:35 +00:00
Charlie Marsh
e0339b538b Bump version to 0.0.274 (#5230) 2023-06-20 22:12:32 -04:00
Charlie Marsh
07b6b7401f Move copyright rules to flake8_copyright module (#5236)
## Summary

I initially wanted this category to be more general and decoupled from
the plugin, but I got some feedback that the titling felt inconsistent
with others.
2023-06-21 01:56:40 +00:00
Charlie Marsh
1db7d9e759 Avoid erroneous RUF013 violations for quoted annotations (#5234)
## Summary

Temporary fix for #5231: since we can't flag and fix these properly yet,
we're just disabling them for now.

cc @dhruvmanila

## Test Plan

`cargo test`
2023-06-21 01:29:12 +00:00
Charlie Marsh
621e9ace88 Use package roots rather than package members for cache initialization (#5233)
## Summary

This is a proper fix for the issue patched-over in
https://github.com/astral-sh/ruff/pull/5229, thanks to an extremely
helpful repro from @tlambert03 in that thread. It looks like we were
using the keys of `package_roots` rather than the values to initialize
the cache -- but it's a map from package to package root.

## Test Plan

Reverted #5229, then ran through the plan that @tlambert03 included in
https://github.com/astral-sh/ruff/pull/5229#issuecomment-1599723226.
Verified the panic before but not after this change.
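A hypothetical sketch of the bug (paths and shapes invented for illustration): `package_roots` maps each package directory to its package root, so the cache must be keyed on the values, not the keys.

```python
package_roots = {
    "/repo/src/mypkg": "/repo/src",    # package -> package root
    "/repo/docs/example.py": None,     # standalone file, no root
}

# Buggy: one cache per package directory (the keys).
buggy = set(package_roots)

# Fixed: one cache per package root (the values).
fixed = {root for root in package_roots.values() if root is not None}

print(fixed)  # {'/repo/src'}
```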
2023-06-20 21:21:45 -04:00
Charlie Marsh
f9f77cf617 Revert change to RUF010 to remove unnecessary str calls (#5232)
## Summary

This PR reverts #4971 (aba073a791). It
turns out that `f"{str(x)}"` and `f"{x}"` are often but not exactly
equivalent, and performing that conversion automatically can lead to
subtle bugs, See the discussion in
https://github.com/astral-sh/ruff/issues/4958.
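The difference comes down to which protocol an f-string invokes; a minimal example (the class name is made up):

```python
class Angle:
    def __str__(self) -> str:
        return "via __str__"

    def __format__(self, spec: str) -> str:
        return "via __format__"

a = Angle()
print(f"{str(a)}")  # via __str__
print(f"{a}")       # via __format__ -- f-strings call __format__, not __str__
```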
2023-06-20 21:15:17 -04:00
Charlie Marsh
1a2bd984f2 Avoid .unwrap() on cache access (#5229)
## Summary

I haven't been able to determine why / when this is happening, but in
some cases, users are reporting that this `unwrap()` is causing a panic.
It's fine to just return `None` here and fall back to "No cache",
certainly better than panicking (while we figure out the edge case).

Closes #5225.

Closes #5228.
2023-06-20 19:01:21 -04:00
Tom Kuson
4717d0779f Complete flake8-debugger documentation (#5223)
## Summary

Completes the documentation for the `flake8-debugger` ruleset. Related
to #2646.

## Test Plan

`python scripts/check_docs_formatted.py`
2023-06-20 21:04:32 +00:00
Florian Stasse
07409ce201 Fixed typo in numpy deprecated type alias rule documentation (#5224)
## Summary

It is a very simple typo fix in the "numpy deprecated type alias"
documentation.
2023-06-20 16:51:51 -04:00
Addison Crump
2c0ec97782 Use cpython with fuzzer corpus (#5183)
Following #5055, add cpython as a member of the fuzzer corpus
unconditionally.
2023-06-20 16:51:06 -04:00
Micha Reiser
e520a3a721 Fix ArgWithDefault comments handling (#5204) 2023-06-20 20:48:07 +00:00
182 changed files with 6336 additions and 2327 deletions


@@ -4,7 +4,7 @@ on:
workflow_dispatch:
inputs:
tag:
description: "The version to tag, without the leading 'v'. If omitted, will initiate a dry run skipping uploading artifact."
description: "The version to tag, without the leading 'v'. If omitted, will initiate a dry run (no uploads)."
type: string
sha:
description: "Optionally, the full sha of the commit to be released"
@@ -392,28 +392,14 @@ jobs:
*.tar.gz
*.sha256
release:
name: Release
validate-tag:
name: Validate tag
runs-on: ubuntu-latest
needs:
- macos-universal
- macos-x86_64
- windows
- linux
- linux-cross
- musllinux
- musllinux-cross
# If you don't set an input it's a dry run skipping uploading artifact
# If you don't set an input tag, it's a dry run (no uploads).
if: ${{ inputs.tag }}
environment:
name: release
permissions:
# For pypi trusted publishing
id-token: write
# For GitHub release publishing
contents: write
steps:
- name: Consistency check tag
- uses: actions/checkout@v3
- name: Check tag consistency
run: |
version=$(grep "version = " pyproject.toml | sed -e 's/version = "\(.*\)"/\1/g')
if [ "${{ inputs.tag }}" != "${version}" ]; then
@@ -424,7 +410,7 @@ jobs:
else
echo "Releasing ${version}"
fi
- name: Consistency check sha
- name: Check SHA consistency
if: ${{ inputs.sha }}
run: |
git_sha=$(git rev-parse HEAD)
@@ -436,6 +422,27 @@ jobs:
else
echo "Releasing ${git_sha}"
fi
upload-release:
name: Upload to PyPI
runs-on: ubuntu-latest
needs:
- macos-universal
- macos-x86_64
- windows
- linux
- linux-cross
- musllinux
- musllinux-cross
- validate-tag
# If you don't set an input tag, it's a dry run (no uploads).
if: ${{ inputs.tag }}
environment:
name: release
permissions:
# For pypi trusted publishing
id-token: write
steps:
- uses: actions/download-artifact@v3
with:
name: wheels
@@ -446,10 +453,18 @@ jobs:
skip-existing: true
packages-dir: wheels
verbose: true
- uses: actions/download-artifact@v3
with:
name: binaries
path: binaries
tag-release:
name: Tag release
runs-on: ubuntu-latest
needs: upload-release
# If you don't set an input tag, it's a dry run (no uploads).
if: ${{ inputs.tag }}
permissions:
# For git tag
contents: write
steps:
- uses: actions/checkout@v3
- name: git tag
run: |
git config user.email "hey@astral.sh"
@@ -458,10 +473,25 @@ jobs:
# If there is duplicate tag, this will fail. The publish to pypi action will have been a noop (due to skip
# existing), so we make a non-destructive exit here
git push --tags
publish-release:
name: Publish to GitHub
runs-on: ubuntu-latest
needs: tag-release
# If you don't set an input tag, it's a dry run (no uploads).
if: ${{ inputs.tag }}
permissions:
# For GitHub release publishing
contents: write
steps:
- uses: actions/download-artifact@v3
with:
name: binaries
path: binaries
- name: "Publish to GitHub"
uses: softprops/action-gh-release@v1
with:
draft: true
draft: false
files: binaries/*
tag_name: v${{ inputs.tag }}


@@ -12,7 +12,7 @@ Welcome! We're happy to have you here. Thank you in advance for your contributio
- [Example: Adding a new configuration option](#example-adding-a-new-configuration-option)
- [MkDocs](#mkdocs)
- [Release Process](#release-process)
- [Benchmarks](#benchmarks)
- [Benchmarks](#benchmarking-and-profiling)
## The Basics
@@ -307,7 +307,15 @@ downloading the [`known-github-tomls.json`](https://github.com/akx/ruff-usage-ag
as `github_search.jsonl` and following the instructions in [scripts/Dockerfile.ecosystem](https://github.com/astral-sh/ruff/blob/main/scripts/Dockerfile.ecosystem).
Note that this check will take a while to run.
## Benchmarks
## Benchmarking and Profiling
We have several ways of benchmarking and profiling Ruff:
- Our main performance benchmark, comparing Ruff with other tools on the CPython codebase
- Microbenchmarks, which run the linter or the formatter on individual files; these run on pull requests
- Profiling the linter on either the microbenchmarks or entire projects
### CPython Benchmark
First, clone [CPython](https://github.com/python/cpython). It's a large and diverse Python codebase,
which makes it a good target for benchmarking.
@@ -386,9 +394,9 @@ Summary
159.43 ± 2.48 times faster than 'pycodestyle crates/ruff/resources/test/cpython'
```
You can run `poetry install` from `./scripts` to create a working environment for the above. All
reported benchmarks were computed using the versions specified by `./scripts/pyproject.toml`
on Python 3.11.
You can run `poetry install` from `./scripts/benchmarks` to create a working environment for the
above. All reported benchmarks were computed using the versions specified by
`./scripts/benchmarks/pyproject.toml` on Python 3.11.
To benchmark Pylint, remove the following files from the CPython repository:
@@ -429,3 +437,116 @@ Benchmark 1: find . -type f -name "*.py" | xargs -P 0 pyupgrade --py311-plus
Time (mean ± σ): 30.119 s ± 0.195 s [User: 28.638 s, System: 0.390 s]
Range (min … max): 29.813 s … 30.356 s 10 runs
```
## Microbenchmarks
The `ruff_benchmark` crate benchmarks the linter and the formatter on individual files.
You can run the benchmarks with
```shell
cargo benchmark
```
### Benchmark-driven Development
Ruff uses [Criterion.rs](https://bheisler.github.io/criterion.rs/book/) for benchmarks. You can use
`--save-baseline=<name>` to store an initial baseline benchmark (e.g. on `main`) and then use
`--baseline=<name>` to compare against that benchmark. Criterion will print a message telling you
if the benchmark improved/regressed compared to that baseline.
```shell
# Run once on your "baseline" code
cargo benchmark --save-baseline=main
# Then iterate with
cargo benchmark --baseline=main
```
### PR Summary
You can use `--save-baseline` and `critcmp` to get a pretty comparison between two recordings.
This is useful to illustrate the improvements of a PR.
```shell
# On main
cargo benchmark --save-baseline=main
# After applying your changes
cargo benchmark --save-baseline=pr
critcmp main pr
```
You must install [`critcmp`](https://github.com/BurntSushi/critcmp) for the comparison.
```bash
cargo install critcmp
```
### Tips
- Use `cargo benchmark <filter>` to only run specific benchmarks. For example: `cargo benchmark linter/pydantic`
to only run the pydantic tests.
- Use `cargo benchmark --quiet` for cleaner output (omitting the statistical details)
- Use `cargo benchmark --quick` to get faster results (more prone to noise)
## Profiling Projects
You can use either the microbenchmarks from above or a project directory for profiling. There
are a lot of profiling tools out there;
[The Rust Performance Book](https://nnethercote.github.io/perf-book/profiling.html) lists some
examples.
### Linux
Install `perf` and build `ruff_benchmark` with the `release-debug` profile, then run it with `perf`:
```shell
cargo bench -p ruff_benchmark --no-run --profile=release-debug && perf record -g -F 9999 cargo bench -p ruff_benchmark --profile=release-debug -- --profile-time=1
```
You can also use the `ruff_dev` launcher to run `ruff check` multiple times on a repository to
gather enough samples for a good flamegraph (adjust the sample rate, 999, and the number of
checks, 30, to your liking):
```shell
cargo build --bin ruff_dev --profile=release-debug
perf record -g -F 999 target/release-debug/ruff_dev repeat --repeat 30 --exit-zero --no-cache path/to/cpython > /dev/null
```
Then convert the recorded profile
```shell
perf script -F +pid > /tmp/test.perf
```
You can now view the converted file with the [Firefox Profiler](https://profiler.firefox.com/); a
more in-depth guide is available [here](https://profiler.firefox.com/docs/#/./guide-perf-profiling).
An alternative is to convert the perf data to `flamegraph.svg` using
[flamegraph](https://github.com/flamegraph-rs/flamegraph) (`cargo install flamegraph`):
```shell
flamegraph --perfdata perf.data
```
### Mac
Install [`cargo-instruments`](https://crates.io/crates/cargo-instruments):
```shell
cargo install cargo-instruments
```
Then run the profiler with
```shell
cargo instruments -t time --bench linter --profile release-debug -p ruff_benchmark -- --profile-time=1
```
- `-t`: Specifies what to profile. Useful options are `time` to profile the wall time and `alloc`
for profiling the allocations.
- You may want to pass an additional filter to run a single test file.
Otherwise, follow the instructions from the Linux section.

Cargo.lock

@@ -733,7 +733,7 @@ dependencies = [
[[package]]
name = "flake8-to-ruff"
version = "0.0.273"
version = "0.0.274"
dependencies = [
"anyhow",
"clap",
@@ -1793,7 +1793,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.273"
version = "0.0.274"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1889,7 +1889,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.273"
version = "0.0.274"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2105,7 +2105,7 @@ dependencies = [
[[package]]
name = "ruff_text_size"
version = "0.0.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=ed3b4eb72b6e497bbdb4d19dec6621074d724130#ed3b4eb72b6e497bbdb4d19dec6621074d724130"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=08ebbe40d7776cac6e3ba66277d435056f2b8dca#08ebbe40d7776cac6e3ba66277d435056f2b8dca"
dependencies = [
"schemars",
"serde",
@@ -2183,7 +2183,7 @@ dependencies = [
[[package]]
name = "rustpython-ast"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=ed3b4eb72b6e497bbdb4d19dec6621074d724130#ed3b4eb72b6e497bbdb4d19dec6621074d724130"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=08ebbe40d7776cac6e3ba66277d435056f2b8dca#08ebbe40d7776cac6e3ba66277d435056f2b8dca"
dependencies = [
"is-macro",
"num-bigint",
@@ -2194,7 +2194,7 @@ dependencies = [
[[package]]
name = "rustpython-format"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=ed3b4eb72b6e497bbdb4d19dec6621074d724130#ed3b4eb72b6e497bbdb4d19dec6621074d724130"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=08ebbe40d7776cac6e3ba66277d435056f2b8dca#08ebbe40d7776cac6e3ba66277d435056f2b8dca"
dependencies = [
"bitflags 2.3.1",
"itertools",
@@ -2206,7 +2206,7 @@ dependencies = [
[[package]]
name = "rustpython-literal"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=ed3b4eb72b6e497bbdb4d19dec6621074d724130#ed3b4eb72b6e497bbdb4d19dec6621074d724130"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=08ebbe40d7776cac6e3ba66277d435056f2b8dca#08ebbe40d7776cac6e3ba66277d435056f2b8dca"
dependencies = [
"hexf-parse",
"is-macro",
@@ -2218,7 +2218,7 @@ dependencies = [
[[package]]
name = "rustpython-parser"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=ed3b4eb72b6e497bbdb4d19dec6621074d724130#ed3b4eb72b6e497bbdb4d19dec6621074d724130"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=08ebbe40d7776cac6e3ba66277d435056f2b8dca#08ebbe40d7776cac6e3ba66277d435056f2b8dca"
dependencies = [
"anyhow",
"is-macro",
@@ -2241,7 +2241,7 @@ dependencies = [
[[package]]
name = "rustpython-parser-core"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=ed3b4eb72b6e497bbdb4d19dec6621074d724130#ed3b4eb72b6e497bbdb4d19dec6621074d724130"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=08ebbe40d7776cac6e3ba66277d435056f2b8dca#08ebbe40d7776cac6e3ba66277d435056f2b8dca"
dependencies = [
"is-macro",
"memchr",


@@ -50,15 +50,15 @@ toml = { version = "0.7.2" }
# v0.0.1
libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "80e4c1399f95e5beb532fdd1e209ad2dbb470438" }
# v0.0.3
ruff_text_size = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "ed3b4eb72b6e497bbdb4d19dec6621074d724130" }
ruff_text_size = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "08ebbe40d7776cac6e3ba66277d435056f2b8dca" }
# v0.0.3
rustpython-ast = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "ed3b4eb72b6e497bbdb4d19dec6621074d724130" , default-features = false, features = ["all-nodes-with-ranges", "num-bigint"]}
rustpython-ast = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "08ebbe40d7776cac6e3ba66277d435056f2b8dca" , default-features = false, features = ["all-nodes-with-ranges", "num-bigint"]}
# v0.0.3
rustpython-format = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "ed3b4eb72b6e497bbdb4d19dec6621074d724130", default-features = false, features = ["num-bigint"] }
rustpython-format = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "08ebbe40d7776cac6e3ba66277d435056f2b8dca", default-features = false, features = ["num-bigint"] }
# v0.0.3
rustpython-literal = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "ed3b4eb72b6e497bbdb4d19dec6621074d724130", default-features = false }
rustpython-literal = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "08ebbe40d7776cac6e3ba66277d435056f2b8dca", default-features = false }
# v0.0.3
rustpython-parser = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "ed3b4eb72b6e497bbdb4d19dec6621074d724130" , default-features = false, features = ["full-lexer", "all-nodes-with-ranges", "num-bigint"] }
rustpython-parser = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "08ebbe40d7776cac6e3ba66277d435056f2b8dca" , default-features = false, features = ["full-lexer", "all-nodes-with-ranges", "num-bigint"] }
[profile.release]
lto = "fat"

View File

@@ -14,9 +14,9 @@ An extremely fast Python linter, written in Rust.
<p align="center">
<picture align="center">
<source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/1309177/212613422-7faaf278-706b-4294-ad92-236ffcab3430.svg">
<source media="(prefers-color-scheme: light)" srcset="https://user-images.githubusercontent.com/1309177/212613257-5f4bca12-6d6b-4c79-9bac-51a4c6d08928.svg">
<img alt="Shows a bar chart with benchmark results." src="https://user-images.githubusercontent.com/1309177/212613257-5f4bca12-6d6b-4c79-9bac-51a4c6d08928.svg">
<source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/1309177/232603514-c95e9b0f-6b31-43de-9a80-9e844173fd6a.svg">
<source media="(prefers-color-scheme: light)" srcset="https://user-images.githubusercontent.com/1309177/232603516-4fb4892d-585c-4b20-b810-3db9161831e4.svg">
<img alt="Shows a bar chart with benchmark results." src="https://user-images.githubusercontent.com/1309177/232603516-4fb4892d-585c-4b20-b810-3db9161831e4.svg">
</picture>
</p>
@@ -139,7 +139,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.0.273
rev: v0.0.274
hooks:
- id: ruff
```

View File

@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.273"
version = "0.0.274"
description = """
Convert Flake8 configuration files to Ruff configuration files.
"""

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.0.273"
version = "0.0.274"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -34,12 +34,3 @@ f"{ascii(bla)}" # OK
" intermediary content "
f" that flows {repr(obj)} of type {type(obj)}.{additional_message}" # RUF010
)
f"{str(bla)}" # RUF010
f"{str(bla):20}" # RUF010
f"{bla!s}" # RUF010
f"{bla!s:20}" # OK
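The RUF010 fixture above exercises f-string conversion flags; a minimal runnable sketch (the `Widget` class and its `__repr__` are hypothetical stand-ins for `bla`) showing that `!s`, `!r`, and `!a` behave like the explicit builtin calls the rule replaces:

```python
class Widget:
    def __repr__(self) -> str:
        return "Widget(id=1)"

bla = Widget()

# The conversion flags are equivalent to calling the builtins explicitly,
# which is why RUF010 rewrites f"{str(bla)}" as f"{bla!s}", and so on.
assert f"{bla!s}" == f"{str(bla)}"
assert f"{bla!r}" == f"{repr(bla)}"
assert f"{bla!a}" == f"{ascii(bla)}"
```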

View File

@@ -1,5 +1,5 @@
import typing
from typing import ClassVar, Sequence
from typing import ClassVar, Sequence, Final
KNOWINGLY_MUTABLE_DEFAULT = []
@@ -10,6 +10,7 @@ class A:
without_annotation = []
correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
class_variable: typing.ClassVar[list[int]] = []
final_variable: typing.Final[list[int]] = []
class B:
@@ -18,6 +19,7 @@ class B:
without_annotation = []
correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
class_variable: ClassVar[list[int]] = []
final_variable: Final[list[int]] = []
from dataclasses import dataclass, field
@@ -31,3 +33,17 @@ class C:
correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
perfectly_fine: list[int] = field(default_factory=list)
class_variable: ClassVar[list[int]] = []
final_variable: Final[list[int]] = []
from pydantic import BaseModel
class D(BaseModel):
mutable_default: list[int] = []
immutable_annotation: Sequence[int] = []
without_annotation = []
correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
perfectly_fine: list[int] = field(default_factory=list)
class_variable: ClassVar[list[int]] = []
final_variable: Final[list[int]] = []
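The fixture above covers the RUF012 change (`typing.Final` now exempt). A minimal sketch of the hazard the rule guards against, and of the annotations that are exempt (the `Config` class is hypothetical):

```python
from typing import ClassVar, Final

class Config:
    # Flagged by RUF012: a mutable default shared across all instances.
    flagged: list[int] = []
    # Exempt: explicitly a class-level variable.
    class_variable: ClassVar[list[int]] = []
    # Newly exempt: Final marks the binding as deliberate and class-level.
    final_variable: Final[list[int]] = []

a, b = Config(), Config()
a.flagged.append(1)
# Mutation through one instance is visible through the other -- the
# shared-state bug the rule exists to catch.
assert b.flagged == [1]
```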

View File

@@ -18,19 +18,19 @@ def f(arg: object = None):
pass
def f(arg: int = None): # RUF011
def f(arg: int = None): # RUF013
pass
def f(arg: str = None): # RUF011
def f(arg: str = None): # RUF013
pass
def f(arg: typing.List[str] = None): # RUF011
def f(arg: typing.List[str] = None): # RUF013
pass
def f(arg: Tuple[str] = None): # RUF011
def f(arg: Tuple[str] = None): # RUF013
pass
@@ -64,15 +64,15 @@ def f(arg: Union[int, str, Any] = None):
pass
def f(arg: Union = None): # RUF011
def f(arg: Union = None): # RUF013
pass
def f(arg: Union[int, str] = None): # RUF011
def f(arg: Union[int, str] = None): # RUF013
pass
def f(arg: typing.Union[int, str] = None): # RUF011
def f(arg: typing.Union[int, str] = None): # RUF013
pass
@@ -91,11 +91,11 @@ def f(arg: int | float | str | None = None):
pass
def f(arg: int | float = None): # RUF011
def f(arg: int | float = None): # RUF013
pass
def f(arg: int | float | str | bytes = None): # RUF011
def f(arg: int | float | str | bytes = None): # RUF013
pass
@@ -110,11 +110,11 @@ def f(arg: Literal[1, 2, None, 3] = None):
pass
def f(arg: Literal[1, "foo"] = None): # RUF011
def f(arg: Literal[1, "foo"] = None): # RUF013
pass
def f(arg: typing.Literal[1, "foo", True] = None): # RUF011
def f(arg: typing.Literal[1, "foo", True] = None): # RUF013
pass
@@ -133,11 +133,11 @@ def f(arg: Annotated[Any, ...] = None):
pass
def f(arg: Annotated[int, ...] = None): # RUF011
def f(arg: Annotated[int, ...] = None): # RUF013
pass
def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF011
def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF013
pass
@@ -153,9 +153,9 @@ def f(
def f(
arg1: int = None, # RUF011
arg2: Union[int, float] = None, # RUF011
arg3: Literal[1, 2, 3] = None, # RUF011
arg1: int = None, # RUF013
arg2: Union[int, float] = None, # RUF013
arg3: Literal[1, 2, 3] = None, # RUF013
):
pass
@@ -183,5 +183,41 @@ def f(arg: Union[Annotated[int, ...], Annotated[Optional[float], ...]] = None):
pass
def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF011
def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF013
pass
# Quoted
def f(arg: "int" = None): # RUF013
pass
def f(arg: "str" = None): # RUF013
pass
def f(arg: "st" "r" = None): # RUF013
pass
def f(arg: "Optional[int]" = None):
pass
def f(arg: Union["int", "str"] = None): # RUF013
pass
def f(arg: Union["int", "None"] = None):
pass
def f(arg: Union["No" "ne", "int"] = None):
pass
# Avoid flagging when there's a parse error in the forward reference
def f(arg: Union["<>", "int"] = None):
pass
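The RUF013 fixture above flags implicit `Optional` parameters; a minimal sketch of the before/after shape (function names are hypothetical):

```python
from typing import Optional

# RUF013 flags parameters whose default is None but whose annotation
# does not permit None:
def before(arg: int = None):  # would be flagged
    return arg

# The fix is to make the Optional-ness explicit:
def after(arg: Optional[int] = None):
    return arg

assert after() is None
assert after(3) == 3
```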

View File

@@ -3852,6 +3852,9 @@ where
);
}
// Store the existing binding, if any.
let existing_id = self.semantic.lookup(name);
// Add the bound exception name to the scope.
let binding_id = self.add_binding(
name,
@@ -3862,14 +3865,6 @@ where
walk_except_handler(self, except_handler);
// Remove it from the scope immediately after.
self.add_binding(
name,
range,
BindingKind::UnboundException,
BindingFlags::empty(),
);
// If the exception name wasn't used in the scope, emit a diagnostic.
if !self.semantic.is_used(binding_id) {
if self.enabled(Rule::UnusedVariable) {
@@ -3889,6 +3884,13 @@ where
self.diagnostics.push(diagnostic);
}
}
self.add_binding(
name,
range,
BindingKind::UnboundException(existing_id),
BindingFlags::empty(),
);
}
None => walk_except_handler(self, except_handler),
}
@@ -4236,7 +4238,7 @@ impl<'a> Checker<'a> {
let shadowed = &self.semantic.bindings[shadowed_id];
if !matches!(
shadowed.kind,
BindingKind::Builtin | BindingKind::Deletion | BindingKind::UnboundException,
BindingKind::Builtin | BindingKind::Deletion | BindingKind::UnboundException(_),
) {
let references = shadowed.references.clone();
let is_global = shadowed.is_global();
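The handler-binding change above (storing the shadowed binding in `BindingKind::UnboundException(existing_id)`) mirrors CPython's own semantics: the name bound by `except ... as x` is implicitly deleted when the handler exits. A minimal sketch of that runtime behavior:

```python
x = "outer"

def f():
    try:
        raise ValueError("boom")
    except ValueError as x:  # binds `x` inside the handler only
        handled = str(x)
    # CPython deletes the handler's `x` on exit; because `x` was assigned
    # in this function, a plain load here raises UnboundLocalError rather
    # than falling back to the module-level `x`.
    try:
        x
    except UnboundLocalError:
        return "unbound"
    return "bound"

assert f() == "unbound"
assert x == "outer"  # the module-level binding is untouched
```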

View File

@@ -8,7 +8,7 @@ use ruff_python_ast::source_code::{Indexer, Locator, Stylist};
use ruff_python_whitespace::UniversalNewlines;
use crate::registry::Rule;
use crate::rules::copyright::rules::missing_copyright_notice;
use crate::rules::flake8_copyright::rules::missing_copyright_notice;
use crate::rules::flake8_executable::helpers::{extract_shebang, ShebangDirective};
use crate::rules::flake8_executable::rules::{
shebang_missing, shebang_newline, shebang_not_executable, shebang_python, shebang_whitespace,

View File

@@ -157,7 +157,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
// pylint
(Pylint, "C0414") => (RuleGroup::Unspecified, rules::pylint::rules::UselessImportAlias),
(Pylint, "C1901") => (RuleGroup::Unspecified, rules::pylint::rules::CompareToEmptyString),
(Pylint, "C1901") => (RuleGroup::Nursery, rules::pylint::rules::CompareToEmptyString),
(Pylint, "C3002") => (RuleGroup::Unspecified, rules::pylint::rules::UnnecessaryDirectLambdaCall),
(Pylint, "C0208") => (RuleGroup::Unspecified, rules::pylint::rules::IterationOverSet),
(Pylint, "E0100") => (RuleGroup::Unspecified, rules::pylint::rules::YieldInInit),
@@ -375,7 +375,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Simplify, "910") => (RuleGroup::Unspecified, rules::flake8_simplify::rules::DictGetWithNoneDefault),
// copyright
(Copyright, "001") => (RuleGroup::Nursery, rules::copyright::rules::MissingCopyrightNotice),
(Copyright, "001") => (RuleGroup::Nursery, rules::flake8_copyright::rules::MissingCopyrightNotice),
// pyupgrade
(Pyupgrade, "001") => (RuleGroup::Unspecified, rules::pyupgrade::rules::UselessMetaclassType),

View File

@@ -78,7 +78,7 @@ fn detect_package_root_with_cache<'a>(
current
}
/// Return a mapping from Python file to its package root.
/// Return a mapping from Python package to its package root.
pub fn detect_package_roots<'a>(
files: &[&'a Path],
resolver: &'a Resolver,

View File

@@ -251,7 +251,7 @@ impl Renamer {
| BindingKind::ClassDefinition
| BindingKind::FunctionDefinition
| BindingKind::Deletion
| BindingKind::UnboundException => {
| BindingKind::UnboundException(_) => {
Some(Edit::range_replacement(target.to_string(), binding.range))
}
}

View File

@@ -1,4 +0,0 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
---

View File

@@ -1,4 +0,0 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
---

View File

@@ -1,4 +0,0 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
---

View File

@@ -1,4 +0,0 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
---

View File

@@ -1,4 +0,0 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
---

View File

@@ -1,4 +0,0 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
---

View File

@@ -75,7 +75,7 @@ import os
"#
.trim(),
&settings::Settings {
copyright: super::settings::Settings {
flake8_copyright: super::settings::Settings {
author: Some("Ruff".to_string()),
..super::settings::Settings::default()
},
@@ -95,7 +95,7 @@ import os
"#
.trim(),
&settings::Settings {
copyright: super::settings::Settings {
flake8_copyright: super::settings::Settings {
author: Some("Ruff".to_string()),
..super::settings::Settings::default()
},
@@ -113,7 +113,7 @@ import os
"#
.trim(),
&settings::Settings {
copyright: super::settings::Settings {
flake8_copyright: super::settings::Settings {
min_file_size: 256,
..super::settings::Settings::default()
},

View File

@@ -28,7 +28,7 @@ pub(crate) fn missing_copyright_notice(
settings: &Settings,
) -> Option<Diagnostic> {
// Ignore files that are too small to contain a copyright notice.
if locator.len() < settings.copyright.min_file_size {
if locator.len() < settings.flake8_copyright.min_file_size {
return None;
}
@@ -40,8 +40,8 @@ pub(crate) fn missing_copyright_notice(
};
// Locate the copyright notice.
if let Some(match_) = settings.copyright.notice_rgx.find(contents) {
match settings.copyright.author {
if let Some(match_) = settings.flake8_copyright.notice_rgx.find(contents) {
match settings.flake8_copyright.author {
Some(ref author) => {
// Ensure that it's immediately followed by the author.
if contents[match_.end()..].trim_start().starts_with(author) {

View File

@@ -1,4 +1,4 @@
//! Settings for the `copyright` plugin.
//! Settings for the `flake8-copyright` plugin.
use once_cell::sync::Lazy;
use regex::Regex;
@@ -12,7 +12,7 @@ use ruff_macros::{CacheKey, CombineOptions, ConfigurationOptions};
#[serde(
deny_unknown_fields,
rename_all = "kebab-case",
rename = "CopyrightOptions"
rename = "Flake8CopyrightOptions"
)]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub struct Options {

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---
<filename>:1:1: CPY001 Missing copyright notice at top of file
|

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/src/rules/copyright/mod.rs
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---
<filename>:1:1: CPY001 Missing copyright notice at top of file
|

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_copyright/mod.rs
---

View File

@@ -7,6 +7,28 @@ use ruff_python_ast::call_path::{format_call_path, from_unqualified_name, CallPa
use crate::checkers::ast::Checker;
use crate::rules::flake8_debugger::types::DebuggerUsingType;
/// ## What it does
/// Checks for the presence of debugger calls and imports.
///
/// ## Why is this bad?
/// Debugger calls and imports should be used for debugging purposes only. The
/// presence of a debugger call or import in production code is likely a
/// mistake and may cause unintended behavior, such as exposing sensitive
/// information or causing the program to hang.
///
/// Instead, consider using a logging library to log information about the
/// program's state, and writing tests to verify that the program behaves
/// as expected.
///
/// ## Example
/// ```python
/// def foo():
/// breakpoint()
/// ```
///
/// ## References
/// - [Python documentation: `pdb` — The Python Debugger](https://docs.python.org/3/library/pdb.html)
/// - [Python documentation: `logging` — Logging facility for Python](https://docs.python.org/3/library/logging.html)
#[violation]
pub struct Debugger {
using_type: DebuggerUsingType,

View File

@@ -1,6 +1,5 @@
#![allow(clippy::useless_format)]
pub mod airflow;
pub mod copyright;
pub mod eradicate;
pub mod flake8_2020;
pub mod flake8_annotations;
@@ -12,6 +11,7 @@ pub mod flake8_bugbear;
pub mod flake8_builtins;
pub mod flake8_commas;
pub mod flake8_comprehensions;
pub mod flake8_copyright;
pub mod flake8_datetimez;
pub mod flake8_debugger;
pub mod flake8_django;

View File

@@ -12,7 +12,7 @@ use crate::registry::AsRule;
/// ## Why is this bad?
/// NumPy's `np.int` has long been an alias of the builtin `int`. The same
/// goes for `np.float`, `np.bool`, and others. These aliases exist
/// primarily primarily for historic reasons, and have been a cause of
/// primarily for historic reasons, and have been a cause of
/// frequent confusion for newcomers.
///
/// These aliases were deprecated in 1.20, and removed in 1.24.

View File

@@ -40,6 +40,5 @@ pub(super) fn convert_inplace_argument_to_assignment(
false,
)
.ok()?;
#[allow(deprecated)]
Some(Fix::unspecified_edits(insert_assignment, [remove_argument]))
Some(Fix::suggested_edits(insert_assignment, [remove_argument]))
}

View File

@@ -10,7 +10,21 @@ use crate::settings::Settings;
///
/// ## Why is this bad?
/// For flowing long blocks of text (docstrings or comments), overlong lines
/// can hurt readability.
/// can hurt readability. [PEP 8], for example, recommends that such lines be
/// limited to 72 characters.
///
/// In the context of this rule, a "doc line" is defined as a line consisting
/// of either a standalone comment or a standalone string, like a docstring.
///
/// In the interest of pragmatism, this rule makes a few exceptions when
/// determining whether a line is overlong. Namely, it ignores lines that
/// consist of a single "word" (i.e., without any whitespace between its
/// characters), and lines that end with a URL (as long as the URL starts
/// before the line-length threshold).
///
/// If `pycodestyle.ignore_overlong_task_comments` is `true`, this rule will
/// also ignore comments that start with any of the specified `task-tags`
/// (e.g., `# TODO:`).
///
/// ## Example
/// ```python
@@ -26,6 +40,13 @@ use crate::settings::Settings;
/// Duis auctor purus ut ex fermentum, at maximus est hendrerit.
/// """
/// ```
///
/// ## Options
/// - `task-tags`
/// - `pycodestyle.ignore-overlong-task-comments`
///
/// [PEP 8]: https://peps.python.org/pep-0008/#maximum-line-length
#[violation]
pub struct DocLineTooLong(pub usize, pub usize);

View File

@@ -9,7 +9,18 @@ use crate::settings::Settings;
/// Checks for lines that exceed the specified maximum character length.
///
/// ## Why is this bad?
/// Overlong lines can hurt readability.
/// Overlong lines can hurt readability. [PEP 8], for example, recommends
/// limiting lines to 79 characters.
///
/// In the interest of pragmatism, this rule makes a few exceptions when
/// determining whether a line is overlong. Namely, it ignores lines that
/// consist of a single "word" (i.e., without any whitespace between its
/// characters), and lines that end with a URL (as long as the URL starts
/// before the line-length threshold).
///
/// If `pycodestyle.ignore_overlong_task_comments` is `true`, this rule will
/// also ignore comments that start with any of the specified `task-tags`
/// (e.g., `# TODO:`).
///
/// ## Example
/// ```python
@@ -26,6 +37,9 @@ use crate::settings::Settings;
///
/// ## Options
/// - `task-tags`
/// - `pycodestyle.ignore-overlong-task-comments`
///
/// [PEP 8]: https://peps.python.org/pep-0008/#maximum-line-length
#[violation]
pub struct LineTooLong(pub usize, pub usize);

View File

@@ -353,9 +353,59 @@ mod tests {
except Exception as x:
pass
# No error here, though it should arguably be an F821 error. `x` will
# be unbound after the `except` block (assuming an exception is raised
# and caught).
print(x)
"#,
"print_after_shadowing_except"
"print_in_body_after_shadowing_except"
)]
#[test_case(
r#"
def f():
x = 1
try:
1 / 0
except ValueError as x:
pass
except ImportError as x:
pass
# No error here, though it should arguably be an F821 error. `x` will
# be unbound after the `except` block (assuming an exception is raised
# and caught).
print(x)
"#,
"print_in_body_after_double_shadowing_except"
)]
#[test_case(
r#"
def f():
try:
x = 3
except ImportError as x:
print(x)
else:
print(x)
"#,
"print_in_try_else_after_shadowing_except"
)]
#[test_case(
r#"
def f():
list = [1, 2, 3]
for e in list:
if e % 2 == 0:
try:
pass
except Exception as e:
print(e)
else:
print(e)
"#,
"print_in_if_else_after_shadowing_except"
)]
#[test_case(
r#"
@@ -366,6 +416,79 @@ mod tests {
"#,
"double_del"
)]
#[test_case(
r#"
x = 1
def f():
try:
pass
except ValueError as x:
pass
# This should resolve to the `x` in `x = 1`.
print(x)
"#,
"load_after_unbind_from_module_scope"
)]
#[test_case(
r#"
x = 1
def f():
try:
pass
except ValueError as x:
pass
try:
pass
except ValueError as x:
pass
# This should resolve to the `x` in `x = 1`.
print(x)
"#,
"load_after_multiple_unbinds_from_module_scope"
)]
#[test_case(
r#"
x = 1
def f():
try:
pass
except ValueError as x:
pass
def g():
try:
pass
except ValueError as x:
pass
# This should resolve to the `x` in `x = 1`.
print(x)
"#,
"load_after_unbind_from_nested_module_scope"
)]
#[test_case(
r#"
class C:
x = 1
def f():
try:
pass
except ValueError as x:
pass
# This should raise an F821 error, rather than resolving to the
# `x` in `x = 1`.
print(x)
"#,
"load_after_unbind_from_class_scope"
)]
fn contents(contents: &str, snapshot: &str) {
let diagnostics = test_snippet(contents, &Settings::for_rules(&Linter::Pyflakes));
assert_messages!(snapshot, diagnostics);

View File

@@ -89,8 +89,7 @@ fn fix_f_string_missing_placeholders(
checker: &mut Checker,
) -> Fix {
let content = &checker.locator.contents()[TextRange::new(prefix_range.end(), tok_range.end())];
#[allow(deprecated)]
Fix::unspecified(Edit::replacement(
Fix::automatic(Edit::replacement(
unescape_f_string(content),
prefix_range.start(),
tok_range.end(),
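The F541 fix above strips the `f` prefix because an f-string with no placeholders is just a plain string; a one-line sketch:

```python
# F541: an f-string with no placeholders...
redundant = f"def"
# ...is equivalent to the plain string, so the fix drops the prefix:
plain = "def"
assert redundant == plain
```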

View File

@@ -11,7 +11,7 @@ F541.py:6:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
3 3 | b = f"ghi{'jkl'}"
4 4 |
5 5 | # Errors
@@ -32,7 +32,7 @@ F541.py:7:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
4 4 |
5 5 | # Errors
6 6 | c = f"def"
@@ -53,7 +53,7 @@ F541.py:9:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
6 6 | c = f"def"
7 7 | d = f"def" + "ghi"
8 8 | e = (
@@ -74,7 +74,7 @@ F541.py:13:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
10 10 | "ghi"
11 11 | )
12 12 | f = (
@@ -95,7 +95,7 @@ F541.py:14:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
11 11 | )
12 12 | f = (
13 13 | f"a"
@@ -116,7 +116,7 @@ F541.py:16:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
13 13 | f"a"
14 14 | F"b"
15 15 | "c"
@@ -137,7 +137,7 @@ F541.py:17:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
14 14 | F"b"
15 15 | "c"
16 16 | rf"d"
@@ -158,7 +158,7 @@ F541.py:19:5: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
16 16 | rf"d"
17 17 | fr"e"
18 18 | )
@@ -178,7 +178,7 @@ F541.py:25:13: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
22 22 | g = f"ghi{123:{45}}"
23 23 |
24 24 | # Error
@@ -198,7 +198,7 @@ F541.py:34:7: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
31 31 | f"{f'{v:0.2f}'}"
32 32 |
33 33 | # Errors
@@ -219,7 +219,7 @@ F541.py:35:4: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
32 32 |
33 33 | # Errors
34 34 | f"{v:{f'0.2f'}}"
@@ -240,7 +240,7 @@ F541.py:36:1: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
33 33 | # Errors
34 34 | f"{v:{f'0.2f'}}"
35 35 | f"{f''}"
@@ -261,7 +261,7 @@ F541.py:37:1: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
34 34 | f"{v:{f'0.2f'}}"
35 35 | f"{f''}"
36 36 | f"{{test}}"
@@ -281,7 +281,7 @@ F541.py:38:1: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
35 35 | f"{f''}"
36 36 | f"{{test}}"
37 37 | f'{{ 40 }}'
@@ -302,7 +302,7 @@ F541.py:39:1: F541 [*] f-string without any placeholders
|
= help: Remove extraneous `f` prefix
Suggested fix
Fix
36 36 | f"{{test}}"
37 37 | f'{{ 40 }}'
38 38 | f"{{a {{x}}"

View File

@@ -0,0 +1,44 @@
---
source: crates/ruff/src/rules/pyflakes/mod.rs
---
<filename>:7:26: F841 [*] Local variable `x` is assigned to but never used
|
5 | try:
6 | pass
7 | except ValueError as x:
| ^ F841
8 | pass
|
= help: Remove assignment to unused variable `x`
Fix
4 4 | def f():
5 5 | try:
6 6 | pass
7 |- except ValueError as x:
7 |+ except ValueError:
8 8 | pass
9 9 |
10 10 | try:
<filename>:12:26: F841 [*] Local variable `x` is assigned to but never used
|
10 | try:
11 | pass
12 | except ValueError as x:
| ^ F841
13 | pass
|
= help: Remove assignment to unused variable `x`
Fix
9 9 |
10 10 | try:
11 11 | pass
12 |- except ValueError as x:
12 |+ except ValueError:
13 13 | pass
14 14 |
15 15 | # This should resolve to the `x` in `x = 1`.

View File

@@ -0,0 +1,32 @@
---
source: crates/ruff/src/rules/pyflakes/mod.rs
---
<filename>:8:30: F841 [*] Local variable `x` is assigned to but never used
|
6 | try:
7 | pass
8 | except ValueError as x:
| ^ F841
9 | pass
|
= help: Remove assignment to unused variable `x`
Fix
5 5 | def f():
6 6 | try:
7 7 | pass
8 |- except ValueError as x:
8 |+ except ValueError:
9 9 | pass
10 10 |
11 11 | # This should raise an F821 error, rather than resolving to the
<filename>:13:15: F821 Undefined name `x`
|
11 | # This should raise an F821 error, rather than resolving to the
12 | # `x` in `x = 1`.
13 | print(x)
| ^ F821
|

View File

@@ -0,0 +1,24 @@
---
source: crates/ruff/src/rules/pyflakes/mod.rs
---
<filename>:7:26: F841 [*] Local variable `x` is assigned to but never used
|
5 | try:
6 | pass
7 | except ValueError as x:
| ^ F841
8 | pass
|
= help: Remove assignment to unused variable `x`
Fix
4 4 | def f():
5 5 | try:
6 6 | pass
7 |- except ValueError as x:
7 |+ except ValueError:
8 8 | pass
9 9 |
10 10 | # This should resolve to the `x` in `x = 1`.

View File

@@ -0,0 +1,44 @@
---
source: crates/ruff/src/rules/pyflakes/mod.rs
---
<filename>:7:26: F841 [*] Local variable `x` is assigned to but never used
|
5 | try:
6 | pass
7 | except ValueError as x:
| ^ F841
8 | pass
|
= help: Remove assignment to unused variable `x`
Fix
4 4 | def f():
5 5 | try:
6 6 | pass
7 |- except ValueError as x:
7 |+ except ValueError:
8 8 | pass
9 9 |
10 10 | def g():
<filename>:13:30: F841 [*] Local variable `x` is assigned to but never used
|
11 | try:
12 | pass
13 | except ValueError as x:
| ^ F841
14 | pass
|
= help: Remove assignment to unused variable `x`
Fix
10 10 | def g():
11 11 | try:
12 12 | pass
13 |- except ValueError as x:
13 |+ except ValueError:
14 14 | pass
15 15 |
16 16 | # This should resolve to the `x` in `x = 1`.

View File

@@ -0,0 +1,45 @@
---
source: crates/ruff/src/rules/pyflakes/mod.rs
---
<filename>:7:26: F841 [*] Local variable `x` is assigned to but never used
|
5 | try:
6 | 1 / 0
7 | except ValueError as x:
| ^ F841
8 | pass
9 | except ImportError as x:
|
= help: Remove assignment to unused variable `x`
Fix
4 4 |
5 5 | try:
6 6 | 1 / 0
7 |- except ValueError as x:
7 |+ except ValueError:
8 8 | pass
9 9 | except ImportError as x:
10 10 | pass
<filename>:9:27: F841 [*] Local variable `x` is assigned to but never used
|
7 | except ValueError as x:
8 | pass
9 | except ImportError as x:
| ^ F841
10 | pass
|
= help: Remove assignment to unused variable `x`
Fix
6 6 | 1 / 0
7 7 | except ValueError as x:
8 8 | pass
9 |- except ImportError as x:
9 |+ except ImportError:
10 10 | pass
11 11 |
12 12 | # No error here, though it should arguably be an F821 error. `x` will

View File

@@ -19,14 +19,6 @@ source: crates/ruff/src/rules/pyflakes/mod.rs
7 |+ except Exception:
8 8 | pass
9 9 |
10 10 | print(x)
<filename>:10:11: F821 Undefined name `x`
|
8 | pass
9 |
10 | print(x)
| ^ F821
|
10 10 | # No error here, though it should arguably be an F821 error. `x` will

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/pyflakes/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/pyflakes/mod.rs
---

View File

@@ -7,49 +7,6 @@ use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
pub(crate) enum EmptyStringCmpOp {
Is,
IsNot,
Eq,
NotEq,
}
impl TryFrom<&CmpOp> for EmptyStringCmpOp {
type Error = anyhow::Error;
fn try_from(value: &CmpOp) -> Result<Self, Self::Error> {
match value {
CmpOp::Is => Ok(Self::Is),
CmpOp::IsNot => Ok(Self::IsNot),
CmpOp::Eq => Ok(Self::Eq),
CmpOp::NotEq => Ok(Self::NotEq),
_ => bail!("{value:?} cannot be converted to EmptyStringCmpOp"),
}
}
}
impl EmptyStringCmpOp {
pub(crate) fn into_unary(self) -> &'static str {
match self {
Self::Is | Self::Eq => "not ",
Self::IsNot | Self::NotEq => "",
}
}
}
impl std::fmt::Display for EmptyStringCmpOp {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let repr = match self {
Self::Is => "is",
Self::IsNot => "is not",
Self::Eq => "==",
Self::NotEq => "!=",
};
write!(f, "{repr}")
}
}
/// ## What it does
/// Checks for comparisons to empty strings.
///
@@ -83,13 +40,15 @@ pub struct CompareToEmptyString {
impl Violation for CompareToEmptyString {
#[derive_message_formats]
fn message(&self) -> String {
format!(
"`{}` can be simplified to `{}` as an empty string is falsey",
self.existing, self.replacement,
)
let CompareToEmptyString {
existing,
replacement,
} = self;
format!("`{existing}` can be simplified to `{replacement}` as an empty string is falsey",)
}
}
/// PLC1901
pub(crate) fn compare_to_empty_string(
checker: &mut Checker,
left: &Expr,
@@ -98,10 +57,12 @@ pub(crate) fn compare_to_empty_string(
) {
// Omit string comparison rules within subscripts. This is most commonly used within
// DataFrame and np.ndarray indexing.
for parent in checker.semantic().expr_ancestors() {
if matches!(parent, Expr::Subscript(_)) {
return;
}
if checker
.semantic()
.expr_ancestors()
.any(|parent| parent.is_subscript_expr())
{
return;
}
let mut first = true;
@@ -153,3 +114,46 @@ pub(crate) fn compare_to_empty_string(
}
}
}
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
enum EmptyStringCmpOp {
Is,
IsNot,
Eq,
NotEq,
}
impl TryFrom<&CmpOp> for EmptyStringCmpOp {
type Error = anyhow::Error;
fn try_from(value: &CmpOp) -> Result<Self, Self::Error> {
match value {
CmpOp::Is => Ok(Self::Is),
CmpOp::IsNot => Ok(Self::IsNot),
CmpOp::Eq => Ok(Self::Eq),
CmpOp::NotEq => Ok(Self::NotEq),
_ => bail!("{value:?} cannot be converted to EmptyStringCmpOp"),
}
}
}
impl EmptyStringCmpOp {
fn into_unary(self) -> &'static str {
match self {
Self::Is | Self::Eq => "not ",
Self::IsNot | Self::NotEq => "",
}
}
}
impl std::fmt::Display for EmptyStringCmpOp {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let repr = match self {
Self::Is => "is",
Self::IsNot => "is not",
Self::Eq => "==",
Self::NotEq => "!=",
};
write!(f, "{repr}")
}
}
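The PLC1901 logic above simplifies comparisons against `""`; a minimal sketch of the pattern and its fix (the `report` functions are hypothetical):

```python
def report(name: str) -> str:
    # PLC1901: comparing to an empty string...
    if name == "":
        return "anonymous"
    return name

def report_fixed(name: str) -> str:
    # ...can be simplified to a truthiness check, since "" is falsey.
    if not name:
        return "anonymous"
    return name

assert report("") == report_fixed("") == "anonymous"
assert report("ada") == report_fixed("ada") == "ada"
```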

View File

@@ -191,8 +191,7 @@ pub(crate) fn invalid_string_characters(locator: &Locator, range: TextRange) ->
let location = range.start() + TextSize::try_from(column).unwrap();
let range = TextRange::at(location, c.text_len());
#[allow(deprecated)]
diagnostics.push(Diagnostic::new(rule, range).with_fix(Fix::unspecified(
diagnostics.push(Diagnostic::new(rule, range).with_fix(Fix::automatic(
Edit::range_replacement(replacement.to_string(), range),
)));
}

View File

@@ -157,8 +157,7 @@ pub(crate) fn nested_min_max(
keywords: keywords.to_owned(),
range: TextRange::default(),
});
#[allow(deprecated)]
diagnostic.set_fix(Fix::unspecified(Edit::range_replacement(
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
checker.generator().expr(&flattened_expr),
expr.range(),
)));
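The `nested_min_max` fix above flattens nested `min`/`max` calls into one call with the same result; a one-line sketch:

```python
# Nested calls...
nested = min(1, min(2, 3))
# ...flatten to a single call, which is equivalent and easier to read:
flat = min(1, 2, 3)
assert nested == flat == 1
```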

View File

@@ -82,8 +82,7 @@ pub(crate) fn sys_exit_alias(checker: &mut Checker, func: &Expr) {
checker.semantic(),
)?;
let reference_edit = Edit::range_replacement(binding, func.range());
#[allow(deprecated)]
Ok(Fix::unspecified_edits(import_edit, [reference_edit]))
Ok(Fix::suggested_edits(import_edit, [reference_edit]))
});
}
checker.diagnostics.push(diagnostic);
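The `sys_exit_alias` fix above rewrites the site-provided `exit()`/`quit()` builtins to `sys.exit()`, which is always available (the site builtins may be absent when Python runs without the `site` module). A minimal sketch of the preferred form:

```python
import sys

# sys.exit() raises SystemExit, carrying the exit status:
try:
    sys.exit(2)
except SystemExit as err:
    code = err.code

assert code == 2
```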

View File

@@ -49,8 +49,7 @@ pub(crate) fn useless_import_alias(checker: &mut Checker, alias: &Alias) {
let mut diagnostic = Diagnostic::new(UselessImportAlias, alias.range());
if checker.patch(diagnostic.kind.rule()) {
#[allow(deprecated)]
diagnostic.set_fix(Fix::unspecified(Edit::range_replacement(
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
asname.to_string(),
alias.range(),
)));


@@ -12,7 +12,7 @@ invalid_characters.py:15:6: PLE2510 [*] Invalid unescaped character backspace, u
|
= help: Replace with escape sequence
Suggested fix
Fix
12 12 | # (Pylint, "C0414") => Rule::UselessImportAlias,
13 13 | # (Pylint, "C3002") => Rule::UnnecessaryDirectLambdaCall,
14 14 | #foo = 'hi'


@@ -12,7 +12,7 @@ invalid_characters.py:21:12: PLE2512 [*] Invalid unescaped character SUB, use "\
|
= help: Replace with escape sequence
Suggested fix
Fix
18 18 |
19 19 | cr_ok = '\\r'
20 20 |


@@ -12,7 +12,7 @@ invalid_characters.py:25:16: PLE2513 [*] Invalid unescaped character ESC, use "\
|
= help: Replace with escape sequence
Suggested fix
Fix
22 22 |
23 23 | sub_ok = '\x1a'
24 24 |


@@ -12,7 +12,7 @@ invalid_characters.py:34:13: PLE2515 [*] Invalid unescaped character zero-width-
|
= help: Replace with escape sequence
Suggested fix
Fix
31 31 |
32 32 | nul_ok = '\0'
33 33 |
@@ -32,7 +32,7 @@ invalid_characters.py:38:36: PLE2515 [*] Invalid unescaped character zero-width-
|
= help: Replace with escape sequence
Suggested fix
Fix
35 35 |
36 36 | zwsp_ok = '\u200b'
37 37 |
@@ -48,7 +48,7 @@ invalid_characters.py:39:60: PLE2515 [*] Invalid unescaped character zero-width-
|
= help: Replace with escape sequence
Suggested fix
Fix
36 36 | zwsp_ok = '\u200b'
37 37 |
38 38 | zwsp_after_multibyte_character = "ಫ​"
@@ -63,7 +63,7 @@ invalid_characters.py:39:61: PLE2515 [*] Invalid unescaped character zero-width-
|
= help: Replace with escape sequence
Suggested fix
Fix
36 36 | zwsp_ok = '\u200b'
37 37 |
38 38 | zwsp_after_multibyte_character = "ಫ​"


@@ -11,6 +11,22 @@ use crate::rules::ruff::rules::confusables::CONFUSABLES;
use crate::rules::ruff::rules::Context;
use crate::settings::Settings;
/// ## What it does
/// Checks for ambiguous unicode characters in strings.
///
/// ## Why is this bad?
/// The use of ambiguous unicode characters can confuse readers and cause
/// subtle bugs.
///
/// ## Example
/// ```python
/// print("Ηello, world!") # "Η" is the Greek eta (`U+0397`).
/// ```
///
/// Use instead:
/// ```python
/// print("Hello, world!") # "H" is the Latin capital H (`U+0048`).
/// ```
#[violation]
pub struct AmbiguousUnicodeCharacterString {
confusable: char,
@@ -44,6 +60,22 @@ impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterString {
}
}
/// ## What it does
/// Checks for ambiguous unicode characters in docstrings.
///
/// ## Why is this bad?
/// The use of ambiguous unicode characters can confuse readers and cause
/// subtle bugs.
///
/// ## Example
/// ```python
/// """A lovely docstring (with a `U+FF09` fullwidth parenthesis）."""
/// ```
///
/// Use instead:
/// ```python
/// """A lovely docstring (with no strange parentheses)."""
/// ```
#[violation]
pub struct AmbiguousUnicodeCharacterDocstring {
confusable: char,
@@ -77,6 +109,22 @@ impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterDocstring {
}
}
/// ## What it does
/// Checks for ambiguous unicode characters in comments.
///
/// ## Why is this bad?
/// The use of ambiguous unicode characters can confuse readers and cause
/// subtle bugs.
///
/// ## Example
/// ```python
/// foo() # nоqa # "о" is Cyrillic (`U+043E`)
/// ```
///
/// Use instead:
/// ```python
/// foo() # noqa # "o" is Latin (`U+006F`)
/// ```
#[violation]
pub struct AmbiguousUnicodeCharacterComment {
confusable: char,

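The confusable pairs named in the doc comments above (Greek eta vs. Latin H, Cyrillic vs. Latin "o") can be checked with the standard library. A quick sketch, using only `unicodedata`:

```python
import unicodedata

# The Greek capital eta and the Latin capital H render identically in
# many fonts but are distinct code points -- the kind of lookalike that
# the ambiguous-unicode rules flag in strings, docstrings, and comments.
assert unicodedata.name("\u0397") == "GREEK CAPITAL LETTER ETA"
assert unicodedata.name("H") == "LATIN CAPITAL LETTER H"
assert "\u0397ello" != "Hello"  # visually similar, but not equal
```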

@@ -13,6 +13,36 @@ pub struct CollectionLiteralConcatenation {
expr: String,
}
/// ## What it does
/// Checks for uses of the `+` operator to concatenate collections.
///
/// ## Why is this bad?
/// In Python, the `+` operator can be used to concatenate collections (e.g.,
/// `x + y` to concatenate the lists `x` and `y`).
///
/// However, collections can be concatenated more efficiently using the
/// unpacking operator (e.g., `[*x, *y]` to concatenate `x` and `y`).
///
/// Prefer the unpacking operator to concatenate collections, as it is more
/// readable and flexible. The `*` operator can unpack any iterable, whereas
/// `+` operates only on particular sequences which, in many cases, must be of
/// the same type.
///
/// ## Example
/// ```python
/// foo = [2, 3, 4]
/// bar = [1] + foo + [5, 6]
/// ```
///
/// Use instead:
/// ```python
/// foo = [2, 3, 4]
/// bar = [1, *foo, 5, 6]
/// ```
///
/// ## References
/// - [PEP 448 – Additional Unpacking Generalizations](https://peps.python.org/pep-0448/)
/// - [Python docs: Sequence Types — `list`, `tuple`, `range`](https://docs.python.org/3/library/stdtypes.html#sequence-types-list-tuple-range)
impl Violation for CollectionLiteralConcatenation {
const AUTOFIX: AutofixKind = AutofixKind::Sometimes;
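The behavior the doc comment describes is easy to verify. A small sketch of `+` concatenation versus unpacking, including the flexibility point about arbitrary iterables:

```python
# Concatenation via `+` (the pattern this rule flags) vs. unpacking.
foo = [2, 3, 4]
via_plus = [1] + foo + [5, 6]
via_unpack = [1, *foo, 5, 6]
assert via_plus == via_unpack == [1, 2, 3, 4, 5, 6]

# Unlike `+`, `*` unpacks any iterable -- here a range and a string --
# whereas `[1] + range(2)` would raise a TypeError.
mixed = [0, *range(2), *"ab"]
assert mixed == [0, 0, 1, "a", "b"]
```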


@@ -6,7 +6,6 @@ use rustpython_parser::ast::{self, Expr, Ranged};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::prelude::ConversionFlag;
use ruff_python_ast::source_code::{Locator, Stylist};
use crate::autofix::codemods::CodegenStylist;
@@ -22,62 +21,36 @@ use crate::registry::AsRule;
/// f-strings support dedicated conversion flags for these types, which are
/// more succinct and idiomatic.
///
/// In the case of `str()`, it's also redundant, since `str()` is the default
/// conversion.
/// Note that, in many cases, calling `str()` within an f-string is
/// unnecessary and can be removed entirely, as the value will be converted
/// to a string automatically, the notable exception being for classes that
/// implement a custom `__format__` method.
///
/// ## Example
/// ```python
/// a = "some string"
/// f"{str(a)}"
/// f"{repr(a)}"
/// ```
///
/// Use instead:
/// ```python
/// a = "some string"
/// f"{a}"
/// f"{a!r}"
/// ```
#[violation]
pub struct ExplicitFStringTypeConversion {
operation: Operation,
}
pub struct ExplicitFStringTypeConversion;
impl AlwaysAutofixableViolation for ExplicitFStringTypeConversion {
#[derive_message_formats]
fn message(&self) -> String {
let ExplicitFStringTypeConversion { operation } = self;
match operation {
Operation::ConvertCallToConversionFlag => {
format!("Use explicit conversion flag")
}
Operation::RemoveCall => format!("Remove unnecessary `str` conversion"),
Operation::RemoveConversionFlag => format!("Remove unnecessary conversion flag"),
}
format!("Use explicit conversion flag")
}
fn autofix_title(&self) -> String {
let ExplicitFStringTypeConversion { operation } = self;
match operation {
Operation::ConvertCallToConversionFlag => {
format!("Replace with conversion flag")
}
Operation::RemoveCall => format!("Remove `str` call"),
Operation::RemoveConversionFlag => format!("Remove conversion flag"),
}
"Replace with conversion flag".to_string()
}
}
#[derive(Debug, PartialEq, Eq)]
enum Operation {
/// Ex) Convert `f"{repr(bla)}"` to `f"{bla!r}"`
ConvertCallToConversionFlag,
/// Ex) Convert `f"{bla!s}"` to `f"{bla}"`
RemoveConversionFlag,
/// Ex) Convert `f"{str(bla)}"` to `f"{bla}"`
RemoveCall,
}
/// RUF010
pub(crate) fn explicit_f_string_type_conversion(
checker: &mut Checker,
@@ -96,156 +69,50 @@ pub(crate) fn explicit_f_string_type_conversion(
.enumerate()
{
let ast::ExprFormattedValue {
value,
conversion,
format_spec,
range: _,
value, conversion, ..
} = formatted_value;
match conversion {
ConversionFlag::Ascii | ConversionFlag::Repr => {
// Nothing to do.
continue;
}
ConversionFlag::Str => {
// Skip if there's a format spec.
if format_spec.is_some() {
continue;
}
// Remove the conversion flag entirely.
// Ex) `f"{bla!s}"`
let mut diagnostic = Diagnostic::new(
ExplicitFStringTypeConversion {
operation: Operation::RemoveConversionFlag,
},
value.range(),
);
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
remove_conversion_flag(expr, index, checker.locator, checker.stylist)
});
}
checker.diagnostics.push(diagnostic);
}
ConversionFlag::None => {
// Replace with the appropriate conversion flag.
let Expr::Call(ast::ExprCall {
func,
args,
keywords,
..
}) = value.as_ref() else {
continue;
};
// Can't be a conversion otherwise.
if args.len() != 1 || !keywords.is_empty() {
continue;
}
let Expr::Name(ast::ExprName { id, .. }) = func.as_ref() else {
continue;
};
if !matches!(id.as_str(), "str" | "repr" | "ascii") {
continue;
};
if !checker.semantic().is_builtin(id) {
continue;
}
if id == "str" && format_spec.is_none() {
// Ex) `f"{str(bla)}"`
let mut diagnostic = Diagnostic::new(
ExplicitFStringTypeConversion {
operation: Operation::RemoveCall,
},
value.range(),
);
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
remove_conversion_call(expr, index, checker.locator, checker.stylist)
});
}
checker.diagnostics.push(diagnostic);
} else {
// Ex) `f"{repr(bla)}"`
let mut diagnostic = Diagnostic::new(
ExplicitFStringTypeConversion {
operation: Operation::ConvertCallToConversionFlag,
},
value.range(),
);
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
convert_call_to_conversion_flag(
expr,
index,
checker.locator,
checker.stylist,
)
});
}
checker.diagnostics.push(diagnostic);
}
}
// Skip if there's already a conversion flag.
if !conversion.is_none() {
continue;
}
let Expr::Call(ast::ExprCall {
func,
args,
keywords,
..
}) = value.as_ref() else {
continue;
};
// Can't be a conversion otherwise.
if args.len() != 1 || !keywords.is_empty() {
continue;
}
let Expr::Name(ast::ExprName { id, .. }) = func.as_ref() else {
continue;
};
if !matches!(id.as_str(), "str" | "repr" | "ascii") {
continue;
};
if !checker.semantic().is_builtin(id) {
continue;
}
let mut diagnostic = Diagnostic::new(ExplicitFStringTypeConversion, value.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
convert_call_to_conversion_flag(expr, index, checker.locator, checker.stylist)
});
}
checker.diagnostics.push(diagnostic);
}
}
/// Generate a [`Fix`] to remove a conversion flag from a formatted expression.
fn remove_conversion_flag(
expr: &Expr,
index: usize,
locator: &Locator,
stylist: &Stylist,
) -> Result<Fix> {
// Parenthesize the expression, to support implicit concatenation.
let range = expr.range();
let content = locator.slice(range);
let parenthesized_content = format!("({content})");
let mut expression = match_expression(&parenthesized_content)?;
// Replace the formatted call expression at `index` with a conversion flag.
let formatted_string_expression = match_part(index, &mut expression)?;
formatted_string_expression.conversion = None;
// Remove the parentheses (first and last characters).
let mut content = expression.codegen_stylist(stylist);
content.remove(0);
content.pop();
Ok(Fix::automatic(Edit::range_replacement(content, range)))
}
/// Generate a [`Fix`] to remove a call from a formatted expression.
fn remove_conversion_call(
expr: &Expr,
index: usize,
locator: &Locator,
stylist: &Stylist,
) -> Result<Fix> {
// Parenthesize the expression, to support implicit concatenation.
let range = expr.range();
let content = locator.slice(range);
let parenthesized_content = format!("({content})");
let mut expression = match_expression(&parenthesized_content)?;
// Replace the formatted call expression at `index` with a conversion flag.
let formatted_string_expression = match_part(index, &mut expression)?;
let call = match_call_mut(&mut formatted_string_expression.expression)?;
formatted_string_expression.expression = call.args[0].value.clone();
// Remove the parentheses (first and last characters).
let mut content = expression.codegen_stylist(stylist);
content.remove(0);
content.pop();
Ok(Fix::automatic(Edit::range_replacement(content, range)))
}
/// Generate a [`Fix`] to replace an explicit type conversion with a conversion flag.
fn convert_call_to_conversion_flag(
expr: &Expr,

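The equivalence the rewritten RUF010 relies on — conversion flags behaving exactly like the explicit builtin calls — can be demonstrated directly:

```python
# f-string conversion flags are equivalent to the explicit calls that
# RUF010 rewrites: !r for repr(), !s for str(), !a for ascii().
a = "some string"
assert f"{a!r}" == f"{repr(a)}" == "'some string'"
assert f"{a!s}" == f"{str(a)}" == f"{a}" == "some string"
assert f"{a!a}" == f"{ascii(a)}"
```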

@@ -55,7 +55,7 @@ use crate::rules::ruff::rules::helpers::{
/// - `flake8-bugbear.extend-immutable-calls`
#[violation]
pub struct FunctionCallInDataclassDefaultArgument {
pub name: Option<String>,
name: Option<String>,
}
impl Violation for FunctionCallInDataclassDefaultArgument {


@@ -18,6 +18,14 @@ pub(super) fn is_class_var_annotation(annotation: &Expr, semantic: &SemanticMode
semantic.match_typing_expr(value, "ClassVar")
}
/// Returns `true` if the given [`Expr`] is a `typing.Final` annotation.
pub(super) fn is_final_annotation(annotation: &Expr, semantic: &SemanticModel) -> bool {
let Expr::Subscript(ast::ExprSubscript { value, .. }) = &annotation else {
return false;
};
semantic.match_typing_expr(value, "Final")
}
/// Returns `true` if the given class is a dataclass.
pub(super) fn is_dataclass(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
class_def.decorator_list.iter().any(|decorator| {
@@ -28,3 +36,12 @@ pub(super) fn is_dataclass(class_def: &ast::StmtClassDef, semantic: &SemanticMod
})
})
}
/// Returns `true` if the given class is a Pydantic `BaseModel`.
pub(super) fn is_pydantic_model(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
class_def.bases.iter().any(|expr| {
semantic.resolve_call_path(expr).map_or(false, |call_path| {
matches!(call_path.as_slice(), ["pydantic", "BaseModel"])
})
})
}


@@ -4,9 +4,11 @@ use anyhow::Result;
use ruff_text_size::TextRange;
use rustpython_parser::ast::{self, ArgWithDefault, Arguments, Constant, Expr, Operator, Ranged};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::is_const_none;
use ruff_python_ast::source_code::Locator;
use ruff_python_ast::typing::parse_type_annotation;
use ruff_python_semantic::SemanticModel;
use crate::checkers::ast::Checker;
@@ -65,14 +67,16 @@ pub struct ImplicitOptional {
conversion_type: ConversionType,
}
impl AlwaysAutofixableViolation for ImplicitOptional {
impl Violation for ImplicitOptional {
const AUTOFIX: AutofixKind = AutofixKind::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
format!("PEP 484 prohibits implicit `Optional`")
}
fn autofix_title(&self) -> String {
format!("Convert to `{}`", self.conversion_type)
fn autofix_title(&self) -> Option<String> {
Some(format!("Convert to `{}`", self.conversion_type))
}
}
@@ -143,13 +147,14 @@ enum TypingTarget<'a> {
Any,
Object,
Optional,
ForwardReference(Expr),
Union(Vec<&'a Expr>),
Literal(Vec<&'a Expr>),
Annotated(&'a Expr),
}
impl<'a> TypingTarget<'a> {
fn try_from_expr(expr: &'a Expr, semantic: &SemanticModel) -> Option<Self> {
fn try_from_expr(expr: &'a Expr, semantic: &SemanticModel, locator: &Locator) -> Option<Self> {
match expr {
Expr::Subscript(ast::ExprSubscript { value, slice, .. }) => {
if semantic.match_typing_expr(value, "Optional") {
@@ -175,6 +180,15 @@ impl<'a> TypingTarget<'a> {
value: Constant::None,
..
}) => Some(TypingTarget::None),
Expr::Constant(ast::ExprConstant {
value: Constant::Str(string),
range,
..
}) => parse_type_annotation(string, *range, locator)
// In case of a parse error, we return `Any` to avoid false positives.
.map_or(Some(TypingTarget::Any), |(expr, _)| {
Some(TypingTarget::ForwardReference(expr))
}),
_ => semantic.resolve_call_path(expr).and_then(|call_path| {
if semantic.match_typing_call_path(&call_path, "Any") {
Some(TypingTarget::Any)
@@ -188,41 +202,41 @@ impl<'a> TypingTarget<'a> {
}
/// Check if the [`TypingTarget`] explicitly allows `None`.
fn contains_none(&self, semantic: &SemanticModel) -> bool {
fn contains_none(&self, semantic: &SemanticModel, locator: &Locator) -> bool {
match self {
TypingTarget::None
| TypingTarget::Optional
| TypingTarget::Any
| TypingTarget::Object => true,
TypingTarget::Literal(elements) => elements.iter().any(|element| {
let Some(new_target) = TypingTarget::try_from_expr(element, semantic) else {
return false;
};
let Some(new_target) = TypingTarget::try_from_expr(element, semantic, locator) else {
return false;
};
// Literal can only contain `None`, a literal value, other `Literal`
// or an enum value.
match new_target {
TypingTarget::None => true,
TypingTarget::Literal(_) => new_target.contains_none(semantic),
TypingTarget::Literal(_) => new_target.contains_none(semantic, locator),
_ => false,
}
}),
TypingTarget::Union(elements) => elements.iter().any(|element| {
let Some(new_target) = TypingTarget::try_from_expr(element, semantic) else {
return false;
};
match new_target {
TypingTarget::None => true,
_ => new_target.contains_none(semantic),
}
let Some(new_target) = TypingTarget::try_from_expr(element, semantic, locator) else {
return false;
};
new_target.contains_none(semantic, locator)
}),
TypingTarget::Annotated(element) => {
let Some(new_target) = TypingTarget::try_from_expr(element, semantic) else {
return false;
};
match new_target {
TypingTarget::None => true,
_ => new_target.contains_none(semantic),
}
let Some(new_target) = TypingTarget::try_from_expr(element, semantic, locator) else {
return false;
};
new_target.contains_none(semantic, locator)
}
TypingTarget::ForwardReference(expr) => {
let Some(new_target) = TypingTarget::try_from_expr(expr, semantic, locator) else {
return false;
};
new_target.contains_none(semantic, locator)
}
}
}
@@ -238,8 +252,9 @@ impl<'a> TypingTarget<'a> {
fn type_hint_explicitly_allows_none<'a>(
annotation: &'a Expr,
semantic: &SemanticModel,
locator: &Locator,
) -> Option<&'a Expr> {
let Some(target) = TypingTarget::try_from_expr(annotation, semantic) else {
let Some(target) = TypingTarget::try_from_expr(annotation, semantic, locator) else {
return Some(annotation);
};
match target {
@@ -249,9 +264,9 @@ fn type_hint_explicitly_allows_none<'a>(
// return the inner type if it doesn't allow `None`. If `Annotated`
// is found nested inside another type, then the outer type should
// be returned.
TypingTarget::Annotated(expr) => type_hint_explicitly_allows_none(expr, semantic),
TypingTarget::Annotated(expr) => type_hint_explicitly_allows_none(expr, semantic, locator),
_ => {
if target.contains_none(semantic) {
if target.contains_none(semantic, locator) {
None
} else {
Some(annotation)
@@ -305,7 +320,7 @@ fn generate_fix(checker: &Checker, conversion_type: ConversionType, expr: &Expr)
}
}
/// RUF011
/// RUF013
pub(crate) fn implicit_optional(checker: &mut Checker, arguments: &Arguments) {
for ArgWithDefault {
def,
@@ -326,15 +341,42 @@ pub(crate) fn implicit_optional(checker: &mut Checker, arguments: &Arguments) {
let Some(annotation) = &def.annotation else {
continue
};
let Some(expr) = type_hint_explicitly_allows_none(annotation, checker.semantic()) else {
continue;
};
let conversion_type = checker.settings.target_version.into();
let mut diagnostic = Diagnostic::new(ImplicitOptional { conversion_type }, expr.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| generate_fix(checker, conversion_type, expr));
if let Expr::Constant(ast::ExprConstant {
range,
value: Constant::Str(string),
..
}) = annotation.as_ref()
{
// Quoted annotation.
if let Ok((annotation, kind)) = parse_type_annotation(string, *range, checker.locator) {
let Some(expr) = type_hint_explicitly_allows_none(&annotation, checker.semantic(), checker.locator) else {
continue;
};
let conversion_type = checker.settings.target_version.into();
let mut diagnostic =
Diagnostic::new(ImplicitOptional { conversion_type }, expr.range());
if checker.patch(diagnostic.kind.rule()) {
if kind.is_simple() {
diagnostic.try_set_fix(|| generate_fix(checker, conversion_type, expr));
}
}
checker.diagnostics.push(diagnostic);
}
} else {
// Unquoted annotation.
let Some(expr) = type_hint_explicitly_allows_none(annotation, checker.semantic(), checker.locator) else {
continue;
};
let conversion_type = checker.settings.target_version.into();
let mut diagnostic =
Diagnostic::new(ImplicitOptional { conversion_type }, expr.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| generate_fix(checker, conversion_type, expr));
}
checker.diagnostics.push(diagnostic);
}
checker.diagnostics.push(diagnostic);
}
}
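For context on what RUF013 enforces in both the quoted and unquoted branches above: PEP 484 specifies that a `None` default does not implicitly make the annotation `Optional`, so the rule's fix spells it out. A minimal sketch of the corrected form:

```python
# PEP 484: `def f(arg: int = None)` is not implicitly Optional; RUF013
# suggests making the Optional explicit, as below.
from typing import Optional


def f(arg: Optional[int] = None) -> int:
    return 0 if arg is None else arg


assert f() == 0
assert f(5) == 5
```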


@@ -5,7 +5,9 @@ use ruff_macros::{derive_message_formats, violation};
use ruff_python_semantic::analyze::typing::{is_immutable_annotation, is_mutable_expr};
use crate::checkers::ast::Checker;
use crate::rules::ruff::rules::helpers::{is_class_var_annotation, is_dataclass};
use crate::rules::ruff::rules::helpers::{
is_class_var_annotation, is_dataclass, is_final_annotation, is_pydantic_model,
};
/// ## What it does
/// Checks for mutable default values in class attributes.
@@ -54,9 +56,15 @@ pub(crate) fn mutable_class_default(checker: &mut Checker, class_def: &ast::Stmt
}) => {
if is_mutable_expr(value, checker.semantic())
&& !is_class_var_annotation(annotation, checker.semantic())
&& !is_final_annotation(annotation, checker.semantic())
&& !is_immutable_annotation(annotation, checker.semantic())
&& !is_dataclass(class_def, checker.semantic())
{
// Avoid Pydantic models, which end up copying defaults on instance creation.
if is_pydantic_model(class_def, checker.semantic()) {
return;
}
checker
.diagnostics
.push(Diagnostic::new(MutableClassDefault, value.range()));
@@ -64,6 +72,11 @@ pub(crate) fn mutable_class_default(checker: &mut Checker, class_def: &ast::Stmt
}
Stmt::Assign(ast::StmtAssign { value, .. }) => {
if is_mutable_expr(value, checker.semantic()) {
// Avoid Pydantic models, which end up copying defaults on instance creation.
if is_pydantic_model(class_def, checker.semantic()) {
return;
}
checker
.diagnostics
.push(Diagnostic::new(MutableClassDefault, value.range()));
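The hazard RUF012 guards against — and the reason plain classes are flagged while Pydantic models (which copy field defaults per instance) are now exempted — comes down to class attributes being shared. A small sketch:

```python
# A mutable class attribute is shared by every instance, which is why
# RUF012 flags it on plain classes. (Pydantic models are exempt because
# they copy field defaults at instance creation.)
class Config:
    tags = []  # shared mutable default


a, b = Config(), Config()
a.tags.append("x")
assert b.tags == ["x"]  # the mutation leaks into the other instance
assert a.tags is b.tags  # both names resolve to the same class-level list
```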


@@ -6,6 +6,32 @@ use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for use of `zip()` to iterate over successive pairs of elements.
///
/// ## Why is this bad?
/// When iterating over successive pairs of elements, prefer
/// `itertools.pairwise()` over `zip()`.
///
/// `itertools.pairwise()` is more readable and conveys the intent of the code
/// more clearly.
///
/// ## Example
/// ```python
/// letters = "ABCD"
/// zip(letters, letters[1:]) # ("A", "B"), ("B", "C"), ("C", "D")
/// ```
///
/// Use instead:
/// ```python
/// from itertools import pairwise
///
/// letters = "ABCD"
/// pairwise(letters) # ("A", "B"), ("B", "C"), ("C", "D")
/// ```
///
/// ## References
/// - [Python documentation: `itertools.pairwise`](https://docs.python.org/3/library/itertools.html#itertools.pairwise)
#[violation]
pub struct PairwiseOverZipped;


@@ -3,7 +3,7 @@ source: crates/ruff/src/rules/ruff/mod.rs
---
RUF013_0.py:21:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
21 | def f(arg: int = None): # RUF011
21 | def f(arg: int = None): # RUF013
| ^^^ RUF013
22 | pass
|
@@ -13,15 +13,15 @@ RUF013_0.py:21:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
18 18 | pass
19 19 |
20 20 |
21 |-def f(arg: int = None): # RUF011
21 |+def f(arg: Optional[int] = None): # RUF011
21 |-def f(arg: int = None): # RUF013
21 |+def f(arg: Optional[int] = None): # RUF013
22 22 | pass
23 23 |
24 24 |
RUF013_0.py:25:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
25 | def f(arg: str = None): # RUF011
25 | def f(arg: str = None): # RUF013
| ^^^ RUF013
26 | pass
|
@@ -31,15 +31,15 @@ RUF013_0.py:25:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
22 22 | pass
23 23 |
24 24 |
25 |-def f(arg: str = None): # RUF011
25 |+def f(arg: Optional[str] = None): # RUF011
25 |-def f(arg: str = None): # RUF013
25 |+def f(arg: Optional[str] = None): # RUF013
26 26 | pass
27 27 |
28 28 |
RUF013_0.py:29:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
29 | def f(arg: typing.List[str] = None): # RUF011
29 | def f(arg: typing.List[str] = None): # RUF013
| ^^^^^^^^^^^^^^^^ RUF013
30 | pass
|
@@ -49,15 +49,15 @@ RUF013_0.py:29:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
26 26 | pass
27 27 |
28 28 |
29 |-def f(arg: typing.List[str] = None): # RUF011
29 |+def f(arg: Optional[typing.List[str]] = None): # RUF011
29 |-def f(arg: typing.List[str] = None): # RUF013
29 |+def f(arg: Optional[typing.List[str]] = None): # RUF013
30 30 | pass
31 31 |
32 32 |
RUF013_0.py:33:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
33 | def f(arg: Tuple[str] = None): # RUF011
33 | def f(arg: Tuple[str] = None): # RUF013
| ^^^^^^^^^^ RUF013
34 | pass
|
@@ -67,15 +67,15 @@ RUF013_0.py:33:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
30 30 | pass
31 31 |
32 32 |
33 |-def f(arg: Tuple[str] = None): # RUF011
33 |+def f(arg: Optional[Tuple[str]] = None): # RUF011
33 |-def f(arg: Tuple[str] = None): # RUF013
33 |+def f(arg: Optional[Tuple[str]] = None): # RUF013
34 34 | pass
35 35 |
36 36 |
RUF013_0.py:67:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
67 | def f(arg: Union = None): # RUF011
67 | def f(arg: Union = None): # RUF013
| ^^^^^ RUF013
68 | pass
|
@@ -85,15 +85,15 @@ RUF013_0.py:67:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
64 64 | pass
65 65 |
66 66 |
67 |-def f(arg: Union = None): # RUF011
67 |+def f(arg: Optional[Union] = None): # RUF011
67 |-def f(arg: Union = None): # RUF013
67 |+def f(arg: Optional[Union] = None): # RUF013
68 68 | pass
69 69 |
70 70 |
RUF013_0.py:71:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
71 | def f(arg: Union[int, str] = None): # RUF011
71 | def f(arg: Union[int, str] = None): # RUF013
| ^^^^^^^^^^^^^^^ RUF013
72 | pass
|
@@ -103,15 +103,15 @@ RUF013_0.py:71:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
68 68 | pass
69 69 |
70 70 |
71 |-def f(arg: Union[int, str] = None): # RUF011
71 |+def f(arg: Optional[Union[int, str]] = None): # RUF011
71 |-def f(arg: Union[int, str] = None): # RUF013
71 |+def f(arg: Optional[Union[int, str]] = None): # RUF013
72 72 | pass
73 73 |
74 74 |
RUF013_0.py:75:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
75 | def f(arg: typing.Union[int, str] = None): # RUF011
75 | def f(arg: typing.Union[int, str] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^ RUF013
76 | pass
|
@@ -121,15 +121,15 @@ RUF013_0.py:75:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
72 72 | pass
73 73 |
74 74 |
75 |-def f(arg: typing.Union[int, str] = None): # RUF011
75 |+def f(arg: Optional[typing.Union[int, str]] = None): # RUF011
75 |-def f(arg: typing.Union[int, str] = None): # RUF013
75 |+def f(arg: Optional[typing.Union[int, str]] = None): # RUF013
76 76 | pass
77 77 |
78 78 |
RUF013_0.py:94:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
94 | def f(arg: int | float = None): # RUF011
94 | def f(arg: int | float = None): # RUF013
| ^^^^^^^^^^^ RUF013
95 | pass
|
@@ -139,15 +139,15 @@ RUF013_0.py:94:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
91 91 | pass
92 92 |
93 93 |
94 |-def f(arg: int | float = None): # RUF011
94 |+def f(arg: Optional[int | float] = None): # RUF011
94 |-def f(arg: int | float = None): # RUF013
94 |+def f(arg: Optional[int | float] = None): # RUF013
95 95 | pass
96 96 |
97 97 |
RUF013_0.py:98:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
98 | def f(arg: int | float | str | bytes = None): # RUF011
98 | def f(arg: int | float | str | bytes = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
99 | pass
|
@@ -157,15 +157,15 @@ RUF013_0.py:98:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
95 95 | pass
96 96 |
97 97 |
98 |-def f(arg: int | float | str | bytes = None): # RUF011
98 |+def f(arg: Optional[int | float | str | bytes] = None): # RUF011
98 |-def f(arg: int | float | str | bytes = None): # RUF013
98 |+def f(arg: Optional[int | float | str | bytes] = None): # RUF013
99 99 | pass
100 100 |
101 101 |
RUF013_0.py:113:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
113 | def f(arg: Literal[1, "foo"] = None): # RUF011
113 | def f(arg: Literal[1, "foo"] = None): # RUF013
| ^^^^^^^^^^^^^^^^^ RUF013
114 | pass
|
@@ -175,15 +175,15 @@ RUF013_0.py:113:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
110 110 | pass
111 111 |
112 112 |
113 |-def f(arg: Literal[1, "foo"] = None): # RUF011
113 |+def f(arg: Optional[Literal[1, "foo"]] = None): # RUF011
113 |-def f(arg: Literal[1, "foo"] = None): # RUF013
113 |+def f(arg: Optional[Literal[1, "foo"]] = None): # RUF013
114 114 | pass
115 115 |
116 116 |
RUF013_0.py:117:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
117 | def f(arg: typing.Literal[1, "foo", True] = None): # RUF011
117 | def f(arg: typing.Literal[1, "foo", True] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
118 | pass
|
@@ -193,15 +193,15 @@ RUF013_0.py:117:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
114 114 | pass
115 115 |
116 116 |
117 |-def f(arg: typing.Literal[1, "foo", True] = None): # RUF011
117 |+def f(arg: Optional[typing.Literal[1, "foo", True]] = None): # RUF011
117 |-def f(arg: typing.Literal[1, "foo", True] = None): # RUF013
117 |+def f(arg: Optional[typing.Literal[1, "foo", True]] = None): # RUF013
118 118 | pass
119 119 |
120 120 |
RUF013_0.py:136:22: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
136 | def f(arg: Annotated[int, ...] = None): # RUF011
136 | def f(arg: Annotated[int, ...] = None): # RUF013
| ^^^ RUF013
137 | pass
|
@@ -211,15 +211,15 @@ RUF013_0.py:136:22: RUF013 [*] PEP 484 prohibits implicit `Optional`
133 133 | pass
134 134 |
135 135 |
136 |-def f(arg: Annotated[int, ...] = None): # RUF011
136 |+def f(arg: Annotated[Optional[int], ...] = None): # RUF011
136 |-def f(arg: Annotated[int, ...] = None): # RUF013
136 |+def f(arg: Annotated[Optional[int], ...] = None): # RUF013
137 137 | pass
138 138 |
139 139 |
RUF013_0.py:140:32: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
140 | def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF011
140 | def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF013
| ^^^^^^^^^ RUF013
141 | pass
|
@@ -229,8 +229,8 @@ RUF013_0.py:140:32: RUF013 [*] PEP 484 prohibits implicit `Optional`
137 137 | pass
138 138 |
139 139 |
140 |-def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF011
140 |+def f(arg: Annotated[Annotated[Optional[int | str], ...], ...] = None): # RUF011
140 |-def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF013
140 |+def f(arg: Annotated[Annotated[Optional[int | str], ...], ...] = None): # RUF013
141 141 | pass
142 142 |
143 143 |
@@ -238,10 +238,10 @@ RUF013_0.py:140:32: RUF013 [*] PEP 484 prohibits implicit `Optional`
RUF013_0.py:156:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
155 | def f(
156 | arg1: int = None, # RUF011
156 | arg1: int = None, # RUF013
| ^^^ RUF013
157 | arg2: Union[int, float] = None, # RUF011
158 | arg3: Literal[1, 2, 3] = None, # RUF011
157 | arg2: Union[int, float] = None, # RUF013
158 | arg3: Literal[1, 2, 3] = None, # RUF013
|
= help: Convert to `Optional[T]`
@@ -249,19 +249,19 @@ RUF013_0.py:156:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
153 153 |
154 154 |
155 155 | def f(
156 |- arg1: int = None, # RUF011
156 |+ arg1: Optional[int] = None, # RUF011
157 157 | arg2: Union[int, float] = None, # RUF011
158 158 | arg3: Literal[1, 2, 3] = None, # RUF011
156 |- arg1: int = None, # RUF013
156 |+ arg1: Optional[int] = None, # RUF013
157 157 | arg2: Union[int, float] = None, # RUF013
158 158 | arg3: Literal[1, 2, 3] = None, # RUF013
159 159 | ):
RUF013_0.py:157:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
155 | def f(
156 | arg1: int = None, # RUF011
157 | arg2: Union[int, float] = None, # RUF011
156 | arg1: int = None, # RUF013
157 | arg2: Union[int, float] = None, # RUF013
| ^^^^^^^^^^^^^^^^^ RUF013
158 | arg3: Literal[1, 2, 3] = None, # RUF011
158 | arg3: Literal[1, 2, 3] = None, # RUF013
159 | ):
|
= help: Convert to `Optional[T]`
@@ -269,18 +269,18 @@ RUF013_0.py:157:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
Suggested fix
154 154 |
155 155 | def f(
156 156 | arg1: int = None, # RUF011
157 |- arg2: Union[int, float] = None, # RUF011
157 |+ arg2: Optional[Union[int, float]] = None, # RUF011
158 158 | arg3: Literal[1, 2, 3] = None, # RUF011
156 156 | arg1: int = None, # RUF013
157 |- arg2: Union[int, float] = None, # RUF013
157 |+ arg2: Optional[Union[int, float]] = None, # RUF013
158 158 | arg3: Literal[1, 2, 3] = None, # RUF013
159 159 | ):
160 160 | pass
RUF013_0.py:158:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
156 | arg1: int = None, # RUF011
157 | arg2: Union[int, float] = None, # RUF011
158 | arg3: Literal[1, 2, 3] = None, # RUF011
156 | arg1: int = None, # RUF013
157 | arg2: Union[int, float] = None, # RUF013
158 | arg3: Literal[1, 2, 3] = None, # RUF013
| ^^^^^^^^^^^^^^^^ RUF013
159 | ):
160 | pass
@@ -289,17 +289,17 @@ RUF013_0.py:158:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
Suggested fix
155 155 | def f(
156 156 | arg1: int = None, # RUF011
157 157 | arg2: Union[int, float] = None, # RUF011
158 |- arg3: Literal[1, 2, 3] = None, # RUF011
158 |+ arg3: Optional[Literal[1, 2, 3]] = None, # RUF011
156 156 | arg1: int = None, # RUF013
157 157 | arg2: Union[int, float] = None, # RUF013
158 |- arg3: Literal[1, 2, 3] = None, # RUF013
158 |+ arg3: Optional[Literal[1, 2, 3]] = None, # RUF013
159 159 | ):
160 160 | pass
161 161 |
RUF013_0.py:186:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
186 | def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF011
186 | def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
187 | pass
|
@@ -309,8 +309,72 @@ RUF013_0.py:186:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
183 183 | pass
184 184 |
185 185 |
186 |-def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF011
186 |+def f(arg: Optional[Union[Annotated[int, ...], Union[str, bytes]]] = None): # RUF011
186 |-def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF013
186 |+def f(arg: Optional[Union[Annotated[int, ...], Union[str, bytes]]] = None): # RUF013
187 187 | pass
188 188 |
189 189 |
RUF013_0.py:193:13: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
193 | def f(arg: "int" = None): # RUF013
| ^^^ RUF013
194 | pass
|
= help: Convert to `Optional[T]`
Suggested fix
190 190 | # Quoted
191 191 |
192 192 |
193 |-def f(arg: "int" = None): # RUF013
193 |+def f(arg: "Optional[int]" = None): # RUF013
194 194 | pass
195 195 |
196 196 |
RUF013_0.py:197:13: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
197 | def f(arg: "str" = None): # RUF013
| ^^^ RUF013
198 | pass
|
= help: Convert to `Optional[T]`
Suggested fix
194 194 | pass
195 195 |
196 196 |
197 |-def f(arg: "str" = None): # RUF013
197 |+def f(arg: "Optional[str]" = None): # RUF013
198 198 | pass
199 199 |
200 200 |
RUF013_0.py:201:12: RUF013 PEP 484 prohibits implicit `Optional`
|
201 | def f(arg: "st" "r" = None): # RUF013
| ^^^^^^^^ RUF013
202 | pass
|
= help: Convert to `Optional[T]`
RUF013_0.py:209:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
209 | def f(arg: Union["int", "str"] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^ RUF013
210 | pass
|
= help: Convert to `Optional[T]`
Suggested fix
206 206 | pass
207 207 |
208 208 |
209 |-def f(arg: Union["int", "str"] = None): # RUF013
209 |+def f(arg: Optional[Union["int", "str"]] = None): # RUF013
210 210 | pass
211 211 |
212 212 |


@@ -1,21 +1,21 @@
---
source: crates/ruff/src/rules/ruff/mod.rs
---
RUF010.py:9:4: RUF010 [*] Remove unnecessary `str` conversion
RUF010.py:9:4: RUF010 [*] Use explicit conversion flag
|
9 | f"{str(bla)}, {repr(bla)}, {ascii(bla)}" # RUF010
| ^^^^^^^^ RUF010
10 |
11 | f"{str(d['a'])}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
|
= help: Remove `str` call
= help: Replace with conversion flag
Fix
6 6 | pass
7 7 |
8 8 |
9 |-f"{str(bla)}, {repr(bla)}, {ascii(bla)}" # RUF010
9 |+f"{bla}, {repr(bla)}, {ascii(bla)}" # RUF010
9 |+f"{bla!s}, {repr(bla)}, {ascii(bla)}" # RUF010
10 10 |
11 11 | f"{str(d['a'])}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
12 12 |
@@ -58,7 +58,7 @@ RUF010.py:9:29: RUF010 [*] Use explicit conversion flag
11 11 | f"{str(d['a'])}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
12 12 |
RUF010.py:11:4: RUF010 [*] Remove unnecessary `str` conversion
RUF010.py:11:4: RUF010 [*] Use explicit conversion flag
|
9 | f"{str(bla)}, {repr(bla)}, {ascii(bla)}" # RUF010
10 |
@@ -67,14 +67,14 @@ RUF010.py:11:4: RUF010 [*] Remove unnecessary `str` conversion
12 |
13 | f"{(str(bla))}, {(repr(bla))}, {(ascii(bla))}" # RUF010
|
= help: Remove `str` call
= help: Replace with conversion flag
Fix
8 8 |
9 9 | f"{str(bla)}, {repr(bla)}, {ascii(bla)}" # RUF010
10 10 |
11 |-f"{str(d['a'])}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
11 |+f"{d['a']}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
11 |+f"{d['a']!s}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
12 12 |
13 13 | f"{(str(bla))}, {(repr(bla))}, {(ascii(bla))}" # RUF010
14 14 |
@@ -121,7 +121,7 @@ RUF010.py:11:35: RUF010 [*] Use explicit conversion flag
13 13 | f"{(str(bla))}, {(repr(bla))}, {(ascii(bla))}" # RUF010
14 14 |
RUF010.py:13:5: RUF010 [*] Remove unnecessary `str` conversion
RUF010.py:13:5: RUF010 [*] Use explicit conversion flag
|
11 | f"{str(d['a'])}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
12 |
@@ -130,14 +130,14 @@ RUF010.py:13:5: RUF010 [*] Remove unnecessary `str` conversion
14 |
15 | f"{bla!s}, {(repr(bla))}, {(ascii(bla))}" # RUF010
|
= help: Remove `str` call
= help: Replace with conversion flag
Fix
10 10 |
11 11 | f"{str(d['a'])}, {repr(d['b'])}, {ascii(d['c'])}" # RUF010
12 12 |
13 |-f"{(str(bla))}, {(repr(bla))}, {(ascii(bla))}" # RUF010
13 |+f"{bla}, {(repr(bla))}, {(ascii(bla))}" # RUF010
13 |+f"{bla!s}, {(repr(bla))}, {(ascii(bla))}" # RUF010
14 14 |
15 15 | f"{bla!s}, {(repr(bla))}, {(ascii(bla))}" # RUF010
16 16 |
@@ -184,27 +184,6 @@ RUF010.py:13:34: RUF010 [*] Use explicit conversion flag
15 15 | f"{bla!s}, {(repr(bla))}, {(ascii(bla))}" # RUF010
16 16 |
RUF010.py:15:4: RUF010 [*] Remove unnecessary conversion flag
|
13 | f"{(str(bla))}, {(repr(bla))}, {(ascii(bla))}" # RUF010
14 |
15 | f"{bla!s}, {(repr(bla))}, {(ascii(bla))}" # RUF010
| ^^^ RUF010
16 |
17 | f"{foo(bla)}" # OK
|
= help: Remove conversion flag
Fix
12 12 |
13 13 | f"{(str(bla))}, {(repr(bla))}, {(ascii(bla))}" # RUF010
14 14 |
15 |-f"{bla!s}, {(repr(bla))}, {(ascii(bla))}" # RUF010
15 |+f"{bla}, {(repr(bla))}, {(ascii(bla))}" # RUF010
16 16 |
17 17 | f"{foo(bla)}" # OK
18 18 |
RUF010.py:15:14: RUF010 [*] Use explicit conversion flag
|
13 | f"{(str(bla))}, {(repr(bla))}, {(ascii(bla))}" # RUF010
@@ -247,27 +226,6 @@ RUF010.py:15:29: RUF010 [*] Use explicit conversion flag
17 17 | f"{foo(bla)}" # OK
18 18 |
RUF010.py:21:4: RUF010 [*] Remove unnecessary conversion flag
|
19 | f"{str(bla, 'ascii')}, {str(bla, encoding='cp1255')}" # OK
20 |
21 | f"{bla!s} {[]!r} {'bar'!a}" # OK
| ^^^ RUF010
22 |
23 | "Not an f-string {str(bla)}, {repr(bla)}, {ascii(bla)}" # OK
|
= help: Remove conversion flag
Fix
18 18 |
19 19 | f"{str(bla, 'ascii')}, {str(bla, encoding='cp1255')}" # OK
20 20 |
21 |-f"{bla!s} {[]!r} {'bar'!a}" # OK
21 |+f"{bla} {[]!r} {'bar'!a}" # OK
22 22 |
23 23 | "Not an f-string {str(bla)}, {repr(bla)}, {ascii(bla)}" # OK
24 24 |
RUF010.py:35:20: RUF010 [*] Use explicit conversion flag
|
33 | f"Member of tuple mismatches type at index {i}. Expected {of_shape_i}. Got "
@@ -285,67 +243,5 @@ RUF010.py:35:20: RUF010 [*] Use explicit conversion flag
35 |- f" that flows {repr(obj)} of type {type(obj)}.{additional_message}" # RUF010
35 |+ f" that flows {obj!r} of type {type(obj)}.{additional_message}" # RUF010
36 36 | )
37 37 |
38 38 |
RUF010.py:39:4: RUF010 [*] Remove unnecessary `str` conversion
|
39 | f"{str(bla)}" # RUF010
| ^^^^^^^^ RUF010
40 |
41 | f"{str(bla):20}" # RUF010
|
= help: Remove `str` call
Fix
36 36 | )
37 37 |
38 38 |
39 |-f"{str(bla)}" # RUF010
39 |+f"{bla}" # RUF010
40 40 |
41 41 | f"{str(bla):20}" # RUF010
42 42 |
RUF010.py:41:4: RUF010 [*] Use explicit conversion flag
|
39 | f"{str(bla)}" # RUF010
40 |
41 | f"{str(bla):20}" # RUF010
| ^^^^^^^^ RUF010
42 |
43 | f"{bla!s}" # RUF010
|
= help: Replace with conversion flag
Fix
38 38 |
39 39 | f"{str(bla)}" # RUF010
40 40 |
41 |-f"{str(bla):20}" # RUF010
41 |+f"{bla!s:20}" # RUF010
42 42 |
43 43 | f"{bla!s}" # RUF010
44 44 |
RUF010.py:43:4: RUF010 [*] Remove unnecessary conversion flag
|
41 | f"{str(bla):20}" # RUF010
42 |
43 | f"{bla!s}" # RUF010
| ^^^ RUF010
44 |
45 | f"{bla!s:20}" # OK
|
= help: Remove conversion flag
Fix
40 40 |
41 41 | f"{str(bla):20}" # RUF010
42 42 |
43 |-f"{bla!s}" # RUF010
43 |+f"{bla}" # RUF010
44 44 |
45 45 | f"{bla!s:20}" # OK
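The retitled RUF010 diagnostics above replace explicit `str(...)`/`repr(...)`/`ascii(...)` calls inside f-strings with conversion flags. As a brief standalone sketch (an illustration of the rule's rationale, not Ruff's implementation), the two spellings produce identical strings:

```python
bla = [1, "two"]

# A conversion flag is shorthand for wrapping the value in a builtin:
#   !s -> str(...), !r -> repr(...), !a -> ascii(...)
with_calls = f"{str(bla)}, {repr(bla)}, {ascii(bla)}"
with_flags = f"{bla!s}, {bla!r}, {bla!a}"
assert with_calls == with_flags

# With a format spec, the flag keeps the string conversion that a plain
# placeholder would not perform (a bare list has no custom __format__):
assert f"{str(bla):>20}" == f"{bla!s:>20}"
```

This is why the snapshot keeps `!s` in `f"{bla!s:20}"` (a format spec is present) but drops it in the bare `f"{bla!s}"` case, where the default conversion already yields the same result.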


@@ -20,33 +20,33 @@ RUF012.py:10:26: RUF012 Mutable class attributes should be annotated with `typin
12 | class_variable: typing.ClassVar[list[int]] = []
|
RUF012.py:16:34: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
RUF012.py:17:34: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
|
15 | class B:
16 | mutable_default: list[int] = []
16 | class B:
17 | mutable_default: list[int] = []
| ^^ RUF012
17 | immutable_annotation: Sequence[int] = []
18 | without_annotation = []
18 | immutable_annotation: Sequence[int] = []
19 | without_annotation = []
|
RUF012.py:18:26: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
RUF012.py:19:26: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
|
16 | mutable_default: list[int] = []
17 | immutable_annotation: Sequence[int] = []
18 | without_annotation = []
17 | mutable_default: list[int] = []
18 | immutable_annotation: Sequence[int] = []
19 | without_annotation = []
| ^^ RUF012
19 | correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
20 | class_variable: ClassVar[list[int]] = []
20 | correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
21 | class_variable: ClassVar[list[int]] = []
|
RUF012.py:30:26: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
RUF012.py:32:26: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
|
28 | mutable_default: list[int] = []
29 | immutable_annotation: Sequence[int] = []
30 | without_annotation = []
30 | mutable_default: list[int] = []
31 | immutable_annotation: Sequence[int] = []
32 | without_annotation = []
| ^^ RUF012
31 | correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
32 | perfectly_fine: list[int] = field(default_factory=list)
33 | correct_code: list[int] = KNOWINGLY_MUTABLE_DEFAULT
34 | perfectly_fine: list[int] = field(default_factory=list)
|
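The RUF012 snapshots above shift only the reported line numbers. For context, a minimal sketch of the pattern the rule flags and the annotated alternative (illustrative names, not taken from the test fixture):

```python
from typing import ClassVar


class Config:
    # Flagged by RUF012: a bare mutable default lives on the class and is
    # shared by every instance.
    shared_tags = []

    # Preferred: an explicit ClassVar annotation marks the sharing as intended.
    known_hosts: ClassVar[list[str]] = []


# The shared-state pitfall the rule guards against:
a, b = Config(), Config()
a.shared_tags.append("x")
assert b.shared_tags == ["x"]  # mutated through a different instance
```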


@@ -3,7 +3,7 @@ source: crates/ruff/src/rules/ruff/mod.rs
---
RUF013_0.py:21:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
21 | def f(arg: int = None): # RUF011
21 | def f(arg: int = None): # RUF013
| ^^^ RUF013
22 | pass
|
@@ -13,15 +13,15 @@ RUF013_0.py:21:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
18 18 | pass
19 19 |
20 20 |
21 |-def f(arg: int = None): # RUF011
21 |+def f(arg: int | None = None): # RUF011
21 |-def f(arg: int = None): # RUF013
21 |+def f(arg: int | None = None): # RUF013
22 22 | pass
23 23 |
24 24 |
RUF013_0.py:25:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
25 | def f(arg: str = None): # RUF011
25 | def f(arg: str = None): # RUF013
| ^^^ RUF013
26 | pass
|
@@ -31,15 +31,15 @@ RUF013_0.py:25:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
22 22 | pass
23 23 |
24 24 |
25 |-def f(arg: str = None): # RUF011
25 |+def f(arg: str | None = None): # RUF011
25 |-def f(arg: str = None): # RUF013
25 |+def f(arg: str | None = None): # RUF013
26 26 | pass
27 27 |
28 28 |
RUF013_0.py:29:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
29 | def f(arg: typing.List[str] = None): # RUF011
29 | def f(arg: typing.List[str] = None): # RUF013
| ^^^^^^^^^^^^^^^^ RUF013
30 | pass
|
@@ -49,15 +49,15 @@ RUF013_0.py:29:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
26 26 | pass
27 27 |
28 28 |
29 |-def f(arg: typing.List[str] = None): # RUF011
29 |+def f(arg: typing.List[str] | None = None): # RUF011
29 |-def f(arg: typing.List[str] = None): # RUF013
29 |+def f(arg: typing.List[str] | None = None): # RUF013
30 30 | pass
31 31 |
32 32 |
RUF013_0.py:33:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
33 | def f(arg: Tuple[str] = None): # RUF011
33 | def f(arg: Tuple[str] = None): # RUF013
| ^^^^^^^^^^ RUF013
34 | pass
|
@@ -67,15 +67,15 @@ RUF013_0.py:33:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
30 30 | pass
31 31 |
32 32 |
33 |-def f(arg: Tuple[str] = None): # RUF011
33 |+def f(arg: Tuple[str] | None = None): # RUF011
33 |-def f(arg: Tuple[str] = None): # RUF013
33 |+def f(arg: Tuple[str] | None = None): # RUF013
34 34 | pass
35 35 |
36 36 |
RUF013_0.py:67:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
67 | def f(arg: Union = None): # RUF011
67 | def f(arg: Union = None): # RUF013
| ^^^^^ RUF013
68 | pass
|
@@ -85,15 +85,15 @@ RUF013_0.py:67:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
64 64 | pass
65 65 |
66 66 |
67 |-def f(arg: Union = None): # RUF011
67 |+def f(arg: Union | None = None): # RUF011
67 |-def f(arg: Union = None): # RUF013
67 |+def f(arg: Union | None = None): # RUF013
68 68 | pass
69 69 |
70 70 |
RUF013_0.py:71:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
71 | def f(arg: Union[int, str] = None): # RUF011
71 | def f(arg: Union[int, str] = None): # RUF013
| ^^^^^^^^^^^^^^^ RUF013
72 | pass
|
@@ -103,15 +103,15 @@ RUF013_0.py:71:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
68 68 | pass
69 69 |
70 70 |
71 |-def f(arg: Union[int, str] = None): # RUF011
71 |+def f(arg: Union[int, str] | None = None): # RUF011
71 |-def f(arg: Union[int, str] = None): # RUF013
71 |+def f(arg: Union[int, str] | None = None): # RUF013
72 72 | pass
73 73 |
74 74 |
RUF013_0.py:75:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
75 | def f(arg: typing.Union[int, str] = None): # RUF011
75 | def f(arg: typing.Union[int, str] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^ RUF013
76 | pass
|
@@ -121,15 +121,15 @@ RUF013_0.py:75:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
72 72 | pass
73 73 |
74 74 |
75 |-def f(arg: typing.Union[int, str] = None): # RUF011
75 |+def f(arg: typing.Union[int, str] | None = None): # RUF011
75 |-def f(arg: typing.Union[int, str] = None): # RUF013
75 |+def f(arg: typing.Union[int, str] | None = None): # RUF013
76 76 | pass
77 77 |
78 78 |
RUF013_0.py:94:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
94 | def f(arg: int | float = None): # RUF011
94 | def f(arg: int | float = None): # RUF013
| ^^^^^^^^^^^ RUF013
95 | pass
|
@@ -139,15 +139,15 @@ RUF013_0.py:94:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
91 91 | pass
92 92 |
93 93 |
94 |-def f(arg: int | float = None): # RUF011
94 |+def f(arg: int | float | None = None): # RUF011
94 |-def f(arg: int | float = None): # RUF013
94 |+def f(arg: int | float | None = None): # RUF013
95 95 | pass
96 96 |
97 97 |
RUF013_0.py:98:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
98 | def f(arg: int | float | str | bytes = None): # RUF011
98 | def f(arg: int | float | str | bytes = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
99 | pass
|
@@ -157,15 +157,15 @@ RUF013_0.py:98:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
95 95 | pass
96 96 |
97 97 |
98 |-def f(arg: int | float | str | bytes = None): # RUF011
98 |+def f(arg: int | float | str | bytes | None = None): # RUF011
98 |-def f(arg: int | float | str | bytes = None): # RUF013
98 |+def f(arg: int | float | str | bytes | None = None): # RUF013
99 99 | pass
100 100 |
101 101 |
RUF013_0.py:113:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
113 | def f(arg: Literal[1, "foo"] = None): # RUF011
113 | def f(arg: Literal[1, "foo"] = None): # RUF013
| ^^^^^^^^^^^^^^^^^ RUF013
114 | pass
|
@@ -175,15 +175,15 @@ RUF013_0.py:113:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
110 110 | pass
111 111 |
112 112 |
113 |-def f(arg: Literal[1, "foo"] = None): # RUF011
113 |+def f(arg: Literal[1, "foo"] | None = None): # RUF011
113 |-def f(arg: Literal[1, "foo"] = None): # RUF013
113 |+def f(arg: Literal[1, "foo"] | None = None): # RUF013
114 114 | pass
115 115 |
116 116 |
RUF013_0.py:117:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
117 | def f(arg: typing.Literal[1, "foo", True] = None): # RUF011
117 | def f(arg: typing.Literal[1, "foo", True] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
118 | pass
|
@@ -193,15 +193,15 @@ RUF013_0.py:117:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
114 114 | pass
115 115 |
116 116 |
117 |-def f(arg: typing.Literal[1, "foo", True] = None): # RUF011
117 |+def f(arg: typing.Literal[1, "foo", True] | None = None): # RUF011
117 |-def f(arg: typing.Literal[1, "foo", True] = None): # RUF013
117 |+def f(arg: typing.Literal[1, "foo", True] | None = None): # RUF013
118 118 | pass
119 119 |
120 120 |
RUF013_0.py:136:22: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
136 | def f(arg: Annotated[int, ...] = None): # RUF011
136 | def f(arg: Annotated[int, ...] = None): # RUF013
| ^^^ RUF013
137 | pass
|
@@ -211,15 +211,15 @@ RUF013_0.py:136:22: RUF013 [*] PEP 484 prohibits implicit `Optional`
133 133 | pass
134 134 |
135 135 |
136 |-def f(arg: Annotated[int, ...] = None): # RUF011
136 |+def f(arg: Annotated[int | None, ...] = None): # RUF011
136 |-def f(arg: Annotated[int, ...] = None): # RUF013
136 |+def f(arg: Annotated[int | None, ...] = None): # RUF013
137 137 | pass
138 138 |
139 139 |
RUF013_0.py:140:32: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
140 | def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF011
140 | def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF013
| ^^^^^^^^^ RUF013
141 | pass
|
@@ -229,8 +229,8 @@ RUF013_0.py:140:32: RUF013 [*] PEP 484 prohibits implicit `Optional`
137 137 | pass
138 138 |
139 139 |
140 |-def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF011
140 |+def f(arg: Annotated[Annotated[int | str | None, ...], ...] = None): # RUF011
140 |-def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF013
140 |+def f(arg: Annotated[Annotated[int | str | None, ...], ...] = None): # RUF013
141 141 | pass
142 142 |
143 143 |
@@ -238,10 +238,10 @@ RUF013_0.py:140:32: RUF013 [*] PEP 484 prohibits implicit `Optional`
RUF013_0.py:156:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
155 | def f(
156 | arg1: int = None, # RUF011
156 | arg1: int = None, # RUF013
| ^^^ RUF013
157 | arg2: Union[int, float] = None, # RUF011
158 | arg3: Literal[1, 2, 3] = None, # RUF011
157 | arg2: Union[int, float] = None, # RUF013
158 | arg3: Literal[1, 2, 3] = None, # RUF013
|
= help: Convert to `T | None`
@@ -249,19 +249,19 @@ RUF013_0.py:156:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
153 153 |
154 154 |
155 155 | def f(
156 |- arg1: int = None, # RUF011
156 |+ arg1: int | None = None, # RUF011
157 157 | arg2: Union[int, float] = None, # RUF011
158 158 | arg3: Literal[1, 2, 3] = None, # RUF011
156 |- arg1: int = None, # RUF013
156 |+ arg1: int | None = None, # RUF013
157 157 | arg2: Union[int, float] = None, # RUF013
158 158 | arg3: Literal[1, 2, 3] = None, # RUF013
159 159 | ):
RUF013_0.py:157:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
155 | def f(
156 | arg1: int = None, # RUF011
157 | arg2: Union[int, float] = None, # RUF011
156 | arg1: int = None, # RUF013
157 | arg2: Union[int, float] = None, # RUF013
| ^^^^^^^^^^^^^^^^^ RUF013
158 | arg3: Literal[1, 2, 3] = None, # RUF011
158 | arg3: Literal[1, 2, 3] = None, # RUF013
159 | ):
|
= help: Convert to `T | None`
@@ -269,18 +269,18 @@ RUF013_0.py:157:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
Suggested fix
154 154 |
155 155 | def f(
156 156 | arg1: int = None, # RUF011
157 |- arg2: Union[int, float] = None, # RUF011
157 |+ arg2: Union[int, float] | None = None, # RUF011
158 158 | arg3: Literal[1, 2, 3] = None, # RUF011
156 156 | arg1: int = None, # RUF013
157 |- arg2: Union[int, float] = None, # RUF013
157 |+ arg2: Union[int, float] | None = None, # RUF013
158 158 | arg3: Literal[1, 2, 3] = None, # RUF013
159 159 | ):
160 160 | pass
RUF013_0.py:158:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
156 | arg1: int = None, # RUF011
157 | arg2: Union[int, float] = None, # RUF011
158 | arg3: Literal[1, 2, 3] = None, # RUF011
156 | arg1: int = None, # RUF013
157 | arg2: Union[int, float] = None, # RUF013
158 | arg3: Literal[1, 2, 3] = None, # RUF013
| ^^^^^^^^^^^^^^^^ RUF013
159 | ):
160 | pass
@@ -289,17 +289,17 @@ RUF013_0.py:158:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
Suggested fix
155 155 | def f(
156 156 | arg1: int = None, # RUF011
157 157 | arg2: Union[int, float] = None, # RUF011
158 |- arg3: Literal[1, 2, 3] = None, # RUF011
158 |+ arg3: Literal[1, 2, 3] | None = None, # RUF011
156 156 | arg1: int = None, # RUF013
157 157 | arg2: Union[int, float] = None, # RUF013
158 |- arg3: Literal[1, 2, 3] = None, # RUF013
158 |+ arg3: Literal[1, 2, 3] | None = None, # RUF013
159 159 | ):
160 160 | pass
161 161 |
RUF013_0.py:186:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
186 | def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF011
186 | def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
187 | pass
|
@@ -309,8 +309,72 @@ RUF013_0.py:186:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
183 183 | pass
184 184 |
185 185 |
186 |-def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF011
186 |+def f(arg: Union[Annotated[int, ...], Union[str, bytes]] | None = None): # RUF011
186 |-def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF013
186 |+def f(arg: Union[Annotated[int, ...], Union[str, bytes]] | None = None): # RUF013
187 187 | pass
188 188 |
189 189 |
RUF013_0.py:193:13: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
193 | def f(arg: "int" = None): # RUF013
| ^^^ RUF013
194 | pass
|
= help: Convert to `T | None`
Suggested fix
190 190 | # Quoted
191 191 |
192 192 |
193 |-def f(arg: "int" = None): # RUF013
193 |+def f(arg: "int | None" = None): # RUF013
194 194 | pass
195 195 |
196 196 |
RUF013_0.py:197:13: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
197 | def f(arg: "str" = None): # RUF013
| ^^^ RUF013
198 | pass
|
= help: Convert to `T | None`
Suggested fix
194 194 | pass
195 195 |
196 196 |
197 |-def f(arg: "str" = None): # RUF013
197 |+def f(arg: "str | None" = None): # RUF013
198 198 | pass
199 199 |
200 200 |
RUF013_0.py:201:12: RUF013 PEP 484 prohibits implicit `Optional`
|
201 | def f(arg: "st" "r" = None): # RUF013
| ^^^^^^^^ RUF013
202 | pass
|
= help: Convert to `T | None`
RUF013_0.py:209:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
209 | def f(arg: Union["int", "str"] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^ RUF013
210 | pass
|
= help: Convert to `T | None`
Suggested fix
206 206 | pass
207 207 |
208 208 |
209 |-def f(arg: Union["int", "str"] = None): # RUF013
209 |+def f(arg: Union["int", "str"] | None = None): # RUF013
210 210 | pass
211 211 |
212 212 |


@@ -16,8 +16,8 @@ use crate::fs;
use crate::line_width::{LineLength, TabSize};
use crate::rule_selector::RuleSelector;
use crate::rules::{
copyright, flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins,
flake8_comprehensions, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins, flake8_comprehensions,
flake8_copyright, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_import_conventions, flake8_pytest_style, flake8_quotes, flake8_self,
flake8_tidy_imports, flake8_type_checking, flake8_unused_arguments, isort, mccabe, pep8_naming,
pycodestyle, pydocstyle, pyflakes, pylint,
@@ -75,14 +75,14 @@ pub struct Configuration {
pub flake8_bugbear: Option<flake8_bugbear::settings::Options>,
pub flake8_builtins: Option<flake8_builtins::settings::Options>,
pub flake8_comprehensions: Option<flake8_comprehensions::settings::Options>,
pub copyright: Option<copyright::settings::Options>,
pub flake8_copyright: Option<flake8_copyright::settings::Options>,
pub flake8_errmsg: Option<flake8_errmsg::settings::Options>,
pub flake8_gettext: Option<flake8_gettext::settings::Options>,
pub flake8_implicit_str_concat: Option<flake8_implicit_str_concat::settings::Options>,
pub flake8_import_conventions: Option<flake8_import_conventions::settings::Options>,
pub flake8_pytest_style: Option<flake8_pytest_style::settings::Options>,
pub flake8_quotes: Option<flake8_quotes::settings::Options>,
pub flake8_self: Option<flake8_self::settings::Options>,
pub flake8_gettext: Option<flake8_gettext::settings::Options>,
pub flake8_tidy_imports: Option<flake8_tidy_imports::options::Options>,
pub flake8_type_checking: Option<flake8_type_checking::settings::Options>,
pub flake8_unused_arguments: Option<flake8_unused_arguments::settings::Options>,
@@ -229,7 +229,7 @@ impl Configuration {
flake8_bugbear: options.flake8_bugbear,
flake8_builtins: options.flake8_builtins,
flake8_comprehensions: options.flake8_comprehensions,
copyright: options.copyright,
flake8_copyright: options.flake8_copyright,
flake8_errmsg: options.flake8_errmsg,
flake8_gettext: options.flake8_gettext,
flake8_implicit_str_concat: options.flake8_implicit_str_concat,
@@ -308,7 +308,7 @@ impl Configuration {
flake8_comprehensions: self
.flake8_comprehensions
.combine(config.flake8_comprehensions),
copyright: self.copyright.combine(config.copyright),
flake8_copyright: self.flake8_copyright.combine(config.flake8_copyright),
flake8_errmsg: self.flake8_errmsg.combine(config.flake8_errmsg),
flake8_gettext: self.flake8_gettext.combine(config.flake8_gettext),
flake8_implicit_str_concat: self


@@ -10,8 +10,8 @@ use crate::line_width::{LineLength, TabSize};
use crate::registry::Linter;
use crate::rule_selector::{prefix_to_selector, RuleSelector};
use crate::rules::{
copyright, flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins,
flake8_comprehensions, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins, flake8_comprehensions,
flake8_copyright, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_import_conventions, flake8_pytest_style, flake8_quotes, flake8_self,
flake8_tidy_imports, flake8_type_checking, flake8_unused_arguments, isort, mccabe, pep8_naming,
pycodestyle, pydocstyle, pyflakes, pylint,
@@ -96,7 +96,7 @@ impl Default for Settings {
flake8_bugbear: flake8_bugbear::settings::Settings::default(),
flake8_builtins: flake8_builtins::settings::Settings::default(),
flake8_comprehensions: flake8_comprehensions::settings::Settings::default(),
copyright: copyright::settings::Settings::default(),
flake8_copyright: flake8_copyright::settings::Settings::default(),
flake8_errmsg: flake8_errmsg::settings::Settings::default(),
flake8_implicit_str_concat: flake8_implicit_str_concat::settings::Settings::default(),
flake8_import_conventions: flake8_import_conventions::settings::Settings::default(),


@@ -16,8 +16,8 @@ use ruff_macros::CacheKey;
use crate::registry::{Rule, RuleNamespace, RuleSet, INCOMPATIBLE_CODES};
use crate::rule_selector::{RuleSelector, Specificity};
use crate::rules::{
copyright, flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins,
flake8_comprehensions, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins, flake8_comprehensions,
flake8_copyright, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_import_conventions, flake8_pytest_style, flake8_quotes, flake8_self,
flake8_tidy_imports, flake8_type_checking, flake8_unused_arguments, isort, mccabe, pep8_naming,
pycodestyle, pydocstyle, pyflakes, pylint,
@@ -112,11 +112,11 @@ pub struct Settings {
pub flake8_bugbear: flake8_bugbear::settings::Settings,
pub flake8_builtins: flake8_builtins::settings::Settings,
pub flake8_comprehensions: flake8_comprehensions::settings::Settings,
pub copyright: copyright::settings::Settings,
pub flake8_copyright: flake8_copyright::settings::Settings,
pub flake8_errmsg: flake8_errmsg::settings::Settings,
pub flake8_gettext: flake8_gettext::settings::Settings,
pub flake8_implicit_str_concat: flake8_implicit_str_concat::settings::Settings,
pub flake8_import_conventions: flake8_import_conventions::settings::Settings,
pub flake8_gettext: flake8_gettext::settings::Settings,
pub flake8_pytest_style: flake8_pytest_style::settings::Settings,
pub flake8_quotes: flake8_quotes::settings::Settings,
pub flake8_self: flake8_self::settings::Settings,
@@ -210,9 +210,9 @@ impl Settings {
.flake8_comprehensions
.map(flake8_comprehensions::settings::Settings::from)
.unwrap_or_default(),
copyright: config
.copyright
.map(copyright::settings::Settings::try_from)
flake8_copyright: config
.flake8_copyright
.map(flake8_copyright::settings::Settings::try_from)
.transpose()?
.unwrap_or_default(),
flake8_errmsg: config


@@ -8,8 +8,8 @@ use ruff_macros::ConfigurationOptions;
use crate::line_width::{LineLength, TabSize};
use crate::rule_selector::RuleSelector;
use crate::rules::{
copyright, flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins,
flake8_comprehensions, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins, flake8_comprehensions,
flake8_copyright, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
flake8_import_conventions, flake8_pytest_style, flake8_quotes, flake8_self,
flake8_tidy_imports, flake8_type_checking, flake8_unused_arguments, isort, mccabe, pep8_naming,
pycodestyle, pydocstyle, pyflakes, pylint,
@@ -499,7 +499,7 @@ pub struct Options {
pub flake8_comprehensions: Option<flake8_comprehensions::settings::Options>,
#[option_group]
/// Options for the `copyright` plugin.
pub copyright: Option<copyright::settings::Options>,
pub flake8_copyright: Option<flake8_copyright::settings::Options>,
#[option_group]
/// Options for the `flake8-errmsg` plugin.
pub flake8_errmsg: Option<flake8_errmsg::settings::Options>,


@@ -1,91 +1,5 @@
# Ruff Micro-benchmarks
# Ruff Benchmarks
Benchmarks for the different Ruff-tools.
The `ruff_benchmark` crate benchmarks the linter and the formatter on individual files.
## Run Benchmark
You can run the benchmarks with
```shell
cargo benchmark
```
## Benchmark driven Development
You can use `--save-baseline=<name>` to store an initial baseline benchmark (e.g. on `main`) and
then use `--benchmark=<name>` to compare against that benchmark. Criterion will print a message
telling you if the benchmark improved/regressed compared to that baseline.
```shell
# Run once on your "baseline" code
cargo benchmark --save-baseline=main
# Then iterate with
cargo benchmark --baseline=main
```
## PR Summary
You can use `--save-baseline` and `critcmp` to get a pretty comparison between two recordings.
This is useful to illustrate the improvements of a PR.
```shell
# On main
cargo benchmark --save-baseline=main
# After applying your changes
cargo benchmark --save-baseline=pr
critcmp main pr
```
You must install [`critcmp`](https://github.com/BurntSushi/critcmp) for the comparison.
```bash
cargo install critcmp
```
## Tips
- Use `cargo benchmark <filter>` to only run specific benchmarks. For example: `cargo benchmark linter/pydantic`
to only run the pydantic tests.
- Use `cargo benchmark --quiet` for a more cleaned up output (without statistical relevance)
- Use `cargo benchmark --quick` to get faster results (more prone to noise)
## Profiling
### Linux
Install `perf` and build `ruff_benchmark` with the `release-debug` profile and then run it with perf
```shell
cargo bench -p ruff_benchmark --no-run --profile=release-debug && perf record -g -F 9999 cargo bench -p ruff_benchmark --profile=release-debug -- --profile-time=1
```
Then convert the recorded profile
```shell
perf script -F +pid > /tmp/test.perf
```
You can now view the converted file with [firefox profiler](https://profiler.firefox.com/)
You can find a more in-depth guide [here](https://profiler.firefox.com/docs/#/./guide-perf-profiling)
### Mac
Install [`cargo-instruments`](https://crates.io/crates/cargo-instruments):
```shell
cargo install cargo-instruments
```
Then run the profiler with
```shell
cargo instruments -t time --bench linter --profile release-debug -p ruff_benchmark -- --profile-time=1
```
- `-t`: Specifies what to profile. Useful options are `time` to profile wall time and `alloc`
to profile allocations.
- You may want to pass an additional filter to run a single test file.
See [CONTRIBUTING.md](../../CONTRIBUTING.md) for how to use these benchmarks.

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.273"
version = "0.0.274"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -7,6 +7,7 @@ use std::time::Instant;
use anyhow::Result;
use colored::Colorize;
use ignore::Error;
use itertools::Itertools;
use log::{debug, error, warn};
#[cfg(not(target_family = "wasm"))]
use rayon::prelude::*;
@@ -80,15 +81,18 @@ pub(crate) fn run(
// Load the caches.
let caches = bool::from(cache).then(|| {
package_roots
.par_iter()
.map(|(package_root, _)| {
let settings = resolver.resolve_all(package_root, pyproject_config);
.iter()
.map(|(package, package_root)| package_root.unwrap_or(package))
.unique()
.par_bridge()
.map(|cache_root| {
let settings = resolver.resolve_all(cache_root, pyproject_config);
let cache = Cache::open(
&settings.cli.cache_dir,
package_root.to_path_buf(),
cache_root.to_path_buf(),
&settings.lib,
);
(&**package_root, cache)
(cache_root, cache)
})
.collect::<HashMap<&Path, Cache>>()
});
@@ -106,10 +110,16 @@ pub(crate) fn run(
.and_then(|package| *package);
let settings = resolver.resolve_all(path, pyproject_config);
let package_root = package.unwrap_or_else(|| path.parent().unwrap_or(path));
let cache = caches
.as_ref()
.map(|caches| caches.get(&package_root).unwrap());
let cache_root = package.unwrap_or_else(|| path.parent().unwrap_or(path));
let cache = caches.as_ref().and_then(|caches| {
if let Some(cache) = caches.get(&cache_root) {
Some(cache)
} else {
debug!("No cache found for {}", cache_root.display());
None
}
});
lint_path(path, package, settings, cache, noqa, autofix).map_err(|e| {
(Some(path.to_owned()), {

View File

@@ -528,7 +528,22 @@ impl<Context> Format<Context> for LineSuffixBoundary {
/// use ruff_formatter::prelude::*;
/// use ruff_formatter::{format, write, LineWidth};
///
/// enum SomeLabelId {}
/// #[derive(Debug, Copy, Clone)]
/// enum MyLabels {
/// Main
/// }
///
/// impl tag::LabelDefinition for MyLabels {
/// fn value(&self) -> u64 {
/// *self as u64
/// }
///
/// fn name(&self) -> &'static str {
/// match self {
/// Self::Main => "Main"
/// }
/// }
/// }
///
/// # fn main() -> FormatResult<()> {
/// let formatted = format!(
@@ -537,24 +552,24 @@ impl<Context> Format<Context> for LineSuffixBoundary {
/// let mut recording = f.start_recording();
/// write!(recording, [
/// labelled(
/// LabelId::of::<SomeLabelId>(),
/// LabelId::of(MyLabels::Main),
/// &text("'I have a label'")
/// )
/// ])?;
///
/// let recorded = recording.stop();
///
/// let is_labelled = recorded.first().map_or(false, |element| element.has_label(LabelId::of::<SomeLabelId>()));
/// let is_labelled = recorded.first().map_or(false, |element| element.has_label(LabelId::of(MyLabels::Main)));
///
/// if is_labelled {
/// write!(f, [text(" has label SomeLabelId")])
/// write!(f, [text(" has label `Main`")])
/// } else {
/// write!(f, [text(" doesn't have label SomeLabelId")])
/// write!(f, [text(" doesn't have label `Main`")])
/// }
/// })]
/// )?;
///
/// assert_eq!("'I have a label' has label SomeLabelId", formatted.print()?.as_code());
/// assert_eq!("'I have a label' has label `Main`", formatted.print()?.as_code());
/// # Ok(())
/// # }
/// ```

View File

@@ -1,8 +1,5 @@
use crate::format_element::PrintMode;
use crate::{GroupId, TextSize};
#[cfg(debug_assertions)]
use std::any::type_name;
use std::any::TypeId;
use std::cell::Cell;
use std::num::NonZeroU8;
@@ -235,37 +232,48 @@ impl Align {
}
}
#[derive(Eq, PartialEq, Copy, Clone)]
#[derive(Debug, Eq, Copy, Clone)]
pub struct LabelId {
id: TypeId,
value: u64,
#[cfg(debug_assertions)]
label: &'static str,
name: &'static str,
}
#[cfg(debug_assertions)]
impl std::fmt::Debug for LabelId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(self.label)
}
}
impl PartialEq for LabelId {
fn eq(&self, other: &Self) -> bool {
let is_equal = self.value == other.value;
#[cfg(not(debug_assertions))]
impl std::fmt::Debug for LabelId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
std::write!(f, "#{:?}", self.id)
#[cfg(debug_assertions)]
{
if is_equal {
assert_eq!(self.name, other.name, "Two `LabelId`s with different names have the same `value`. Are you mixing labels of two different `LabelDefinition` or are the values returned by the `LabelDefinition` not unique?");
}
}
is_equal
}
}
impl LabelId {
pub fn of<T: ?Sized + 'static>() -> Self {
pub fn of<T: LabelDefinition>(label: T) -> Self {
Self {
id: TypeId::of::<T>(),
value: label.value(),
#[cfg(debug_assertions)]
label: type_name::<T>(),
name: label.name(),
}
}
}
/// Defines the valid labels of a language. You want to have at most one implementation per formatter
/// project.
pub trait LabelDefinition {
/// Returns the `u64` uniquely identifying this specific label.
fn value(&self) -> u64;
/// Returns the name of the label that is shown in debug builds.
fn name(&self) -> &'static str;
}
#[derive(Clone, Copy, Eq, PartialEq, Debug)]
pub enum VerbatimKind {
Bogus,

View File

@@ -192,7 +192,8 @@ fn rules_by_prefix(
for (code, rule) in rules {
// Nursery rules have to be explicitly selected, so we ignore them when looking at
// prefixes.
// prefix-level selectors (e.g., `--select SIM10`), but add the rule itself under
// its fully-qualified code (e.g., `--select SIM101`).
if is_nursery(&rule.group) {
rules_by_prefix.insert(code.clone(), vec![(rule.path.clone(), rule.attrs.clone())]);
continue;
@@ -329,10 +330,17 @@ fn generate_iter_impl(
) -> TokenStream {
let mut linter_into_iter_match_arms = quote!();
for (linter, map) in linter_to_rules {
let rule_paths = map.values().map(|Rule { attrs, path, .. }| {
let rule_name = path.segments.last().unwrap();
quote!(#(#attrs)* Rule::#rule_name)
});
let rule_paths = map
.values()
.filter(|rule| {
// Nursery rules have to be explicitly selected, so we ignore them when looking at
// linter-level selectors (e.g., `--select SIM`).
!is_nursery(&rule.group)
})
.map(|Rule { attrs, path, .. }| {
let rule_name = path.segments.last().unwrap();
quote!(#(#attrs)* Rule::#rule_name)
});
linter_into_iter_match_arms.extend(quote! {
Linter::#linter => vec![#(#rule_paths,)*].into_iter(),
});

View File

@@ -3680,7 +3680,7 @@ impl AnyNodeRef<'_> {
/// Compares two any node refs by their pointers (referential equality).
pub fn ptr_eq(self, other: AnyNodeRef) -> bool {
self.as_ptr().eq(&other.as_ptr())
self.as_ptr().eq(&other.as_ptr()) && self.kind() == other.kind()
}
/// Returns the node's [`kind`](NodeKind) that has no data associated and is [`Copy`].

View File

@@ -2,7 +2,7 @@
pub mod preorder;
use rustpython_ast::{ArgWithDefault, Decorator};
use rustpython_ast::Decorator;
use rustpython_parser::ast::{
self, Alias, Arg, Arguments, BoolOp, CmpOp, Comprehension, Constant, ExceptHandler, Expr,
ExprContext, Keyword, MatchCase, Operator, Pattern, Stmt, UnaryOp, WithItem,
@@ -61,9 +61,6 @@ pub trait Visitor<'a> {
fn visit_arg(&mut self, arg: &'a Arg) {
walk_arg(self, arg);
}
fn visit_arg_with_default(&mut self, arg_with_default: &'a ArgWithDefault) {
walk_arg_with_default(self, arg_with_default);
}
fn visit_keyword(&mut self, keyword: &'a Keyword) {
walk_keyword(self, keyword);
}
@@ -99,10 +96,10 @@ pub fn walk_stmt<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, stmt: &'a Stmt) {
returns,
..
}) => {
visitor.visit_arguments(args);
for decorator in decorator_list {
visitor.visit_decorator(decorator);
}
visitor.visit_arguments(args);
for expr in returns {
visitor.visit_annotation(expr);
}
@@ -115,10 +112,10 @@ pub fn walk_stmt<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, stmt: &'a Stmt) {
returns,
..
}) => {
visitor.visit_arguments(args);
for decorator in decorator_list {
visitor.visit_decorator(decorator);
}
visitor.visit_arguments(args);
for expr in returns {
visitor.visit_annotation(expr);
}
@@ -131,15 +128,15 @@ pub fn walk_stmt<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, stmt: &'a Stmt) {
decorator_list,
..
}) => {
for decorator in decorator_list {
visitor.visit_decorator(decorator);
}
for expr in bases {
visitor.visit_expr(expr);
}
for keyword in keywords {
visitor.visit_keyword(keyword);
}
for decorator in decorator_list {
visitor.visit_decorator(decorator);
}
visitor.visit_body(body);
}
Stmt::Return(ast::StmtReturn {
@@ -180,10 +177,10 @@ pub fn walk_stmt<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, stmt: &'a Stmt) {
value,
..
}) => {
visitor.visit_annotation(annotation);
if let Some(expr) = value {
visitor.visit_expr(expr);
}
visitor.visit_annotation(annotation);
visitor.visit_expr(target);
}
Stmt::For(ast::StmtFor {
@@ -606,17 +603,34 @@ pub fn walk_except_handler<'a, V: Visitor<'a> + ?Sized>(
}
pub fn walk_arguments<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, arguments: &'a Arguments) {
// Defaults are evaluated before annotations.
for arg in &arguments.posonlyargs {
visitor.visit_arg_with_default(arg);
if let Some(default) = &arg.default {
visitor.visit_expr(default);
}
}
for arg in &arguments.args {
visitor.visit_arg_with_default(arg);
if let Some(default) = &arg.default {
visitor.visit_expr(default);
}
}
for arg in &arguments.kwonlyargs {
if let Some(default) = &arg.default {
visitor.visit_expr(default);
}
}
for arg in &arguments.posonlyargs {
visitor.visit_arg(&arg.def);
}
for arg in &arguments.args {
visitor.visit_arg(&arg.def);
}
if let Some(arg) = &arguments.vararg {
visitor.visit_arg(arg);
}
for arg in &arguments.kwonlyargs {
visitor.visit_arg_with_default(arg);
visitor.visit_arg(&arg.def);
}
if let Some(arg) = &arguments.kwarg {
visitor.visit_arg(arg);
@@ -629,16 +643,6 @@ pub fn walk_arg<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, arg: &'a Arg) {
}
}
pub fn walk_arg_with_default<'a, V: Visitor<'a> + ?Sized>(
visitor: &mut V,
arg_with_default: &'a ArgWithDefault,
) {
visitor.visit_arg(&arg_with_default.def);
if let Some(expr) = &arg_with_default.default {
visitor.visit_expr(expr);
}
}
pub fn walk_keyword<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, keyword: &'a Keyword) {
visitor.visit_expr(&keyword.value);
}

View File

@@ -0,0 +1,29 @@
(
a
# comment
.b # trailing comment
)
(
a
# comment
.b # trailing dot comment # trailing identifier comment
)
(
a
# comment
.b # trailing identifier comment
)
(
a
# comment
. # trailing dot comment
# in between
b # trailing identifier comment
)
aaaaaaaaaaaaaaaaaaaaa.lllllllllllllllllllllllllllloooooooooong.chaaaaaaaaaaaaaaaaaaaaaaiiiiiiiiiiiiiiiiiiiiiiinnnnnnnn.ooooooooooooooooooooooooofffffffff.aaaaaaaaaattr

View File

@@ -52,3 +52,138 @@ if (
ccccccccccc
):
pass
# Left only breaks
if [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
] & aaaaaaaaaaaaaaaaaaaaaaaaaa:
...
if [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
] & aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:
...
# Right only can break
if aaaaaaaaaaaaaaaaaaaaaaaaaa & [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
]:
...
if aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa & [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
]:
...
# Left or right can break
if [2222, 333] & [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
]:
...
if [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
] & [2222, 333]:
...
if [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
] & [fffffffffffffffff, gggggggggggggggggggg, hhhhhhhhhhhhhhhhhhhhh, iiiiiiiiiiiiiiii, jjjjjjjjjjjjj]:
...
if (
# comment
[
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
]
) & [
fffffffffffffffff,
gggggggggggggggggggg,
hhhhhhhhhhhhhhhhhhhhh,
iiiiiiiiiiiiiiii,
jjjjjjjjjjjjj,
]:
pass
...
# Nesting
if (aaaa + b) & [
fffffffffffffffff,
gggggggggggggggggggg,
hhhhhhhhhhhhhhhhhhhhh,
iiiiiiiiiiiiiiii,
jjjjjjjjjjjjj,
]:
...
if [
fffffffffffffffff,
gggggggggggggggggggg,
hhhhhhhhhhhhhhhhhhhhh,
iiiiiiiiiiiiiiii,
jjjjjjjjjjjjj,
] & (a + b):
...
if [
fffffffffffffffff,
gggggggggggggggggggg,
hhhhhhhhhhhhhhhhhhhhh,
iiiiiiiiiiiiiiii,
jjjjjjjjjjjjj,
] & (
# comment
a
+ b
):
...
if (
[
fffffffffffffffff,
gggggggggggggggggggg,
hhhhhhhhhhhhhhhhhhhhh,
iiiiiiiiiiiiiiii,
jjjjjjjjjjjjj,
]
&
# comment
a + b
):
...

View File

@@ -0,0 +1,64 @@
if (
self._proc
# has the child process finished?
and self._returncode
# the child process has finished, but the
# transport hasn't been notified yet?
and self._proc.poll()
):
pass
if (
self._proc
and self._returncode
and self._proc.poll()
and self._proc
and self._returncode
and self._proc.poll()
):
...
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
and aaaaaaaaaaaaaaaaa
and aaaaaaaaaaaaaaaaaaaaaa
and aaaaaaaaaaaaaaaaaaaaaaaa
and aaaaaaaaaaaaaaaaaaaaaaaaaa
and aaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
...
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaas
and aaaaaaaaaaaaaaaaa
):
...
if [2222, 333] and [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
]:
...
if [
aaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccc,
dddddddddddddddddddd,
eeeeeeeeee,
] and [2222, 333]:
pass
# Break right only applies for boolean operations with a left and right side
if (
aaaaaaaaaaaaaaaaaaaaaaaaaa
and bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
and ccccccccccccccccc
and [dddddddddddddd, eeeeeeeeee, fffffffffffffff]
):
pass

View File

@@ -0,0 +1,82 @@
# Handle comments both when lower and upper exist and when they don't
a1 = "a"[
# a
1 # b
: # c
2 # d
]
a2 = "a"[
# a
# b
: # c
# d
]
# Check all places where comments can exist
b1 = "b"[ # a
# b
1 # c
# d
: # e
# f
2 # g
# h
: # i
# j
3 # k
# l
]
# Handle the spacing from the colon correctly with upper leading comments
c1 = "c"[
1
: # e
# f
2
]
c2 = "c"[
1
: # e
2
]
c3 = "c"[
1
:
# f
2
]
c4 = "c"[
1
: # f
2
]
# End of line comments
d1 = "d"[ # comment
:
]
d2 = "d"[ # comment
1:
]
d3 = "d"[
1 # comment
:
]
# Spacing around the colon(s)
def a():
...
e00 = "e"[:]
e01 = "e"[:1]
e02 = "e"[: a()]
e10 = "e"[1:]
e11 = "e"[1:1]
e12 = "e"[1 : a()]
e20 = "e"[a() :]
e21 = "e"[a() : 1]
e22 = "e"[a() : a()]
e200 = "e"[a() :: ]
e201 = "e"[a() :: 1]
e202 = "e"[a() :: a()]
e210 = "e"[a() : 1 :]

View File

@@ -0,0 +1,138 @@
if not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb:
pass
a = True
not a
b = 10
-b
+b
## Leading operand comments
if not (
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
if ~(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb):
pass
if -(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb):
pass
if +(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb):
pass
if (
not
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
if (
~
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb):
pass
if (
-
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb):
pass
if (
+
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb):
pass
## Parentheses
if (
# unary comment
not
# operand comment
(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
if (not (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
if aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa & (not (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
if (
not (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
& aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
pass
## Trailing operator comments
if (
not # comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
if (
~ # comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
if (
- # comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
if (
+ # comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
## Varia
if not \
a:
pass

View File

@@ -0,0 +1,34 @@
for x in y: # trailing test comment
pass # trailing last statement comment
# trailing for body comment
# leading else comment
else: # trailing else comment
pass
# trailing else body comment
for aVeryLongNameThatSpillsOverToTheNextLineBecauseItIsExtremelyLongAndGoesOnAndOnAndOnAndOnAndOnAndOnAndOnAndOnAndOn in anotherVeryLongNameThatSpillsOverToTheNextLineBecauseItIsExtremelyLongAndGoesOnAndOnAndOnAndOnAndOnAndOnAndOnAndOnAndOn: # trailing comment
pass
else:
...
for (
x,
y,
) in z: # comment
...
# remove brackets around x,y but keep them around z,w
for (x, y) in (z, w):
...
# type comment
for x in (): # type: int
...

View File

@@ -90,3 +90,144 @@ else:
# Regression test for https://github.com/python/cpython/blob/7199584ac8632eab57612f595a7162ab8d2ebbc0/Lib/warnings.py#L513
def f(arg1=1, *, kwonlyarg1, kwonlyarg2=2):
pass
# Regression test for https://github.com/astral-sh/ruff/issues/5176#issuecomment-1598171989
def foo(
b=3 + 2 # comment
):
...
# Comments on the slash or the star, both of which don't have a node
def f11(
a,
# positional only comment, leading
/, # positional only comment, trailing
b,
):
pass
def f12(
a=1,
# positional only comment, leading
/, # positional only comment, trailing
b=2,
):
pass
def f13(
a,
# positional only comment, leading
/, # positional only comment, trailing
):
pass
def f21(
a=1,
# keyword only comment, leading
*, # keyword only comment, trailing
b=2,
):
pass
def f22(
a,
# keyword only comment, leading
*, # keyword only comment, trailing
b,
):
pass
def f23(
a,
# keyword only comment, leading
*args, # keyword only comment, trailing
b,
):
pass
def f24(
# keyword only comment, leading
*, # keyword only comment, trailing
a
):
pass
def f31(
a=1,
# positional only comment, leading
/, # positional only comment, trailing
b=2,
# keyword only comment, leading
*, # keyword only comment, trailing
c=3,
):
pass
def f32(
a,
# positional only comment, leading
/, # positional only comment, trailing
b,
# keyword only comment, leading
*, # keyword only comment, trailing
c,
):
pass
def f33(
a,
# positional only comment, leading
/, # positional only comment, trailing
# keyword only comment, leading
*args, # keyword only comment, trailing
c,
):
pass
def f34(
a,
# positional only comment, leading
/, # positional only comment, trailing
# keyword only comment, leading
*, # keyword only comment, trailing
c,
):
pass
def f35(
# keyword only comment, leading
*, # keyword only comment, trailing
c,
):
pass
# Multiple trailing comments
def f41(
a,
/ # 1
, # 2
# 3
* # 4
, # 5
c,
):
pass
# Multiple trailing comments strangely placed. The goal here is only stable formatting;
# the comments are placed too strangely to keep their relative position intact
def f42(
a,
/ # 1
# 2
, # 3
# 4
* # 5
# 6
, # 7
c,
):
pass

View File

@@ -35,3 +35,30 @@ elif aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
else:
...
# Regression test: Don't drop the trailing comment by associating it with the elif
# instead of the else.
# Originally found in https://github.com/python/cpython/blob/ab3823a97bdeefb0266b3c8d493f7f6223ce3686/Lib/dataclasses.py#L539
if "if 1":
pass
elif "elif 1":
pass
# Don't drop this comment 1
x = 1
if "if 2":
pass
elif "elif 2":
pass
else:
pass
# Don't drop this comment 2
x = 2
if "if 3":
pass
else:
pass
# Don't drop this comment 3
x = 3

View File

@@ -63,7 +63,7 @@ pub fn format_and_debug_print(input: &str, cli: &Cli) -> Result<String> {
}
if cli.print_comments {
println!(
"{:?}",
"{:#?}",
formatted.context().comments().debug(SourceCode::new(input))
);
}

View File

@@ -26,7 +26,7 @@ impl Debug for DebugComment<'_> {
strut
.field("text", &self.comment.slice.text(self.source_code))
.field("position", &self.comment.position);
.field("position", &self.comment.line_position);
#[cfg(debug_assertions)]
strut.field("formatted", &self.comment.formatted.get());
@@ -177,7 +177,7 @@ impl Debug for DebugNodeCommentSlice<'_> {
#[cfg(test)]
mod tests {
use crate::comments::map::MultiMap;
use crate::comments::{CommentTextPosition, Comments, CommentsMap, SourceComment};
use crate::comments::{CommentLinePosition, Comments, CommentsMap, SourceComment};
use insta::assert_debug_snapshot;
use ruff_formatter::SourceCode;
use ruff_python_ast::node::AnyNode;
@@ -208,7 +208,7 @@ break;
continue_statement.as_ref().into(),
SourceComment::new(
source_code.slice(TextRange::at(TextSize::new(0), TextSize::new(17))),
CommentTextPosition::OwnLine,
CommentLinePosition::OwnLine,
),
);
@@ -216,7 +216,7 @@ break;
continue_statement.as_ref().into(),
SourceComment::new(
source_code.slice(TextRange::at(TextSize::new(28), TextSize::new(10))),
CommentTextPosition::EndOfLine,
CommentLinePosition::EndOfLine,
),
);
@@ -224,7 +224,7 @@ break;
break_statement.as_ref().into(),
SourceComment::new(
source_code.slice(TextRange::at(TextSize::new(39), TextSize::new(15))),
CommentTextPosition::OwnLine,
CommentLinePosition::OwnLine,
),
);

View File

@@ -136,7 +136,7 @@ impl Format<PyFormatContext<'_>> for FormatTrailingComments<'_> {
{
let slice = trailing.slice();
has_trailing_own_line_comment |= trailing.position().is_own_line();
has_trailing_own_line_comment |= trailing.line_position().is_own_line();
if has_trailing_own_line_comment {
let lines_before_comment = lines_before(slice.start(), f.context().contents());
@@ -208,7 +208,7 @@ impl Format<PyFormatContext<'_>> for FormatDanglingComments<'_> {
.iter()
.filter(|comment| comment.is_unformatted())
{
if first && comment.position().is_end_of_line() {
if first && comment.line_position().is_end_of_line() {
write!(f, [space(), space()])?;
}

View File

@@ -117,14 +117,14 @@ pub(crate) struct SourceComment {
slice: SourceCodeSlice,
/// Whether the comment has been formatted or not.
formatted: Cell<bool>,
position: CommentTextPosition,
line_position: CommentLinePosition,
}
impl SourceComment {
fn new(slice: SourceCodeSlice, position: CommentTextPosition) -> Self {
fn new(slice: SourceCodeSlice, position: CommentLinePosition) -> Self {
Self {
slice,
position,
line_position: position,
formatted: Cell::new(false),
}
}
@@ -135,8 +135,8 @@ impl SourceComment {
&self.slice
}
pub(crate) const fn position(&self) -> CommentTextPosition {
self.position
pub(crate) const fn line_position(&self) -> CommentLinePosition {
self.line_position
}
/// Marks the comment as formatted
@@ -163,7 +163,7 @@ impl SourceComment {
/// The position of a comment in the source text.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub(crate) enum CommentTextPosition {
pub(crate) enum CommentLinePosition {
/// A comment that is on the same line as the preceding token and is separated by at least one line break from the following token.
///
/// # Examples
@@ -176,7 +176,7 @@ pub(crate) enum CommentTextPosition {
/// ```
///
/// `# comment` is an end of line comments because it is separated by at least one line break from the following token `b`.
/// Comments that not only end, but also start on a new line are [`OwnLine`](CommentTextPosition::OwnLine) comments.
/// Comments that not only end, but also start on a new line are [`OwnLine`](CommentLinePosition::OwnLine) comments.
EndOfLine,
/// A Comment that is separated by at least one line break from the preceding token.
@@ -193,13 +193,13 @@ pub(crate) enum CommentTextPosition {
OwnLine,
}
impl CommentTextPosition {
impl CommentLinePosition {
pub(crate) const fn is_own_line(self) -> bool {
matches!(self, CommentTextPosition::OwnLine)
matches!(self, CommentLinePosition::OwnLine)
}
pub(crate) const fn is_end_of_line(self) -> bool {
matches!(self, CommentTextPosition::EndOfLine)
matches!(self, CommentLinePosition::EndOfLine)
}
}
@@ -335,7 +335,7 @@ impl<'a> Comments<'a> {
{
self.trailing_comments(node)
.iter()
.any(|comment| comment.position().is_own_line())
.any(|comment| comment.line_position().is_own_line())
}
/// Returns an iterator over the [leading](self#leading-comments) and [trailing comments](self#trailing-comments) of `node`.

View File

@@ -1,38 +1,45 @@
use std::cmp::Ordering;
use ruff_text_size::{TextRange, TextSize};
use rustpython_parser::ast::Ranged;
use ruff_python_ast::node::AnyNodeRef;
use crate::comments::visitor::{CommentPlacement, DecoratedComment};
use crate::expression::expr_slice::{assign_comment_in_slice, ExprSliceCommentSection};
use crate::other::arguments::{
assign_argument_separator_comment_placement, find_argument_separators,
};
use crate::trivia::{first_non_trivia_token_rev, SimpleTokenizer, Token, TokenKind};
use ruff_python_ast::node::{AnyNodeRef, AstNode};
use ruff_python_ast::source_code::Locator;
use ruff_python_ast::whitespace;
use ruff_python_whitespace::{PythonWhitespace, UniversalNewlines};
use crate::comments::visitor::{CommentPlacement, DecoratedComment};
use crate::comments::CommentTextPosition;
use crate::trivia::{SimpleTokenizer, Token, TokenKind};
use ruff_text_size::TextRange;
use rustpython_parser::ast::{Expr, ExprSlice, Ranged};
use std::cmp::Ordering;
/// Implements the custom comment placement logic.
pub(super) fn place_comment<'a>(
comment: DecoratedComment<'a>,
mut comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
handle_in_between_except_handlers_or_except_handler_and_else_or_finally_comment(
comment, locator,
)
.or_else(|comment| handle_match_comment(comment, locator))
.or_else(|comment| handle_in_between_bodies_own_line_comment(comment, locator))
.or_else(|comment| handle_in_between_bodies_end_of_line_comment(comment, locator))
.or_else(|comment| handle_trailing_body_comment(comment, locator))
.or_else(handle_trailing_end_of_line_body_comment)
.or_else(|comment| handle_trailing_end_of_line_condition_comment(comment, locator))
.or_else(|comment| {
handle_module_level_own_line_comment_before_class_or_function_comment(comment, locator)
})
.or_else(|comment| handle_positional_only_arguments_separator_comment(comment, locator))
.or_else(|comment| handle_trailing_binary_expression_left_or_operator_comment(comment, locator))
.or_else(handle_leading_function_with_decorators_comment)
.or_else(|comment| handle_dict_unpacking_comment(comment, locator))
static HANDLERS: &[for<'a> fn(DecoratedComment<'a>, &Locator) -> CommentPlacement<'a>] = &[
handle_in_between_except_handlers_or_except_handler_and_else_or_finally_comment,
handle_match_comment,
handle_in_between_bodies_own_line_comment,
handle_in_between_bodies_end_of_line_comment,
handle_trailing_body_comment,
handle_trailing_end_of_line_body_comment,
handle_trailing_end_of_line_condition_comment,
handle_module_level_own_line_comment_before_class_or_function_comment,
handle_arguments_separator_comment,
handle_trailing_binary_expression_left_or_operator_comment,
handle_leading_function_with_decorators_comment,
handle_dict_unpacking_comment,
handle_slice_comments,
handle_attribute_comment,
];
for handler in HANDLERS {
comment = match handler(comment, locator) {
CommentPlacement::Default(comment) => comment,
placement => return placement,
};
}
CommentPlacement::Default(comment)
}
/// Handles leading comments in front of a match case or a trailing comment of the `match` statement.
@@ -49,7 +56,7 @@ fn handle_match_comment<'a>(
locator: &Locator,
) -> CommentPlacement<'a> {
// Must be an own line comment after the last statement in a match case
if comment.text_position().is_end_of_line() || comment.following_node().is_some() {
if comment.line_position().is_end_of_line() || comment.following_node().is_some() {
return CommentPlacement::Default(comment);
}
@@ -147,7 +154,7 @@ fn handle_in_between_except_handlers_or_except_handler_and_else_or_finally_comme
comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
if comment.text_position().is_end_of_line() || comment.following_node().is_none() {
if comment.line_position().is_end_of_line() || comment.following_node().is_none() {
return CommentPlacement::Default(comment);
}
@@ -201,7 +208,7 @@ fn handle_in_between_bodies_own_line_comment<'a>(
comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
if !comment.text_position().is_own_line() {
if !comment.line_position().is_own_line() {
return CommentPlacement::Default(comment);
}
@@ -310,7 +317,7 @@ fn handle_in_between_bodies_end_of_line_comment<'a>(
comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
if !comment.text_position().is_end_of_line() {
if !comment.line_position().is_end_of_line() {
return CommentPlacement::Default(comment);
}
@@ -393,12 +400,16 @@ fn handle_trailing_body_comment<'a>(
comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
if comment.text_position().is_end_of_line() {
if comment.line_position().is_end_of_line() {
return CommentPlacement::Default(comment);
}
// Only do something if the preceding node has a body (has indented statements).
let Some(last_child) = comment.preceding_node().and_then(last_child_in_body) else {
let Some(preceding_node) = comment.preceding_node() else {
return CommentPlacement::Default(comment);
};
let Some(last_child) = last_child_in_body(preceding_node) else {
return CommentPlacement::Default(comment);
};
@@ -411,8 +422,24 @@ fn handle_trailing_body_comment<'a>(
// the indent-level doesn't depend on the tab width (the indent level must be the same if the tab width is 1 or 8).
let comment_indentation_len = comment_indentation.len();
// Keep the comment on the entire statement in case it's a trailing comment
// ```python
// if "first if":
// pass
// elif "first elif":
// pass
// # Trailing if comment
// ```
// Here we keep the comment a trailing comment of the `if`
let Some(preceding_node_indentation) = whitespace::indentation_at_offset(locator, preceding_node.start()) else {
return CommentPlacement::Default(comment);
};
if comment_indentation_len == preceding_node_indentation.len() {
return CommentPlacement::Default(comment);
}
let mut current_child = last_child;
let mut parent_body = comment.preceding_node();
let mut parent_body = Some(preceding_node);
let mut grand_parent_body = None;
loop {
@@ -483,9 +510,12 @@ fn handle_trailing_body_comment<'a>(
/// if something.changed:
/// do.stuff() # trailing comment
/// ```
fn handle_trailing_end_of_line_body_comment(comment: DecoratedComment<'_>) -> CommentPlacement<'_> {
fn handle_trailing_end_of_line_body_comment<'a>(
comment: DecoratedComment<'a>,
_locator: &Locator,
) -> CommentPlacement<'a> {
// Must be an end of line comment
if comment.text_position().is_own_line() {
if comment.line_position().is_own_line() {
return CommentPlacement::Default(comment);
}
@@ -526,7 +556,7 @@ fn handle_trailing_end_of_line_condition_comment<'a>(
use ruff_python_ast::prelude::*;
// Must be an end of line comment
if comment.text_position().is_own_line() {
if comment.line_position().is_own_line() {
return CommentPlacement::Default(comment);
}
@@ -602,18 +632,11 @@ fn handle_trailing_end_of_line_condition_comment<'a>(
CommentPlacement::Default(comment)
}
/// Attaches comments for the positional-only arguments separator `/` as trailing comments to the
/// enclosing [`Arguments`] node.
/// Attaches comments for the positional only arguments separator `/` or the keywords only arguments
/// separator `*` as dangling comments to the enclosing [`Arguments`] node.
///
/// ```python
/// def test(
/// a,
/// # Positional arguments only after here
/// /, # trailing positional argument comment.
/// b,
/// ): pass
/// ```
fn handle_positional_only_arguments_separator_comment<'a>(
/// See [`assign_argument_separator_comment_placement`]
fn handle_arguments_separator_comment<'a>(
comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
@@ -621,55 +644,19 @@ fn handle_positional_only_arguments_separator_comment<'a>(
return CommentPlacement::Default(comment);
};
// Using the `/` without any leading arguments is a syntax error.
let Some(last_argument_or_default) = comment.preceding_node() else {
return CommentPlacement::Default(comment);
};
let is_last_positional_argument =
// If the preceding node is the identifier for the last positional argument (`a`).
// ```python
// def test(a, /, b): pass
// ```
are_same_optional(last_argument_or_default, arguments.posonlyargs.last().map(|arg| &arg.def))
// If the preceding node is the default for the last positional argument (`10`).
// ```python
// def test(a=10, /, b): pass
// ```
|| are_same_optional(last_argument_or_default, arguments
.posonlyargs.last().and_then(|arg| arg.default.as_deref()));
if !is_last_positional_argument {
return CommentPlacement::Default(comment);
let (slash, star) = find_argument_separators(locator.contents(), arguments);
let comment_range = comment.slice().range();
let placement = assign_argument_separator_comment_placement(
slash.as_ref(),
star.as_ref(),
comment_range,
comment.line_position(),
);
if placement.is_some() {
return CommentPlacement::dangling(comment.enclosing_node(), comment);
}
let trivia_end = comment
.following_node()
.map_or(arguments.end(), |following| following.start());
let trivia_range = TextRange::new(last_argument_or_default.end(), trivia_end);
if let Some(slash_offset) = find_pos_only_slash_offset(trivia_range, locator) {
let comment_start = comment.slice().range().start();
let is_slash_comment = match comment.text_position() {
CommentTextPosition::EndOfLine => {
let preceding_end_line = locator.line_end(last_argument_or_default.end());
let slash_comments_start = preceding_end_line.min(slash_offset);
comment_start >= slash_comments_start
&& locator.line_end(slash_offset) > comment_start
}
CommentTextPosition::OwnLine => comment_start < slash_offset,
};
if is_slash_comment {
CommentPlacement::dangling(comment.enclosing_node(), comment)
} else {
CommentPlacement::Default(comment)
}
} else {
// Should not happen, but let's go with it
CommentPlacement::Default(comment)
}
CommentPlacement::Default(comment)
}
/// Handles comments between the left side and the operator of a binary expression (trailing comments of the left),
@@ -721,7 +708,7 @@ fn handle_trailing_binary_expression_left_or_operator_comment<'a>(
// )
// ```
CommentPlacement::trailing(AnyNodeRef::from(binary_expression.left.as_ref()), comment)
} else if comment.text_position().is_end_of_line() {
} else if comment.line_position().is_end_of_line() {
// Is the operator on its own line.
if locator.contains_line_break(TextRange::new(
binary_expression.left.end(),
@@ -810,7 +797,7 @@ fn handle_module_level_own_line_comment_before_class_or_function_comment<'a>(
locator: &Locator,
) -> CommentPlacement<'a> {
// Only applies for own line comments on the module level...
if !comment.text_position().is_own_line() || !comment.enclosing_node().is_module() {
if !comment.line_position().is_own_line() || !comment.enclosing_node().is_module() {
return CommentPlacement::Default(comment);
}
@@ -839,33 +826,85 @@ fn handle_module_level_own_line_comment_before_class_or_function_comment<'a>(
}
}
/// Finds the offset of the `/` that separates the positional only and arguments from the other arguments.
/// Returns `None` if the positional only separator `/` isn't present in the specified range.
fn find_pos_only_slash_offset(
between_arguments_range: TextRange,
/// Handles the attaching comments left or right of the colon in a slice as trailing comment of the
/// preceding node or leading comment of the following node respectively.
/// ```python
/// a = "input"[
/// 1 # c
/// # d
/// :2
/// ]
/// ```
fn handle_slice_comments<'a>(
comment: DecoratedComment<'a>,
locator: &Locator,
) -> Option<TextSize> {
let mut tokens =
SimpleTokenizer::new(locator.contents(), between_arguments_range).skip_trivia();
if let Some(comma) = tokens.next() {
debug_assert_eq!(comma.kind(), TokenKind::Comma);
if let Some(maybe_slash) = tokens.next() {
if maybe_slash.kind() == TokenKind::Slash {
return Some(maybe_slash.start());
) -> CommentPlacement<'a> {
let expr_slice = match comment.enclosing_node() {
AnyNodeRef::ExprSlice(expr_slice) => expr_slice,
AnyNodeRef::ExprSubscript(expr_subscript) => {
if expr_subscript.value.end() < expr_subscript.slice.start() {
if let Expr::Slice(expr_slice) = expr_subscript.slice.as_ref() {
expr_slice
} else {
return CommentPlacement::Default(comment);
}
} else {
return CommentPlacement::Default(comment);
}
debug_assert_eq!(
maybe_slash.kind(),
TokenKind::RParen,
"{:?}",
maybe_slash.kind()
);
}
_ => return CommentPlacement::Default(comment),
};
let ExprSlice {
range: _,
lower,
upper,
step,
} = expr_slice;
// Check for `foo[ # comment`, but only if they are on the same line
let after_lbracket = matches!(
first_non_trivia_token_rev(comment.slice().start(), locator.contents()),
Some(Token {
kind: TokenKind::LBracket,
..
})
);
if comment.line_position().is_end_of_line() && after_lbracket {
// Keep comments after the opening bracket there by formatting them outside the
// soft block indent
// ```python
// "a"[ # comment
// 1:
// ]
// ```
debug_assert!(
matches!(comment.enclosing_node(), AnyNodeRef::ExprSubscript(_)),
"{:?}",
comment.enclosing_node()
);
return CommentPlacement::dangling(comment.enclosing_node(), comment);
}
None
let assignment =
assign_comment_in_slice(comment.slice().range(), locator.contents(), expr_slice);
let node = match assignment {
ExprSliceCommentSection::Lower => lower,
ExprSliceCommentSection::Upper => upper,
ExprSliceCommentSection::Step => step,
};
if let Some(node) = node {
if comment.slice().start() < node.start() {
CommentPlacement::leading(node.as_ref().into(), comment)
} else {
// If a trailing comment is an end of line comment that's fine because we have a node
// ahead of it
CommentPlacement::trailing(node.as_ref().into(), comment)
}
} else {
CommentPlacement::dangling(expr_slice.as_any_node_ref(), comment)
}
}
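For reference, the slice syntax handled above is valid Python as written in the doc comment: inside the subscript brackets, comments may sit to the left or right of the colon, and the handler assigns them to the lower, upper, or step expression accordingly. A minimal runnable version of that example:

```python
a = "input"[
    1  # trailing comment on the lower bound
    # own-line comment, treated as leading on the upper bound
    :2
]
# The comments do not affect evaluation; this is an ordinary slice
```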
/// Handles own line comments between the last function decorator and the *header* of the function.
@@ -878,7 +917,10 @@ fn find_pos_only_slash_offset(
/// def test():
/// ...
/// ```
fn handle_leading_function_with_decorators_comment(comment: DecoratedComment) -> CommentPlacement {
fn handle_leading_function_with_decorators_comment<'a>(
comment: DecoratedComment<'a>,
_locator: &Locator,
) -> CommentPlacement<'a> {
let is_preceding_decorator = comment
.preceding_node()
.map_or(false, |node| node.is_decorator());
@@ -887,7 +929,7 @@ fn handle_leading_function_with_decorators_comment(comment: DecoratedComment) ->
.following_node()
.map_or(false, |node| node.is_arguments());
if comment.text_position().is_own_line() && is_preceding_decorator && is_following_arguments {
if comment.line_position().is_own_line() && is_preceding_decorator && is_following_arguments {
CommentPlacement::dangling(comment.enclosing_node(), comment)
} else {
CommentPlacement::Default(comment)
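The decorator case this hunk touches looks like the following in Python (a minimal sketch; the `decorator` helper is illustrative, not from the diff). An own-line comment between the last decorator and the `def` header is the one that becomes a dangling comment:

```python
def decorator(f):
    # identity decorator, only here to make the example runnable
    return f

@decorator
# own-line comment between the last decorator and the function header
def test():
    return 42
```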
@@ -964,6 +1006,43 @@ fn handle_dict_unpacking_comment<'a>(
CommentPlacement::Default(comment)
}
// Own line comments coming after the node are always dangling comments
// ```python
// (
// a
// # trailing a comment
// . # dangling comment
// # or this
// b
// )
// ```
fn handle_attribute_comment<'a>(
comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
let Some(attribute) = comment.enclosing_node().expr_attribute() else {
return CommentPlacement::Default(comment);
};
// It must be a comment AFTER the name
if comment.preceding_node().is_none() {
return CommentPlacement::Default(comment);
}
let between_value_and_attr = TextRange::new(attribute.value.end(), attribute.attr.start());
let dot = SimpleTokenizer::new(locator.contents(), between_value_and_attr)
.skip_trivia()
.next()
.expect("Expected the `.` character after the value");
if TextRange::new(dot.end(), attribute.attr.start()).contains(comment.slice().start()) {
CommentPlacement::dangling(attribute.into(), comment)
} else {
CommentPlacement::Default(comment)
}
}
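The attribute-expression case above is also valid, runnable Python: inside parentheses, an own-line comment may appear between the value and the `.attr` access, which is exactly the position the handler classifies as dangling. For illustration (not part of the diff):

```python
x = (
    "abc"
    # own-line comment after the value, before the attribute access
    .upper()
)
# The comment has no runtime effect; the attribute call proceeds normally
```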
/// Returns `true` if `right` is `Some` and `left` and `right` are referentially equal.
fn are_same_optional<'a, T>(left: AnyNodeRef, right: Option<T>) -> bool
where

@@ -19,9 +19,9 @@ expression: comments.debug(test_case.source_code)
"trailing": [],
},
Node {
kind: Arg,
range: 90..91,
source: `b`,
kind: ArgWithDefault,
range: 90..94,
source: `b=20`,
}: {
"leading": [
SourceComment {

@@ -24,9 +24,9 @@ expression: comments.debug(test_case.source_code)
"trailing": [],
},
Node {
kind: ExprConstant,
range: 17..19,
source: `10`,
kind: ArgWithDefault,
range: 15..19,
source: `a=10`,
}: {
"leading": [],
"dangling": [],
@@ -39,9 +39,9 @@ expression: comments.debug(test_case.source_code)
],
},
Node {
kind: Arg,
range: 173..174,
source: `b`,
kind: ArgWithDefault,
range: 173..177,
source: `b=20`,
}: {
"leading": [
SourceComment {

@@ -24,7 +24,7 @@ expression: comments.debug(test_case.source_code)
"trailing": [],
},
Node {
kind: Arg,
kind: ArgWithDefault,
range: 15..16,
source: `a`,
}: {
@@ -39,7 +39,7 @@ expression: comments.debug(test_case.source_code)
],
},
Node {
kind: Arg,
kind: ArgWithDefault,
range: 166..167,
source: `b`,
}: {
