Compare commits

...

48 Commits

Author SHA1 Message Date
Dhruv Manilawala
2e37cf6b3b Bump version to v0.3.7 (#10895) 2024-04-12 03:39:45 +00:00
wolfgangshi
a9e4393008 [pylint] Implement rule to prefer augmented assignment (PLR6104) (#9932)
## Summary

Implement new rule: prefer augmented assignment (#8877). It checks for
assignment statements of the form `<expr> = <expr>
<binary-operator> …` and offers an unsafe fix to use augmented
assignment instead.
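As a minimal sketch of what the rule flags and why its fix is marked unsafe (illustrative names, not the rule's actual fixture):

```python
# Flagged by PLR6104: the assigned name is repeated on the right-hand side.
total = 0
for n in (1, 2, 3):
    total = total + n  # suggested fix: total += n

# The augmented form the fix produces:
aug_total = 0
for n in (1, 2, 3):
    aug_total += n

# The fix is unsafe because `x += y` calls `__iadd__`, which can mutate
# `x` in place (e.g. for lists), while `x = x + y` always rebinds.
assert total == aug_total == 6
```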

## Test Plan

1. Snapshot test is included in the PR.
2. Manually test with playground.
2024-04-11 23:08:42 -04:00
Charlie Marsh
312f43475f [pylint] Recode nan-comparison rule to W0177 (#10894)
## Summary

This was accidentally committed under `W0117`, but the actual Pylint
code is `W0177`:
https://pylint.readthedocs.io/en/latest/user_guide/checkers/features.html.
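For context, this sketch shows the behavior the `nan-comparison` rule guards against (an illustrative example, not ruff's fixture):

```python
import math

x = float("nan")

# Flagged: NaN compares unequal to everything, including itself, so a
# direct comparison against NaN can never succeed.
assert (x == float("nan")) is False
assert x != x

# Preferred: an explicit NaN test.
assert math.isnan(x)
```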

Closes https://github.com/astral-sh/ruff/issues/10791.
2024-04-11 22:49:20 -04:00
Carl Meyer
563daa8a86 Fix docs and add overlap test for negated per-file-ignores (#10863)
Refs #3172 

## Summary

Fix a typo in the docs example, and add a test for the case where a
negative pattern and a positive pattern overlap.

The behavior here is simple: patterns (positive or negative) are always
additive if they hit (i.e. match for a positive pattern, don't match for
a negated pattern). We never "un-ignore" previously-ignored rules based
on a pattern (positive or negative) failing to hit.

It's simple enough that I don't really see other cases we need to add
tests for (the tests we have cover all branches in the ignores_from_path
function that implements the core logic), but open to reviewer feedback.

I also didn't end up changing the docs to explain this more, because I
think they are accurate as written and don't wrongly imply any more
complex behavior. Open to reviewer feedback on this as well!

After some discussion, I think allowing negative patterns to un-ignore
rules is too confusing and easy to get wrong; if we need that, we should
add `per-file-selects` instead.
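A sketch of what a negated pattern looks like in configuration, based on the description above (the path and rule code are illustrative):

```toml
[tool.ruff.lint.per-file-ignores]
# Negated pattern: ignore F401 in every file that does NOT match tests/**.
"!tests/**" = ["F401"]
```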

## Test Plan

Test/docs only change; tests pass, docs render and look right.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@gmail.com>
2024-04-11 19:30:28 -06:00
Carl Meyer
7ae15c6e0a Fix comment copy/paste typo in newtype_index (#10892)
## Summary

This comment looks wrongly copy-pasted from the comment above, and
mentions the wrong type.

## Test Plan

Comment-only change.
2024-04-11 18:43:52 -06:00
Martin Imre
03899dcba3 [flake8-bugbear] Implement loop-iterator-mutation (B909) (#9578)
## Summary
This PR adds the implementation for the current
[flake8-bugbear](https://github.com/PyCQA/flake8-bugbear)'s B038 rule.
The B038 rule checks for mutation of loop iterators in the body of a for
loop and alerts when found.

Rationale:
Editing the loop iterator can lead to undesired behavior and is probably
a bug in most cases.

Closes #9511.

Note there will be a second iteration of B038 implemented in
`flake8-bugbear` soon, and this PR currently only implements the weakest
form of the rule.
I'd be happy to also implement the further improvements to B038 here in
ruff 🙂
See https://github.com/PyCQA/flake8-bugbear/issues/454 for more
information on the planned improvements.
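A minimal sketch of the bug class B909 targets (illustrative values, not the `B038.py` fixture):

```python
# Flagged by B909: mutating the loop's iterator inside the body silently
# skips elements.
items = [1, 2, 3, 4]
seen = []
for item in items:
    seen.append(item)
    if item == 1:
        items.remove(item)  # the element after 1 (i.e. 2) is never visited

assert seen == [1, 3, 4]

# Safer: iterate over a copy so the mutation cannot affect traversal.
items = [1, 2, 3, 4]
seen = []
for item in list(items):
    seen.append(item)
    if item == 1:
        items.remove(item)

assert seen == [1, 2, 3, 4]
```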

## Test Plan
Re-using the same test file that I've used for `flake8-bugbear`, which
is included in this PR (look for the `B038.py` file).


Note: this is my first time using Rust (besides `rustlings`) - I'd be
very happy about thorough feedback on what I could've done better
🙂 - Bring it on 😀
2024-04-11 19:52:52 +00:00
Carl Meyer
25f5a8b201 Struct not tuple for compiled per-file ignores (#10864)
## Summary

Code cleanup for per-file ignores; use a struct instead of a tuple.

Named the structs for individual ignores and the list of ignores
`CompiledPerFileIgnore` and `CompiledPerFileIgnoreList`. Name choice is
because we already have a `PerFileIgnore` struct for a
pre-compiled-matchers form of the config. Name bikeshedding welcome.

## Test Plan

Refactor, should not change behavior; existing tests pass.

---------

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-04-11 13:47:57 -06:00
Charlie Marsh
e7d1d43f39 [pylint] Reverse min-max logic in if-stmt-min-max (#10890)
Closes https://github.com/astral-sh/ruff/issues/10889.
2024-04-11 14:16:13 -04:00
Charlie Marsh
9b9098c3dc Downgrade ESLint to v8 (#10888)
## Summary

Some of our plugins aren't compatible with v9.

Originally shipped in #10827.

## Test Plan

- `npm install`
- `npm ci`
2024-04-11 17:23:46 +00:00
Charlie Marsh
0cc154c2a9 Avoid TOCTOU errors in cache initialization (#10884)
## Summary

I believe this should close
https://github.com/astral-sh/ruff/issues/10880? The `.gitignore`
creation seems ok, since it truncates, but using `cachedir::is_tagged`
followed by `cachedir::add_tag` is not safe, as `cachedir::add_tag`
_fails_ if the file already exists.

This also matches the structure of the code in `uv`.
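The race being avoided can be sketched in Python (ruff's code is Rust; this is a simplified model in which an atomic exclusive create replaces the check-then-create pair, using the standard CACHEDIR.TAG signature):

```python
import os
import tempfile

def add_tag(path: str) -> None:
    # Racy (TOCTOU): `if not exists(path): create(path)` can fail if another
    # process creates the file between the check and the create. Instead,
    # attempt an exclusive create and treat "already exists" as success.
    try:
        with open(path, "x") as f:
            f.write("Signature: 8a477f597d28d172789f06886806bc55\n")
    except FileExistsError:
        pass  # another process won the race; the tag is already in place

tag = os.path.join(tempfile.mkdtemp(), "CACHEDIR.TAG")
add_tag(tag)
add_tag(tag)  # idempotent: a second call must not fail
```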

Closes https://github.com/astral-sh/ruff/issues/10880.
2024-04-11 12:09:07 -04:00
Dhruv Manilawala
4e8a84617c Bump version to v0.3.6 (#10883)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-04-11 15:53:01 +00:00
Auguste Lalande
ffea1bb0a3 [refurb] Implement write-whole-file (FURB103) (#10802)
## Summary

Implement `write-whole-file` (`FURB103`), part of #1348. This is largely
a copy and paste of `read-whole-file` #7682.
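A sketch of the rewrite FURB103 suggests (illustrative file name and content):

```python
import tempfile
from pathlib import Path

path = Path(tempfile.mkdtemp()) / "out.txt"

# Flagged by FURB103: opening a file just to issue a single write()...
with open(path, "w") as f:
    f.write("hello")

# ...is equivalent to the one-liner the fix suggests:
Path(path).write_text("hello")

assert path.read_text() == "hello"
```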

## Test Plan

Test fixture added.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2024-04-11 14:21:45 +05:30
Charlie Marsh
ac14d187c6 Update clearscreen to v3.0.0 (#10869)
## Summary

Closes https://github.com/astral-sh/ruff/issues/10627.
2024-04-11 00:41:35 -04:00
Charlie Marsh
1eee6f16e4 [flake8-pytest-style] Fix single-tuple conversion in pytest-parametrize-values-wrong-type (#10862)
## Summary

This looks like a typo (without test coverage).

Closes https://github.com/astral-sh/ruff/issues/10861.
2024-04-10 14:20:09 -04:00
Auguste Lalande
de46a36bbc [pygrep-hooks] Improve blanket-noqa error message (PGH004) (#10851)
## Summary

Improve `blanket-noqa` error message in cases where codes are provided
but not detected due to formatting issues. Namely `# noqa X100` (missing
colon) or `noqa : X100` (space before colon). The behavior is similar to
`NQA002` and `NQA003` from `flake8-noqa` mentioned in #850. The idea to
merge the rules into `PGH004` was suggested by @MichaReiser
https://github.com/astral-sh/ruff/pull/10325#issuecomment-2025535444.
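A rough sketch of the two malformed forms the improved message calls out (the regex here is purely illustrative, not ruff's actual parser):

```python
import re

# Well-formed suppression: `# noqa: X100`. The two malformed variants the
# improved message targets: a missing colon, or a space before the colon.
MALFORMED = re.compile(r"#\s*noqa(\s+[A-Z]+[0-9]+|\s+:)", re.IGNORECASE)

assert MALFORMED.search("x = 1  # noqa X100")       # missing colon
assert MALFORMED.search("x = 1  # noqa : X100")     # space before colon
assert not MALFORMED.search("x = 1  # noqa: X100")  # well-formed
```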

## Test Plan

Test cases added to fixture.
2024-04-10 04:30:25 +00:00
Charlie Marsh
dbf8d0c82c Show negated condition in needless-bool diagnostics (#10854)
## Summary

Closes https://github.com/astral-sh/ruff/issues/10843.
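For reference, this is the kind of rewrite the diagnostic now spells out when the condition must be negated (an illustrative function, not the rule's fixture):

```python
def is_blocked(status):
    # Flagged by SIM103 (needless-bool): returns True/False directly.
    if status == "ok":
        return False
    else:
        return True

def is_blocked_fixed(status):
    # The diagnostic now shows the negated condition explicitly:
    return not (status == "ok")  # equivalently: status != "ok"

assert is_blocked("ok") is False and is_blocked_fixed("ok") is False
assert is_blocked("down") is True and is_blocked_fixed("down") is True
```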
2024-04-10 04:29:43 +00:00
Carl Meyer
02e88fdbb1 Support negated patterns in [extend-]per-file-ignores (#10852)
Fixes #3172 

## Summary

Allow prefixing [extend-]per-file-ignores patterns with `!` to negate
the pattern; listed rules / prefixes will be ignored in all files that
don't match the pattern.

## Test Plan

Added tests for the feature.

Rendered docs and checked rendered output.
2024-04-09 21:53:41 -06:00
Carl Meyer
42d52ebbec Support FORCE_COLOR env var (#10839)
Fixes #5499 

## Summary

Add support for `FORCE_COLOR` env var, as specified at
https://force-color.org/
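The precedence can be sketched in Python (a simplified model, not ruff's Rust implementation; treating an empty `FORCE_COLOR` value as unset is an assumption of this sketch):

```python
import io
import os
import sys

def should_colorize(stream=sys.stdout) -> bool:
    # Per https://force-color.org/, a set FORCE_COLOR forces color on even
    # when the stream is not a terminal (e.g. when piping to `less`).
    if os.environ.get("FORCE_COLOR"):
        return True
    # Default heuristic: only colorize interactive terminals.
    return stream.isatty()

pipe = io.StringIO()  # stands in for piped output; isatty() is False
os.environ.pop("FORCE_COLOR", None)
assert should_colorize(pipe) is False
os.environ["FORCE_COLOR"] = "1"
assert should_colorize(pipe) is True
```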

## Test Plan

I wrote an integration test for this, and then realized that can't work,
since we use a dev-dependency on `colored` with the `no-color` feature
to avoid ANSI color codes in test snapshots.

So this is just tested manually.

`cargo run --features test-rules -- check --no-cache --isolated -
--select RUF901 --diff < /dev/null` shows a colored diff.
`cargo run --features test-rules -- check --no-cache --isolated -
--select RUF901 --diff < /dev/null | less` does not have color, since we
pipe it to `less`.
`FORCE_COLOR=1 cargo run --features test-rules -- check --no-cache
--isolated - --select RUF901 --diff < /dev/null | less` does have color
(after this diff), even though we pipe it to `less`.
2024-04-08 15:29:29 -06:00
renovate[bot]
3fd22973da Update pre-commit dependencies (#10822) 2024-04-08 21:31:38 +01:00
Carl Meyer
e13e57e024 Localize cleanup for FunctionDef and ClassDef (#10837)
## Summary

Came across this code while digging into the semantic model with
@AlexWaygood, and found it confusing because of how it splits
`push_scope` from the paired `pop_scope` (took me a few minutes to even
figure out if/where we were popping the pushed scope). Since this
"cleanup" is already totally split by node type, there doesn't seem to
be any gain in having it as a separate "step" rather than just
incorporating it into the traversal clauses for those node types.

I left the equivalent cleanup step alone for the expression case,
because in that case it is actually generic across several different
node types, and due to the use of the common `visit_generators` utility
there isn't a clear way to keep the pushes and corresponding pops
localized.

Feel free to just reject this if I've missed a good reason for it to
stay this way!

## Test Plan

Tests and clippy.
2024-04-08 13:29:38 -06:00
Jane Lewis
c3e28f9d55 The linter and code actions can now be disabled in client settings for ruff server (#10800)
## Summary

This is a follow-up to https://github.com/astral-sh/ruff/pull/10764.
Support for diagnostics, quick fixes, and source actions can now be
disabled via client settings.

## Test Plan

### Manual Testing

Set up your workspace as described in the test plan in
https://github.com/astral-sh/ruff/pull/10764, up to step 2. You don't
need to add a debug statement.
The configuration for `folder_a` and `folder_b` should be as follows:
`folder_a`:
```json
{
    "ruff.codeAction.fixViolation": {
        "enable": true
    }
}
```

`folder_b`
```json
{
    "ruff.codeAction.fixViolation": {
        "enable": false
    }
}
```
Finally, open up your VS Code User Settings and un-check the `Ruff > Fix
All` setting.

1. Open a Python file in `folder_a` that has existing problems. The
problems should be highlighted, and quick fix should be available.
`source.fixAll` should not be available as a source action.
2. Open a Python file in `folder_b` that has existing problems. The
problems should be highlighted, but quick fixes should not be available
for any of them. `source.fixAll` should not be available as a source
action.
3. Open up your VS Code Workspace Settings (second tab under the search
bar) and un-check `Ruff > Lint: Enable`
4. Both files you tested in steps 1 and 2 should now lack any visible
diagnostics. `source.organizeImports` should still be available as a
source action.
2024-04-08 07:53:28 -07:00
renovate[bot]
a188ba5c26 chore(deps): update rust crate quick-junit to v0.3.6 (#10834)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-08 11:48:46 +01:00
renovate[bot]
86419c8ab9 chore(deps): update npm development dependencies (#10827)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-04-08 07:00:42 +00:00
renovate[bot]
a9ebfe6ec0 chore(deps): update rust crate libcst to v1.3.1 (#10824) 2024-04-07 22:20:48 -04:00
renovate[bot]
0a50874c01 chore(deps): update rust crate syn to v2.0.58 (#10823) 2024-04-07 22:20:40 -04:00
Aleksei Latyshev
6050bab5db [refurb] Support itemgetter in reimplemented-operator (FURB118) (#10526)
## Summary
Lint about function like expressions which are equivalent to
`operator.itemgetter`.
See:
https://github.com/astral-sh/ruff/issues/1348#issuecomment-1909421747
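A small sketch of the equivalence the rule now recognizes (illustrative data):

```python
from operator import itemgetter

pairs = [("b", 2), ("a", 1)]

# Flagged: a lambda that only subscripts its argument...
by_lambda = sorted(pairs, key=lambda p: p[0])
# ...is equivalent to operator.itemgetter:
by_getter = sorted(pairs, key=itemgetter(0))

assert by_lambda == by_getter == [("a", 1), ("b", 2)]
```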

## Test Plan
cargo test
2024-04-07 02:31:59 +00:00
Alex Waygood
2a51dcfdf7 [pyflakes] Allow forward references in class bases in stub files (F821) (#10779)
## Summary

Fixes #3011.

Type checkers currently allow forward references in all contexts in stub
files, and stubs frequently make use of this capability (although it
doesn't actually seem to be spec'd anywhere: neither in PEP 484, nor
https://typing.readthedocs.io/en/latest/source/stubs.html#id6, nor the
CPython typing docs). Implementing it so that Ruff allows forward
references in _all contexts_ in stub files seems non-trivial, however
(or at least, I couldn't figure out how to do it easily), so this PR
does not do that. Perhaps it _should_; if we think this approach isn't
principled enough, I'm happy to close it and postpone changing anything
here.

However, this does reduce the number of F821 errors Ruff emits on
typeshed down from 76 to 2, which would mean that we could enable the
rule at typeshed. The remaining 2 F821 errors can be trivially fixed at
typeshed by moving definitions around; forward references in class bases
were really the only remaining places where there was a real _use case_
for forward references in stub files that Ruff wasn't yet allowing.

## Test plan

`cargo test`. I also ran this PR branch on typeshed to check to see if
there were any new false positives caused by the changes here; there
were none.
2024-04-07 01:15:58 +01:00
Alex Waygood
86588695e3 [flake8-slots] Flag subclasses of call-based typing.NamedTuples as well as subclasses of collections.namedtuple() (SLOT002) (#10808) 2024-04-07 00:16:06 +01:00
Alex Waygood
47e0cb8985 [flake8-pyi] Various improvements to PYI034 (#10807)
More accurately identify whether a class is a metaclass, a subclass of `collections.abc.Iterator`, or a subclass of `collections.abc.AsyncIterator`
2024-04-07 00:15:48 +01:00
renovate[bot]
388658efdb Update pre-commit dependencies (#10698)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-04-06 23:00:41 +00:00
Tibor Reiss
3194f90db1 [pylint] Implement if-stmt-min-max (PLR1730, PLR1731) (#10002)
Add rule [consider-using-min-builtin
(R1730)](https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/consider-using-min-builtin.html)
and [consider-using-max-builtin
(R1731)](https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/consider-using-max-builtin.html)

See https://github.com/astral-sh/ruff/issues/970 for rules
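A minimal sketch of the pattern and the suggested rewrite (illustrative values):

```python
a, b = 10, 3

# Flagged by PLR1730: the if-statement reimplements min().
if a > b:
    a = b
assert a == 3

# Suggested rewrite (PLR1731 covers the analogous max() case):
a = 10
a = min(a, b)
assert a == 3
```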

Test Plan: `cargo test`
2024-04-06 17:32:05 +00:00
Charlie Marsh
ee4bff3475 Add comment test for FURB110 (#10804) 2024-04-06 16:49:22 +00:00
Auguste Lalande
7fb012d0df [refurb] Do not allow any keyword arguments for read-whole-file in rb mode (FURB101) (#10803)
## Summary

`Path.read_bytes()` does not support any keyword arguments, so `FURB101`
should not be triggered if the file is opened in `rb` mode with any
keyword arguments.
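A sketch of the distinction (illustrative file; `buffering` stands in for any keyword argument):

```python
import tempfile
from pathlib import Path

path = Path(tempfile.mkdtemp()) / "data.bin"
path.write_bytes(b"abc")

# Flagged by FURB101: a bare binary open + single read() is Path.read_bytes().
with open(path, "rb") as f:
    data = f.read()
assert data == path.read_bytes() == b"abc"

# Not flagged after this change: read_bytes() accepts no keyword arguments,
# so an open() call carrying one has no direct Path equivalent.
with open(path, "rb", buffering=0) as f:
    data = f.read()
assert data == b"abc"
```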

## Test Plan

Move erroneous test to "Non-error" section of fixture.
2024-04-06 12:41:39 -04:00
Steve C
44459f92ef [refurb] Implement if-expr-instead-of-or-operator (FURB110) (#10687)
## Summary

Add
[`FURB110`](https://github.com/dosisod/refurb/blob/master/refurb/checks/logical/use_or.py)

See: #1348 
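A minimal sketch of the pattern and the suggested rewrite (illustrative values):

```python
x, y = "", "fallback"

# Flagged by FURB110: the if-expression repeats `x`.
z = x if x else y
# Suggested rewrite:
z_fixed = x or y

assert z == z_fixed == "fallback"
```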

## Test Plan

`cargo test`
2024-04-06 16:39:40 +00:00
Alex Waygood
1dc93107dc Improve internal documentation for the semantic model (#10788) 2024-04-06 16:28:32 +00:00
Charlie Marsh
7fb5f47efe Respect # noqa directives on __all__ openers (#10798)
## Summary

Historically, given:

```python
__all__ = [  # noqa: F822
    "Bernoulli",
    "Beta",
    "Binomial",
]
```

The F822 violations would be attached to the `__all__`, so this `# noqa`
would be enforced for _all_ definitions in the list. This changed in
https://github.com/astral-sh/ruff/pull/10525 for the better, in that we
now use the range of each string. But these `# noqa` directives stopped
working.

This PR sets the `__all__` as a parent range in the diagnostic, so that
these directives are respected once again.

Closes https://github.com/astral-sh/ruff/issues/10795.

## Test Plan

`cargo test`
2024-04-06 14:51:17 +00:00
Charlie Marsh
83db62bcda Use within-scope shadowed bindings in asyncio-dangling-task (#10793)
## Summary

I think this is using the wrong shadowing, as seen by the change in the
test fixture.
2024-04-06 10:44:03 -04:00
Bohdan
b45fd61ec5 [pyupgrade] Replace str, Enum with StrEnum (UP042) (#10713)
## Summary

Add a new `pyupgrade` rule, `UP042` (I picked the next available number).
Closes https://github.com/astral-sh/ruff/discussions/3867
Closes https://github.com/astral-sh/ruff/issues/9569

It warns on `class A(str, Enum)` and provides a fix to `class
A(StrEnum)` for py311+.
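A sketch of the flagged pattern (runnable on any Python version; `enum.StrEnum` exists only on 3.11+, so the fixed form appears only in a comment):

```python
from enum import Enum

# Flagged by UP042 when targeting Python 3.11+; the fix rewrites the
# bases to `class Color(StrEnum)`.
class Color(str, Enum):
    RED = "red"

# The str mixin behavior is exactly what StrEnum preserves:
assert Color.RED == "red"
assert isinstance(Color.RED, str)
```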

## Test Plan

Added UP042.py test.

## Notes

I did not find a way to call `remove_argument` twice in a row, so the
automatic fix works only for classes that inherit exactly `str, Enum`
(in either order).

I also plan to extend this rule to support IntEnum in next PR.
2024-04-06 01:56:28 +00:00
Jane Lewis
323264dec2 Remove debug print when resolving client settings in ruff server (#10799)
This removes a debug statement that was used as part of the test plan in
#10764 and was erroneously committed in 8aa31f4c74.
2024-04-06 00:13:25 +00:00
Jane Lewis
c11e6d709c ruff server now supports commands for auto-fixing, organizing imports, and formatting (#10654)
## Summary

This builds off of the work in
https://github.com/astral-sh/ruff/pull/10652 to implement a command
executor, backwards compatible with the commands from the previous LSP
(`ruff.applyAutofix`, `ruff.applyFormat` and
`ruff.applyOrganizeImports`).

This involved a lot of refactoring and tweaks to the code action
resolution code - the most notable change is that workspace edits are
specified in a slightly different way, using the more general `changes`
field instead of the `document_changes` field (which isn't supported on
all LSP clients). Additionally, the API for synchronous request handlers
has been updated to include access to the `Requester`, which we use to
send a `workspace/applyEdit` request to the client.

## Test Plan



https://github.com/astral-sh/ruff/assets/19577865/7932e30f-d944-4e35-b828-1d81aa56c087
2024-04-05 23:27:35 +00:00
Auguste Lalande
1b31d4e9f1 Correct some oversights in the documentation from #10756 (#10796)
## Summary

Correct some oversights in the documentation from #10756
2024-04-05 22:45:48 +00:00
Jane Lewis
a184dc68f5 Implement client setting initialization and resolution for ruff server (#10764)
## Summary

When a language server initializes, it is passed a serialized JSON
object, which is known as its "initialization options". Until now, `ruff
server` has ignored those initialization options, meaning that
user-provided settings haven't worked. This PR is the first step for
supporting settings from the LSP client. It implements procedures to
deserialize initialization options into a settings object, and then
resolve those settings objects into concrete settings for each
workspace.

One of the goals for user settings implementation in `ruff server` is
backwards compatibility with `ruff-lsp`'s settings. We won't support all
settings that `ruff-lsp` had, but the ones that we do support should
work the same and use the same schema as `ruff-lsp`.

These are the existing settings from `ruff-lsp` that we will continue to
support, and which are part of the settings schema in this PR:

| Setting | Default Value | Description |
|----------------------------------------|---------------|------------------------------------------------------------------------------------------|
| `codeAction.disableRuleComment.enable` | `true` | Whether to display Quick Fix actions to disable rules via `noqa` suppression comments. |
| `codeAction.fixViolation.enable` | `true` | Whether to display Quick Fix actions to autofix violations. |
| `fixAll` | `true` | Whether to register Ruff as capable of handling `source.fixAll` actions. |
| `lint.enable` | `true` | Whether to enable linting. Set to `false` to use Ruff exclusively as a formatter. |
| `organizeImports` | `true` | Whether to register Ruff as capable of handling `source.organizeImports` actions. |

To be clear: this PR does not implement 'support' for these settings,
individually. Rather, it constructs a framework for these settings to be
used by the server in the future.

Notably, we are choosing *not* to support `lint.args` and `format.args`
as settings for `ruff server`. This is because we're now interfacing
with Ruff at a lower level than its CLI, and converting CLI arguments
back into configuration is too involved.

We will have support for linter and formatter specific settings in
follow-up PRs. We will also 'hook up' user settings to work with the
server in follow up PRs.

## Test Plan

### Snapshot Tests

Tests have been created in
`crates/ruff_server/src/session/settings/tests.rs` to ensure that
deserialization and settings resolution works as expected.

### Manual Testing

Since we aren't using the resolved settings anywhere yet, we'll have to
add a few printing statements.

We want to capture what the resolved settings look like when sent as
part of a snapshot, so modify `Session::take_snapshot` to be the
following:

```rust
    pub(crate) fn take_snapshot(&self, url: &Url) -> Option<DocumentSnapshot> {
        let resolved_settings = self.workspaces.client_settings(url, &self.global_settings);
        tracing::info!("Resolved settings for document {url}: {resolved_settings:?}");
        Some(DocumentSnapshot {
            configuration: self.workspaces.configuration(url)?.clone(),
            resolved_client_capabilities: self.resolved_client_capabilities.clone(),
            client_settings: resolved_settings,
            document_ref: self.workspaces.snapshot(url)?,
            position_encoding: self.position_encoding,
            url: url.clone(),
        })
    }
```

Once you've done that, build the server and start up your extension
testing environment.

1. Set up a workspace in VS Code with two workspace folders, each one
having some variant of Ruff file-based configuration (`pyproject.toml`,
`ruff.toml`, etc.). We'll call these folders `folder_a` and `folder_b`.
2. In each folder, open up `.vscode/settings.json`.
3. In folder A, use these settings:
```json
{
    "ruff.codeAction.disableRuleComment": {
        "enable": true
    }
}
```
4. In folder B, use these settings:
```json
{
    "ruff.codeAction.disableRuleComment": {
        "enable": false
    }
}
```
5. Finally, open up your VS Code User Settings and un-check the `Ruff >
Code Action: Disable Rule Comment` setting.
6. When opening files in `folder_a`, you should see logs that look like
this:
```
Resolved settings for document <file>: ResolvedClientSettings { fix_all: true, organize_imports: true, lint_enable: true, disable_rule_comment_enable: true, fix_violation_enable: true }
```
7. When opening files in `folder_b`, you should see logs that look like
this:
```
Resolved settings for document <file>: ResolvedClientSettings { fix_all: true, organize_imports: true, lint_enable: true, disable_rule_comment_enable: false, fix_violation_enable: true }
```
8. To test invalid configuration, change `.vscode/settings.json` in
either folder to be this:
```json
{
    "ruff.codeAction.disableRuleComment": {
        "enable": "invalid"
    },
}
```
9. You should now see these error logs:
```
<time> [info]    <duration> ERROR ruff_server::session::settings Failed to deserialize initialization options: data did not match any variant of untagged enum InitializationOptions. Falling back to default client settings...

<time> [info]    <duration> WARN ruff_server::server No workspace settings found for file:///Users/jane/testbed/pandas
   <duration> WARN ruff_server::server No workspace settings found for file:///Users/jane/foss/scipy
```
10. Opening files in either folder should now print the following
configuration:
```
Resolved settings for document <file>: ResolvedClientSettings { fix_all: true, organize_imports: true, lint_enable: true, disable_rule_comment_enable: true, fix_violation_enable: true }
```
2024-04-05 22:41:50 +00:00
buhtz
a4ee9c1978 doc(FAQ): More precise PyLint comparison (#10756)
The FAQ section comparing Ruff to Pylint is now more precise about the
following two points:
- Ruff counts branches differently and therefore emits the
too-many-branches warning earlier.
- Activating all Pylint rules in Ruff also activates Pylint rules that
are not active by default in Pylint itself, because they are implemented
via Pylint plugins.
2024-04-05 22:12:33 +00:00
Auguste Lalande
c2790f912b [pylint] Implement bad-staticmethod-argument (PLW0211) (#10781)
## Summary

Implement `bad-staticmethod-argument` from pylint, part of #970.
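A minimal sketch of what the rule flags (illustrative class and method names):

```python
class Registry:
    @staticmethod
    def bad(self, item):  # flagged: staticmethod's first parameter named `self`
        return item

    @staticmethod
    def good(item):
        return item

# On a staticmethod, `self` is just an ordinary positional argument,
# which is almost certainly not what the author intended:
assert Registry.bad("x", "y") == "y"
assert Registry.good("y") == "y"
```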

## Test Plan

Test fixture added.
2024-04-05 21:33:39 +00:00
Micha Reiser
2e7a1a4cb1 D403: Require capitalizing single word sentence (#10776) 2024-04-05 08:42:00 +02:00
Jane Lewis
d050d6da2e ruff server now supports the source.organizeImports source action (#10652)
## Summary

This builds on top of the work in
https://github.com/astral-sh/ruff/pull/10597 to support `Ruff: Organize
imports` as an available source action.

To do this, we have to support `Clone`-ing for linter settings, since we
need to modify them in place to select import-related diagnostics
specifically (`I001` and `I002`).

## Test Plan


https://github.com/astral-sh/ruff/assets/19577865/04282d01-dfda-4ac5-aa8f-6a92d5f85bfd
2024-04-04 22:20:50 +00:00
NotWearingPants
fd8da66fcb docs: Lint -> Format in formatter.md (#10777)
## Summary

Since #10217 the [formatter
docs](https://docs.astral.sh/ruff/formatter/) contained

```
ruff format                   # Format all files in the current directory.
ruff format path/to/code/     # Lint all files in `path/to/code` (and any subdirectories).
ruff format path/to/file.py   # Format a single file.
```

I believe the `Lint` here is a copy-paste typo from the [linter
docs](https://docs.astral.sh/ruff/linter/).

## Test Plan

N/A
2024-04-04 16:50:55 +00:00
Dhruv Manilawala
d02b1069b5 Add semantic model flag when inside f-string replacement field (#10766)
## Summary

This PR adds a new semantic model flag to indicate that the checker is
inside an f-string replacement field. This will be used to ignore
certain checks if the target version doesn't support a specific feature
like PEP 701.

fixes: #10761 

## Test Plan

Add a test case from the raised issue.
2024-04-04 09:08:48 +05:30
177 changed files with 6834 additions and 1192 deletions


@@ -17,4 +17,4 @@ MD013: false
 # MD024/no-duplicate-heading
 MD024:
   # Allow when nested under different parents e.g. CHANGELOG.md
-  allow_different_nesting: true
+  siblings_only: true


@@ -13,7 +13,7 @@ exclude: |
 repos:
   - repo: https://github.com/abravalheri/validate-pyproject
-    rev: v0.15
+    rev: v0.16
     hooks:
       - id: validate-pyproject
@@ -31,7 +31,7 @@ repos:
       )$
   - repo: https://github.com/igorshubovych/markdownlint-cli
-    rev: v0.37.0
+    rev: v0.39.0
     hooks:
       - id: markdownlint-fix
         exclude: |
@@ -41,7 +41,7 @@ repos:
       )$
   - repo: https://github.com/crate-ci/typos
-    rev: v1.16.22
+    rev: v1.20.4
     hooks:
       - id: typos
@@ -55,7 +55,7 @@ repos:
         pass_filenames: false # This makes it a lot faster
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.1.4
+    rev: v0.3.5
     hooks:
       - id: ruff-format
       - id: ruff
@@ -70,7 +70,7 @@ repos:
   # Prettier
   - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: v3.0.3
+    rev: v3.1.0
     hooks:
       - id: prettier
         types: [yaml]


@@ -1,5 +1,63 @@
 # Changelog
+
+## 0.3.7
+
+### Preview features
+
+- \[`flake8-bugbear`\] Implement `loop-iterator-mutation` (`B909`) ([#9578](https://github.com/astral-sh/ruff/pull/9578))
+- \[`pylint`\] Implement rule to prefer augmented assignment (`PLR6104`) ([#9932](https://github.com/astral-sh/ruff/pull/9932))
+
+### Bug fixes
+
+- Avoid TOCTOU errors in cache initialization ([#10884](https://github.com/astral-sh/ruff/pull/10884))
+- \[`pylint`\] Recode `nan-comparison` rule to `W0177` ([#10894](https://github.com/astral-sh/ruff/pull/10894))
+- \[`pylint`\] Reverse min-max logic in `if-stmt-min-max` ([#10890](https://github.com/astral-sh/ruff/pull/10890))
+
+## 0.3.6
+
+### Preview features
+
+- \[`pylint`\] Implement `bad-staticmethod-argument` (`PLW0211`) ([#10781](https://github.com/astral-sh/ruff/pull/10781))
+- \[`pylint`\] Implement `if-stmt-min-max` (`PLR1730`, `PLR1731`) ([#10002](https://github.com/astral-sh/ruff/pull/10002))
+- \[`pyupgrade`\] Replace `str,Enum` multiple inheritance with `StrEnum` `UP042` ([#10713](https://github.com/astral-sh/ruff/pull/10713))
+- \[`refurb`\] Implement `if-expr-instead-of-or-operator` (`FURB110`) ([#10687](https://github.com/astral-sh/ruff/pull/10687))
+- \[`refurb`\] Implement `int-on-sliced-str` (`FURB166`) ([#10650](https://github.com/astral-sh/ruff/pull/10650))
+- \[`refurb`\] Implement `write-whole-file` (`FURB103`) ([#10802](https://github.com/astral-sh/ruff/pull/10802))
+- \[`refurb`\] Support `itemgetter` in `reimplemented-operator` (`FURB118`) ([#10526](https://github.com/astral-sh/ruff/pull/10526))
+- \[`flake8_comprehensions`\] Add `sum`/`min`/`max` to unnecessary comprehension check (`C419`) ([#10759](https://github.com/astral-sh/ruff/pull/10759))
+
+### Rule changes
+
+- \[`pydocstyle`\] Require capitalizing docstrings where the first sentence is a single word (`D403`) ([#10776](https://github.com/astral-sh/ruff/pull/10776))
+- \[`pycodestyle`\] Ignore annotated lambdas in class scopes (`E731`) ([#10720](https://github.com/astral-sh/ruff/pull/10720))
+- \[`flake8-pyi`\] Various improvements to PYI034 ([#10807](https://github.com/astral-sh/ruff/pull/10807))
+- \[`flake8-slots`\] Flag subclasses of call-based `typing.NamedTuple`s as well as subclasses of `collections.namedtuple()` (`SLOT002`) ([#10808](https://github.com/astral-sh/ruff/pull/10808))
+- \[`pyflakes`\] Allow forward references in class bases in stub files (`F821`) ([#10779](https://github.com/astral-sh/ruff/pull/10779))
+- \[`pygrep-hooks`\] Improve `blanket-noqa` error message (`PGH004`) ([#10851](https://github.com/astral-sh/ruff/pull/10851))
+
+### CLI
+
+- Support `FORCE_COLOR` env var ([#10839](https://github.com/astral-sh/ruff/pull/10839))
+
+### Configuration
+
+- Support negated patterns in `[extend-]per-file-ignores` ([#10852](https://github.com/astral-sh/ruff/pull/10852))
+
+### Bug fixes
+
+- \[`flake8-import-conventions`\] Accept non-aliased (but correct) import in `unconventional-import-alias` (`ICN001`) ([#10729](https://github.com/astral-sh/ruff/pull/10729))
+- \[`flake8-quotes`\] Add semantic model flag when inside f-string replacement field ([#10766](https://github.com/astral-sh/ruff/pull/10766))
+- \[`pep8-naming`\] Recursively resolve `TypeDicts` for N815 violations ([#10719](https://github.com/astral-sh/ruff/pull/10719))
+- \[`flake8-quotes`\] Respect `Q00*` ignores in `flake8-quotes` rules ([#10728](https://github.com/astral-sh/ruff/pull/10728))
+- \[`flake8-simplify`\] Show negated condition in `needless-bool` diagnostics (`SIM103`) ([#10854](https://github.com/astral-sh/ruff/pull/10854))
+- \[`ruff`\] Use within-scope shadowed bindings in `asyncio-dangling-task` (`RUF006`) ([#10793](https://github.com/astral-sh/ruff/pull/10793))
+- \[`flake8-pytest-style`\] Fix single-tuple conversion in `pytest-parametrize-values-wrong-type` (`PT007`) ([#10862](https://github.com/astral-sh/ruff/pull/10862))
+- \[`flake8-return`\] Ignore assignments to annotated variables in `unnecessary-assign` (`RET504`) ([#10741](https://github.com/astral-sh/ruff/pull/10741))
+- \[`refurb`\] Do not allow any keyword arguments for `read-whole-file` in `rb` mode (`FURB101`) ([#10803](https://github.com/astral-sh/ruff/pull/10803))
+- \[`pylint`\] Don't recommend decorating staticmethods with `@singledispatch` (`PLE1519`, `PLE1520`) ([#10637](https://github.com/astral-sh/ruff/pull/10637))
+- \[`pydocstyle`\] Use section name range for all section-related docstring diagnostics ([#10740](https://github.com/astral-sh/ruff/pull/10740))
+- Respect `# noqa` directives on `__all__` openers ([#10798](https://github.com/astral-sh/ruff/pull/10798))
+
 ## 0.3.5
 
 ### Preview features

Cargo.lock generated

@@ -244,6 +244,12 @@ version = "1.0.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
 
+[[package]]
+name = "cfg_aliases"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e"
+
 [[package]]
 name = "chic"
 version = "1.2.2"
@@ -365,7 +371,7 @@ dependencies = [
  "heck 0.5.0",
  "proc-macro2",
  "quote",
- "syn 2.0.57",
+ "syn 2.0.58",
 ]
 
 [[package]]
@@ -376,9 +382,9 @@ checksum = "98cc8fbded0c607b7ba9dd60cd98df59af97e84d24e49c8557331cfc26d301ce"
 
 [[package]]
 name = "clearscreen"
-version = "2.0.1"
+version = "3.0.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "72f3f22f1a586604e62efd23f78218f3ccdecf7a33c4500db2d37d85a24fe994"
+checksum = "2f8c93eb5f77c9050c7750e14f13ef1033a40a0aac70c6371535b6763a01438c"
 dependencies = [
  "nix",
  "terminfo",
@@ -596,7 +602,7 @@ dependencies = [
  "proc-macro2",
  "quote",
  "strsim 0.10.0",
- "syn 2.0.57",
+ "syn 2.0.58",
 ]
 
@@ -607,7 +613,7 @@ checksum = "a668eda54683121533a393014d8692171709ff57a7d61f187b6e782719f8933f"
 dependencies = [
  "darling_core",
  "quote",
- "syn 2.0.57",
+ "syn 2.0.58",
 ]
 
@@ -1108,7 +1114,7 @@ dependencies = [
  "Inflector",
  "proc-macro2",
  "quote",
- "syn 2.0.57",
+ "syn 2.0.58",
 ]
 
 [[package]]
@@ -1271,9 +1277,9 @@ checksum = "9c198f91728a82281a64e1f4f9eeb25d82cb32a5de251c6bd1b5154d63a8e7bd"
 
 [[package]]
 name = "libcst"
-version = "1.2.0"
+version = "1.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "890ee958b936e712c6f1c184f208176973e73c2e4f8d3cf499f94eb112f647f9"
+checksum = "6f1e25d1b119ab5c2f15a6e081bb94a8d547c5c2ad065f5fd0dbb683f31ced91"
 dependencies = [
  "chic",
  "libcst_derive",
@@ -1286,12 +1292,12 @@ dependencies = [
 
 [[package]]
 name = "libcst_derive"
-version = "1.2.0"
+version = "1.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1dbd2f3cd9346422ebdc3a614aed6969d4e0b3e9c10517f33b30326acf894c11"
+checksum = "4a5011f2d59093de14a4a90e01b9d85dee9276e58a25f0107dcee167dd601be0"
 dependencies = [
  "quote",
- "syn 2.0.57",
+ "syn 2.0.58",
 ]
 
 [[package]]
@@ -1437,20 +1443,15 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e4a24736216ec316047a1fc4252e27dabb04218aa4a3f37c6e7ddbf1f9782b54"
[[package]]
name = "nextest-workspace-hack"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d906846a98739ed9d73d66e62c2641eef8321f1734b7a1156ab045a0248fb2b3"
[[package]]
name = "nix"
version = "0.26.4"
version = "0.28.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "598beaf3cc6fdd9a5dfb1630c2800c7acd31df7aaf0f565796fba2b53ca1af1b"
checksum = "ab2156c4fce2f8df6c499cc1c763e4394b7482525bf2a9701c9d79d215f519e4"
dependencies = [
"bitflags 1.3.2",
"bitflags 2.5.0",
"cfg-if",
"cfg_aliases",
"libc",
]
@@ -1757,7 +1758,7 @@ checksum = "52a40bc70c2c58040d2d8b167ba9a5ff59fc9dab7ad44771cfde3dcfde7a09c6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -1812,13 +1813,12 @@ dependencies = [
[[package]]
name = "quick-junit"
version = "0.3.5"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b9599bffc2cd7511355996e0cfd979266b2cfa3f3ff5247d07a3a6e1ded6158"
checksum = "d1a341ae463320e9f8f34adda49c8a85d81d4e8f34cce4397fb0350481552224"
dependencies = [
"chrono",
"indexmap",
"nextest-workspace-hack",
"quick-xml",
"strip-ansi-escapes",
"thiserror",
@@ -1975,7 +1975,7 @@ dependencies = [
"pmutil",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -1995,7 +1995,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.3.5"
version = "0.3.7"
dependencies = [
"anyhow",
"argfile",
@@ -2157,7 +2157,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.3.5"
version = "0.3.7"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2225,7 +2225,7 @@ dependencies = [
"proc-macro2",
"quote",
"ruff_python_trivia",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2429,7 +2429,7 @@ dependencies = [
[[package]]
name = "ruff_shrinking"
version = "0.3.5"
version = "0.3.7"
dependencies = [
"anyhow",
"clap",
@@ -2671,7 +2671,7 @@ checksum = "7eb0b34b42edc17f6b7cac84a52a1c5f0e1bb2227e997ca9011ea3dd34e8610b"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2704,7 +2704,7 @@ checksum = "0b2e6b945e9d3df726b65d6ee24060aff8e3533d431f677a9695db04eff9dfdb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2745,7 +2745,7 @@ dependencies = [
"darling",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2855,7 +2855,7 @@ dependencies = [
"proc-macro2",
"quote",
"rustversion",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2877,9 +2877,9 @@ dependencies = [
[[package]]
name = "syn"
version = "2.0.57"
version = "2.0.58"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11a6ae1e52eb25aab8f3fb9fca13be982a373b8f1157ca14b897a825ba4a2d35"
checksum = "44cfb93f38070beee36b3fef7d4f5a16f27751d94b187b666a5cc5e9b0d30687"
dependencies = [
"proc-macro2",
"quote",
@@ -2950,7 +2950,7 @@ dependencies = [
"cfg-if",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2961,7 +2961,7 @@ checksum = "5c89e72a01ed4c579669add59014b9a524d609c0c88c6a585ce37485879f6ffb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
"test-case-core",
]
@@ -2982,7 +2982,7 @@ checksum = "c61f3ba182994efc43764a46c018c347bc492c79f024e705f46567b418f6d4f7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3103,7 +3103,7 @@ checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3339,7 +3339,7 @@ checksum = "9881bea7cbe687e36c9ab3b778c36cd0487402e270304e8b1296d5085303c1a2"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3424,7 +3424,7 @@ dependencies = [
"once_cell",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
"wasm-bindgen-shared",
]
@@ -3458,7 +3458,7 @@ checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
@@ -3491,7 +3491,7 @@ checksum = "b7f89739351a2e03cb94beb799d47fb2cac01759b40ec441f7de39b00cbf7ef0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3515,14 +3515,14 @@ dependencies = [
[[package]]
name = "which"
version = "4.4.2"
version = "6.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87ba24419a2078cd2b0f2ede2691b6c66d8e47836da3b6db8265ebad47afbfc7"
checksum = "8211e4f58a2b2805adfbefbc07bab82958fc91e3836339b1ab7ae32465dce0d7"
dependencies = [
"either",
"home",
"once_cell",
"rustix",
"winsafe",
]
[[package]]
@@ -3715,6 +3715,12 @@ dependencies = [
"memchr",
]
[[package]]
name = "winsafe"
version = "0.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d135d17ab770252ad95e9a872d365cf3090e3be864a34ab46f48555993efc904"
[[package]]
name = "yansi"
version = "0.5.1"
@@ -3747,7 +3753,7 @@ checksum = "9ce1b18ccd8e73a9321186f97e46f9f04b778851177567b1975109d26a08d2a6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]


@@ -23,7 +23,7 @@ cachedir = { version = "0.3.1" }
chrono = { version = "0.4.35", default-features = false, features = ["clock"] }
clap = { version = "4.5.3", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clearscreen = { version = "2.0.0" }
clearscreen = { version = "3.0.0" }
codspeed-criterion-compat = { version = "2.4.0", default-features = false }
colored = { version = "2.1.0" }
console_error_panic_hook = { version = "0.1.7" }


@@ -151,7 +151,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.3.5
rev: v0.3.7
hooks:
# Run the linter.
- id: ruff


@@ -3,9 +3,11 @@
extend-exclude = ["**/resources/**/*", "**/snapshots/**/*"]
[default.extend-words]
"arange" = "arange" # e.g. `numpy.arange`
hel = "hel"
whos = "whos"
spawnve = "spawnve"
ned = "ned"
pn = "pn" # `import panel as pn` is a thing
poit = "poit"
BA = "BA" # acronym for "Bad Allowed", used in testing.


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.3.5"
version = "0.3.7"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -375,15 +375,17 @@ pub(crate) fn init(path: &Path) -> Result<()> {
fs::create_dir_all(path.join(VERSION))?;
// Add the CACHEDIR.TAG.
if !cachedir::is_tagged(path)? {
cachedir::add_tag(path)?;
}
cachedir::ensure_tag(path)?;
// Add the .gitignore.
let gitignore_path = path.join(".gitignore");
if !gitignore_path.exists() {
let mut file = fs::File::create(gitignore_path)?;
file.write_all(b"# Automatically created by ruff.\n*\n")?;
match fs::OpenOptions::new()
.write(true)
.create_new(true)
.open(path.join(".gitignore"))
{
Ok(mut file) => file.write_all(b"# Automatically created by ruff.\n*\n")?,
Err(err) if err.kind() == io::ErrorKind::AlreadyExists => (),
Err(err) => return Err(err.into()),
}
Ok(())


@@ -149,6 +149,13 @@ pub fn run(
#[cfg(windows)]
assert!(colored::control::set_virtual_terminal(true).is_ok());
// support FORCE_COLOR env var
if let Some(force_color) = std::env::var_os("FORCE_COLOR") {
if force_color.len() > 0 {
colored::control::set_override(true);
}
}
set_up_logging(global_options.log_level())?;
if let Some(deprecated_alias_warning) = deprecated_alias_warning {


@@ -1168,3 +1168,119 @@ def func():
Ok(())
}
/// Per-file selects via ! negation in per-file-ignores
#[test]
fn negated_per_file_ignores() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.per-file-ignores]
"!selected.py" = ["RUF"]
"#,
)?;
let selected = tempdir.path().join("selected.py");
fs::write(selected, "")?;
let ignored = tempdir.path().join("ignored.py");
fs::write(ignored, "")?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
success: false
exit_code: 1
----- stdout -----
selected.py:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
Ok(())
}
#[test]
fn negated_per_file_ignores_absolute() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.per-file-ignores]
"!src/**.py" = ["RUF"]
"#,
)?;
let src_dir = tempdir.path().join("src");
fs::create_dir(&src_dir)?;
let selected = src_dir.join("selected.py");
fs::write(selected, "")?;
let ignored = tempdir.path().join("ignored.py");
fs::write(ignored, "")?;
insta::with_settings!({filters => vec![
// Replace windows paths
(r"\\", "/"),
]}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
success: false
exit_code: 1
----- stdout -----
src/selected.py:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
});
Ok(())
}
/// Patterns are additive; a negative pattern can't be used to "un-ignore" rules.
#[test]
fn negated_per_file_ignores_overlap() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.per-file-ignores]
"*.py" = ["RUF"]
"!foo.py" = ["RUF"]
"#,
)?;
let foo_file = tempdir.path().join("foo.py");
fs::write(foo_file, "")?;
let bar_file = tempdir.path().join("bar.py");
fs::write(bar_file, "")?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
"###);
Ok(())
}
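Taken together, these tests pin down the negation semantics described in the doc comments. As a sketch (file names drawn from the tests above), the behavior corresponds to configurations like:

```toml
[lint.per-file-ignores]
# Negated pattern: ignore all RUF rules in every file that does NOT
# match `selected.py`.
"!selected.py" = ["RUF"]

# Patterns are additive: once a positive pattern ignores a rule for a
# file, a negated pattern cannot "un-ignore" it for that file.
"*.py" = ["RUF"]
"!foo.py" = ["RUF"]  # foo.py still has RUF ignored via "*.py"
```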


@@ -50,6 +50,7 @@ file_resolver.exclude = [
"venv",
]
file_resolver.extend_exclude = [
"crates/ruff/resources/",
"crates/ruff_linter/resources/",
"crates/ruff_python_formatter/resources/",
]


@@ -71,6 +71,14 @@ impl Diagnostic {
}
}
/// Consumes `self` and returns a new `Diagnostic` with the given parent node.
#[inline]
#[must_use]
pub fn with_parent(mut self, parent: TextSize) -> Self {
self.set_parent(parent);
self
}
/// Set the location of the diagnostic's parent node.
#[inline]
pub fn set_parent(&mut self, parent: TextSize) {


@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.3.5"
version = "0.3.7"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -0,0 +1,160 @@
"""
Should emit:
B909 - on lines 11, 25, 26, 40, 46
"""
# lists
some_list = [1, 2, 3]
some_other_list = [1, 2, 3]
for elem in some_list:
# errors
some_list.remove(0)
del some_list[2]
some_list.append(elem)
some_list.sort()
some_list.reverse()
some_list.clear()
some_list.extend([1, 2])
some_list.insert(1, 1)
some_list.pop(1)
some_list.pop()
# conditional break should error
if elem == 2:
some_list.remove(0)
if elem == 3:
break
# non-errors
some_other_list.remove(elem)
del some_list
del some_other_list
found_idx = some_list.index(elem)
some_list = 3
# unconditional break should not error
if elem == 2:
some_list.remove(elem)
break
# dicts
mydicts = {"a": {"foo": 1, "bar": 2}}
for elem in mydicts:
# errors
mydicts.popitem()
mydicts.setdefault("foo", 1)
mydicts.update({"foo": "bar"})
# no errors
elem.popitem()
elem.setdefault("foo", 1)
elem.update({"foo": "bar"})
# sets
myset = {1, 2, 3}
for _ in myset:
# errors
myset.update({4, 5})
myset.intersection_update({4, 5})
myset.difference_update({4, 5})
myset.symmetric_difference_update({4, 5})
myset.add(4)
myset.discard(3)
# no errors
del myset
# members
class A:
some_list: list
def __init__(self, ls):
self.some_list = list(ls)
a = A((1, 2, 3))
# ensure member accesses are handled as errors
for elem in a.some_list:
a.some_list.remove(0)
del a.some_list[2]
# Augassign should error
foo = [1, 2, 3]
bar = [4, 5, 6]
for _ in foo:
foo *= 2
foo += bar
foo[1] = 9
foo[1:2] = bar
foo[1:2:3] = bar
foo = {1, 2, 3}
bar = {4, 5, 6}
for _ in foo: # should error
foo |= bar
foo &= bar
foo -= bar
foo ^= bar
# more tests for unconditional breaks - should not error
for _ in foo:
foo.remove(1)
for _ in bar:
bar.remove(1)
break
break
# should not error
for _ in foo:
foo.remove(1)
for _ in bar:
...
break
# should error (?)
for _ in foo:
foo.remove(1)
if bar:
bar.remove(1)
break
break
# should error
for _ in foo:
if bar:
pass
else:
foo.remove(1)
# should error
for elem in some_list:
if some_list.pop() == 2:
pass
# should not error
for elem in some_list:
if some_list.pop() == 2:
break
# should error
for elem in some_list:
if some_list.pop() == 2:
pass
else:
break
# should not error
for elem in some_list:
del some_list[elem]
some_list[elem] = 1
some_list.remove(elem)
some_list.discard(elem)


@@ -195,6 +195,13 @@ class BadAsyncIterator(collections.abc.AsyncIterator[str]):
def __aiter__(self) -> typing.AsyncIterator[str]:
... # Y034 "__aiter__" methods in classes like "BadAsyncIterator" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadAsyncIterator.__aiter__", e.g. "def __aiter__(self) -> Self: ..." # Y022 Use "collections.abc.AsyncIterator[T]" instead of "typing.AsyncIterator[T]" (PEP 585 syntax)
class SubclassOfBadIterator3(BadIterator3):
def __iter__(self) -> Iterator[int]: # Y034
...
class SubclassOfBadAsyncIterator(BadAsyncIterator):
def __aiter__(self) -> collections.abc.AsyncIterator[str]: # Y034
...
class AsyncIteratorReturningAsyncIterable:
def __aiter__(self) -> AsyncIterable[str]:
@@ -225,6 +232,11 @@ class MetaclassInWhichSelfCannotBeUsed4(ABCMeta):
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed4) -> MetaclassInWhichSelfCannotBeUsed4: ...
class SubclassOfMetaclassInWhichSelfCannotBeUsed(MetaclassInWhichSelfCannotBeUsed4):
def __new__(cls) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
def __enter__(self) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
async def __aenter__(self) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
def __isub__(self, other: SubclassOfMetaclassInWhichSelfCannotBeUsed) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
class Abstract(Iterator[str]):
@abstractmethod


@@ -80,5 +80,13 @@ def test_single_list_of_lists(param):
@pytest.mark.parametrize("a", [1, 2])
@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
@pytest.mark.parametrize("d", [3,])
def test_multiple_decorators(a, b, c):
@pytest.mark.parametrize(
"d",
[("3", "4")],
)
@pytest.mark.parametrize(
"e",
[("3", "4"),],
)
def test_multiple_decorators(a, b, c, d, e):
pass


@@ -5,3 +5,5 @@ this_should_be_linted = f'double {"quote"} string'
# https://github.com/astral-sh/ruff/issues/10546
x: "Literal['foo', 'bar']"
# https://github.com/astral-sh/ruff/issues/10761
f"Before {f'x {x}' if y else f'foo {z}'} after"


@@ -52,32 +52,32 @@ def f():
return False
def f():
# SIM103
if a:
return False
else:
return True
def f():
# OK
if a:
return False
else:
return False
def f():
# OK
if a:
return True
else:
return True
def f():
# SIM103 (but not fixable)
if a:
return False
else:
return True
def f():
# OK
if a:
return False
else:
return False
def f():
# OK
if a:
return True
else:
return True
def f():
# OK
def bool():
return False
if a:
@@ -86,6 +86,14 @@ def f():
return False
def f():
# SIM103
if keys is not None and notice.key not in keys:
return False
else:
return True
###
# Positive cases (preview)
###


@@ -6,6 +6,14 @@ class Bad(namedtuple("foo", ["str", "int"])): # SLOT002
pass
class UnusualButStillBad(NamedTuple("foo", [("x", int, "y", int)])): # SLOT002
pass
class UnusualButOkay(NamedTuple("foo", [("x", int, "y", int)])):
__slots__ = ()
class Good(namedtuple("foo", ["str", "int"])): # OK
__slots__ = ("foo",)


@@ -25,3 +25,9 @@ def non_ascii():
def all_caps():
"""th•s is not capitalized."""
def single_word():
"""singleword."""
def single_word_no_dot():
"""singleword"""


@@ -1,6 +1,6 @@
"""Tests for constructs allowed in `.pyi` stub files but not at runtime"""
from typing import Optional, TypeAlias, Union
from typing import Generic, NewType, Optional, TypeAlias, TypeVar, Union
__version__: str
__author__: str
@@ -33,6 +33,19 @@ class Leaf: ...
class Tree(list[Tree | Leaf]): ... # valid in a `.pyi` stub file, not in a `.py` runtime file
class Tree2(list["Tree | Leaf"]): ... # always okay
# Generic bases can have forward references in stubs
class Foo(Generic[T]): ...
T = TypeVar("T")
class Bar(Foo[Baz]): ...
class Baz: ...
# bases in general can be forward references in stubs
class Eggs(Spam): ...
class Spam: ...
# NewType can have forward references
MyNew = NewType("MyNew", MyClass)
# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
foo: int
@@ -42,3 +55,6 @@ class MyClass:
baz: MyClass
eggs = baz # valid in a `.pyi` stub file, not in a `.py` runtime file
eggs = "baz" # always okay
class Blah:
class Blah2(Blah): ...


@@ -0,0 +1,13 @@
"""Respect `# noqa` directives on `__all__` definitions."""
__all__ = [ # noqa: F822
"Bernoulli",
"Beta",
"Binomial",
]
__all__ += [
"ContinuousBernoulli", # noqa: F822
"ExponentialFamily",
]


@@ -9,3 +9,22 @@ x = 1
x = 1 # noqa: F401, W203
# noqa: F401
# noqa: F401, W203
# OK
x = 2 # noqa: X100
x = 2 # noqa:X100
# PGH004
x = 2 # noqa X100
# PGH004
x = 2 # noqa X100, X200
# PGH004
x = 2 # noqa : X300
# PGH004
x = 2 # noqa : X400
# PGH004
x = 2 # noqa :X500


@@ -0,0 +1,44 @@
class Wolf:
@staticmethod
def eat(self): # [bad-staticmethod-argument]
pass
class Wolf:
@staticmethod
def eat(sheep):
pass
class Sheep:
@staticmethod
def eat(cls, x, y, z): # [bad-staticmethod-argument]
pass
@staticmethod
def sleep(self, x, y, z): # [bad-staticmethod-argument]
pass
def grow(self, x, y, z):
pass
@classmethod
def graze(cls, x, y, z):
pass
class Foo:
@staticmethod
def eat(x, self, z):
pass
@staticmethod
def sleep(x, cls, z):
pass
def grow(self, x, y, z):
pass
@classmethod
def graze(cls, x, y, z):
pass


@@ -0,0 +1,136 @@
# pylint: disable=missing-docstring, invalid-name, too-few-public-methods, redefined-outer-name
value = 10
value2 = 0
value3 = 3
# Positive
if value < 10: # [max-instead-of-if]
value = 10
if value <= 10: # [max-instead-of-if]
value = 10
if value < value2: # [max-instead-of-if]
value = value2
if value > 10: # [min-instead-of-if]
value = 10
if value >= 10: # [min-instead-of-if]
value = 10
if value > value2: # [min-instead-of-if]
value = value2
class A:
def __init__(self):
self.value = 13
A1 = A()
if A1.value < 10: # [max-instead-of-if]
A1.value = 10
if A1.value > 10: # [min-instead-of-if]
A1.value = 10
class AA:
def __init__(self, value):
self.value = value
def __gt__(self, b):
return self.value > b
def __ge__(self, b):
return self.value >= b
def __lt__(self, b):
return self.value < b
def __le__(self, b):
return self.value <= b
A1 = AA(0)
A2 = AA(3)
if A2 < A1: # [max-instead-of-if]
A2 = A1
if A2 <= A1: # [max-instead-of-if]
A2 = A1
if A2 > A1: # [min-instead-of-if]
A2 = A1
if A2 >= A1: # [min-instead-of-if]
A2 = A1
# Negative
if value < 10:
value = 2
if value <= 3:
value = 5
if value < 10:
value = 2
value2 = 3
if value < value2:
value = value3
if value < 5:
value = value3
if 2 < value <= 3:
value = 1
if value < 10:
value = 10
else:
value = 3
if value <= 3:
value = 5
elif value == 3:
value = 2
if value > 10:
value = 2
if value >= 3:
value = 5
if value > 10:
value = 2
value2 = 3
if value > value2:
value = value3
if value > 5:
value = value3
if 2 > value >= 3:
value = 1
if value > 10:
value = 10
else:
value = 3
if value >= 3:
value = 5
elif value == 3:
value = 2
# Parenthesized expressions
if value.attr > 3:
(
value.
attr
) = 3


@@ -0,0 +1,55 @@
# Errors
some_string = "some string"
index, a_number, to_multiply, to_divide, to_cube, timeDiffSeconds, flags = (
0,
1,
2,
3,
4,
5,
0x3,
)
a_list = [1, 2]
some_set = {"elem"}
mat1, mat2 = None, None
some_string = some_string + "a very long end of string"
index = index - 1
a_list = a_list + ["to concat"]
some_set = some_set | {"to concat"}
to_multiply = to_multiply * 5
to_divide = to_divide / 5
to_divide = to_divide // 5
to_cube = to_cube**3
to_cube = 3**to_cube
to_cube = to_cube**to_cube
timeDiffSeconds = timeDiffSeconds % 60
flags = flags & 0x1
flags = flags | 0x1
flags = flags ^ 0x1
flags = flags << 1
flags = flags >> 1
mat1 = mat1 @ mat2
a_list[1] = a_list[1] + 1
a_list[0:2] = a_list[0:2] * 3
a_list[:2] = a_list[:2] * 3
a_list[1:] = a_list[1:] * 3
a_list[:] = a_list[:] * 3
index = index * (index + 10)
class T:
def t(self):
self.a = self.a + 1
obj = T()
obj.a = obj.a + 1
# OK
a_list[0] = a_list[:] * 3
index = a_number = a_number + 1
a_number = index = a_number + 1
index = index * index + 10


@@ -0,0 +1,13 @@
from enum import Enum
class A(str, Enum): ...
class B(Enum, str): ...
class D(int, str, Enum): ...
class E(str, int, Enum): ...


@@ -28,10 +28,6 @@ with open("file.txt", encoding="utf8") as f:
with open("file.txt", errors="ignore") as f:
x = f.read()
# FURB101
with open("file.txt", errors="ignore", mode="rb") as f:
x = f.read()
# FURB101
with open("file.txt", mode="r") as f: # noqa: FURB120
x = f.read()
@@ -60,6 +56,11 @@ with foo() as a, open("file.txt") as b, foo() as c:
# Non-errors.
# Path.read_bytes does not support any kwargs
with open("file.txt", errors="ignore", mode="rb") as f:
x = f.read()
f2 = open("file2.txt")
with open("file.txt") as f:
x = f2.read()


@@ -0,0 +1,132 @@
def foo():
...
def bar(x):
...
# Errors.
# FURB103
with open("file.txt", "w") as f:
f.write("test")
# FURB103
with open("file.txt", "wb") as f:
f.write(foobar)
# FURB103
with open("file.txt", mode="wb") as f:
f.write(b"abc")
# FURB103
with open("file.txt", "w", encoding="utf8") as f:
f.write(foobar)
# FURB103
with open("file.txt", "w", errors="ignore") as f:
f.write(foobar)
# FURB103
with open("file.txt", mode="w") as f:
f.write(foobar)
# FURB103
with open(foo(), "wb") as f:
# The body of `with` is non-trivial, but the recommendation holds.
bar("pre")
f.write(bar())
bar("post")
print("Done")
# FURB103
with open("a.txt", "w") as a, open("b.txt", "wb") as b:
a.write(x)
b.write(y)
# FURB103
with foo() as a, open("file.txt", "w") as b, foo() as c:
# We have other things in here, multiple with items, but the user
# writes a single time to file and that bit they can replace.
bar(a)
b.write(bar(bar(a + x)))
bar(c)
# FURB103
with open("file.txt", "w", newline="\r\n") as f:
f.write(foobar)
# Non-errors.
with open("file.txt", errors="ignore", mode="wb") as f:
# Path.write_bytes() does not support errors
f.write(foobar)
f2 = open("file2.txt", "w")
with open("file.txt", "w") as f:
f2.write(x)
# mode is dynamic
with open("file.txt", foo()) as f:
f.write(x)
# keyword mode is incorrect
with open("file.txt", mode="a+") as f:
f.write(x)
# enables line buffering, not supported in write_text()
with open("file.txt", buffering=1) as f:
f.write(x)
# don't mistake "newline" for "mode"
with open("file.txt", newline="wb") as f:
f.write(x)
# I guess we can possibly also report this case, but the question
# is why the user would put "w+" here in the first place.
with open("file.txt", "w+") as f:
f.write(x)
# Even though we write the whole file, we do other things.
with open("file.txt", "w") as f:
f.write(x)
f.seek(0)
x += f.read(100)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open(*filename, mode="w") as f:
f.write(x)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open(**kwargs) as f:
f.write(x)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open("file.txt", **kwargs) as f:
f.write(x)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open("file.txt", mode="w", **kwargs) as f:
f.write(x)
# This could error (but doesn't), since it can't contain unsupported arguments, like
# `buffering`.
with open(*filename, mode="w") as f:
f.write(x)
# This could error (but doesn't), since it can't contain unsupported arguments, like
# `buffering`.
with open(*filename, file="file.txt", mode="w") as f:
f.write(x)
# Loops imply multiple writes
with open("file.txt", "w") as f:
while x < 0:
f.write(foobar)
with open("file.txt", "w") as f:
for line in text:
f.write(line)


@@ -0,0 +1,40 @@
z = x if x else y # FURB110
z = x \
if x else y # FURB110
z = x if x \
else \
y # FURB110
z = x() if x() else y() # FURB110
# FURB110
z = x if (
# Test for x.
x
) else (
# Test for y.
y
)
# FURB110
z = (
x if (
# Test for x.
x
) else (
# Test for y.
y
)
)
# FURB110
z = (
x if
# If true, use x.
x
# Otherwise, use y.
else
y
)


@@ -10,7 +10,7 @@ op_mult = lambda x, y: x * y
op_matmutl = lambda x, y: x @ y
op_truediv = lambda x, y: x / y
op_mod = lambda x, y: x % y
op_pow = lambda x, y: x ** y
op_pow = lambda x, y: x**y
op_lshift = lambda x, y: x << y
op_rshift = lambda x, y: x >> y
op_bitor = lambda x, y: x | y
@@ -27,6 +27,10 @@ op_gte = lambda x, y: x >= y
op_is = lambda x, y: x is y
op_isnot = lambda x, y: x is not y
op_in = lambda x, y: y in x
op_itemgetter = lambda x: x[0]
op_itemgetter = lambda x: (x[0], x[1], x[2])
op_itemgetter = lambda x: (x[1:], x[2])
op_itemgetter = lambda x: x[:]
def op_not2(x):
@@ -41,21 +45,30 @@ class Adder:
def add(x, y):
return x + y
# OK.
op_add3 = lambda x, y = 1: x + y
op_add3 = lambda x, y=1: x + y
op_neg2 = lambda x, y: y - x
op_notin = lambda x, y: y not in x
op_and = lambda x, y: y and x
op_or = lambda x, y: y or x
op_in = lambda x, y: x in y
op_itemgetter = lambda x: (1, x[1], x[2])
op_itemgetter = lambda x: (x.y, x[1], x[2])
op_itemgetter = lambda x, y: (x[0], y[0])
op_itemgetter = lambda x, y: (x[0], y[0])
op_itemgetter = lambda x: ()
op_itemgetter = lambda x: (*x[0], x[1])
def op_neg3(x, y):
return y - x
def op_add4(x, y = 1):
def op_add4(x, y=1):
return x + y
def op_add5(x, y):
print("op_add5")
return x + y


@@ -30,6 +30,9 @@ pub(crate) fn deferred_for_loops(checker: &mut Checker) {
if checker.enabled(Rule::EnumerateForLoop) {
flake8_simplify::rules::enumerate_for_loop(checker, stmt_for);
}
if checker.enabled(Rule::LoopIteratorMutation) {
flake8_bugbear::rules::loop_iterator_mutation(checker, stmt_for);
}
}
}
}


@@ -15,15 +15,19 @@ use crate::rules::{
pub(crate) fn deferred_scopes(checker: &mut Checker) {
if !checker.any_enabled(&[
Rule::AsyncioDanglingTask,
Rule::BadStaticmethodArgument,
Rule::BuiltinAttributeShadowing,
Rule::GlobalVariableNotAssigned,
Rule::ImportPrivateName,
Rule::ImportShadowedByLoopVar,
Rule::InvalidFirstArgumentNameForMethod,
Rule::InvalidFirstArgumentNameForClassMethod,
Rule::InvalidFirstArgumentNameForMethod,
Rule::NoSelfUse,
Rule::RedefinedArgumentFromLocal,
Rule::RedefinedWhileUnused,
Rule::RuntimeImportInTypeCheckingBlock,
Rule::SingledispatchMethod,
Rule::SingledispatchmethodFunction,
Rule::TooManyLocals,
Rule::TypingOnlyFirstPartyImport,
Rule::TypingOnlyStandardLibraryImport,
@@ -31,19 +35,16 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
Rule::UndefinedLocal,
Rule::UnusedAnnotation,
Rule::UnusedClassMethodArgument,
Rule::BuiltinAttributeShadowing,
Rule::UnusedFunctionArgument,
Rule::UnusedImport,
Rule::UnusedLambdaArgument,
Rule::UnusedMethodArgument,
Rule::UnusedPrivateProtocol,
Rule::UnusedPrivateTypeAlias,
Rule::UnusedPrivateTypeVar,
Rule::UnusedPrivateTypedDict,
Rule::UnusedPrivateTypeVar,
Rule::UnusedStaticMethodArgument,
Rule::UnusedVariable,
Rule::SingledispatchMethod,
Rule::SingledispatchmethodFunction,
]) {
return;
}
@@ -424,6 +425,10 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
pylint::rules::singledispatchmethod_function(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::BadStaticmethodArgument) {
pylint::rules::bad_staticmethod_argument(checker, scope, &mut diagnostics);
}
if checker.any_enabled(&[
Rule::InvalidFirstArgumentNameForClassMethod,
Rule::InvalidFirstArgumentNameForMethod,


@@ -32,10 +32,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if let Some(operator) = typing::to_pep604_operator(value, slice, &checker.semantic)
{
if checker.enabled(Rule::FutureRewritableTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py310
&& checker.settings.target_version >= PythonVersion::Py37
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing
{
@@ -48,7 +47,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.source_type.is_stub()
|| checker.settings.target_version >= PythonVersion::Py310
|| (checker.settings.target_version >= PythonVersion::Py37
&& checker.semantic.future_annotations()
&& checker.semantic.future_annotations_or_stub()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing)
{
@@ -60,9 +59,8 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
// Ex) list[...]
if checker.enabled(Rule::FutureRequiredTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py39
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& typing::is_pep585_generic(value, &checker.semantic)
{
@@ -186,10 +184,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
typing::to_pep585_generic(expr, &checker.semantic)
{
if checker.enabled(Rule::FutureRewritableTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py39
&& checker.settings.target_version >= PythonVersion::Py37
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing
{
@@ -200,7 +197,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.source_type.is_stub()
|| checker.settings.target_version >= PythonVersion::Py39
|| (checker.settings.target_version >= PythonVersion::Py37
&& checker.semantic.future_annotations()
&& checker.semantic.future_annotations_or_stub()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing)
{
@@ -270,10 +267,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
]) {
if let Some(replacement) = typing::to_pep585_generic(expr, &checker.semantic) {
if checker.enabled(Rule::FutureRewritableTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py39
&& checker.settings.target_version >= PythonVersion::Py37
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing
{
@@ -286,7 +282,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.source_type.is_stub()
|| checker.settings.target_version >= PythonVersion::Py39
|| (checker.settings.target_version >= PythonVersion::Py37
&& checker.semantic.future_annotations()
&& checker.semantic.future_annotations_or_stub()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing)
{
@@ -1176,9 +1172,8 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
}) => {
// Ex) `str | None`
if checker.enabled(Rule::FutureRequiredTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py310
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
{
flake8_future_annotations::rules::future_required_type_annotation(
@@ -1363,6 +1358,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::IfExprMinMax) {
refurb::rules::if_expr_min_max(checker, if_exp);
}
if checker.enabled(Rule::IfExpInsteadOfOrOperator) {
refurb::rules::if_exp_instead_of_or_operator(checker, if_exp);
}
}
Expr::ListComp(
comp @ ast::ExprListComp {

View File

@@ -406,6 +406,11 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::UselessObjectInheritance) {
pyupgrade::rules::useless_object_inheritance(checker, class_def);
}
if checker.enabled(Rule::ReplaceStrEnum) {
if checker.settings.target_version >= PythonVersion::Py311 {
pyupgrade::rules::replace_str_enum(checker, class_def);
}
}
if checker.enabled(Rule::UnnecessaryClassParentheses) {
pyupgrade::rules::unnecessary_class_parentheses(checker, class_def);
}
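For context, the `ReplaceStrEnum` (UP042) check above is gated on Python 3.11+ because `enum.StrEnum` only exists from 3.11 onward. A hedged sketch of the before/after pattern the rule targets (class names are illustrative):

```python
import enum
import sys

# Pre-3.11 idiom that UP042 flags: inheriting from both `str` and `enum.Enum`.
class Color(str, enum.Enum):
    RED = "red"

# On Python >= 3.11, the suggested replacement is `enum.StrEnum`,
# which gives the same str behavior with a single base.
if sys.version_info >= (3, 11):
    class ColorStr(enum.StrEnum):
        RED = "red"
```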
@@ -1112,6 +1117,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::TooManyBooleanExpressions) {
pylint::rules::too_many_boolean_expressions(checker, if_);
}
if checker.enabled(Rule::IfStmtMinMax) {
pylint::rules::if_stmt_min_max(checker, if_);
}
if checker.source_type.is_stub() {
if checker.any_enabled(&[
Rule::UnrecognizedVersionInfoCheck,
@@ -1217,6 +1225,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::ReadWholeFile) {
refurb::rules::read_whole_file(checker, with_stmt);
}
if checker.enabled(Rule::WriteWholeFile) {
refurb::rules::write_whole_file(checker, with_stmt);
}
if checker.enabled(Rule::UselessWithLock) {
pylint::rules::useless_with_lock(checker, with_stmt);
}
@@ -1257,6 +1268,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.any_enabled(&[
Rule::EnumerateForLoop,
Rule::IncorrectDictIterator,
Rule::LoopIteratorMutation,
Rule::UnnecessaryEnumerate,
Rule::UnusedLoopControlVariable,
Rule::YieldInForLoop,
@@ -1467,6 +1479,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.settings.rules.enabled(Rule::TypeBivariance) {
pylint::rules::type_bivariance(checker, value);
}
if checker.enabled(Rule::NonAugmentedAssignment) {
pylint::rules::non_augmented_assignment(checker, assign);
}
if checker.settings.rules.enabled(Rule::UnsortedDunderAll) {
ruff::rules::sort_dunder_all_assign(checker, assign);
}
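For reference, the shape that `NonAugmentedAssignment` (PLR6104, wired up above) detects, with illustrative variable names:

```python
x = 3

# Flagged: `<expr> = <expr> <binary-operator> ...` with matching left-hand sides.
x = x + 2

# The unsafe fix rewrites it into the augmented form.
x += 2
```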

View File

@@ -56,9 +56,10 @@ impl AnnotationContext {
_ => {}
}
// If `__future__` annotations are enabled, then annotations are never evaluated
// at runtime, so we can treat them as typing-only.
if semantic.future_annotations() {
// If `__future__` annotations are enabled or it's a stub file,
// then annotations are never evaluated at runtime,
// so we can treat them as typing-only.
if semantic.future_annotations_or_stub() {
return Self::TypingOnly;
}
@@ -87,7 +88,7 @@ impl AnnotationContext {
semantic,
) {
Self::RuntimeRequired
} else if semantic.future_annotations() {
} else if semantic.future_annotations_or_stub() {
Self::TypingOnly
} else {
Self::RuntimeEvaluated
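The typing-only treatment above can be observed directly in Python: under PEP 563 semantics, annotations are stored as strings and never evaluated at runtime, so a typing-only import suffices (a minimal sketch; the names are illustrative):

```python
from __future__ import annotations

import typing

if typing.TYPE_CHECKING:
    # Only needed by the type checker; never imported at runtime.
    from decimal import Decimal

def price() -> Decimal:
    ...
```

Because the annotation is never evaluated, `price.__annotations__["return"]` is simply the string `"Decimal"`.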

View File

@@ -12,6 +12,8 @@ pub(crate) struct Visit<'a> {
pub(crate) type_param_definitions: Vec<(&'a Expr, Snapshot)>,
pub(crate) functions: Vec<Snapshot>,
pub(crate) lambdas: Vec<Snapshot>,
/// N.B. This field should always be empty unless it's a stub file
pub(crate) class_bases: Vec<(&'a Expr, Snapshot)>,
}
impl Visit<'_> {
@@ -22,6 +24,7 @@ impl Visit<'_> {
&& self.type_param_definitions.is_empty()
&& self.functions.is_empty()
&& self.lambdas.is_empty()
&& self.class_bases.is_empty()
}
}

View File

@@ -31,15 +31,14 @@ use std::path::Path;
use itertools::Itertools;
use log::debug;
use ruff_python_ast::{
self as ast, all::DunderAllName, Comprehension, ElifElseClause, ExceptHandler, Expr,
ExprContext, Keyword, MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt,
Suite, UnaryOp,
self as ast, Comprehension, ElifElseClause, ExceptHandler, Expr, ExprContext, FStringElement,
Keyword, MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt, Suite, UnaryOp,
};
use ruff_text_size::{Ranged, TextRange, TextSize};
use ruff_diagnostics::{Diagnostic, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::all::{extract_all_names, DunderAllFlags};
use ruff_python_ast::all::{extract_all_names, DunderAllDefinition, DunderAllFlags};
use ruff_python_ast::helpers::{
collect_import_from_member, extract_handled_exceptions, is_docstring_stmt, to_module_path,
};
@@ -572,6 +571,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
match stmt {
Stmt::FunctionDef(
function_def @ ast::StmtFunctionDef {
name,
body,
parameters,
decorator_list,
@@ -691,9 +691,21 @@ impl<'a> Visitor<'a> for Checker<'a> {
if let Some(globals) = Globals::from_body(body) {
self.semantic.set_globals(globals);
}
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Function scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::FunctionDefinition(scope_id),
BindingFlags::empty(),
);
}
Stmt::ClassDef(
class_def @ ast::StmtClassDef {
name,
body,
arguments,
decorator_list,
@@ -712,7 +724,9 @@ impl<'a> Visitor<'a> for Checker<'a> {
}
if let Some(arguments) = arguments {
self.semantic.flags |= SemanticModelFlags::CLASS_BASE;
self.visit_arguments(arguments);
self.semantic.flags -= SemanticModelFlags::CLASS_BASE;
}
let definition = docstrings::extraction::extract_definition(
@@ -732,6 +746,18 @@ impl<'a> Visitor<'a> for Checker<'a> {
// Set the docstring state before visiting the class body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Class scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::ClassDefinition(scope_id),
BindingFlags::empty(),
);
}
Stmt::TypeAlias(ast::StmtTypeAlias {
range: _,
@@ -888,35 +914,6 @@ impl<'a> Visitor<'a> for Checker<'a> {
};
// Step 3: Clean-up
match stmt {
Stmt::FunctionDef(ast::StmtFunctionDef { name, .. }) => {
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Function scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::FunctionDefinition(scope_id),
BindingFlags::empty(),
);
}
Stmt::ClassDef(ast::StmtClassDef { name, .. }) => {
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Class scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::ClassDefinition(scope_id),
BindingFlags::empty(),
);
}
_ => {}
}
// Step 4: Analysis
analyze::statement(stmt, self);
@@ -935,10 +932,23 @@ impl<'a> Visitor<'a> for Checker<'a> {
fn visit_expr(&mut self, expr: &'a Expr) {
// Step 0: Pre-processing
if self.source_type.is_stub()
&& self.semantic.in_class_base()
&& !self.semantic.in_deferred_class_base()
{
self.visit
.class_bases
.push((expr, self.semantic.snapshot()));
return;
}
if !self.semantic.in_typing_literal()
// `in_deferred_type_definition()` will only be `true` if we're now visiting the deferred nodes
// after having already traversed the source tree once. If we're now visiting the deferred nodes,
// we can't defer again, or we'll infinitely recurse!
&& !self.semantic.in_deferred_type_definition()
&& self.semantic.in_type_definition()
&& self.semantic.future_annotations()
&& self.semantic.future_annotations_or_stub()
&& (self.semantic.in_annotation() || self.source_type.is_stub())
{
if let Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) = expr {
@@ -1580,6 +1590,15 @@ impl<'a> Visitor<'a> for Checker<'a> {
.push((bound, self.semantic.snapshot()));
}
}
fn visit_f_string_element(&mut self, f_string_element: &'a FStringElement) {
let snapshot = self.semantic.flags;
if f_string_element.is_expression() {
self.semantic.flags |= SemanticModelFlags::F_STRING_REPLACEMENT_FIELD;
}
visitor::walk_f_string_element(self, f_string_element);
self.semantic.flags = snapshot;
}
}
impl<'a> Checker<'a> {
@@ -1956,6 +1975,52 @@ impl<'a> Checker<'a> {
scope.add(id, binding_id);
}
/// After initial traversal of the AST, visit all class bases that were deferred.
///
/// This method should only be relevant in stub files, where forward references are
/// legal in class bases. For other kinds of Python files, using a forward reference
/// in a class base is never legal, so `self.visit.class_bases` should always be empty.
///
/// For example, in a stub file:
/// ```python
/// class Foo(list[Bar]): ... # <-- `Bar` is a forward reference in a class base
/// class Bar: ...
/// ```
fn visit_deferred_class_bases(&mut self) {
let snapshot = self.semantic.snapshot();
let deferred_bases = std::mem::take(&mut self.visit.class_bases);
debug_assert!(
self.source_type.is_stub() || deferred_bases.is_empty(),
"Class bases should never be deferred outside of stub files"
);
for (expr, snapshot) in deferred_bases {
self.semantic.restore(snapshot);
// Set this flag to avoid infinite recursion, or we'll just defer it again:
self.semantic.flags |= SemanticModelFlags::DEFERRED_CLASS_BASE;
self.visit_expr(expr);
}
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all "future type definitions".
///
/// A "future type definition" is a type definition where [PEP 563] semantics
/// apply (i.e., an annotation in a module that has `from __future__ import annotations`
/// at the top of the file, or an annotation in a stub file). These type definitions
/// support forward references, so they are deferred on initial traversal
/// of the source tree.
///
/// For example:
/// ```python
/// from __future__ import annotations
///
/// def foo() -> Bar: # <-- return annotation is a "future type definition"
/// return Bar()
///
/// class Bar: pass
/// ```
///
/// [PEP 563]: https://peps.python.org/pep-0563/
fn visit_deferred_future_type_definitions(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.future_type_definitions.is_empty() {
@@ -1963,6 +2028,14 @@ impl<'a> Checker<'a> {
for (expr, snapshot) in type_definitions {
self.semantic.restore(snapshot);
// Type definitions should only be considered "`__future__` type definitions"
// if they are annotations in a module where `from __future__ import
// annotations` is active, or they are type definitions in a stub file.
debug_assert!(
self.semantic.future_annotations_or_stub()
&& (self.source_type.is_stub() || self.semantic.in_annotation())
);
self.semantic.flags |= SemanticModelFlags::TYPE_DEFINITION
| SemanticModelFlags::FUTURE_TYPE_DEFINITION;
self.visit_expr(expr);
@@ -1971,6 +2044,19 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all [type parameter definitions].
///
/// Type parameters natively support forward references,
/// so they are always deferred during initial traversal of the source tree.
///
/// For example:
/// ```python
/// class Foo[T: Bar]: pass # <-- Forward reference used in definition of type parameter `T`
/// type X[T: Bar] = Foo[T] # <-- Ditto
/// class Bar: pass
/// ```
///
/// [type parameter definitions]: https://docs.python.org/3/reference/executionmodel.html#annotation-scopes
fn visit_deferred_type_param_definitions(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.type_param_definitions.is_empty() {
@@ -1986,6 +2072,17 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all "string type definitions",
/// i.e., type definitions that are enclosed within quotes so as to allow
/// the type definition to use forward references.
///
/// For example:
/// ```python
/// def foo() -> "Bar": # <-- return annotation is a "string type definition"
/// return Bar()
///
/// class Bar: pass
/// ```
fn visit_deferred_string_type_definitions(&mut self, allocator: &'a typed_arena::Arena<Expr>) {
let snapshot = self.semantic.snapshot();
while !self.visit.string_type_definitions.is_empty() {
@@ -1998,7 +2095,7 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
if self.semantic.in_annotation() && self.semantic.future_annotations() {
if self.semantic.in_annotation() && self.semantic.future_annotations_or_stub() {
if self.enabled(Rule::QuotedAnnotation) {
pyupgrade::rules::quoted_annotation(self, value, range);
}
@@ -2034,6 +2131,11 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all function bodies.
///
/// Function bodies are always deferred on initial traversal of the source tree,
/// as the body of a function may validly contain references to global-scope symbols
/// that were not yet defined at the point when the function was defined.
fn visit_deferred_functions(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.functions.is_empty() {
@@ -2057,8 +2159,9 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// Visit all deferred lambdas. Returns a list of snapshots, such that the caller can restore
/// the semantic model to the state it was in before visiting the deferred lambdas.
/// After initial traversal of the source tree has been completed,
/// visit all lambdas. Lambdas are deferred during the initial traversal
/// for the same reason as function bodies.
fn visit_deferred_lambdas(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.lambdas.is_empty() {
@@ -2084,10 +2187,12 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// Recursively visit all deferred AST nodes, including lambdas, functions, and type
/// annotations.
/// After initial traversal of the source tree has been completed,
/// recursively visit all AST nodes that were deferred on the first pass.
/// This includes lambdas, functions, type parameters, and type annotations.
fn visit_deferred(&mut self, allocator: &'a typed_arena::Arena<Expr>) {
while !self.visit.is_empty() {
self.visit_deferred_class_bases();
self.visit_deferred_functions();
self.visit_deferred_type_param_definitions();
self.visit_deferred_lambdas();
@@ -2100,45 +2205,54 @@ impl<'a> Checker<'a> {
fn visit_exports(&mut self) {
let snapshot = self.semantic.snapshot();
let exports: Vec<DunderAllName> = self
let definitions: Vec<DunderAllDefinition> = self
.semantic
.global_scope()
.get_all("__all__")
.map(|binding_id| &self.semantic.bindings[binding_id])
.filter_map(|binding| match &binding.kind {
BindingKind::Export(Export { names }) => Some(names.iter().copied()),
BindingKind::Export(Export { names }) => {
Some(DunderAllDefinition::new(binding.range(), names.to_vec()))
}
_ => None,
})
.flatten()
.collect();
for export in exports {
let (name, range) = (export.name(), export.range());
if let Some(binding_id) = self.semantic.global_scope().get(name) {
self.semantic.flags |= SemanticModelFlags::DUNDER_ALL_DEFINITION;
// Mark anything referenced in `__all__` as used.
self.semantic
.add_global_reference(binding_id, ExprContext::Load, range);
self.semantic.flags -= SemanticModelFlags::DUNDER_ALL_DEFINITION;
} else {
if self.semantic.global_scope().uses_star_imports() {
if self.enabled(Rule::UndefinedLocalWithImportStarUsage) {
self.diagnostics.push(Diagnostic::new(
pyflakes::rules::UndefinedLocalWithImportStarUsage {
name: name.to_string(),
},
range,
));
}
for definition in definitions {
for export in definition.names() {
let (name, range) = (export.name(), export.range());
if let Some(binding_id) = self.semantic.global_scope().get(name) {
self.semantic.flags |= SemanticModelFlags::DUNDER_ALL_DEFINITION;
// Mark anything referenced in `__all__` as used.
self.semantic
.add_global_reference(binding_id, ExprContext::Load, range);
self.semantic.flags -= SemanticModelFlags::DUNDER_ALL_DEFINITION;
} else {
if self.enabled(Rule::UndefinedExport) {
if !self.path.ends_with("__init__.py") {
self.diagnostics.push(Diagnostic::new(
pyflakes::rules::UndefinedExport {
name: name.to_string(),
},
range,
));
if self.semantic.global_scope().uses_star_imports() {
if self.enabled(Rule::UndefinedLocalWithImportStarUsage) {
self.diagnostics.push(
Diagnostic::new(
pyflakes::rules::UndefinedLocalWithImportStarUsage {
name: name.to_string(),
},
range,
)
.with_parent(definition.start()),
);
}
} else {
if self.enabled(Rule::UndefinedExport) {
if !self.path.ends_with("__init__.py") {
self.diagnostics.push(
Diagnostic::new(
pyflakes::rules::UndefinedExport {
name: name.to_string(),
},
range,
)
.with_parent(definition.start()),
);
}
}
}
}

View File

@@ -225,12 +225,12 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "C0208") => (RuleGroup::Stable, rules::pylint::rules::IterationOverSet),
(Pylint, "C0414") => (RuleGroup::Stable, rules::pylint::rules::UselessImportAlias),
(Pylint, "C0415") => (RuleGroup::Preview, rules::pylint::rules::ImportOutsideTopLevel),
(Pylint, "C2401") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiName),
(Pylint, "C2403") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiImportName),
(Pylint, "C2801") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryDunderCall),
#[allow(deprecated)]
(Pylint, "C1901") => (RuleGroup::Nursery, rules::pylint::rules::CompareToEmptyString),
(Pylint, "C2401") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiName),
(Pylint, "C2403") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiImportName),
(Pylint, "C2701") => (RuleGroup::Preview, rules::pylint::rules::ImportPrivateName),
(Pylint, "C2801") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryDunderCall),
(Pylint, "C3002") => (RuleGroup::Stable, rules::pylint::rules::UnnecessaryDirectLambdaCall),
(Pylint, "E0100") => (RuleGroup::Stable, rules::pylint::rules::YieldInInit),
(Pylint, "E0101") => (RuleGroup::Stable, rules::pylint::rules::ReturnInInit),
@@ -272,6 +272,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R0203") => (RuleGroup::Preview, rules::pylint::rules::NoStaticmethodDecorator),
(Pylint, "R0206") => (RuleGroup::Stable, rules::pylint::rules::PropertyWithParameters),
(Pylint, "R0402") => (RuleGroup::Stable, rules::pylint::rules::ManualFromImport),
(Pylint, "R0904") => (RuleGroup::Preview, rules::pylint::rules::TooManyPublicMethods),
(Pylint, "R0911") => (RuleGroup::Stable, rules::pylint::rules::TooManyReturnStatements),
(Pylint, "R0912") => (RuleGroup::Stable, rules::pylint::rules::TooManyBranches),
(Pylint, "R0913") => (RuleGroup::Stable, rules::pylint::rules::TooManyArguments),
@@ -282,31 +283,34 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R1701") => (RuleGroup::Stable, rules::pylint::rules::RepeatedIsinstanceCalls),
(Pylint, "R1702") => (RuleGroup::Preview, rules::pylint::rules::TooManyNestedBlocks),
(Pylint, "R1704") => (RuleGroup::Preview, rules::pylint::rules::RedefinedArgumentFromLocal),
(Pylint, "R1706") => (RuleGroup::Removed, rules::pylint::rules::AndOrTernary),
(Pylint, "R1711") => (RuleGroup::Stable, rules::pylint::rules::UselessReturn),
(Pylint, "R1714") => (RuleGroup::Stable, rules::pylint::rules::RepeatedEqualityComparison),
(Pylint, "R1706") => (RuleGroup::Removed, rules::pylint::rules::AndOrTernary),
(Pylint, "R1722") => (RuleGroup::Stable, rules::pylint::rules::SysExitAlias),
(Pylint, "R1730") => (RuleGroup::Preview, rules::pylint::rules::IfStmtMinMax),
(Pylint, "R1733") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryDictIndexLookup),
(Pylint, "R1736") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryListIndexLookup),
(Pylint, "R2004") => (RuleGroup::Stable, rules::pylint::rules::MagicValueComparison),
(Pylint, "R2044") => (RuleGroup::Preview, rules::pylint::rules::EmptyComment),
(Pylint, "R5501") => (RuleGroup::Stable, rules::pylint::rules::CollapsibleElseIf),
(Pylint, "R6104") => (RuleGroup::Preview, rules::pylint::rules::NonAugmentedAssignment),
(Pylint, "R6201") => (RuleGroup::Preview, rules::pylint::rules::LiteralMembership),
#[allow(deprecated)]
(Pylint, "R6301") => (RuleGroup::Nursery, rules::pylint::rules::NoSelfUse),
(Pylint, "W0108") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryLambda),
(Pylint, "W0117") => (RuleGroup::Preview, rules::pylint::rules::NanComparison),
(Pylint, "W0177") => (RuleGroup::Preview, rules::pylint::rules::NanComparison),
(Pylint, "W0120") => (RuleGroup::Stable, rules::pylint::rules::UselessElseOnLoop),
(Pylint, "W0127") => (RuleGroup::Stable, rules::pylint::rules::SelfAssigningVariable),
(Pylint, "W0128") => (RuleGroup::Preview, rules::pylint::rules::RedeclaredAssignedName),
(Pylint, "W0129") => (RuleGroup::Stable, rules::pylint::rules::AssertOnStringLiteral),
(Pylint, "W0131") => (RuleGroup::Stable, rules::pylint::rules::NamedExprWithoutContext),
(Pylint, "W0133") => (RuleGroup::Preview, rules::pylint::rules::UselessExceptionStatement),
(Pylint, "W0211") => (RuleGroup::Preview, rules::pylint::rules::BadStaticmethodArgument),
(Pylint, "W0245") => (RuleGroup::Preview, rules::pylint::rules::SuperWithoutBrackets),
(Pylint, "W0406") => (RuleGroup::Stable, rules::pylint::rules::ImportSelf),
(Pylint, "W0602") => (RuleGroup::Stable, rules::pylint::rules::GlobalVariableNotAssigned),
(Pylint, "W0604") => (RuleGroup::Preview, rules::pylint::rules::GlobalAtModuleLevel),
(Pylint, "W0603") => (RuleGroup::Stable, rules::pylint::rules::GlobalStatement),
(Pylint, "W0604") => (RuleGroup::Preview, rules::pylint::rules::GlobalAtModuleLevel),
(Pylint, "W0711") => (RuleGroup::Stable, rules::pylint::rules::BinaryOpException),
(Pylint, "W1501") => (RuleGroup::Preview, rules::pylint::rules::BadOpenMode),
(Pylint, "W1508") => (RuleGroup::Stable, rules::pylint::rules::InvalidEnvvarDefault),
@@ -316,7 +320,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
#[allow(deprecated)]
(Pylint, "W1641") => (RuleGroup::Nursery, rules::pylint::rules::EqWithoutHash),
(Pylint, "W2101") => (RuleGroup::Preview, rules::pylint::rules::UselessWithLock),
(Pylint, "R0904") => (RuleGroup::Preview, rules::pylint::rules::TooManyPublicMethods),
(Pylint, "W2901") => (RuleGroup::Stable, rules::pylint::rules::RedefinedLoopName),
#[allow(deprecated)]
(Pylint, "W3201") => (RuleGroup::Nursery, rules::pylint::rules::BadDunderMethodName),
@@ -376,6 +379,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Bugbear, "035") => (RuleGroup::Stable, rules::flake8_bugbear::rules::StaticKeyDictComprehension),
(Flake8Bugbear, "904") => (RuleGroup::Stable, rules::flake8_bugbear::rules::RaiseWithoutFromInsideExcept),
(Flake8Bugbear, "905") => (RuleGroup::Stable, rules::flake8_bugbear::rules::ZipWithoutExplicitStrict),
(Flake8Bugbear, "909") => (RuleGroup::Preview, rules::flake8_bugbear::rules::LoopIteratorMutation),
// flake8-blind-except
(Flake8BlindExcept, "001") => (RuleGroup::Stable, rules::flake8_blind_except::rules::BlindExcept),
@@ -546,6 +550,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pyupgrade, "039") => (RuleGroup::Stable, rules::pyupgrade::rules::UnnecessaryClassParentheses),
(Pyupgrade, "040") => (RuleGroup::Stable, rules::pyupgrade::rules::NonPEP695TypeAlias),
(Pyupgrade, "041") => (RuleGroup::Stable, rules::pyupgrade::rules::TimeoutErrorAlias),
(Pyupgrade, "042") => (RuleGroup::Preview, rules::pyupgrade::rules::ReplaceStrEnum),
// pydocstyle
(Pydocstyle, "100") => (RuleGroup::Stable, rules::pydocstyle::rules::UndocumentedPublicModule),
@@ -1034,7 +1039,9 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
// refurb
(Refurb, "101") => (RuleGroup::Preview, rules::refurb::rules::ReadWholeFile),
(Refurb, "103") => (RuleGroup::Preview, rules::refurb::rules::WriteWholeFile),
(Refurb, "105") => (RuleGroup::Preview, rules::refurb::rules::PrintEmptyString),
(Refurb, "110") => (RuleGroup::Preview, rules::refurb::rules::IfExpInsteadOfOrOperator),
#[allow(deprecated)]
(Refurb, "113") => (RuleGroup::Nursery, rules::refurb::rules::RepeatedAppend),
(Refurb, "118") => (RuleGroup::Preview, rules::refurb::rules::ReimplementedOperator),

View File

@@ -1,36 +1,46 @@
use std::path::{Path, PathBuf};
use globset::GlobMatcher;
use log::debug;
use path_absolutize::Absolutize;
use crate::registry::RuleSet;
use crate::settings::types::CompiledPerFileIgnoreList;
/// Create a set of the rule codes ignored for the given path.
pub(crate) fn ignores_from_path(
path: &Path,
pattern_code_pairs: &[(GlobMatcher, GlobMatcher, RuleSet)],
) -> RuleSet {
pub(crate) fn ignores_from_path(path: &Path, ignore_list: &CompiledPerFileIgnoreList) -> RuleSet {
let file_name = path.file_name().expect("Unable to parse filename");
pattern_code_pairs
ignore_list
.iter()
.filter_map(|(absolute, basename, rules)| {
if basename.is_match(file_name) {
.filter_map(|entry| {
if entry.basename_matcher.is_match(file_name) {
if entry.negated { None } else {
debug!(
"Adding per-file ignores for {:?} due to basename match on {:?}: {:?}",
path,
entry.basename_matcher.glob().regex(),
entry.rules
);
Some(&entry.rules)
}
} else if entry.absolute_matcher.is_match(path) {
if entry.negated { None } else {
debug!(
"Adding per-file ignores for {:?} due to absolute match on {:?}: {:?}",
path,
entry.absolute_matcher.glob().regex(),
entry.rules
);
Some(&entry.rules)
}
} else if entry.negated {
debug!(
"Adding per-file ignores for {:?} due to basename match on {:?}: {:?}",
"Adding per-file ignores for {:?} due to negated pattern matching neither {:?} nor {:?}: {:?}",
path,
basename.glob().regex(),
rules
entry.basename_matcher.glob().regex(),
entry.absolute_matcher.glob().regex(),
entry.rules
);
Some(rules)
} else if absolute.is_match(path) {
debug!(
"Adding per-file ignores for {:?} due to absolute match on {:?}: {:?}",
path,
absolute.glob().regex(),
rules
);
Some(rules)
Some(&entry.rules)
} else {
None
}
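The matching logic above is purely additive: a positive pattern contributes its rules when the path matches, and a negated pattern contributes its rules when the path does not match; a pattern that fails to hit never un-ignores anything. A hedged Python sketch of that semantics (not ruff's API; names are illustrative):

```python
import fnmatch

def ignores_for_path(path, entries):
    """entries: (glob, negated, rules) triples; returns the ignored rule codes."""
    ignored = set()
    for glob, negated, rules in entries:
        # Positive pattern: contribute on match. Negated pattern: contribute on miss.
        if fnmatch.fnmatch(path, glob) != negated:
            ignored |= set(rules)
    return ignored
```

For example, with entries `[("*/test_*.py", False, {"S101"}), ("*/conftest.py", True, {"F401"})]`, the path `pkg/test_a.py` accumulates both `S101` (positive match) and `F401` (the negated pattern misses), while `pkg/conftest.py` accumulates neither.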

View File

@@ -147,7 +147,7 @@ impl<'a> Directive<'a> {
/// Lex an individual rule code (e.g., `F401`).
#[inline]
fn lex_code(line: &str) -> Option<&str> {
pub(crate) fn lex_code(line: &str) -> Option<&str> {
// Extract, e.g., the `F` in `F401`.
let prefix = line.chars().take_while(char::is_ascii_uppercase).count();
// Extract, e.g., the `401` in `F401`.
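The two extraction steps in the comments can be sketched in Python (an illustrative sketch, not the actual implementation; the Rust version uses `char::is_ascii_uppercase`, so this is slightly looser):

```python
from itertools import takewhile

def lex_code(line):
    # Extract, e.g., the `F` in `F401`.
    prefix = "".join(takewhile(str.isupper, line))
    # Extract, e.g., the `401` in `F401`.
    digits = "".join(takewhile(str.isdigit, line[len(prefix):]))
    if not prefix or not digits:
        return None
    return prefix + digits
```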

View File

@@ -109,5 +109,7 @@ static REDIRECTS: Lazy<HashMap<&'static str, &'static str>> = Lazy::new(|| {
// Test redirect by prefix
#[cfg(feature = "test-rules")]
("RUF96", "RUF95"),
// See: https://github.com/astral-sh/ruff/issues/10791
("PLW0117", "PLW0177"),
])
});
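For context on the `PLW0117` → `PLW0177` redirect: the rule it points at, `nan-comparison`, exists because NaN compares unequal to everything, including itself:

```python
import math

x = float("nan")

# Equality checks against NaN are always False...
assert (x == float("nan")) is False
assert (x == x) is False

# ...so the check has to go through math.isnan() instead.
assert math.isnan(x)
```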

View File

@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
#[allow(clippy::struct_excessive_bools)]
pub struct Settings {
pub mypy_init_return: bool,

View File

@@ -10,7 +10,7 @@ pub fn default_tmp_dirs() -> Vec<String> {
.to_vec()
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub hardcoded_tmp_directory: Vec<String>,
pub check_typed_exception: bool,

View File

@@ -6,7 +6,7 @@ use ruff_macros::CacheKey;
use crate::display_settings;
#[derive(Debug, CacheKey, Default)]
#[derive(Debug, Clone, CacheKey, Default)]
pub struct Settings {
pub extend_allowed_calls: Vec<String>,
}

View File

@@ -61,6 +61,7 @@ mod tests {
#[test_case(Rule::UselessContextlibSuppress, Path::new("B022.py"))]
#[test_case(Rule::UselessExpression, Path::new("B018.ipynb"))]
#[test_case(Rule::UselessExpression, Path::new("B018.py"))]
#[test_case(Rule::LoopIteratorMutation, Path::new("B909.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -0,0 +1,295 @@
use std::collections::HashMap;
use ruff_diagnostics::Diagnostic;
use std::collections::HashMap;

use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::name::UnqualifiedName;
use ruff_python_ast::{
    visitor::{self, Visitor},
    Arguments, Expr, ExprAttribute, ExprCall, ExprSubscript, Stmt, StmtAssign, StmtAugAssign,
    StmtBreak, StmtDelete, StmtFor, StmtIf,
};
use ruff_text_size::TextRange;

use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;

/// ## What it does
/// Checks for mutations to an iterable during a loop iteration.
///
/// ## Why is this bad?
/// When iterating over an iterable, mutating the iterable can lead to unexpected
/// behavior, like skipping elements or infinite loops.
///
/// ## Example
/// ```python
/// items = [1, 2, 3]
///
/// for item in items:
///     print(item)
///
///     # Create an infinite loop by appending to the list.
///     items.append(item)
/// ```
///
/// ## References
/// - [Python documentation: Mutable Sequence Types](https://docs.python.org/3/library/stdtypes.html#typesseq-mutable)
#[violation]
pub struct LoopIteratorMutation {
    name: Option<SourceCodeSnippet>,
}

impl Violation for LoopIteratorMutation {
    #[derive_message_formats]
    fn message(&self) -> String {
        let LoopIteratorMutation { name } = self;

        if let Some(name) = name.as_ref().and_then(SourceCodeSnippet::full_display) {
            format!("Mutation to loop iterable `{name}` during iteration")
        } else {
            format!("Mutation to loop iterable during iteration")
        }
    }
}

/// B909
pub(crate) fn loop_iterator_mutation(checker: &mut Checker, stmt_for: &StmtFor) {
    let StmtFor {
        target,
        iter,
        body,
        orelse: _,
        is_async: _,
        range: _,
    } = stmt_for;

    if !matches!(iter.as_ref(), Expr::Name(_) | Expr::Attribute(_)) {
        return;
    }

    // Collect mutations to the iterable.
    let mutations = {
        let mut visitor = LoopMutationsVisitor::new(iter, target);
        visitor.visit_body(body);
        visitor.mutations
    };

    // Create a diagnostic for each mutation.
    for mutation in mutations.values().flatten() {
        let name = UnqualifiedName::from_expr(iter)
            .map(|name| name.to_string())
            .map(SourceCodeSnippet::new);
        checker
            .diagnostics
            .push(Diagnostic::new(LoopIteratorMutation { name }, *mutation));
    }
}

/// Returns `true` if the method mutates when called on an iterable.
fn is_mutating_function(function_name: &str) -> bool {
    matches!(
        function_name,
        "append"
            | "sort"
            | "reverse"
            | "remove"
            | "clear"
            | "extend"
            | "insert"
            | "pop"
            | "popitem"
            | "setdefault"
            | "update"
            | "intersection_update"
            | "difference_update"
            | "symmetric_difference_update"
            | "add"
            | "discard"
    )
}

/// A visitor to collect mutations to a variable in a loop.
#[derive(Debug, Clone)]
struct LoopMutationsVisitor<'a> {
    iter: &'a Expr,
    target: &'a Expr,
    mutations: HashMap<u8, Vec<TextRange>>,
    branches: Vec<u8>,
    branch: u8,
}

impl<'a> LoopMutationsVisitor<'a> {
    /// Initialize the visitor.
    fn new(iter: &'a Expr, target: &'a Expr) -> Self {
        Self {
            iter,
            target,
            mutations: HashMap::new(),
            branches: vec![0],
            branch: 0,
        }
    }

    /// Register a mutation.
    fn add_mutation(&mut self, range: TextRange) {
        self.mutations.entry(self.branch).or_default().push(range);
    }

    /// Handle, e.g., `del items[0]`.
    fn handle_delete(&mut self, range: TextRange, targets: &[Expr]) {
        for target in targets {
            if let Expr::Subscript(ExprSubscript {
                range: _,
                value,
                slice,
                ctx: _,
            }) = target
            {
                // Find, e.g., `del items[0]`.
                if ComparableExpr::from(self.iter) == ComparableExpr::from(value) {
                    // But allow, e.g., `for item in items: del items[item]`.
                    if ComparableExpr::from(self.target) != ComparableExpr::from(slice) {
                        self.add_mutation(range);
                    }
                }
            }
        }
    }

    /// Handle, e.g., `items[0] = 1`.
    fn handle_assign(&mut self, range: TextRange, targets: &[Expr]) {
        for target in targets {
            if let Expr::Subscript(ExprSubscript {
                range: _,
                value,
                slice,
                ctx: _,
            }) = target
            {
                // Find, e.g., `items[0] = 1`.
                if ComparableExpr::from(self.iter) == ComparableExpr::from(value) {
                    // But allow, e.g., `for item in items: items[item] = 1`.
                    if ComparableExpr::from(self.target) != ComparableExpr::from(slice) {
                        self.add_mutation(range);
                    }
                }
            }
        }
    }

    /// Handle, e.g., `items += [1]`.
    fn handle_aug_assign(&mut self, range: TextRange, target: &Expr) {
        if ComparableExpr::from(self.iter) == ComparableExpr::from(target) {
            self.add_mutation(range);
        }
    }

    /// Handle, e.g., `items.append(1)`.
    fn handle_call(&mut self, func: &Expr, arguments: &Arguments) {
        if let Expr::Attribute(ExprAttribute {
            range,
            value,
            attr,
            ctx: _,
        }) = func
        {
            if is_mutating_function(attr.as_str()) {
                // Find, e.g., `items.remove(1)`.
                if ComparableExpr::from(self.iter) == ComparableExpr::from(value) {
                    // But allow, e.g., `for item in items: items.remove(item)`.
                    if matches!(attr.as_str(), "remove" | "discard" | "pop") {
                        if arguments.len() == 1 {
                            if let [arg] = &*arguments.args {
                                if ComparableExpr::from(self.target) == ComparableExpr::from(arg) {
                                    return;
                                }
                            }
                        }
                    }
                    self.add_mutation(*range);
                }
            }
        }
    }
}

/// `Visitor` to collect all used identifiers in a statement.
impl<'a> Visitor<'a> for LoopMutationsVisitor<'a> {
    fn visit_stmt(&mut self, stmt: &'a Stmt) {
        match stmt {
            // Ex) `del items[0]`
            Stmt::Delete(StmtDelete { range, targets }) => {
                self.handle_delete(*range, targets);
                visitor::walk_stmt(self, stmt);
            }

            // Ex) `items[0] = 1`
            Stmt::Assign(StmtAssign { range, targets, .. }) => {
                self.handle_assign(*range, targets);
                visitor::walk_stmt(self, stmt);
            }

            // Ex) `items += [1]`
            Stmt::AugAssign(StmtAugAssign { range, target, .. }) => {
                self.handle_aug_assign(*range, target);
                visitor::walk_stmt(self, stmt);
            }

            // Ex) `if True: items.append(1)`
            Stmt::If(StmtIf {
                test,
                body,
                elif_else_clauses,
                ..
            }) => {
                // Handle the `if` branch.
                self.branch += 1;
                self.branches.push(self.branch);
                self.visit_expr(test);
                self.visit_body(body);
                self.branches.pop();

                // Handle the `elif` and `else` branches.
                for clause in elif_else_clauses {
                    self.branch += 1;
                    self.branches.push(self.branch);
                    if let Some(test) = &clause.test {
                        self.visit_expr(test);
                    }
                    self.visit_body(&clause.body);
                    self.branches.pop();
                }
            }

            // On break, clear the mutations for the current branch.
            Stmt::Break(StmtBreak { range: _ }) => {
                if let Some(mutations) = self.mutations.get_mut(&self.branch) {
                    mutations.clear();
                }
                visitor::walk_stmt(self, stmt);
            }

            // Avoid recursion for class and function definitions.
            Stmt::ClassDef(_) | Stmt::FunctionDef(_) => {}

            // Default case.
            _ => {
                visitor::walk_stmt(self, stmt);
            }
        }
    }

    fn visit_expr(&mut self, expr: &'a Expr) {
        // Ex) `items.append(1)`
        if let Expr::Call(ExprCall {
            func, arguments, ..
        }) = expr
        {
            self.handle_call(func, arguments);
        }
        visitor::walk_expr(self, expr);
    }
}
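To ground the visitor logic above: a short Python sketch of the runtime failure B909 flags, and the copy idiom that avoids it (variable names are illustrative). Note that `handle_call` deliberately exempts `remove`/`discard`/`pop` when the sole argument is the loop target.

```python
# Removing an element mid-iteration shifts the remaining items left, so the
# list iterator silently skips the next element -- the bug B909 guards against.
items = [1, 2, 2, 3]
for item in items:
    if item == 2:
        items.remove(2)  # B909: mutation of the loop iterable
assert items == [1, 2, 3]  # the second 2 was never visited, so it survived

# Iterating over a copy is the conventional fix and is not flagged,
# because the loop's iterable is no longer the mutated object.
items = [1, 2, 2, 3]
for item in list(items):
    if item == 2:
        items.remove(item)
assert items == [1, 3]
```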


@@ -12,6 +12,7 @@ pub(crate) use function_call_in_argument_default::*;
pub(crate) use function_uses_loop_variable::*;
pub(crate) use getattr_with_constant::*;
pub(crate) use jump_statement_in_finally::*;
+pub(crate) use loop_iterator_mutation::*;
pub(crate) use loop_variable_overrides_iterator::*;
pub(crate) use mutable_argument_default::*;
pub(crate) use no_explicit_stacklevel::*;
@@ -47,6 +48,7 @@ mod function_call_in_argument_default;
mod function_uses_loop_variable;
mod getattr_with_constant;
mod jump_statement_in_finally;
+mod loop_iterator_mutation;
mod loop_variable_overrides_iterator;
mod mutable_argument_default;
mod no_explicit_stacklevel;


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
-#[derive(Debug, Default, CacheKey)]
+#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub extend_immutable_calls: Vec<String>,
}


@@ -0,0 +1,341 @@
---
source: crates/ruff_linter/src/rules/flake8_bugbear/mod.rs
---
B909.py:12:5: B909 Mutation to loop iterable `some_list` during iteration
|
10 | for elem in some_list:
11 | # errors
12 | some_list.remove(0)
| ^^^^^^^^^^^^^^^^ B909
13 | del some_list[2]
14 | some_list.append(elem)
|
B909.py:13:5: B909 Mutation to loop iterable `some_list` during iteration
|
11 | # errors
12 | some_list.remove(0)
13 | del some_list[2]
| ^^^^^^^^^^^^^^^^ B909
14 | some_list.append(elem)
15 | some_list.sort()
|
B909.py:14:5: B909 Mutation to loop iterable `some_list` during iteration
|
12 | some_list.remove(0)
13 | del some_list[2]
14 | some_list.append(elem)
| ^^^^^^^^^^^^^^^^ B909
15 | some_list.sort()
16 | some_list.reverse()
|
B909.py:15:5: B909 Mutation to loop iterable `some_list` during iteration
|
13 | del some_list[2]
14 | some_list.append(elem)
15 | some_list.sort()
| ^^^^^^^^^^^^^^ B909
16 | some_list.reverse()
17 | some_list.clear()
|
B909.py:16:5: B909 Mutation to loop iterable `some_list` during iteration
|
14 | some_list.append(elem)
15 | some_list.sort()
16 | some_list.reverse()
| ^^^^^^^^^^^^^^^^^ B909
17 | some_list.clear()
18 | some_list.extend([1, 2])
|
B909.py:17:5: B909 Mutation to loop iterable `some_list` during iteration
|
15 | some_list.sort()
16 | some_list.reverse()
17 | some_list.clear()
| ^^^^^^^^^^^^^^^ B909
18 | some_list.extend([1, 2])
19 | some_list.insert(1, 1)
|
B909.py:18:5: B909 Mutation to loop iterable `some_list` during iteration
|
16 | some_list.reverse()
17 | some_list.clear()
18 | some_list.extend([1, 2])
| ^^^^^^^^^^^^^^^^ B909
19 | some_list.insert(1, 1)
20 | some_list.pop(1)
|
B909.py:19:5: B909 Mutation to loop iterable `some_list` during iteration
|
17 | some_list.clear()
18 | some_list.extend([1, 2])
19 | some_list.insert(1, 1)
| ^^^^^^^^^^^^^^^^ B909
20 | some_list.pop(1)
21 | some_list.pop()
|
B909.py:20:5: B909 Mutation to loop iterable `some_list` during iteration
|
18 | some_list.extend([1, 2])
19 | some_list.insert(1, 1)
20 | some_list.pop(1)
| ^^^^^^^^^^^^^ B909
21 | some_list.pop()
|
B909.py:21:5: B909 Mutation to loop iterable `some_list` during iteration
|
19 | some_list.insert(1, 1)
20 | some_list.pop(1)
21 | some_list.pop()
| ^^^^^^^^^^^^^ B909
22 |
23 | # conditional break should error
|
B909.py:25:9: B909 Mutation to loop iterable `some_list` during iteration
|
23 | # conditional break should error
24 | if elem == 2:
25 | some_list.remove(0)
| ^^^^^^^^^^^^^^^^ B909
26 | if elem == 3:
27 | break
|
B909.py:47:5: B909 Mutation to loop iterable `mydicts` during iteration
|
45 | for elem in mydicts:
46 | # errors
47 | mydicts.popitem()
| ^^^^^^^^^^^^^^^ B909
48 | mydicts.setdefault("foo", 1)
49 | mydicts.update({"foo": "bar"})
|
B909.py:48:5: B909 Mutation to loop iterable `mydicts` during iteration
|
46 | # errors
47 | mydicts.popitem()
48 | mydicts.setdefault("foo", 1)
| ^^^^^^^^^^^^^^^^^^ B909
49 | mydicts.update({"foo": "bar"})
|
B909.py:49:5: B909 Mutation to loop iterable `mydicts` during iteration
|
47 | mydicts.popitem()
48 | mydicts.setdefault("foo", 1)
49 | mydicts.update({"foo": "bar"})
| ^^^^^^^^^^^^^^ B909
50 |
51 | # no errors
|
B909.py:62:5: B909 Mutation to loop iterable `myset` during iteration
|
60 | for _ in myset:
61 | # errors
62 | myset.update({4, 5})
| ^^^^^^^^^^^^ B909
63 | myset.intersection_update({4, 5})
64 | myset.difference_update({4, 5})
|
B909.py:63:5: B909 Mutation to loop iterable `myset` during iteration
|
61 | # errors
62 | myset.update({4, 5})
63 | myset.intersection_update({4, 5})
| ^^^^^^^^^^^^^^^^^^^^^^^^^ B909
64 | myset.difference_update({4, 5})
65 | myset.symmetric_difference_update({4, 5})
|
B909.py:64:5: B909 Mutation to loop iterable `myset` during iteration
|
62 | myset.update({4, 5})
63 | myset.intersection_update({4, 5})
64 | myset.difference_update({4, 5})
| ^^^^^^^^^^^^^^^^^^^^^^^ B909
65 | myset.symmetric_difference_update({4, 5})
66 | myset.add(4)
|
B909.py:65:5: B909 Mutation to loop iterable `myset` during iteration
|
63 | myset.intersection_update({4, 5})
64 | myset.difference_update({4, 5})
65 | myset.symmetric_difference_update({4, 5})
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ B909
66 | myset.add(4)
67 | myset.discard(3)
|
B909.py:66:5: B909 Mutation to loop iterable `myset` during iteration
|
64 | myset.difference_update({4, 5})
65 | myset.symmetric_difference_update({4, 5})
66 | myset.add(4)
| ^^^^^^^^^ B909
67 | myset.discard(3)
|
B909.py:67:5: B909 Mutation to loop iterable `myset` during iteration
|
65 | myset.symmetric_difference_update({4, 5})
66 | myset.add(4)
67 | myset.discard(3)
| ^^^^^^^^^^^^^ B909
68 |
69 | # no errors
|
B909.py:84:5: B909 Mutation to loop iterable `a.some_list` during iteration
|
82 | # ensure member accesses are handled as errors
83 | for elem in a.some_list:
84 | a.some_list.remove(0)
| ^^^^^^^^^^^^^^^^^^ B909
85 | del a.some_list[2]
|
B909.py:85:5: B909 Mutation to loop iterable `a.some_list` during iteration
|
83 | for elem in a.some_list:
84 | a.some_list.remove(0)
85 | del a.some_list[2]
| ^^^^^^^^^^^^^^^^^^ B909
|
B909.py:93:5: B909 Mutation to loop iterable `foo` during iteration
|
91 | bar = [4, 5, 6]
92 | for _ in foo:
93 | foo *= 2
| ^^^^^^^^ B909
94 | foo += bar
95 | foo[1] = 9
|
B909.py:94:5: B909 Mutation to loop iterable `foo` during iteration
|
92 | for _ in foo:
93 | foo *= 2
94 | foo += bar
| ^^^^^^^^^^ B909
95 | foo[1] = 9
96 | foo[1:2] = bar
|
B909.py:95:5: B909 Mutation to loop iterable `foo` during iteration
|
93 | foo *= 2
94 | foo += bar
95 | foo[1] = 9
| ^^^^^^^^^^ B909
96 | foo[1:2] = bar
97 | foo[1:2:3] = bar
|
B909.py:96:5: B909 Mutation to loop iterable `foo` during iteration
|
94 | foo += bar
95 | foo[1] = 9
96 | foo[1:2] = bar
| ^^^^^^^^^^^^^^ B909
97 | foo[1:2:3] = bar
|
B909.py:97:5: B909 Mutation to loop iterable `foo` during iteration
|
95 | foo[1] = 9
96 | foo[1:2] = bar
97 | foo[1:2:3] = bar
| ^^^^^^^^^^^^^^^^ B909
98 |
99 | foo = {1, 2, 3}
|
B909.py:102:5: B909 Mutation to loop iterable `foo` during iteration
|
100 | bar = {4, 5, 6}
101 | for _ in foo: # should error
102 | foo |= bar
| ^^^^^^^^^^ B909
103 | foo &= bar
104 | foo -= bar
|
B909.py:103:5: B909 Mutation to loop iterable `foo` during iteration
|
101 | for _ in foo: # should error
102 | foo |= bar
103 | foo &= bar
| ^^^^^^^^^^ B909
104 | foo -= bar
105 | foo ^= bar
|
B909.py:104:5: B909 Mutation to loop iterable `foo` during iteration
|
102 | foo |= bar
103 | foo &= bar
104 | foo -= bar
| ^^^^^^^^^^ B909
105 | foo ^= bar
|
B909.py:105:5: B909 Mutation to loop iterable `foo` during iteration
|
103 | foo &= bar
104 | foo -= bar
105 | foo ^= bar
| ^^^^^^^^^^ B909
|
B909.py:125:5: B909 Mutation to loop iterable `foo` during iteration
|
123 | # should error (?)
124 | for _ in foo:
125 | foo.remove(1)
| ^^^^^^^^^^ B909
126 | if bar:
127 | bar.remove(1)
|
B909.py:136:9: B909 Mutation to loop iterable `foo` during iteration
|
134 | pass
135 | else:
136 | foo.remove(1)
| ^^^^^^^^^^ B909
137 |
138 | # should error
|
B909.py:140:8: B909 Mutation to loop iterable `some_list` during iteration
|
138 | # should error
139 | for elem in some_list:
140 | if some_list.pop() == 2:
| ^^^^^^^^^^^^^ B909
141 | pass
|
B909.py:150:8: B909 Mutation to loop iterable `some_list` during iteration
|
148 | # should error
149 | for elem in some_list:
150 | if some_list.pop() == 2:
| ^^^^^^^^^^^^^ B909
151 | pass
152 | else:
|


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
-#[derive(Debug, Default, CacheKey)]
+#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub builtins_ignorelist: Vec<String>,
}


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
-#[derive(Debug, Default, CacheKey)]
+#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub allow_dict_calls_with_keyword_arguments: bool,
}


@@ -7,7 +7,7 @@ use std::fmt::{Display, Formatter};
use crate::display_settings;
use ruff_macros::CacheKey;
-#[derive(Debug, CacheKey)]
+#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub notice_rgx: Regex,
pub author: Option<String>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
-#[derive(Debug, Default, CacheKey)]
+#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub max_string_length: usize,
}


@@ -2,7 +2,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
-#[derive(Debug, CacheKey)]
+#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub functions_names: Vec<String>,
}


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
-#[derive(Debug, CacheKey)]
+#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub allow_multiline: bool,
}


@@ -57,7 +57,7 @@ impl FromIterator<String> for BannedAliases {
}
}
-#[derive(Debug, CacheKey)]
+#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub aliases: FxHashMap<String, String>,
pub banned_aliases: FxHashMap<String, BannedAliases>,


@@ -1,9 +1,10 @@
-use ruff_python_ast::{self as ast, Arguments, Decorator, Expr, Parameters, Stmt};
+use ruff_python_ast::{self as ast, Decorator, Expr, Parameters, Stmt};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::map_subscript;
use ruff_python_ast::identifier::Identifier;
+use ruff_python_semantic::analyze;
use ruff_python_semantic::analyze::visibility::{is_abstract, is_final, is_overload};
use ruff_python_semantic::{ScopeKind, SemanticModel};
@@ -119,7 +120,9 @@ pub(crate) fn non_self_return_type(
returns: Option<&Expr>,
parameters: &Parameters,
) {
-let ScopeKind::Class(class_def) = checker.semantic().current_scope().kind else {
+let semantic = checker.semantic();
+let ScopeKind::Class(class_def) = semantic.current_scope().kind else {
return;
};
@@ -132,21 +135,19 @@ pub(crate) fn non_self_return_type(
};
// PEP 673 forbids the use of `typing(_extensions).Self` in metaclasses.
-if is_metaclass(class_def, checker.semantic()) {
+if is_metaclass(class_def, semantic) {
return;
}
// Skip any abstract or overloaded methods.
-if is_abstract(decorator_list, checker.semantic())
-|| is_overload(decorator_list, checker.semantic())
-{
+if is_abstract(decorator_list, semantic) || is_overload(decorator_list, semantic) {
return;
}
if is_async {
if name == "__aenter__"
&& is_name(returns, &class_def.name)
-&& !is_final(&class_def.decorator_list, checker.semantic())
+&& !is_final(&class_def.decorator_list, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -161,7 +162,7 @@ pub(crate) fn non_self_return_type(
// In-place methods that are expected to return `Self`.
if is_inplace_bin_op(name) {
-if !is_self(returns, checker.semantic()) {
+if !is_self(returns, semantic) {
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
class_name: class_def.name.to_string(),
@@ -174,8 +175,7 @@ pub(crate) fn non_self_return_type(
}
if is_name(returns, &class_def.name) {
-if matches!(name, "__enter__" | "__new__")
-&& !is_final(&class_def.decorator_list, checker.semantic())
+if matches!(name, "__enter__" | "__new__") && !is_final(&class_def.decorator_list, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -190,8 +190,8 @@ pub(crate) fn non_self_return_type(
match name {
"__iter__" => {
-if is_iterable(returns, checker.semantic())
-&& is_iterator(class_def.arguments.as_deref(), checker.semantic())
+if is_iterable_or_iterator(returns, semantic)
+&& subclasses_iterator(class_def, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -203,8 +203,8 @@ pub(crate) fn non_self_return_type(
}
}
"__aiter__" => {
-if is_async_iterable(returns, checker.semantic())
-&& is_async_iterator(class_def.arguments.as_deref(), checker.semantic())
+if is_async_iterable_or_iterator(returns, semantic)
+&& subclasses_async_iterator(class_def, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -221,26 +221,14 @@ pub(crate) fn non_self_return_type(
/// Returns `true` if the given class is a metaclass.
fn is_metaclass(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
-class_def.arguments.as_ref().is_some_and(|arguments| {
-arguments
-.args
-.iter()
-.any(|expr| is_metaclass_base(expr, semantic))
+analyze::class::any_qualified_name(class_def, semantic, &|qualified_name| {
+matches!(
+qualified_name.segments(),
+["" | "builtins", "type"] | ["abc", "ABCMeta"] | ["enum", "EnumMeta" | "EnumType"]
+)
})
}
-/// Returns `true` if the given expression resolves to a metaclass.
-fn is_metaclass_base(base: &Expr, semantic: &SemanticModel) -> bool {
-semantic
-.resolve_qualified_name(base)
-.is_some_and(|qualified_name| {
-matches!(
-qualified_name.segments(),
-["" | "builtins", "type"] | ["abc", "ABCMeta"] | ["enum", "EnumMeta" | "EnumType"]
-)
-})
-}
/// Returns `true` if the method is an in-place binary operator.
fn is_inplace_bin_op(name: &str) -> bool {
matches!(
@@ -275,24 +263,17 @@ fn is_self(expr: &Expr, semantic: &SemanticModel) -> bool {
}
/// Return `true` if the given class extends `collections.abc.Iterator`.
-fn is_iterator(arguments: Option<&Arguments>, semantic: &SemanticModel) -> bool {
-let Some(Arguments { args: bases, .. }) = arguments else {
-return false;
-};
-bases.iter().any(|expr| {
-semantic
-.resolve_qualified_name(map_subscript(expr))
-.is_some_and(|qualified_name| {
-matches!(
-qualified_name.segments(),
-["typing", "Iterator"] | ["collections", "abc", "Iterator"]
-)
-})
+fn subclasses_iterator(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
+analyze::class::any_qualified_name(class_def, semantic, &|qualified_name| {
+matches!(
+qualified_name.segments(),
+["typing", "Iterator"] | ["collections", "abc", "Iterator"]
+)
})
}
-/// Return `true` if the given expression resolves to `collections.abc.Iterable`.
-fn is_iterable(expr: &Expr, semantic: &SemanticModel) -> bool {
+/// Return `true` if the given expression resolves to `collections.abc.Iterable` or `collections.abc.Iterator`.
+fn is_iterable_or_iterator(expr: &Expr, semantic: &SemanticModel) -> bool {
semantic
.resolve_qualified_name(map_subscript(expr))
.is_some_and(|qualified_name| {
@@ -305,24 +286,17 @@ fn is_iterable(expr: &Expr, semantic: &SemanticModel) -> bool {
}
/// Return `true` if the given class extends `collections.abc.AsyncIterator`.
-fn is_async_iterator(arguments: Option<&Arguments>, semantic: &SemanticModel) -> bool {
-let Some(Arguments { args: bases, .. }) = arguments else {
-return false;
-};
-bases.iter().any(|expr| {
-semantic
-.resolve_qualified_name(map_subscript(expr))
-.is_some_and(|qualified_name| {
-matches!(
-qualified_name.segments(),
-["typing", "AsyncIterator"] | ["collections", "abc", "AsyncIterator"]
-)
-})
+fn subclasses_async_iterator(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
+analyze::class::any_qualified_name(class_def, semantic, &|qualified_name| {
+matches!(
+qualified_name.segments(),
+["typing", "AsyncIterator"] | ["collections", "abc", "AsyncIterator"]
+)
})
}
-/// Return `true` if the given expression resolves to `collections.abc.AsyncIterable`.
-fn is_async_iterable(expr: &Expr, semantic: &SemanticModel) -> bool {
+/// Return `true` if the given expression resolves to `collections.abc.AsyncIterable` or `collections.abc.AsyncIterator`.
+fn is_async_iterable_or_iterator(expr: &Expr, semantic: &SemanticModel) -> bool {
semantic
.resolve_qualified_name(map_subscript(expr))
.is_some_and(|qualified_name| {


@@ -89,4 +89,20 @@ PYI034.py:195:9: PYI034 `__aiter__` methods in classes like `BadAsyncIterator` u
|
= help: Consider using `typing_extensions.Self` as return type
+PYI034.py:199:9: PYI034 `__iter__` methods in classes like `SubclassOfBadIterator3` usually return `self` at runtime
+|
+198 | class SubclassOfBadIterator3(BadIterator3):
+199 | def __iter__(self) -> Iterator[int]: # Y034
+| ^^^^^^^^ PYI034
+200 | ...
+|
+= help: Consider using `typing_extensions.Self` as return type
+PYI034.py:203:9: PYI034 `__aiter__` methods in classes like `SubclassOfBadAsyncIterator` usually return `self` at runtime
+|
+202 | class SubclassOfBadAsyncIterator(BadAsyncIterator):
+203 | def __aiter__(self) -> collections.abc.AsyncIterator[str]: # Y034
+| ^^^^^^^^^ PYI034
+204 | ...
+|
+= help: Consider using `typing_extensions.Self` as return type
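The renamed helpers (`subclasses_iterator`, `is_iterable_or_iterator`) extend PYI034 to subclasses of an offending iterator, as the new snapshot entries above show. A runnable sketch of the flagged pattern (class names follow the fixture's spirit; bodies are illustrative):

```python
from collections.abc import Iterator

class BadIterator(Iterator[int]):
    def __next__(self) -> int:
        return 0

    def __iter__(self) -> Iterator[int]:  # PYI034: usually returns `self` at runtime
        return self

class SubclassOfBadIterator(BadIterator):
    def __iter__(self) -> Iterator[int]:  # now also flagged, via the subclass check
        return self

# At runtime, `__iter__` really does return the instance itself, which is why
# the rule suggests annotating it as `typing_extensions.Self`.
it = SubclassOfBadIterator()
assert iter(it) is it
```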


@@ -567,7 +567,7 @@ fn check_values(checker: &mut Checker, names: &Expr, values: &Expr) {
// Replace `]` with `)` or `,)`.
let values_end = Edit::replacement(
if needs_trailing_comma {
"),".into()
",)".into()
} else {
")".into()
},
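The one-character fix above matters because the trailing comma must land inside the closing parenthesis when rewriting a one-element list like `[3,]` to a tuple. A quick check of the two spellings:

```python
import ast

# Suffix ",)" yields a one-element tuple; the buggy "),"  would have emitted
# `(3),` -- a parenthesized int followed by an argument separator, not a tuple.
assert ast.literal_eval("(3,)") == (3,)
assert ast.literal_eval("(3)") == 3
```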


@@ -24,7 +24,7 @@ pub fn default_broad_exceptions() -> Vec<IdentifierPattern> {
.to_vec()
}
-#[derive(Debug, CacheKey)]
+#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub fixture_parentheses: bool,
pub parametrize_names_type: types::ParametrizeNameType,


@@ -168,7 +168,7 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^^^^^^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
-83 | def test_multiple_decorators(a, b, c):
+83 | @pytest.mark.parametrize(
|
= help: Use `list` of `list` for parameter values
@@ -179,8 +179,8 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), [(3, 4), (5, 6)])
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",
PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `list` of `list`
|
@@ -188,7 +188,7 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
-83 | def test_multiple_decorators(a, b, c):
+83 | @pytest.mark.parametrize(
|
= help: Use `list` of `list` for parameter values
@@ -199,8 +199,8 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ([3, 4], (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",
PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `list` of `list`
|
@@ -208,7 +208,7 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
-83 | def test_multiple_decorators(a, b, c):
+83 | @pytest.mark.parametrize(
|
= help: Use `list` of `list` for parameter values
@@ -219,5 +219,5 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ((3, 4), [5, 6]))
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",


@@ -210,7 +210,7 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^^^^^^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
-83 | def test_multiple_decorators(a, b, c):
+83 | @pytest.mark.parametrize(
|
= help: Use `list` of `tuple` for parameter values
@@ -221,5 +221,5 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), [(3, 4), (5, 6)])
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",


@@ -237,7 +237,7 @@ PT007.py:80:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
80 |+@pytest.mark.parametrize("a", (1, 2))
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
+83 83 | @pytest.mark.parametrize(
PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
@@ -245,7 +245,7 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
-83 | def test_multiple_decorators(a, b, c):
+83 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `list` for parameter values
@@ -256,8 +256,8 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ([3, 4], (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",
PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
@@ -265,7 +265,7 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
-83 | def test_multiple_decorators(a, b, c):
+83 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `list` for parameter values
@@ -276,8 +276,8 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ((3, 4), [5, 6]))
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",
PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
@@ -285,8 +285,8 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 | @pytest.mark.parametrize("d", [3,])
| ^^^^ PT007
-83 | def test_multiple_decorators(a, b, c):
-84 | pass
+83 | @pytest.mark.parametrize(
+84 | "d",
|
= help: Use `tuple` of `list` for parameter values
@@ -296,5 +296,48 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 |-@pytest.mark.parametrize("d", [3,])
82 |+@pytest.mark.parametrize("d", (3,))
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",
+85 85 | [("3", "4")],
PT007.py:85:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
83 | @pytest.mark.parametrize(
84 | "d",
85 | [("3", "4")],
| ^^^^^^^^^^^^ PT007
86 | )
87 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `list` for parameter values
Unsafe fix
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | @pytest.mark.parametrize(
84 84 | "d",
85 |- [("3", "4")],
85 |+ (("3", "4"),),
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
PT007.py:89:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
87 | @pytest.mark.parametrize(
88 | "e",
89 | [("3", "4"),],
| ^^^^^^^^^^^^^ PT007
90 | )
91 | def test_multiple_decorators(a, b, c, d, e):
|
= help: Use `tuple` of `list` for parameter values
Unsafe fix
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
89 |- [("3", "4"),],
89 |+ (("3", "4"),),
90 90 | )
91 91 | def test_multiple_decorators(a, b, c, d, e):
92 92 | pass


@@ -279,7 +279,7 @@ PT007.py:80:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
80 |+@pytest.mark.parametrize("a", (1, 2))
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
-83 83 | def test_multiple_decorators(a, b, c):
+83 83 | @pytest.mark.parametrize(
PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `tuple`
|
@@ -287,8 +287,8 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 | @pytest.mark.parametrize("d", [3,])
| ^^^^ PT007
-83 | def test_multiple_decorators(a, b, c):
-84 | pass
+83 | @pytest.mark.parametrize(
+84 | "d",
|
= help: Use `tuple` of `tuple` for parameter values
@@ -298,5 +298,48 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 |-@pytest.mark.parametrize("d", [3,])
82 |+@pytest.mark.parametrize("d", (3,))
-83 83 | def test_multiple_decorators(a, b, c):
-84 84 | pass
+83 83 | @pytest.mark.parametrize(
+84 84 | "d",
+85 85 | [("3", "4")],
PT007.py:85:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `tuple`
|
83 | @pytest.mark.parametrize(
84 | "d",
85 | [("3", "4")],
| ^^^^^^^^^^^^ PT007
86 | )
87 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `tuple` for parameter values
Unsafe fix
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | @pytest.mark.parametrize(
84 84 | "d",
85 |- [("3", "4")],
85 |+ (("3", "4"),),
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
PT007.py:89:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `tuple`
|
87 | @pytest.mark.parametrize(
88 | "e",
89 | [("3", "4"),],
| ^^^^^^^^^^^^^ PT007
90 | )
91 | def test_multiple_decorators(a, b, c, d, e):
|
= help: Use `tuple` of `tuple` for parameter values
Unsafe fix
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
89 |- [("3", "4"),],
89 |+ (("3", "4"),),
90 90 | )
91 91 | def test_multiple_decorators(a, b, c, d, e):
92 92 | pass


@@ -449,13 +449,8 @@ pub(crate) fn check_string_quotes(checker: &mut Checker, string_like: StringLike
return;
}
// If the string is part of a f-string, ignore it.
-if checker
-.indexer()
-.fstring_ranges()
-.outermost(string_like.start())
-.is_some_and(|outer| outer.start() < string_like.start() && string_like.end() < outer.end())
-{
+// TODO(dhruvmanila): Support checking for escaped quotes in f-strings.
+if checker.semantic().in_f_string_replacement_field() {
return;
}
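The old implementation asked the indexer whether the string fell strictly inside an f-string; the new code queries the semantic model for a replacement field directly. For context, a sketch of the escaped-quote pattern this family of checks (Q003, avoidable-escaped-quote) rewrites; the variable names are my own:

```python
# Q003 prefers swapping the outer quote style over escaping.
# Both literals below denote the same string:
escaped = "He said \"hi\""   # flagged: the escapes are avoidable
swapped = 'He said "hi"'     # preferred spelling
assert escaped == swapped
```

Inside an f-string replacement field the quote character cannot always be swapped freely (nested quotes were restricted before PEP 701), which is why the check bails out there for now, per the TODO.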


@@ -31,7 +31,7 @@ impl From<ruff_python_ast::str::Quote> for Quote {
}
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub inline_quotes: Quote,
pub multiline_quotes: Quote,


@@ -17,7 +17,7 @@ pub const IGNORE_NAMES: [&str; 7] = [
"_value_",
];
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub ignore_names: Vec<String>,
}


@@ -41,8 +41,8 @@ use crate::fix::snippet::SourceCodeSnippet;
/// [preview]: https://docs.astral.sh/ruff/preview/
#[violation]
pub struct NeedlessBool {
condition: SourceCodeSnippet,
replacement: Option<SourceCodeSnippet>,
condition: Option<SourceCodeSnippet>,
negate: bool,
}
impl Violation for NeedlessBool {
@@ -50,21 +50,22 @@ impl Violation for NeedlessBool {
#[derive_message_formats]
fn message(&self) -> String {
let NeedlessBool { condition, .. } = self;
if let Some(condition) = condition.full_display() {
let NeedlessBool { condition, negate } = self;
if let Some(condition) = condition.as_ref().and_then(SourceCodeSnippet::full_display) {
format!("Return the condition `{condition}` directly")
} else if *negate {
format!("Return the negated condition directly")
} else {
format!("Return the condition directly")
}
}
fn fix_title(&self) -> Option<String> {
let NeedlessBool { replacement, .. } = self;
if let Some(replacement) = replacement
.as_ref()
.and_then(SourceCodeSnippet::full_display)
{
Some(format!("Replace with `{replacement}`"))
let NeedlessBool { condition, .. } = self;
if let Some(condition) = condition.as_ref().and_then(SourceCodeSnippet::full_display) {
Some(format!("Replace with `return {condition}`"))
} else {
Some(format!("Inline condition"))
}
@@ -191,29 +192,21 @@ pub(crate) fn needless_bool(checker: &mut Checker, stmt: &Stmt) {
return;
}
let condition = checker.locator().slice(if_test);
let replacement = if checker.indexer().has_comments(&range, checker.locator()) {
// Generate the replacement condition.
let condition = if checker.indexer().has_comments(&range, checker.locator()) {
None
} else {
// If the return values are inverted, wrap the condition in a `not`.
if inverted {
let node = ast::StmtReturn {
value: Some(Box::new(Expr::UnaryOp(ast::ExprUnaryOp {
op: ast::UnaryOp::Not,
operand: Box::new(if_test.clone()),
range: TextRange::default(),
}))),
Some(Expr::UnaryOp(ast::ExprUnaryOp {
op: ast::UnaryOp::Not,
operand: Box::new(if_test.clone()),
range: TextRange::default(),
};
Some(checker.generator().stmt(&node.into()))
}))
} else if if_test.is_compare_expr() {
// If the condition is a comparison, we can replace it with the condition, since we
// know it's a boolean.
let node = ast::StmtReturn {
value: Some(Box::new(if_test.clone())),
range: TextRange::default(),
};
Some(checker.generator().stmt(&node.into()))
Some(if_test.clone())
} else if checker.semantic().is_builtin("bool") {
// Otherwise, we need to wrap the condition in a call to `bool`.
let func_node = ast::ExprName {
@@ -221,7 +214,7 @@ pub(crate) fn needless_bool(checker: &mut Checker, stmt: &Stmt) {
ctx: ExprContext::Load,
range: TextRange::default(),
};
let value_node = ast::ExprCall {
let call_node = ast::ExprCall {
func: Box::new(func_node.into()),
arguments: Arguments {
args: Box::from([if_test.clone()]),
@@ -230,20 +223,32 @@ pub(crate) fn needless_bool(checker: &mut Checker, stmt: &Stmt) {
},
range: TextRange::default(),
};
let return_node = ast::StmtReturn {
value: Some(Box::new(value_node.into())),
range: TextRange::default(),
};
Some(checker.generator().stmt(&return_node.into()))
Some(Expr::Call(call_node))
} else {
None
}
};
// Generate the replacement `return` statement.
let replacement = condition.as_ref().map(|expr| {
Stmt::Return(ast::StmtReturn {
value: Some(Box::new(expr.clone())),
range: TextRange::default(),
})
});
// Generate source code.
let replacement = replacement
.as_ref()
.map(|stmt| checker.generator().stmt(stmt));
let condition = condition
.as_ref()
.map(|expr| checker.generator().expr(expr));
let mut diagnostic = Diagnostic::new(
NeedlessBool {
condition: SourceCodeSnippet::from_str(condition),
replacement: replacement.clone().map(SourceCodeSnippet::new),
condition: condition.map(SourceCodeSnippet::new),
negate: inverted,
},
range,
);
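From the Python side, the change means SIM103's reported condition now reflects the actual fix: `bool(...)` when the condition's truthiness isn't already boolean, and `not ...` when the branches are inverted. A sketch of the equivalences the fix relies on (function names are illustrative):

```python
def before_plain(a):
    if a:
        return True
    return False

def after_plain(a):
    # `a` may not be a bool, so the fix wraps it in `bool()`.
    return bool(a)

def before_inverted(a):
    if a:
        return False
    return True

def after_inverted(a):
    # Inverted branches become a negated condition.
    return not a

for value in (0, 1, "", "x", [], [1]):
    assert before_plain(value) == after_plain(value)
    assert before_inverted(value) == after_inverted(value)
```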


@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_simplify/mod.rs
---
SIM103.py:3:5: SIM103 [*] Return the condition `a` directly
SIM103.py:3:5: SIM103 [*] Return the condition `bool(a)` directly
|
1 | def f():
2 | # SIM103
@@ -52,7 +52,7 @@ SIM103.py:11:5: SIM103 [*] Return the condition `a == b` directly
16 13 |
17 14 | def f():
SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
SIM103.py:21:5: SIM103 [*] Return the condition `bool(b)` directly
|
19 | if a:
20 | return 1
@@ -78,7 +78,7 @@ SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
26 23 |
27 24 | def f():
SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
SIM103.py:32:9: SIM103 [*] Return the condition `bool(b)` directly
|
30 | return 1
31 | else:
@@ -104,10 +104,10 @@ SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
37 34 |
38 35 | def f():
SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
SIM103.py:57:5: SIM103 [*] Return the condition `not a` directly
|
55 | def f():
56 | # SIM103 (but not fixable)
56 | # SIM103
57 | if a:
| _____^
58 | | return False
@@ -120,7 +120,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
Unsafe fix
54 54 |
55 55 | def f():
56 56 | # SIM103 (but not fixable)
56 56 | # SIM103
57 |- if a:
58 |- return False
59 |- else:
@@ -130,7 +130,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
62 59 |
63 60 | def f():
SIM103.py:83:5: SIM103 Return the condition `a` directly
SIM103.py:83:5: SIM103 Return the condition directly
|
81 | def bool():
82 | return False
@@ -142,3 +142,29 @@ SIM103.py:83:5: SIM103 Return the condition `a` directly
| |____________________^ SIM103
|
= help: Inline condition
SIM103.py:91:5: SIM103 [*] Return the condition `not (keys is not None and notice.key not in keys)` directly
|
89 | def f():
90 | # SIM103
91 | if keys is not None and notice.key not in keys:
| _____^
92 | | return False
93 | | else:
94 | | return True
| |___________________^ SIM103
|
= help: Replace with `return not (keys is not None and notice.key not in keys)`
Unsafe fix
88 88 |
89 89 | def f():
90 90 | # SIM103
91 |- if keys is not None and notice.key not in keys:
92 |- return False
93 |- else:
94 |- return True
91 |+ return not (keys is not None and notice.key not in keys)
95 92 |
96 93 |
97 94 | ###


@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_simplify/mod.rs
---
SIM103.py:3:5: SIM103 [*] Return the condition `a` directly
SIM103.py:3:5: SIM103 [*] Return the condition `bool(a)` directly
|
1 | def f():
2 | # SIM103
@@ -52,7 +52,7 @@ SIM103.py:11:5: SIM103 [*] Return the condition `a == b` directly
16 13 |
17 14 | def f():
SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
SIM103.py:21:5: SIM103 [*] Return the condition `bool(b)` directly
|
19 | if a:
20 | return 1
@@ -78,7 +78,7 @@ SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
26 23 |
27 24 | def f():
SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
SIM103.py:32:9: SIM103 [*] Return the condition `bool(b)` directly
|
30 | return 1
31 | else:
@@ -104,10 +104,10 @@ SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
37 34 |
38 35 | def f():
SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
SIM103.py:57:5: SIM103 [*] Return the condition `not a` directly
|
55 | def f():
56 | # SIM103 (but not fixable)
56 | # SIM103
57 | if a:
| _____^
58 | | return False
@@ -120,7 +120,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
Unsafe fix
54 54 |
55 55 | def f():
56 56 | # SIM103 (but not fixable)
56 56 | # SIM103
57 |- if a:
58 |- return False
59 |- else:
@@ -130,7 +130,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
62 59 |
63 60 | def f():
SIM103.py:83:5: SIM103 Return the condition `a` directly
SIM103.py:83:5: SIM103 Return the condition directly
|
81 | def bool():
82 | return False
@@ -143,47 +143,73 @@ SIM103.py:83:5: SIM103 Return the condition `a` directly
|
= help: Inline condition
SIM103.py:96:5: SIM103 [*] Return the condition `a` directly
SIM103.py:91:5: SIM103 [*] Return the condition `not (keys is not None and notice.key not in keys)` directly
|
94 | def f():
95 | # SIM103
96 | if a:
89 | def f():
90 | # SIM103
91 | if keys is not None and notice.key not in keys:
| _____^
97 | | return True
98 | | return False
| |________________^ SIM103
92 | | return False
93 | | else:
94 | | return True
| |___________________^ SIM103
|
= help: Replace with `return bool(a)`
= help: Replace with `return not (keys is not None and notice.key not in keys)`
Unsafe fix
93 93 |
94 94 | def f():
95 95 | # SIM103
96 |- if a:
97 |- return True
98 |- return False
96 |+ return bool(a)
99 97 |
100 98 |
101 99 | def f():
88 88 |
89 89 | def f():
90 90 | # SIM103
91 |- if keys is not None and notice.key not in keys:
92 |- return False
93 |- else:
94 |- return True
91 |+ return not (keys is not None and notice.key not in keys)
95 92 |
96 93 |
97 94 | ###
SIM103.py:103:5: SIM103 [*] Return the condition `a` directly
SIM103.py:104:5: SIM103 [*] Return the condition `bool(a)` directly
|
101 | def f():
102 | # SIM103
103 | if a:
102 | def f():
103 | # SIM103
104 | if a:
| _____^
104 | | return False
105 | | return True
105 | | return True
106 | | return False
| |________________^ SIM103
|
= help: Replace with `return bool(a)`
Unsafe fix
101 101 |
102 102 | def f():
103 103 | # SIM103
104 |- if a:
105 |- return True
106 |- return False
104 |+ return bool(a)
107 105 |
108 106 |
109 107 | def f():
SIM103.py:111:5: SIM103 [*] Return the condition `not a` directly
|
109 | def f():
110 | # SIM103
111 | if a:
| _____^
112 | | return False
113 | | return True
| |_______________^ SIM103
|
= help: Replace with `return not a`
Unsafe fix
100 100 |
101 101 | def f():
102 102 | # SIM103
103 |- if a:
104 |- return False
105 |- return True
103 |+ return not a
108 108 |
109 109 | def f():
110 110 | # SIM103
111 |- if a:
112 |- return False
113 |- return True
111 |+ return not a


@@ -1,17 +1,16 @@
use ruff_python_ast as ast;
use ruff_python_ast::{Arguments, Expr, StmtClassDef};
use std::fmt;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::Stmt;
use ruff_python_ast::{self as ast, identifier::Identifier, Arguments, Expr, Stmt, StmtClassDef};
use ruff_python_semantic::SemanticModel;
use crate::checkers::ast::Checker;
use crate::rules::flake8_slots::rules::helpers::has_slots;
/// ## What it does
/// Checks for subclasses of `collections.namedtuple` that lack a `__slots__`
/// definition.
/// Checks for subclasses of `collections.namedtuple` or `typing.NamedTuple`
/// that lack a `__slots__` definition.
///
/// ## Why is this bad?
/// In Python, the `__slots__` attribute allows you to explicitly define the
@@ -48,12 +47,28 @@ use crate::rules::flake8_slots::rules::helpers::has_slots;
/// ## References
/// - [Python documentation: `__slots__`](https://docs.python.org/3/reference/datamodel.html#slots)
#[violation]
pub struct NoSlotsInNamedtupleSubclass;
pub struct NoSlotsInNamedtupleSubclass(NamedTupleKind);
impl Violation for NoSlotsInNamedtupleSubclass {
#[derive_message_formats]
fn message(&self) -> String {
format!("Subclasses of `collections.namedtuple()` should define `__slots__`")
let NoSlotsInNamedtupleSubclass(namedtuple_kind) = self;
format!("Subclasses of {namedtuple_kind} should define `__slots__`")
}
}
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum NamedTupleKind {
Collections,
Typing,
}
impl fmt::Display for NamedTupleKind {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str(match self {
Self::Collections => "`collections.namedtuple()`",
Self::Typing => "call-based `typing.NamedTuple()`",
})
}
}
@@ -67,22 +82,33 @@ pub(crate) fn no_slots_in_namedtuple_subclass(
return;
};
if bases.iter().any(|base| {
let Expr::Call(ast::ExprCall { func, .. }) = base else {
return false;
};
checker
.semantic()
.resolve_qualified_name(func)
.is_some_and(|qualified_name| {
matches!(qualified_name.segments(), ["collections", "namedtuple"])
})
}) {
if let Some(namedtuple_kind) = namedtuple_base(bases, checker.semantic()) {
if !has_slots(&class.body) {
checker.diagnostics.push(Diagnostic::new(
NoSlotsInNamedtupleSubclass,
NoSlotsInNamedtupleSubclass(namedtuple_kind),
stmt.identifier(),
));
}
}
}
/// If the class has a call-based namedtuple in its bases, return the kind of
/// namedtuple it is (either `collections.namedtuple()` or call-based
/// `typing.NamedTuple()`); otherwise, return `None`.
fn namedtuple_base(bases: &[Expr], semantic: &SemanticModel) -> Option<NamedTupleKind> {
for base in bases {
let Expr::Call(ast::ExprCall { func, .. }) = base else {
continue;
};
let Some(qualified_name) = semantic.resolve_qualified_name(func) else {
continue;
};
match qualified_name.segments() {
["collections", "namedtuple"] => return Some(NamedTupleKind::Collections),
["typing", "NamedTuple"] => return Some(NamedTupleKind::Typing),
_ => continue,
}
}
None
}
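The rule exists because subclassing a call-based namedtuple without `__slots__` silently reintroduces a per-instance `__dict__`, defeating the memory savings of the tuple-based layout. A quick illustration (class names are arbitrary):

```python
from collections import namedtuple

class BadPoint(namedtuple("Point", ["x", "y"])):  # SLOT002 would flag this
    pass

class GoodPoint(namedtuple("Point", ["x", "y"])):
    __slots__ = ()

bad, good = BadPoint(1, 2), GoodPoint(1, 2)
bad.extra = 3  # silently allowed: the subclass grew a per-instance __dict__
assert hasattr(bad, "__dict__")
assert not hasattr(good, "__dict__")  # __slots__ = () keeps the tuple layout
```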


@@ -8,4 +8,9 @@ SLOT002.py:5:7: SLOT002 Subclasses of `collections.namedtuple()` should define `
6 | pass
|
SLOT002.py:9:7: SLOT002 Subclasses of call-based `typing.NamedTuple()` should define `__slots__`
|
9 | class UnusualButStillBad(NamedTuple("foo", [("x", int, "y", int)])): # SLOT002
| ^^^^^^^^^^^^^^^^^^ SLOT002
10 | pass
|


@@ -39,7 +39,7 @@ impl Display for Strictness {
}
}
#[derive(Debug, CacheKey, Default)]
#[derive(Debug, Clone, CacheKey, Default)]
pub struct Settings {
pub ban_relative_imports: Strictness,
pub banned_api: FxHashMap<String, ApiBan>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub strict: bool,
pub exempt_modules: Vec<String>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub ignore_variadic_names: bool,
}


@@ -270,7 +270,7 @@ pub(crate) fn categorize_imports<'a>(
block_by_type
}
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct KnownModules {
/// A map of known modules to their section.
known: Vec<(glob::Pattern, ImportSection)>,


@@ -44,7 +44,7 @@ impl Display for RelativeImportsOrder {
}
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
#[allow(clippy::struct_excessive_bools)]
pub struct Settings {
pub required_imports: BTreeSet<String>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub max_complexity: usize,
}


@@ -11,7 +11,7 @@ use ruff_macros::CacheKey;
use crate::display_settings;
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub ignore_names: IgnoreNames,
pub classmethod_decorators: Vec<String>,
@@ -85,7 +85,7 @@ static DEFAULTS: &[&str] = &[
"maxDiff",
];
#[derive(Debug)]
#[derive(Debug, Clone)]
pub enum IgnoreNames {
Default,
UserProvided {


@@ -6,7 +6,7 @@ use std::fmt;
use crate::line_width::LineLength;
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub max_line_length: LineLength,
pub max_doc_length: Option<LineLength>,


@@ -59,26 +59,36 @@ pub(crate) fn capitalized(checker: &mut Checker, docstring: &Docstring) {
}
let body = docstring.body();
let Some(first_word) = body.split(' ').next() else {
return;
};
// Like pydocstyle, we only support ASCII for now.
for char in first_word.chars() {
if !char.is_ascii_alphabetic() && char != '\'' {
return;
}
}
let first_word = body.split_once(' ').map_or_else(
|| {
// If the docstring is a single word, trim trailing punctuation marks,
// which would otherwise make the ASCII test below fail.
body.trim_end_matches(['.', '!', '?'])
},
|(first_word, _)| first_word,
);
let mut first_word_chars = first_word.chars();
let Some(first_char) = first_word_chars.next() else {
return;
};
if !first_char.is_ascii() {
return;
}
let uppercase_first_char = first_char.to_ascii_uppercase();
if first_char == uppercase_first_char {
return;
}
// Like pydocstyle, we only support ASCII for now.
for char in first_word.chars().skip(1) {
if !char.is_ascii_alphabetic() && char != '\'' {
return;
}
}
let capitalized_word = uppercase_first_char.to_string() + first_word_chars.as_str();
let mut diagnostic = Diagnostic::new(


@@ -83,7 +83,7 @@ impl fmt::Display for Convention {
}
}
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub convention: Option<Convention>,
pub ignore_decorators: BTreeSet<String>,


@@ -19,4 +19,37 @@ D403.py:2:5: D403 [*] First word of the first line should be capitalized: `this`
4 4 | def good_function():
5 5 | """This docstring is capitalized."""
D403.py:30:5: D403 [*] First word of the first line should be capitalized: `singleword` -> `Singleword`
|
29 | def single_word():
30 | """singleword."""
| ^^^^^^^^^^^^^^^^^ D403
31 |
32 | def single_word_no_dot():
|
= help: Capitalize `singleword` to `Singleword`
Safe fix
27 27 | """th•s is not capitalized."""
28 28 |
29 29 | def single_word():
30 |- """singleword."""
30 |+ """Singleword."""
31 31 |
32 32 | def single_word_no_dot():
33 33 | """singleword"""
D403.py:33:5: D403 [*] First word of the first line should be capitalized: `singleword` -> `Singleword`
|
32 | def single_word_no_dot():
33 | """singleword"""
| ^^^^^^^^^^^^^^^^ D403
|
= help: Capitalize `singleword` to `Singleword`
Safe fix
30 30 | """singleword."""
31 31 |
32 32 | def single_word_no_dot():
33 |- """singleword"""
33 |+ """Singleword"""


@@ -162,6 +162,7 @@ mod tests {
#[test_case(Rule::UndefinedExport, Path::new("F822_0.pyi"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_1.py"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_2.py"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_3.py"))]
#[test_case(Rule::UndefinedLocal, Path::new("F823.py"))]
#[test_case(Rule::UnusedVariable, Path::new("F841_0.py"))]
#[test_case(Rule::UnusedVariable, Path::new("F841_1.py"))]


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt;
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub extend_generics: Vec<String>,
}


@@ -0,0 +1,11 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F822_3.py:12:5: F822 Undefined name `ExponentialFamily` in `__all__`
|
10 | __all__ += [
11 | "ContinuousBernoulli", # noqa: F822
12 | "ExponentialFamily",
| ^^^^^^^^^^^^^^^^^^^ F822
13 | ]
|
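The snapshot exercises F822's core invariant: every name listed in `__all__` must be defined in the module, including names appended via `__all__ +=`. The same invariant can be checked at runtime (the throwaway module below is illustrative, using the names from the snapshot):

```python
import types

# Build a throwaway module whose __all__ lists one undefined name:
mod = types.ModuleType("m")
exec(
    "ContinuousBernoulli = object()\n"
    "__all__ = ['ContinuousBernoulli', 'ExponentialFamily']",
    mod.__dict__,
)
undefined = [name for name in mod.__all__ if not hasattr(mod, name)]
assert undefined == ["ExponentialFamily"]
```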


@@ -1,8 +1,9 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_trivia::Cursor;
use ruff_source_file::Locator;
use ruff_text_size::Ranged;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::noqa::Directive;
@@ -27,15 +28,56 @@ use crate::noqa::Directive;
/// from .base import * # noqa: F403
/// ```
///
/// ## Fix safety
/// This rule will attempt to fix blanket `noqa` annotations that appear to
/// be unintentional. For example, given `# noqa F401`, the rule will suggest
/// inserting a colon, as in `# noqa: F401`.
///
/// While modifying `noqa` comments is generally safe, doing so may introduce
/// additional diagnostics.
///
/// ## References
/// - [Ruff documentation](https://docs.astral.sh/ruff/configuration/#error-suppression)
#[violation]
pub struct BlanketNOQA;
pub struct BlanketNOQA {
missing_colon: bool,
space_before_colon: bool,
}
impl Violation for BlanketNOQA {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
format!("Use specific rule codes when using `noqa`")
let BlanketNOQA {
missing_colon,
space_before_colon,
} = self;
// This awkward branching is necessary to ensure that the generic message is picked up by
// `derive_message_formats`.
if !missing_colon && !space_before_colon {
format!("Use specific rule codes when using `noqa`")
} else if *missing_colon {
format!("Use a colon when specifying `noqa` rule codes")
} else {
format!("Do not add spaces between `noqa` and its colon")
}
}
fn fix_title(&self) -> Option<String> {
let BlanketNOQA {
missing_colon,
space_before_colon,
} = self;
if *missing_colon {
Some("Add missing colon".to_string())
} else if *space_before_colon {
Some("Remove space(s) before colon".to_string())
} else {
None
}
}
}
@@ -47,8 +89,54 @@ pub(crate) fn blanket_noqa(
) {
for range in indexer.comment_ranges() {
let line = locator.slice(*range);
if let Ok(Some(Directive::All(all))) = Directive::try_extract(line, range.start()) {
diagnostics.push(Diagnostic::new(BlanketNOQA, all.range()));
let offset = range.start();
if let Ok(Some(Directive::All(all))) = Directive::try_extract(line, TextSize::new(0)) {
// The `all` range is that of the `noqa` directive in, e.g., `# noqa` or `# noqa F401`.
let noqa_start = offset + all.start();
let noqa_end = offset + all.end();
// Skip the `# noqa`, plus any trailing whitespace.
let mut cursor = Cursor::new(&line[all.end().to_usize()..]);
cursor.eat_while(char::is_whitespace);
// Check for extraneous spaces before the colon.
// Ex) `# noqa : F401`
if cursor.first() == ':' {
let start = offset + all.end();
let end = start + cursor.token_len();
let mut diagnostic = Diagnostic::new(
BlanketNOQA {
missing_colon: false,
space_before_colon: true,
},
TextRange::new(noqa_start, end),
);
diagnostic.set_fix(Fix::unsafe_edit(Edit::deletion(start, end)));
diagnostics.push(diagnostic);
} else if Directive::lex_code(cursor.chars().as_str()).is_some() {
// Check for a missing colon.
// Ex) `# noqa F401`
let start = offset + all.end();
let end = start + TextSize::new(1);
let mut diagnostic = Diagnostic::new(
BlanketNOQA {
missing_colon: true,
space_before_colon: false,
},
TextRange::new(noqa_start, end),
);
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(':'.to_string(), start)));
diagnostics.push(diagnostic);
} else {
// Otherwise, it looks like an intentional blanket `noqa` annotation.
diagnostics.push(Diagnostic::new(
BlanketNOQA {
missing_colon: false,
space_before_colon: false,
},
TextRange::new(noqa_start, noqa_end),
));
}
}
}
}
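The detection logic above distinguishes three shapes of blanket `noqa` comments. A rough Python analogue of the classification (the function name is hypothetical, and this is simplified: the real rule lexes rule codes properly and matches `noqa` case-insensitively):

```python
import re

def classify_noqa(comment: str) -> str:
    match = re.match(r"#\s*noqa(?P<rest>.*)$", comment)
    if match is None:
        return "not-noqa"
    rest = match.group("rest")
    if rest.startswith(":"):
        return "ok"                  # e.g. `# noqa: F401`
    stripped = rest.lstrip(" \t")
    if stripped.startswith(":"):
        return "space-before-colon"  # e.g. `# noqa : F401` -> delete spaces
    if re.match(r"[A-Z]+[0-9]+", stripped):
        return "missing-colon"       # e.g. `# noqa F401` -> insert colon
    return "blanket"                 # e.g. `# noqa` -> flag, no fix

assert classify_noqa("# noqa: F401") == "ok"
assert classify_noqa("# noqa F401") == "missing-colon"
assert classify_noqa("# noqa : X300") == "space-before-colon"
assert classify_noqa("# noqa") == "blanket"
```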


@@ -29,4 +29,97 @@ PGH004_0.py:4:1: PGH004 Use specific rule codes when using `noqa`
6 | # noqa:F401,W203
|
PGH004_0.py:18:8: PGH004 [*] Use a colon when specifying `noqa` rule codes
|
17 | # PGH004
18 | x = 2 # noqa X100
| ^^^^^^^ PGH004
19 |
20 | # PGH004
|
= help: Add missing colon
Unsafe fix
15 15 | x = 2 # noqa:X100
16 16 |
17 17 | # PGH004
18 |-x = 2 # noqa X100
18 |+x = 2 # noqa: X100
19 19 |
20 20 | # PGH004
21 21 | x = 2 # noqa X100, X200
PGH004_0.py:21:8: PGH004 [*] Use a colon when specifying `noqa` rule codes
|
20 | # PGH004
21 | x = 2 # noqa X100, X200
| ^^^^^^^ PGH004
22 |
23 | # PGH004
|
= help: Add missing colon
Unsafe fix
18 18 | x = 2 # noqa X100
19 19 |
20 20 | # PGH004
21 |-x = 2 # noqa X100, X200
21 |+x = 2 # noqa: X100, X200
22 22 |
23 23 | # PGH004
24 24 | x = 2 # noqa : X300
PGH004_0.py:24:8: PGH004 [*] Do not add spaces between `noqa` and its colon
|
23 | # PGH004
24 | x = 2 # noqa : X300
| ^^^^^^^ PGH004
25 |
26 | # PGH004
|
= help: Remove space(s) before colon
Unsafe fix
21 21 | x = 2 # noqa X100, X200
22 22 |
23 23 | # PGH004
24 |-x = 2 # noqa : X300
24 |+x = 2 # noqa: X300
25 25 |
26 26 | # PGH004
27 27 | x = 2 # noqa : X400
PGH004_0.py:27:8: PGH004 [*] Do not add spaces between `noqa` and its colon
|
26 | # PGH004
27 | x = 2 # noqa : X400
| ^^^^^^^^ PGH004
28 |
29 | # PGH004
|
= help: Remove space(s) before colon
Unsafe fix
24 24 | x = 2 # noqa : X300
25 25 |
26 26 | # PGH004
27 |-x = 2 # noqa : X400
27 |+x = 2 # noqa: X400
28 28 |
29 29 | # PGH004
30 30 | x = 2 # noqa :X500
PGH004_0.py:30:8: PGH004 [*] Do not add spaces between `noqa` and its colon
|
29 | # PGH004
30 | x = 2 # noqa :X500
| ^^^^^^^ PGH004
|
= help: Remove space(s) before colon
Unsafe fix
27 27 | x = 2 # noqa : X400
28 28 |
29 29 | # PGH004
30 |-x = 2 # noqa :X500
30 |+x = 2 # noqa:X500


@@ -47,6 +47,7 @@ mod tests {
#[test_case(Rule::EqWithoutHash, Path::new("eq_without_hash.py"))]
#[test_case(Rule::EmptyComment, Path::new("empty_comment.py"))]
#[test_case(Rule::ManualFromImport, Path::new("import_aliasing.py"))]
#[test_case(Rule::IfStmtMinMax, Path::new("if_stmt_min_max.py"))]
#[test_case(Rule::SingleStringSlots, Path::new("single_string_slots.py"))]
#[test_case(Rule::SysExitAlias, Path::new("sys_exit_alias_0.py"))]
#[test_case(Rule::SysExitAlias, Path::new("sys_exit_alias_1.py"))]
@@ -185,11 +186,16 @@ mod tests {
Rule::UnnecessaryDictIndexLookup,
Path::new("unnecessary_dict_index_lookup.py")
)]
#[test_case(Rule::NonAugmentedAssignment, Path::new("non_augmented_assignment.py"))]
#[test_case(
Rule::UselessExceptionStatement,
Path::new("useless_exception_statement.py")
)]
#[test_case(Rule::NanComparison, Path::new("nan_comparison.py"))]
#[test_case(
Rule::BadStaticmethodArgument,
Path::new("bad_staticmethod_argument.py")
)]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(
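The `non_augmented_assignment` test registered above covers PLR6104, which rewrites `x = x + 5` into `x += 5`. The fix is marked unsafe because the two spellings are only equivalent for immutable values; a quick illustration (variable names are arbitrary):

```python
x = 10
x = x + 5      # PLR6104 suggests: x += 5
x_aug = 10
x_aug += 5
assert x == x_aug == 15

# For mutable values the two spellings are NOT interchangeable:
a = b = [1]
a = a + [2]    # rebinds `a` to a new list; `b` is untouched
assert b == [1]
c = d = [1]
c += [2]       # mutates the list in place; `d` sees the change
assert d == [1, 2]
```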


@@ -0,0 +1,103 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::ParameterWithDefault;
use ruff_python_semantic::analyze::function_type;
use ruff_python_semantic::Scope;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for static methods that use `self` or `cls` as their first argument.
///
/// ## Why is this bad?
/// [PEP 8] recommends the use of `self` and `cls` as the first arguments for
/// instance methods and class methods, respectively. Naming the first argument
/// of a static method as `self` or `cls` can be misleading, as static methods
/// do not receive an instance or class reference as their first argument.
///
/// ## Example
/// ```python
/// class Wolf:
/// @staticmethod
/// def eat(self):
/// pass
/// ```
///
/// Use instead:
/// ```python
/// class Wolf:
/// @staticmethod
/// def eat(sheep):
/// pass
/// ```
///
/// [PEP 8]: https://peps.python.org/pep-0008/#function-and-method-arguments
#[violation]
pub struct BadStaticmethodArgument {
argument_name: String,
}
impl Violation for BadStaticmethodArgument {
#[derive_message_formats]
fn message(&self) -> String {
let Self { argument_name } = self;
format!("First argument of a static method should not be named `{argument_name}`")
}
}
/// PLW0211
pub(crate) fn bad_staticmethod_argument(
checker: &Checker,
scope: &Scope,
diagnostics: &mut Vec<Diagnostic>,
) {
let Some(func) = scope.kind.as_function() else {
return;
};
let ast::StmtFunctionDef {
name,
decorator_list,
parameters,
..
} = func;
let Some(parent) = &checker.semantic().first_non_type_parent_scope(scope) else {
return;
};
let type_ = function_type::classify(
name,
decorator_list,
parent,
checker.semantic(),
&checker.settings.pep8_naming.classmethod_decorators,
&checker.settings.pep8_naming.staticmethod_decorators,
);
if !matches!(type_, function_type::FunctionType::StaticMethod) {
return;
}
let Some(ParameterWithDefault {
parameter: self_or_cls,
..
}) = parameters
.posonlyargs
.first()
.or_else(|| parameters.args.first())
else {
return;
};
if matches!(self_or_cls.name.as_str(), "self" | "cls") {
let diagnostic = Diagnostic::new(
BadStaticmethodArgument {
argument_name: self_or_cls.name.to_string(),
},
self_or_cls.range(),
);
diagnostics.push(diagnostic);
}
}
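The rule's docstring example can be made concrete: on a `@staticmethod`, a parameter named `self` is an ordinary positional parameter and never receives the instance, even when the method is called on one:

```python
class Wolf:
    @staticmethod
    def eat(self):  # PLW0211: `self` here is just a regular parameter
        return f"eating {self}"

# The first argument is whatever the caller passes, even via an instance:
assert Wolf.eat("sheep") == "eating sheep"
assert Wolf().eat("grass") == "eating grass"
```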


@@ -0,0 +1,190 @@
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::{self as ast, CmpOp, Stmt};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
/// ## What it does
/// Checks for `if` statements that can be replaced with `min()` or `max()`
/// calls.
///
/// ## Why is this bad?
/// An `if` statement that selects the lesser or greater of two sub-expressions
/// can be replaced with a `min()` or `max()` call respectively. When possible,
/// prefer `min()` and `max()`, as they're more concise and readable than the
/// equivalent `if` statements.
///
/// ## Example
/// ```python
/// if score > highest_score:
/// highest_score = score
/// ```
///
/// Use instead:
/// ```python
/// highest_score = max(highest_score, score)
/// ```
///
/// ## References
/// - [Python documentation: max function](https://docs.python.org/3/library/functions.html#max)
/// - [Python documentation: min function](https://docs.python.org/3/library/functions.html#min)
#[violation]
pub struct IfStmtMinMax {
min_max: MinMax,
replacement: SourceCodeSnippet,
}
impl Violation for IfStmtMinMax {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let Self {
min_max,
replacement,
} = self;
if let Some(replacement) = replacement.full_display() {
format!("Replace `if` statement with `{replacement}`")
} else {
format!("Replace `if` statement with `{min_max}` call")
}
}
fn fix_title(&self) -> Option<String> {
let Self {
min_max,
replacement,
} = self;
if let Some(replacement) = replacement.full_display() {
Some(format!("Replace with `{replacement}`"))
} else {
Some(format!("Replace with `{min_max}` call"))
}
}
}
/// R1730, R1731
pub(crate) fn if_stmt_min_max(checker: &mut Checker, stmt_if: &ast::StmtIf) {
let ast::StmtIf {
test,
body,
elif_else_clauses,
range: _,
} = stmt_if;
if !elif_else_clauses.is_empty() {
return;
}
let [body @ Stmt::Assign(ast::StmtAssign {
targets: body_targets,
value: body_value,
..
})] = body.as_slice()
else {
return;
};
let [body_target] = body_targets.as_slice() else {
return;
};
let Some(ast::ExprCompare {
ops,
left,
comparators,
..
}) = test.as_compare_expr()
else {
return;
};
// Ignore, e.g., `foo < bar < baz`.
let [op] = &**ops else {
return;
};
// Determine whether to use `min()` or `max()`, and whether to flip the
// order of the arguments, which is relevant for breaking ties.
let (min_max, flip_args) = match op {
CmpOp::Gt => (MinMax::Min, true),
CmpOp::GtE => (MinMax::Min, false),
CmpOp::Lt => (MinMax::Max, true),
CmpOp::LtE => (MinMax::Max, false),
_ => return,
};
let [right] = &**comparators else {
return;
};
let left_cmp = ComparableExpr::from(left);
let body_target_cmp = ComparableExpr::from(body_target);
let right_statement_cmp = ComparableExpr::from(right);
let body_value_cmp = ComparableExpr::from(body_value);
if left_cmp != body_target_cmp || right_statement_cmp != body_value_cmp {
return;
}
let (arg1, arg2) = if flip_args {
(left.as_ref(), right)
} else {
(right, left.as_ref())
};
let replacement = format!(
"{} = {min_max}({}, {})",
checker.locator().slice(
parenthesized_range(
body_target.into(),
body.into(),
checker.indexer().comment_ranges(),
checker.locator().contents()
)
.unwrap_or(body_target.range())
),
checker.locator().slice(arg1),
checker.locator().slice(arg2),
);
let mut diagnostic = Diagnostic::new(
IfStmtMinMax {
min_max,
replacement: SourceCodeSnippet::from_str(replacement.as_str()),
},
stmt_if.range(),
);
if checker.semantic().is_builtin(min_max.as_str()) {
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
replacement,
stmt_if.range(),
)));
}
checker.diagnostics.push(diagnostic);
}
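The comparator-to-builtin mapping above (and the `flip_args` tie-breaking logic) can be sanity-checked with a minimal Python sketch; the helper names here are hypothetical, written only to mirror the `if`-statement forms the rule rewrites:

```python
def if_lt(value, limit):
    # Mirrors `if value < limit: value = limit`, which the rule
    # rewrites as `value = max(value, limit)`.
    if value < limit:
        value = limit
    return value


def if_gte(value, limit):
    # Mirrors `if value >= limit: value = limit`, rewritten as
    # `value = min(limit, value)` -- note the flipped argument order.
    if value >= limit:
        value = limit
    return value


for v in (5, 10, 15):
    assert if_lt(v, 10) == max(v, 10)
    assert if_gte(v, 10) == min(10, v)
```

The argument order matters because `min()`/`max()` return their first argument on a tie: a strict comparison (`<`, `>`) keeps the assignment target when the operands are equal, while a non-strict one (`<=`, `>=`) assigns the right-hand operand, so the rewrite must place the tie-winner first.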
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum MinMax {
Min,
Max,
}
impl MinMax {
fn as_str(self) -> &'static str {
match self {
Self::Min => "min",
Self::Max => "max",
}
}
}
impl std::fmt::Display for MinMax {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "{}", self.as_str())
}
}


@@ -3,6 +3,7 @@ pub(crate) use assert_on_string_literal::*;
pub(crate) use await_outside_async::*;
pub(crate) use bad_dunder_method_name::*;
pub(crate) use bad_open_mode::*;
pub(crate) use bad_staticmethod_argument::*;
pub(crate) use bad_str_strip_call::*;
pub(crate) use bad_string_format_character::BadStringFormatCharacter;
pub(crate) use bad_string_format_type::*;
@@ -20,6 +21,7 @@ pub(crate) use eq_without_hash::*;
pub(crate) use global_at_module_level::*;
pub(crate) use global_statement::*;
pub(crate) use global_variable_not_assigned::*;
pub(crate) use if_stmt_min_max::*;
pub(crate) use import_outside_top_level::*;
pub(crate) use import_private_name::*;
pub(crate) use import_self::*;
@@ -45,6 +47,7 @@ pub(crate) use no_method_decorator::*;
pub(crate) use no_self_use::*;
pub(crate) use non_ascii_module_import::*;
pub(crate) use non_ascii_name::*;
pub(crate) use non_augmented_assignment::*;
pub(crate) use non_slot_assignment::*;
pub(crate) use nonlocal_and_global::*;
pub(crate) use nonlocal_without_binding::*;
@@ -97,6 +100,7 @@ mod assert_on_string_literal;
mod await_outside_async;
mod bad_dunder_method_name;
mod bad_open_mode;
mod bad_staticmethod_argument;
mod bad_str_strip_call;
pub(crate) mod bad_string_format_character;
mod bad_string_format_type;
@@ -114,6 +118,7 @@ mod eq_without_hash;
mod global_at_module_level;
mod global_statement;
mod global_variable_not_assigned;
mod if_stmt_min_max;
mod import_outside_top_level;
mod import_private_name;
mod import_self;
@@ -139,6 +144,7 @@ mod no_method_decorator;
mod no_self_use;
mod non_ascii_module_import;
mod non_ascii_name;
mod non_augmented_assignment;
mod non_slot_assignment;
mod nonlocal_and_global;
mod nonlocal_without_binding;


@@ -47,7 +47,7 @@ impl Violation for NanComparison {
}
}
/// PLW0117
/// PLW0177
pub(crate) fn nan_comparison(checker: &mut Checker, left: &Expr, comparators: &[Expr]) {
for expr in std::iter::once(left).chain(comparators.iter()) {
if let Some(qualified_name) = checker.semantic().resolve_qualified_name(expr) {


@@ -0,0 +1,202 @@
use ast::{Expr, StmtAugAssign};
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::Operator;
use ruff_python_codegen::Generator;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for assignments that can be replaced with augmented assignment
/// statements.
///
/// ## Why is this bad?
/// If an assignment statement consists of a binary operation in which one
/// operand is the same as the assignment target, it can be rewritten as an
/// augmented assignment. For example, `x = x + 1` can be rewritten as
/// `x += 1`.
///
/// When performing such an operation, augmented assignments are more concise
/// and idiomatic.
///
/// ## Example
/// ```python
/// x = x + 1
/// ```
///
/// Use instead:
/// ```python
/// x += 1
/// ```
///
/// ## Fix safety
/// This rule's fix is marked as unsafe, as augmented assignments have
/// different semantics when the target is a mutable data type, like a list or
/// dictionary.
///
/// For example, consider the following:
///
/// ```python
/// foo = [1]
/// bar = foo
/// foo = foo + [2]
/// assert (foo, bar) == ([1, 2], [1])
/// ```
///
/// If the assignment is replaced with an augmented assignment, the update
/// operation will apply to both `foo` and `bar`, as they refer to the same
/// object:
///
/// ```python
/// foo = [1]
/// bar = foo
/// foo += [2]
/// assert (foo, bar) == ([1, 2], [1, 2])
/// ```
#[violation]
pub struct NonAugmentedAssignment {
operator: AugmentedOperator,
}
impl AlwaysFixableViolation for NonAugmentedAssignment {
#[derive_message_formats]
fn message(&self) -> String {
let NonAugmentedAssignment { operator } = self;
format!("Use `{operator}` to perform an augmented assignment directly")
}
fn fix_title(&self) -> String {
"Replace with augmented assignment".to_string()
}
}
/// PLR6104
pub(crate) fn non_augmented_assignment(checker: &mut Checker, assign: &ast::StmtAssign) {
// Ignore multiple assignment targets.
let [target] = assign.targets.as_slice() else {
return;
};
// Match, e.g., `x = x + 1`.
let Expr::BinOp(value) = &*assign.value else {
return;
};
// Match, e.g., `x = x + 1`.
if ComparableExpr::from(target) == ComparableExpr::from(&value.left) {
let mut diagnostic = Diagnostic::new(
NonAugmentedAssignment {
operator: AugmentedOperator::from(value.op),
},
assign.range(),
);
diagnostic.set_fix(Fix::unsafe_edit(augmented_assignment(
checker.generator(),
target,
value.op,
&value.right,
assign.range(),
)));
checker.diagnostics.push(diagnostic);
return;
}
// Match, e.g., `x = 1 + x`.
if ComparableExpr::from(target) == ComparableExpr::from(&value.right) {
let mut diagnostic = Diagnostic::new(
NonAugmentedAssignment {
operator: AugmentedOperator::from(value.op),
},
assign.range(),
);
diagnostic.set_fix(Fix::unsafe_edit(augmented_assignment(
checker.generator(),
target,
value.op,
&value.left,
assign.range(),
)));
checker.diagnostics.push(diagnostic);
}
}
/// Generate a fix to convert an assignment statement to an augmented assignment.
///
/// For example, given `x = x + 1`, the fix would be `x += 1`.
fn augmented_assignment(
generator: Generator,
target: &Expr,
operator: Operator,
right_operand: &Expr,
range: TextRange,
) -> Edit {
Edit::range_replacement(
generator.stmt(&ast::Stmt::AugAssign(StmtAugAssign {
range: TextRange::default(),
target: Box::new(target.clone()),
op: operator,
value: Box::new(right_operand.clone()),
})),
range,
)
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum AugmentedOperator {
Add,
BitAnd,
BitOr,
BitXor,
Div,
FloorDiv,
LShift,
MatMult,
Mod,
Mult,
Pow,
RShift,
Sub,
}
impl From<Operator> for AugmentedOperator {
fn from(value: Operator) -> Self {
match value {
Operator::Add => Self::Add,
Operator::BitAnd => Self::BitAnd,
Operator::BitOr => Self::BitOr,
Operator::BitXor => Self::BitXor,
Operator::Div => Self::Div,
Operator::FloorDiv => Self::FloorDiv,
Operator::LShift => Self::LShift,
Operator::MatMult => Self::MatMult,
Operator::Mod => Self::Mod,
Operator::Mult => Self::Mult,
Operator::Pow => Self::Pow,
Operator::RShift => Self::RShift,
Operator::Sub => Self::Sub,
}
}
}
impl std::fmt::Display for AugmentedOperator {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Add => f.write_str("+="),
Self::BitAnd => f.write_str("&="),
Self::BitOr => f.write_str("|="),
Self::BitXor => f.write_str("^="),
Self::Div => f.write_str("/="),
Self::FloorDiv => f.write_str("//="),
Self::LShift => f.write_str("<<="),
Self::MatMult => f.write_str("@="),
Self::Mod => f.write_str("%="),
Self::Mult => f.write_str("*="),
Self::Pow => f.write_str("**="),
Self::RShift => f.write_str(">>="),
Self::Sub => f.write_str("-="),
}
}
}
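The "Fix safety" caveat in the rule documentation above is directly reproducible: plain assignment rebinds the name to a new object, while augmented assignment on a list mutates the shared object in place via `list.__iadd__`.

```python
# Plain assignment: `foo + [2]` builds a new list, then `foo` is rebound
# to it; `bar` still refers to the original object.
foo = [1]
bar = foo
foo = foo + [2]
assert (foo, bar) == ([1, 2], [1])

# Augmented assignment: `foo += [2]` mutates the existing list in place,
# so the change is visible through every alias, including `bar`.
foo = [1]
bar = foo
foo += [2]
assert (foo, bar) == ([1, 2], [1, 2])
```

This aliasing difference is exactly why the PLR6104 fix is marked unsafe: the two forms are only equivalent when the target is immutable (or never aliased).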


@@ -48,7 +48,7 @@ impl fmt::Display for ConstantType {
}
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub allow_magic_value_types: Vec<ConstantType>,
pub allow_dunder_method_names: FxHashSet<String>,


@@ -0,0 +1,296 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
if_stmt_min_max.py:8:1: PLR1730 [*] Replace `if` statement with `value = max(value, 10)`
|
7 | # Positive
8 | / if value < 10: # [max-instead-of-if]
9 | | value = 10
| |______________^ PLR1730
10 |
11 | if value <= 10: # [max-instead-of-if]
|
= help: Replace with `value = max(value, 10)`
Safe fix
5 5 | value3 = 3
6 6 |
7 7 | # Positive
8 |-if value < 10: # [max-instead-of-if]
9 |- value = 10
8 |+value = max(value, 10)
10 9 |
11 10 | if value <= 10: # [max-instead-of-if]
12 11 | value = 10
if_stmt_min_max.py:11:1: PLR1730 [*] Replace `if` statement with `value = max(10, value)`
|
9 | value = 10
10 |
11 | / if value <= 10: # [max-instead-of-if]
12 | | value = 10
| |______________^ PLR1730
13 |
14 | if value < value2: # [max-instead-of-if]
|
= help: Replace with `value = max(10, value)`
Safe fix
8 8 | if value < 10: # [max-instead-of-if]
9 9 | value = 10
10 10 |
11 |-if value <= 10: # [max-instead-of-if]
12 |- value = 10
11 |+value = max(10, value)
13 12 |
14 13 | if value < value2: # [max-instead-of-if]
15 14 | value = value2
if_stmt_min_max.py:14:1: PLR1730 [*] Replace `if` statement with `value = max(value, value2)`
|
12 | value = 10
13 |
14 | / if value < value2: # [max-instead-of-if]
15 | | value = value2
| |__________________^ PLR1730
16 |
17 | if value > 10: # [min-instead-of-if]
|
= help: Replace with `value = max(value, value2)`
Safe fix
11 11 | if value <= 10: # [max-instead-of-if]
12 12 | value = 10
13 13 |
14 |-if value < value2: # [max-instead-of-if]
15 |- value = value2
14 |+value = max(value, value2)
16 15 |
17 16 | if value > 10: # [min-instead-of-if]
18 17 | value = 10
if_stmt_min_max.py:17:1: PLR1730 [*] Replace `if` statement with `value = min(value, 10)`
|
15 | value = value2
16 |
17 | / if value > 10: # [min-instead-of-if]
18 | | value = 10
| |______________^ PLR1730
19 |
20 | if value >= 10: # [min-instead-of-if]
|
= help: Replace with `value = min(value, 10)`
Safe fix
14 14 | if value < value2: # [max-instead-of-if]
15 15 | value = value2
16 16 |
17 |-if value > 10: # [min-instead-of-if]
18 |- value = 10
17 |+value = min(value, 10)
19 18 |
20 19 | if value >= 10: # [min-instead-of-if]
21 20 | value = 10
if_stmt_min_max.py:20:1: PLR1730 [*] Replace `if` statement with `value = min(10, value)`
|
18 | value = 10
19 |
20 | / if value >= 10: # [min-instead-of-if]
21 | | value = 10
| |______________^ PLR1730
22 |
23 | if value > value2: # [min-instead-of-if]
|
= help: Replace with `value = min(10, value)`
Safe fix
17 17 | if value > 10: # [min-instead-of-if]
18 18 | value = 10
19 19 |
20 |-if value >= 10: # [min-instead-of-if]
21 |- value = 10
20 |+value = min(10, value)
22 21 |
23 22 | if value > value2: # [min-instead-of-if]
24 23 | value = value2
if_stmt_min_max.py:23:1: PLR1730 [*] Replace `if` statement with `value = min(value, value2)`
|
21 | value = 10
22 |
23 | / if value > value2: # [min-instead-of-if]
24 | | value = value2
| |__________________^ PLR1730
|
= help: Replace with `value = min(value, value2)`
Safe fix
20 20 | if value >= 10: # [min-instead-of-if]
21 21 | value = 10
22 22 |
23 |-if value > value2: # [min-instead-of-if]
24 |- value = value2
23 |+value = min(value, value2)
25 24 |
26 25 |
27 26 | class A:
if_stmt_min_max.py:33:1: PLR1730 [*] Replace `if` statement with `A1.value = max(A1.value, 10)`
|
32 | A1 = A()
33 | / if A1.value < 10: # [max-instead-of-if]
34 | | A1.value = 10
| |_________________^ PLR1730
35 |
36 | if A1.value > 10: # [min-instead-of-if]
|
= help: Replace with `A1.value = max(A1.value, 10)`
Safe fix
30 30 |
31 31 |
32 32 | A1 = A()
33 |-if A1.value < 10: # [max-instead-of-if]
34 |- A1.value = 10
33 |+A1.value = max(A1.value, 10)
35 34 |
36 35 | if A1.value > 10: # [min-instead-of-if]
37 36 | A1.value = 10
if_stmt_min_max.py:36:1: PLR1730 [*] Replace `if` statement with `A1.value = min(A1.value, 10)`
|
34 | A1.value = 10
35 |
36 | / if A1.value > 10: # [min-instead-of-if]
37 | | A1.value = 10
| |_________________^ PLR1730
|
= help: Replace with `A1.value = min(A1.value, 10)`
Safe fix
33 33 | if A1.value < 10: # [max-instead-of-if]
34 34 | A1.value = 10
35 35 |
36 |-if A1.value > 10: # [min-instead-of-if]
37 |- A1.value = 10
36 |+A1.value = min(A1.value, 10)
38 37 |
39 38 |
40 39 | class AA:
if_stmt_min_max.py:60:1: PLR1730 [*] Replace `if` statement with `A2 = max(A2, A1)`
|
58 | A2 = AA(3)
59 |
60 | / if A2 < A1: # [max-instead-of-if]
61 | | A2 = A1
| |___________^ PLR1730
62 |
63 | if A2 <= A1: # [max-instead-of-if]
|
= help: Replace with `A2 = max(A2, A1)`
Safe fix
57 57 | A1 = AA(0)
58 58 | A2 = AA(3)
59 59 |
60 |-if A2 < A1: # [max-instead-of-if]
61 |- A2 = A1
60 |+A2 = max(A2, A1)
62 61 |
63 62 | if A2 <= A1: # [max-instead-of-if]
64 63 | A2 = A1
if_stmt_min_max.py:63:1: PLR1730 [*] Replace `if` statement with `A2 = max(A1, A2)`
|
61 | A2 = A1
62 |
63 | / if A2 <= A1: # [max-instead-of-if]
64 | | A2 = A1
| |___________^ PLR1730
65 |
66 | if A2 > A1: # [min-instead-of-if]
|
= help: Replace with `A2 = max(A1, A2)`
Safe fix
60 60 | if A2 < A1: # [max-instead-of-if]
61 61 | A2 = A1
62 62 |
63 |-if A2 <= A1: # [max-instead-of-if]
64 |- A2 = A1
63 |+A2 = max(A1, A2)
65 64 |
66 65 | if A2 > A1: # [min-instead-of-if]
67 66 | A2 = A1
if_stmt_min_max.py:66:1: PLR1730 [*] Replace `if` statement with `A2 = min(A2, A1)`
|
64 | A2 = A1
65 |
66 | / if A2 > A1: # [min-instead-of-if]
67 | | A2 = A1
| |___________^ PLR1730
68 |
69 | if A2 >= A1: # [min-instead-of-if]
|
= help: Replace with `A2 = min(A2, A1)`
Safe fix
63 63 | if A2 <= A1: # [max-instead-of-if]
64 64 | A2 = A1
65 65 |
66 |-if A2 > A1: # [min-instead-of-if]
67 |- A2 = A1
66 |+A2 = min(A2, A1)
68 67 |
69 68 | if A2 >= A1: # [min-instead-of-if]
70 69 | A2 = A1
if_stmt_min_max.py:69:1: PLR1730 [*] Replace `if` statement with `A2 = min(A1, A2)`
|
67 | A2 = A1
68 |
69 | / if A2 >= A1: # [min-instead-of-if]
70 | | A2 = A1
| |___________^ PLR1730
71 |
72 | # Negative
|
= help: Replace with `A2 = min(A1, A2)`
Safe fix
66 66 | if A2 > A1: # [min-instead-of-if]
67 67 | A2 = A1
68 68 |
69 |-if A2 >= A1: # [min-instead-of-if]
70 |- A2 = A1
69 |+A2 = min(A1, A2)
71 70 |
72 71 | # Negative
73 72 | if value < 10:
if_stmt_min_max.py:132:1: PLR1730 [*] Replace `if` statement with `min` call
|
131 | # Parenthesized expressions
132 | / if value.attr > 3:
133 | | (
134 | | value.
135 | | attr
136 | | ) = 3
| |_________^ PLR1730
|
= help: Replace with `min` call
Safe fix
129 129 | value = 2
130 130 |
131 131 | # Parenthesized expressions
132 |-if value.attr > 3:
133 |- (
132 |+(
134 133 | value.
135 134 | attr
136 |- ) = 3
135 |+ ) = min(value.attr, 3)


@@ -0,0 +1,518 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
non_augmented_assignment.py:16:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
14 | mat1, mat2 = None, None
15 |
16 | some_string = some_string + "a very long end of string"
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
17 | index = index - 1
18 | a_list = a_list + ["to concat"]
|
= help: Replace with augmented assignment
Unsafe fix
13 13 | some_set = {"elem"}
14 14 | mat1, mat2 = None, None
15 15 |
16 |-some_string = some_string + "a very long end of string"
16 |+some_string += "a very long end of string"
17 17 | index = index - 1
18 18 | a_list = a_list + ["to concat"]
19 19 | some_set = some_set | {"to concat"}
non_augmented_assignment.py:17:1: PLR6104 [*] Use `-=` to perform an augmented assignment directly
|
16 | some_string = some_string + "a very long end of string"
17 | index = index - 1
| ^^^^^^^^^^^^^^^^^ PLR6104
18 | a_list = a_list + ["to concat"]
19 | some_set = some_set | {"to concat"}
|
= help: Replace with augmented assignment
Unsafe fix
14 14 | mat1, mat2 = None, None
15 15 |
16 16 | some_string = some_string + "a very long end of string"
17 |-index = index - 1
17 |+index -= 1
18 18 | a_list = a_list + ["to concat"]
19 19 | some_set = some_set | {"to concat"}
20 20 | to_multiply = to_multiply * 5
non_augmented_assignment.py:18:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
16 | some_string = some_string + "a very long end of string"
17 | index = index - 1
18 | a_list = a_list + ["to concat"]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
19 | some_set = some_set | {"to concat"}
20 | to_multiply = to_multiply * 5
|
= help: Replace with augmented assignment
Unsafe fix
15 15 |
16 16 | some_string = some_string + "a very long end of string"
17 17 | index = index - 1
18 |-a_list = a_list + ["to concat"]
18 |+a_list += ["to concat"]
19 19 | some_set = some_set | {"to concat"}
20 20 | to_multiply = to_multiply * 5
21 21 | to_divide = to_divide / 5
non_augmented_assignment.py:19:1: PLR6104 [*] Use `|=` to perform an augmented assignment directly
|
17 | index = index - 1
18 | a_list = a_list + ["to concat"]
19 | some_set = some_set | {"to concat"}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
20 | to_multiply = to_multiply * 5
21 | to_divide = to_divide / 5
|
= help: Replace with augmented assignment
Unsafe fix
16 16 | some_string = some_string + "a very long end of string"
17 17 | index = index - 1
18 18 | a_list = a_list + ["to concat"]
19 |-some_set = some_set | {"to concat"}
19 |+some_set |= {"to concat"}
20 20 | to_multiply = to_multiply * 5
21 21 | to_divide = to_divide / 5
22 22 | to_divide = to_divide // 5
non_augmented_assignment.py:20:1: PLR6104 [*] Use `*=` to perform an augmented assignment directly
|
18 | a_list = a_list + ["to concat"]
19 | some_set = some_set | {"to concat"}
20 | to_multiply = to_multiply * 5
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
21 | to_divide = to_divide / 5
22 | to_divide = to_divide // 5
|
= help: Replace with augmented assignment
Unsafe fix
17 17 | index = index - 1
18 18 | a_list = a_list + ["to concat"]
19 19 | some_set = some_set | {"to concat"}
20 |-to_multiply = to_multiply * 5
20 |+to_multiply *= 5
21 21 | to_divide = to_divide / 5
22 22 | to_divide = to_divide // 5
23 23 | to_cube = to_cube**3
non_augmented_assignment.py:21:1: PLR6104 [*] Use `/=` to perform an augmented assignment directly
|
19 | some_set = some_set | {"to concat"}
20 | to_multiply = to_multiply * 5
21 | to_divide = to_divide / 5
| ^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
22 | to_divide = to_divide // 5
23 | to_cube = to_cube**3
|
= help: Replace with augmented assignment
Unsafe fix
18 18 | a_list = a_list + ["to concat"]
19 19 | some_set = some_set | {"to concat"}
20 20 | to_multiply = to_multiply * 5
21 |-to_divide = to_divide / 5
21 |+to_divide /= 5
22 22 | to_divide = to_divide // 5
23 23 | to_cube = to_cube**3
24 24 | to_cube = 3**to_cube
non_augmented_assignment.py:22:1: PLR6104 [*] Use `//=` to perform an augmented assignment directly
|
20 | to_multiply = to_multiply * 5
21 | to_divide = to_divide / 5
22 | to_divide = to_divide // 5
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
23 | to_cube = to_cube**3
24 | to_cube = 3**to_cube
|
= help: Replace with augmented assignment
Unsafe fix
19 19 | some_set = some_set | {"to concat"}
20 20 | to_multiply = to_multiply * 5
21 21 | to_divide = to_divide / 5
22 |-to_divide = to_divide // 5
22 |+to_divide //= 5
23 23 | to_cube = to_cube**3
24 24 | to_cube = 3**to_cube
25 25 | to_cube = to_cube**to_cube
non_augmented_assignment.py:23:1: PLR6104 [*] Use `**=` to perform an augmented assignment directly
|
21 | to_divide = to_divide / 5
22 | to_divide = to_divide // 5
23 | to_cube = to_cube**3
| ^^^^^^^^^^^^^^^^^^^^ PLR6104
24 | to_cube = 3**to_cube
25 | to_cube = to_cube**to_cube
|
= help: Replace with augmented assignment
Unsafe fix
20 20 | to_multiply = to_multiply * 5
21 21 | to_divide = to_divide / 5
22 22 | to_divide = to_divide // 5
23 |-to_cube = to_cube**3
23 |+to_cube **= 3
24 24 | to_cube = 3**to_cube
25 25 | to_cube = to_cube**to_cube
26 26 | timeDiffSeconds = timeDiffSeconds % 60
non_augmented_assignment.py:24:1: PLR6104 [*] Use `**=` to perform an augmented assignment directly
|
22 | to_divide = to_divide // 5
23 | to_cube = to_cube**3
24 | to_cube = 3**to_cube
| ^^^^^^^^^^^^^^^^^^^^ PLR6104
25 | to_cube = to_cube**to_cube
26 | timeDiffSeconds = timeDiffSeconds % 60
|
= help: Replace with augmented assignment
Unsafe fix
21 21 | to_divide = to_divide / 5
22 22 | to_divide = to_divide // 5
23 23 | to_cube = to_cube**3
24 |-to_cube = 3**to_cube
24 |+to_cube **= 3
25 25 | to_cube = to_cube**to_cube
26 26 | timeDiffSeconds = timeDiffSeconds % 60
27 27 | flags = flags & 0x1
non_augmented_assignment.py:25:1: PLR6104 [*] Use `**=` to perform an augmented assignment directly
|
23 | to_cube = to_cube**3
24 | to_cube = 3**to_cube
25 | to_cube = to_cube**to_cube
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
26 | timeDiffSeconds = timeDiffSeconds % 60
27 | flags = flags & 0x1
|
= help: Replace with augmented assignment
Unsafe fix
22 22 | to_divide = to_divide // 5
23 23 | to_cube = to_cube**3
24 24 | to_cube = 3**to_cube
25 |-to_cube = to_cube**to_cube
25 |+to_cube **= to_cube
26 26 | timeDiffSeconds = timeDiffSeconds % 60
27 27 | flags = flags & 0x1
28 28 | flags = flags | 0x1
non_augmented_assignment.py:26:1: PLR6104 [*] Use `%=` to perform an augmented assignment directly
|
24 | to_cube = 3**to_cube
25 | to_cube = to_cube**to_cube
26 | timeDiffSeconds = timeDiffSeconds % 60
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
27 | flags = flags & 0x1
28 | flags = flags | 0x1
|
= help: Replace with augmented assignment
Unsafe fix
23 23 | to_cube = to_cube**3
24 24 | to_cube = 3**to_cube
25 25 | to_cube = to_cube**to_cube
26 |-timeDiffSeconds = timeDiffSeconds % 60
26 |+timeDiffSeconds %= 60
27 27 | flags = flags & 0x1
28 28 | flags = flags | 0x1
29 29 | flags = flags ^ 0x1
non_augmented_assignment.py:27:1: PLR6104 [*] Use `&=` to perform an augmented assignment directly
|
25 | to_cube = to_cube**to_cube
26 | timeDiffSeconds = timeDiffSeconds % 60
27 | flags = flags & 0x1
| ^^^^^^^^^^^^^^^^^^^ PLR6104
28 | flags = flags | 0x1
29 | flags = flags ^ 0x1
|
= help: Replace with augmented assignment
Unsafe fix
24 24 | to_cube = 3**to_cube
25 25 | to_cube = to_cube**to_cube
26 26 | timeDiffSeconds = timeDiffSeconds % 60
27 |-flags = flags & 0x1
27 |+flags &= 1
28 28 | flags = flags | 0x1
29 29 | flags = flags ^ 0x1
30 30 | flags = flags << 1
non_augmented_assignment.py:28:1: PLR6104 [*] Use `|=` to perform an augmented assignment directly
|
26 | timeDiffSeconds = timeDiffSeconds % 60
27 | flags = flags & 0x1
28 | flags = flags | 0x1
| ^^^^^^^^^^^^^^^^^^^ PLR6104
29 | flags = flags ^ 0x1
30 | flags = flags << 1
|
= help: Replace with augmented assignment
Unsafe fix
25 25 | to_cube = to_cube**to_cube
26 26 | timeDiffSeconds = timeDiffSeconds % 60
27 27 | flags = flags & 0x1
28 |-flags = flags | 0x1
28 |+flags |= 1
29 29 | flags = flags ^ 0x1
30 30 | flags = flags << 1
31 31 | flags = flags >> 1
non_augmented_assignment.py:29:1: PLR6104 [*] Use `^=` to perform an augmented assignment directly
|
27 | flags = flags & 0x1
28 | flags = flags | 0x1
29 | flags = flags ^ 0x1
| ^^^^^^^^^^^^^^^^^^^ PLR6104
30 | flags = flags << 1
31 | flags = flags >> 1
|
= help: Replace with augmented assignment
Unsafe fix
26 26 | timeDiffSeconds = timeDiffSeconds % 60
27 27 | flags = flags & 0x1
28 28 | flags = flags | 0x1
29 |-flags = flags ^ 0x1
29 |+flags ^= 1
30 30 | flags = flags << 1
31 31 | flags = flags >> 1
32 32 | mat1 = mat1 @ mat2
non_augmented_assignment.py:30:1: PLR6104 [*] Use `<<=` to perform an augmented assignment directly
|
28 | flags = flags | 0x1
29 | flags = flags ^ 0x1
30 | flags = flags << 1
| ^^^^^^^^^^^^^^^^^^ PLR6104
31 | flags = flags >> 1
32 | mat1 = mat1 @ mat2
|
= help: Replace with augmented assignment
Unsafe fix
27 27 | flags = flags & 0x1
28 28 | flags = flags | 0x1
29 29 | flags = flags ^ 0x1
30 |-flags = flags << 1
30 |+flags <<= 1
31 31 | flags = flags >> 1
32 32 | mat1 = mat1 @ mat2
33 33 | a_list[1] = a_list[1] + 1
non_augmented_assignment.py:31:1: PLR6104 [*] Use `>>=` to perform an augmented assignment directly
|
29 | flags = flags ^ 0x1
30 | flags = flags << 1
31 | flags = flags >> 1
| ^^^^^^^^^^^^^^^^^^ PLR6104
32 | mat1 = mat1 @ mat2
33 | a_list[1] = a_list[1] + 1
|
= help: Replace with augmented assignment
Unsafe fix
28 28 | flags = flags | 0x1
29 29 | flags = flags ^ 0x1
30 30 | flags = flags << 1
31 |-flags = flags >> 1
31 |+flags >>= 1
32 32 | mat1 = mat1 @ mat2
33 33 | a_list[1] = a_list[1] + 1
34 34 |
non_augmented_assignment.py:32:1: PLR6104 [*] Use `@=` to perform an augmented assignment directly
|
30 | flags = flags << 1
31 | flags = flags >> 1
32 | mat1 = mat1 @ mat2
| ^^^^^^^^^^^^^^^^^^ PLR6104
33 | a_list[1] = a_list[1] + 1
|
= help: Replace with augmented assignment
Unsafe fix
29 29 | flags = flags ^ 0x1
30 30 | flags = flags << 1
31 31 | flags = flags >> 1
32 |-mat1 = mat1 @ mat2
32 |+mat1 @= mat2
33 33 | a_list[1] = a_list[1] + 1
34 34 |
35 35 | a_list[0:2] = a_list[0:2] * 3
non_augmented_assignment.py:33:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
31 | flags = flags >> 1
32 | mat1 = mat1 @ mat2
33 | a_list[1] = a_list[1] + 1
| ^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
34 |
35 | a_list[0:2] = a_list[0:2] * 3
|
= help: Replace with augmented assignment
Unsafe fix
30 30 | flags = flags << 1
31 31 | flags = flags >> 1
32 32 | mat1 = mat1 @ mat2
33 |-a_list[1] = a_list[1] + 1
33 |+a_list[1] += 1
34 34 |
35 35 | a_list[0:2] = a_list[0:2] * 3
36 36 | a_list[:2] = a_list[:2] * 3
non_augmented_assignment.py:35:1: PLR6104 [*] Use `*=` to perform an augmented assignment directly
|
33 | a_list[1] = a_list[1] + 1
34 |
35 | a_list[0:2] = a_list[0:2] * 3
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
36 | a_list[:2] = a_list[:2] * 3
37 | a_list[1:] = a_list[1:] * 3
|
= help: Replace with augmented assignment
Unsafe fix
32 32 | mat1 = mat1 @ mat2
33 33 | a_list[1] = a_list[1] + 1
34 34 |
35 |-a_list[0:2] = a_list[0:2] * 3
35 |+a_list[0:2] *= 3
36 36 | a_list[:2] = a_list[:2] * 3
37 37 | a_list[1:] = a_list[1:] * 3
38 38 | a_list[:] = a_list[:] * 3
non_augmented_assignment.py:36:1: PLR6104 [*] Use `*=` to perform an augmented assignment directly
|
35 | a_list[0:2] = a_list[0:2] * 3
36 | a_list[:2] = a_list[:2] * 3
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
37 | a_list[1:] = a_list[1:] * 3
38 | a_list[:] = a_list[:] * 3
|
= help: Replace with augmented assignment
Unsafe fix
33 33 | a_list[1] = a_list[1] + 1
34 34 |
35 35 | a_list[0:2] = a_list[0:2] * 3
36 |-a_list[:2] = a_list[:2] * 3
36 |+a_list[:2] *= 3
37 37 | a_list[1:] = a_list[1:] * 3
38 38 | a_list[:] = a_list[:] * 3
39 39 |
non_augmented_assignment.py:37:1: PLR6104 [*] Use `*=` to perform an augmented assignment directly
|
35 | a_list[0:2] = a_list[0:2] * 3
36 | a_list[:2] = a_list[:2] * 3
37 | a_list[1:] = a_list[1:] * 3
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
38 | a_list[:] = a_list[:] * 3
|
= help: Replace with augmented assignment
Unsafe fix
34 34 |
35 35 | a_list[0:2] = a_list[0:2] * 3
36 36 | a_list[:2] = a_list[:2] * 3
37 |-a_list[1:] = a_list[1:] * 3
37 |+a_list[1:] *= 3
38 38 | a_list[:] = a_list[:] * 3
39 39 |
40 40 | index = index * (index + 10)
non_augmented_assignment.py:38:1: PLR6104 [*] Use `*=` to perform an augmented assignment directly
|
36 | a_list[:2] = a_list[:2] * 3
37 | a_list[1:] = a_list[1:] * 3
38 | a_list[:] = a_list[:] * 3
| ^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
39 |
40 | index = index * (index + 10)
|
= help: Replace with augmented assignment
Unsafe fix
35 35 | a_list[0:2] = a_list[0:2] * 3
36 36 | a_list[:2] = a_list[:2] * 3
37 37 | a_list[1:] = a_list[1:] * 3
38 |-a_list[:] = a_list[:] * 3
38 |+a_list[:] *= 3
39 39 |
40 40 | index = index * (index + 10)
41 41 |
non_augmented_assignment.py:40:1: PLR6104 [*] Use `*=` to perform an augmented assignment directly
|
38 | a_list[:] = a_list[:] * 3
39 |
40 | index = index * (index + 10)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
|
= help: Replace with augmented assignment
Unsafe fix
37 37 | a_list[1:] = a_list[1:] * 3
38 38 | a_list[:] = a_list[:] * 3
39 39 |
40 |-index = index * (index + 10)
40 |+index *= index + 10
41 41 |
42 42 |
43 43 | class T:
non_augmented_assignment.py:45:9: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
43 | class T:
44 | def t(self):
45 | self.a = self.a + 1
| ^^^^^^^^^^^^^^^^^^^ PLR6104
|
= help: Replace with augmented assignment
Unsafe fix
42 42 |
43 43 | class T:
44 44 | def t(self):
45 |- self.a = self.a + 1
45 |+ self.a += 1
46 46 |
47 47 |
48 48 | obj = T()
non_augmented_assignment.py:49:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
48 | obj = T()
49 | obj.a = obj.a + 1
| ^^^^^^^^^^^^^^^^^ PLR6104
50 |
51 | # OK
|
= help: Replace with augmented assignment
Unsafe fix
46 46 |
47 47 |
48 48 | obj = T()
49 |-obj.a = obj.a + 1
49 |+obj.a += 1
50 50 |
51 51 | # OK
52 52 | a_list[0] = a_list[:] * 3


@@ -1,82 +0,0 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
nan_comparison.py:11:9: PLW0117 Comparing against a NaN value; use `math.isnan` instead
|
10 | # PLW0117
11 | if x == float("nan"):
| ^^^^^^^^^^^^ PLW0117
12 | pass
|
nan_comparison.py:15:9: PLW0117 Comparing against a NaN value; use `math.isnan` instead
|
14 | # PLW0117
15 | if x == float("NaN"):
| ^^^^^^^^^^^^ PLW0117
16 | pass
|
nan_comparison.py:19:9: PLW0117 Comparing against a NaN value; use `math.isnan` instead
|
18 | # PLW0117
19 | if x == float("NAN"):
| ^^^^^^^^^^^^ PLW0117
20 | pass
|
nan_comparison.py:23:9: PLW0117 Comparing against a NaN value; use `math.isnan` instead
|
22 | # PLW0117
23 | if x == float("Nan"):
| ^^^^^^^^^^^^ PLW0117
24 | pass
|
nan_comparison.py:27:9: PLW0117 Comparing against a NaN value; use `math.isnan` instead
|
26 | # PLW0117
27 | if x == math.nan:
| ^^^^^^^^ PLW0117
28 | pass
|
nan_comparison.py:31:9: PLW0117 Comparing against a NaN value; use `math.isnan` instead
|
30 | # PLW0117
31 | if x == bad_val:
| ^^^^^^^ PLW0117
32 | pass
|
nan_comparison.py:35:9: PLW0117 Comparing against a NaN value; use `np.isnan` instead
|
34 | # PLW0117
35 | if y == np.NaN:
| ^^^^^^ PLW0117
36 | pass
|
nan_comparison.py:39:9: PLW0117 Comparing against a NaN value; use `np.isnan` instead
|
38 | # PLW0117
39 | if y == np.NAN:
| ^^^^^^ PLW0117
40 | pass
|
nan_comparison.py:43:9: PLW0117 Comparing against a NaN value; use `np.isnan` instead
|
42 | # PLW0117
43 | if y == np.nan:
| ^^^^^^ PLW0117
44 | pass
|
nan_comparison.py:47:9: PLW0117 Comparing against a NaN value; use `np.isnan` instead
|
46 | # PLW0117
47 | if y == npy_nan:
| ^^^^^^^ PLW0117
48 | pass
|
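The comparisons flagged in the snapshot above are always `False` at runtime: by IEEE 754 semantics, NaN compares unequal to every value, including itself, which is why the diagnostic suggests `math.isnan` (or `np.isnan`) instead. A short illustration:

```python
import math

x = float("nan")
assert x != x                   # NaN is unequal even to itself
assert not (x == float("nan"))  # the flagged pattern is always False
assert math.isnan(x)            # the reliable way to test for NaN
```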
