Compare commits


38 Commits

Author SHA1 Message Date
Dhruv Manilawala
4e8a84617c Bump version to v0.3.6 (#10883)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-04-11 15:53:01 +00:00
Auguste Lalande
ffea1bb0a3 [refurb] Implement write-whole-file (FURB103) (#10802)
## Summary

Implement `write-whole-file` (`FURB103`), part of #1348. This is largely
a copy and paste of `read-whole-file` #7682.
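The pattern the rule targets can be sketched as follows (an illustrative example, not the fixture from this PR; the file path is arbitrary):

```python
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp()) / "out.txt"

# The shape `write-whole-file` flags: opening a file only to write its
# entire contents in one call...
with open(tmp, "w") as f:
    f.write("hello\n")

# ...and the suggested replacement: a single Path.write_text call.
tmp.write_text("hello\n")
```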

## Test Plan

Test fixture added.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2024-04-11 14:21:45 +05:30
Charlie Marsh
ac14d187c6 Update clearscreen to v3.0.0 (#10869)
## Summary

Closes https://github.com/astral-sh/ruff/issues/10627.
2024-04-11 00:41:35 -04:00
Charlie Marsh
1eee6f16e4 [flake8-pytest-style] Fix single-tuple conversion in pytest-parametrize-values-wrong-type (#10862)
## Summary

This looks like a typo (without test coverage).

Closes https://github.com/astral-sh/ruff/issues/10861.
2024-04-10 14:20:09 -04:00
Auguste Lalande
de46a36bbc [pygrep-hooks] Improve blanket-noqa error message (PGH004) (#10851)
## Summary

Improve the `blanket-noqa` error message in cases where codes are provided
but not detected due to formatting issues, namely `# noqa X100` (missing
colon) or `# noqa : X100` (space before colon). The behavior is similar to
`NQA002` and `NQA003` from `flake8-noqa` mentioned in #850. The idea to
merge the rules into `PGH004` was suggested by @MichaReiser
https://github.com/astral-sh/ruff/pull/10325#issuecomment-2025535444.
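The two malformed forms can be sketched with a regex (illustrative only; this is not Ruff's actual detection logic):

```python
import re

# Matches "# noqa X100" (missing colon) or "# noqa : X100" (space before
# colon), but not the well-formed "# noqa: X100".
malformed = re.compile(r"#\s*noqa(\s+[A-Z]+[0-9]+|\s+:\s*[A-Z]+[0-9]+)", re.I)
```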

## Test Plan

Test cases added to fixture.
2024-04-10 04:30:25 +00:00
Charlie Marsh
dbf8d0c82c Show negated condition in needless-bool diagnostics (#10854)
## Summary

Closes https://github.com/astral-sh/ruff/issues/10843.
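The negated-branch case can be sketched as follows (illustrative; the exact diagnostic wording is defined by the rule and the linked issue):

```python
# The shape needless-bool flags, with the branches in negated order:
def flag_me(x):
    if x:
        return False
    else:
        return True

# The single-expression form the diagnostic should now display,
# i.e. the *negated* condition rather than the raw one:
def rewritten(x):
    return not x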
2024-04-10 04:29:43 +00:00
Carl Meyer
02e88fdbb1 Support negated patterns in [extend-]per-file-ignores (#10852)
Fixes #3172 

## Summary

Allow prefixing [extend-]per-file-ignores patterns with `!` to negate
the pattern; listed rules / prefixes will be ignored in all files that
don't match the pattern.
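A hypothetical `pyproject.toml` fragment using the new syntax (the glob and rule selection are made up for illustration):

```toml
[tool.ruff.lint.per-file-ignores]
# Leading "!" negates the glob: ignore the pydocstyle rules in every
# file that does NOT live under docs/.
"!docs/*" = ["D"]
```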

## Test Plan

Added tests for the feature.

Rendered docs and checked rendered output.
2024-04-09 21:53:41 -06:00
Carl Meyer
42d52ebbec Support FORCE_COLOR env var (#10839)
Fixes #5499 

## Summary

Add support for `FORCE_COLOR` env var, as specified at
https://force-color.org/

## Test Plan

I wrote an integration test for this, and then realized that can't work,
since we use a dev-dependency on `colored` with the `no-color` feature
to avoid ANSI color codes in test snapshots.

So this is just tested manually.

`cargo run --features test-rules -- check --no-cache --isolated -
--select RUF901 --diff < /dev/null` shows a colored diff.
`cargo run --features test-rules -- check --no-cache --isolated -
--select RUF901 --diff < /dev/null | less` does not have color, since we
pipe it to `less`.
`FORCE_COLOR=1 cargo run --features test-rules -- check --no-cache
--isolated - --select RUF901 --diff < /dev/null | less` does have color
(after this diff), even though we pipe it to `less`.
2024-04-08 15:29:29 -06:00
renovate[bot]
3fd22973da Update pre-commit dependencies (#10822) 2024-04-08 21:31:38 +01:00
Carl Meyer
e13e57e024 Localize cleanup for FunctionDef and ClassDef (#10837)
## Summary

Came across this code while digging into the semantic model with
@AlexWaygood, and found it confusing because of how it splits
`push_scope` from the paired `pop_scope` (took me a few minutes to even
figure out if/where we were popping the pushed scope). Since this
"cleanup" is already totally split by node type, there doesn't seem to
be any gain in having it as a separate "step" rather than just
incorporating it into the traversal clauses for those node types.

I left the equivalent cleanup step alone for the expression case,
because in that case it is actually generic across several different
node types, and due to the use of the common `visit_generators` utility
there isn't a clear way to keep the pushes and corresponding pops
localized.

Feel free to just reject this if I've missed a good reason for it to
stay this way!

## Test Plan

Tests and clippy.
2024-04-08 13:29:38 -06:00
Jane Lewis
c3e28f9d55 The linter and code actions can now be disabled in client settings for ruff server (#10800)
## Summary

This is a follow-up to https://github.com/astral-sh/ruff/pull/10764.
Support for diagnostics, quick fixes, and source actions can now be
disabled via client settings.

## Test Plan

### Manual Testing

Set up your workspace as described in the test plan in
https://github.com/astral-sh/ruff/pull/10764, up to step 2. You don't
need to add a debug statement.
The configuration for `folder_a` and `folder_b` should be as follows:
`folder_a`:
```json
{
    "ruff.codeAction.fixViolation": {
        "enable": true
    }
}
```

`folder_b`
```json
{
    "ruff.codeAction.fixViolation": {
        "enable": false
    }
}
```
Finally, open up your VS Code User Settings and un-check the `Ruff > Fix
All` setting.

1. Open a Python file in `folder_a` that has existing problems. The
problems should be highlighted, and quick fix should be available.
`source.fixAll` should not be available as a source action.
2. Open a Python file in `folder_b` that has existing problems. The
problems should be highlighted, but quick fixes should not be available
for any of them. `source.fixAll` should not be available as a source
action.
3. Open up your VS Code Workspace Settings (second tab under the search
bar) and un-check `Ruff > Lint: Enable`
4. Both files you tested in steps 1 and 2 should now lack any visible
diagnostics. `source.organizeImports` should still be available as a
source action.
2024-04-08 07:53:28 -07:00
renovate[bot]
a188ba5c26 chore(deps): update rust crate quick-junit to v0.3.6 (#10834)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-08 11:48:46 +01:00
renovate[bot]
86419c8ab9 chore(deps): update npm development dependencies (#10827)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-04-08 07:00:42 +00:00
renovate[bot]
a9ebfe6ec0 chore(deps): update rust crate libcst to v1.3.1 (#10824) 2024-04-07 22:20:48 -04:00
renovate[bot]
0a50874c01 chore(deps): update rust crate syn to v2.0.58 (#10823) 2024-04-07 22:20:40 -04:00
Aleksei Latyshev
6050bab5db [refurb] Support itemgetter in reimplemented-operator (FURB118) (#10526)
## Summary
Lint about function like expressions which are equivalent to
`operator.itemgetter`.
See:
https://github.com/astral-sh/ruff/issues/1348#issuecomment-1909421747
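The kind of equivalence involved can be sketched as (an illustrative example, not the test fixture):

```python
from operator import itemgetter

# A function-like expression the extended FURB118 targets:
first = lambda x: x[0]

# The equivalent operator.itemgetter form it suggests:
first_ig = itemgetter(0)
```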

## Test Plan
cargo test
2024-04-07 02:31:59 +00:00
Alex Waygood
2a51dcfdf7 [pyflakes] Allow forward references in class bases in stub files (F821) (#10779)
## Summary

Fixes #3011.

Type checkers currently allow forward references in all contexts in stub
files, and stubs frequently make use of this capability (although it
doesn't actually seem to be spec'd anywhere -- neither in PEP 484, nor
https://typing.readthedocs.io/en/latest/source/stubs.html#id6, nor the
CPython typing docs). Implementing it so that Ruff allows forward
references in _all contexts_ in stub files seems non-trivial, however
(or at least, I couldn't figure out how to do it easily), so this PR
does not do that. Perhaps it _should_; if we think this approach isn't
principled enough, I'm happy to close it and postpone changing anything
here.

However, this does reduce the number of F821 errors Ruff emits on
typeshed down from 76 to 2, which would mean that we could enable the
rule at typeshed. The remaining 2 F821 errors can be trivially fixed at
typeshed by moving definitions around; forward references in class bases
were really the only remaining places where there was a real _use case_
for forward references in stub files that Ruff wasn't yet allowing.
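The class-base case can be sketched as (hypothetical stub contents; this is valid only under stub-file semantics, not as a runnable module):

```python
# example.pyi -- a hypothetical stub fragment
class Leaf(Node): ...  # forward reference in a class base: no longer F821 in stubs
class Node: ...
```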

## Test Plan

`cargo test`. I also ran this PR branch on typeshed to check to see if
there were any new false positives caused by the changes here; there
were none.
2024-04-07 01:15:58 +01:00
Alex Waygood
86588695e3 [flake8-slots] Flag subclasses of call-based typing.NamedTuples as well as subclasses of collections.namedtuple() (SLOT002) (#10808) 2024-04-07 00:16:06 +01:00
Alex Waygood
47e0cb8985 [flake8-pyi] Various improvements to PYI034 (#10807)
More accurately identify whether a class is a metaclass, a subclass of `collections.abc.Iterator`, or a subclass of `collections.abc.AsyncIterator`
2024-04-07 00:15:48 +01:00
renovate[bot]
388658efdb Update pre-commit dependencies (#10698)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-04-06 23:00:41 +00:00
Tibor Reiss
3194f90db1 [pylint] Implement if-stmt-min-max (PLR1730, PLR1731) (#10002)
Add rule [consider-using-min-builtin
(R1730)](https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/consider-using-min-builtin.html)
and [consider-using-max-builtin
(R1731)](https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/consider-using-max-builtin.html)

See https://github.com/astral-sh/ruff/issues/970 for rules

Test Plan: `cargo test`
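The pattern the rules flag can be sketched as (an illustrative example; the values are arbitrary):

```python
# What PLR1731 (consider-using-max-builtin) flags: an if statement that
# reimplements max. PLR1730 covers the analogous min pattern.
a, b = 3, 7
if a < b:
    a = b

# The suggested builtin equivalent:
highest = max(3, 7)
```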
2024-04-06 17:32:05 +00:00
Charlie Marsh
ee4bff3475 Add comment test for FURB110 (#10804) 2024-04-06 16:49:22 +00:00
Auguste Lalande
7fb012d0df [refurb] Do not allow any keyword arguments for read-whole-file in rb mode (FURB101) (#10803)
## Summary

`Path.read_bytes()` does not support any keyword arguments, so `FURB101`
should not be triggered if the file is opened in `rb` mode with any
keyword arguments.
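The distinction can be sketched as (illustrative; the path and data are arbitrary):

```python
import tempfile
from pathlib import Path

p = Path(tempfile.mkdtemp()) / "data.bin"
p.write_bytes(b"\x00\x01")

# Flagged by FURB101: a plain binary read is exactly Path.read_bytes().
with open(p, "rb") as f:
    data = f.read()

# NOT flagged after this change: read_bytes() accepts no keyword
# arguments, so an open() call with e.g. buffering cannot be rewritten.
with open(p, "rb", buffering=0) as f:
    data2 = f.read()
```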

## Test Plan

Move erroneous test to "Non-error" section of fixture.
2024-04-06 12:41:39 -04:00
Steve C
44459f92ef [refurb] Implement if-expr-instead-of-or-operator (FURB110) (#10687)
## Summary

Add
[`FURB110`](https://github.com/dosisod/refurb/blob/master/refurb/checks/logical/use_or.py)

See: #1348 
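The rewrite the rule suggests can be sketched as (illustrative values):

```python
x = ""
default = "fallback"

# The conditional expression FURB110 flags:
y = x if x else default

# The suggested or-operator form:
z = x or default
```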

## Test Plan

`cargo test`
2024-04-06 16:39:40 +00:00
Alex Waygood
1dc93107dc Improve internal documentation for the semantic model (#10788) 2024-04-06 16:28:32 +00:00
Charlie Marsh
7fb5f47efe Respect # noqa directives on __all__ openers (#10798)
## Summary

Historically, given:

```python
__all__ = [  # noqa: F822
    "Bernoulli",
    "Beta",
    "Binomial",
]
```

The F822 violations would be attached to the `__all__`, so this `# noqa`
would be enforced for _all_ definitions in the list. This changed in
https://github.com/astral-sh/ruff/pull/10525 for the better, in that we
now use the range of each string. But these `# noqa` directives stopped
working.

This PR sets the `__all__` as a parent range in the diagnostic, so that
these directives are respected once again.

Closes https://github.com/astral-sh/ruff/issues/10795.

## Test Plan

`cargo test`
2024-04-06 14:51:17 +00:00
Charlie Marsh
83db62bcda Use within-scope shadowed bindings in asyncio-dangling-task (#10793)
## Summary

I think this is using the wrong shadowing, as seen by the change in the
test fixture.
2024-04-06 10:44:03 -04:00
Bohdan
b45fd61ec5 [pyupgrade] Replace str, Enum with StrEnum (UP042) (#10713)
## Summary

Add new rule `pyupgrade - UP042` (I picked next available number).
Closes https://github.com/astral-sh/ruff/discussions/3867
Closes https://github.com/astral-sh/ruff/issues/9569

It should warn + provide a fix `class A(str, Enum)` -> `class
A(StrEnum)` for py311+.
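The flagged pattern can be sketched as follows (illustrative; `enum.StrEnum` itself requires Python 3.11+, so only the pre-fix form is shown):

```python
from enum import Enum

# The multiple-inheritance pattern UP042 flags; on py311+ the fix
# rewrites the bases to `class Color(StrEnum)`.
class Color(str, Enum):
    RED = "red"

# The str mixin's value behavior, which StrEnum preserves:
is_plain_string = Color.RED == "red"
```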

## Test Plan

Added UP042.py test.

## Notes

I did not find a way to call `remove_argument` 2 times consecutively, so
the automatic fixing works only for classes that inherit exactly `str,
Enum` (regardless of the order).

I also plan to extend this rule to support IntEnum in next PR.
2024-04-06 01:56:28 +00:00
Jane Lewis
323264dec2 Remove debug print when resolving client settings in ruff server (#10799)
This was a statement used as part of the test plan in #10764 that was
erroneously committed in 8aa31f4c74.
2024-04-06 00:13:25 +00:00
Jane Lewis
c11e6d709c ruff server now supports commands for auto-fixing, organizing imports, and formatting (#10654)
## Summary

This builds off of the work in
https://github.com/astral-sh/ruff/pull/10652 to implement a command
executor, backwards compatible with the commands from the previous LSP
(`ruff.applyAutofix`, `ruff.applyFormat` and
`ruff.applyOrganizeImports`).

This involved a lot of refactoring and tweaks to the code action
resolution code - the most notable change is that workspace edits are
specified in a slightly different way, using the more general `changes`
field instead of the `document_changes` field (which isn't supported on
all LSP clients). Additionally, the API for synchronous request handlers
has been updated to include access to the `Requester`, which we use to
send a `workspace/applyEdit` request to the client.

## Test Plan



https://github.com/astral-sh/ruff/assets/19577865/7932e30f-d944-4e35-b828-1d81aa56c087
2024-04-05 23:27:35 +00:00
Auguste Lalande
1b31d4e9f1 Correct some oversights in the documentation from #10756 (#10796)
## Summary

Correct some oversights in the documentation from #10756
2024-04-05 22:45:48 +00:00
Jane Lewis
a184dc68f5 Implement client setting initialization and resolution for ruff server (#10764)
## Summary

When a language server initializes, it is passed a serialized JSON
object, which is known as its "initialization options". Until now, `ruff
server` has ignored those initialization options, meaning that
user-provided settings haven't worked. This PR is the first step for
supporting settings from the LSP client. It implements procedures to
deserialize initialization options into a settings object, and then
resolve those settings objects into concrete settings for each
workspace.

One of the goals for user settings implementation in `ruff server` is
backwards compatibility with `ruff-lsp`'s settings. We won't support all
settings that `ruff-lsp` had, but the ones that we do support should
work the same and use the same schema as `ruff-lsp`.

These are the existing settings from `ruff-lsp` that we will continue to
support, and which are part of the settings schema in this PR:

| Setting | Default Value | Description |
|----------------------------------------|---------------|----------------------------------------------------------------------------------------|
| `codeAction.disableRuleComment.enable` | `true` | Whether to display Quick Fix actions to disable rules via `noqa` suppression comments. |
| `codeAction.fixViolation.enable` | `true` | Whether to display Quick Fix actions to autofix violations. |
| `fixAll` | `true` | Whether to register Ruff as capable of handling `source.fixAll` actions. |
| `lint.enable` | `true` | Whether to enable linting. Set to `false` to use Ruff exclusively as a formatter. |
| `organizeImports` | `true` | Whether to register Ruff as capable of handling `source.organizeImports` actions. |

To be clear: this PR does not implement 'support' for these settings,
individually. Rather, it constructs a framework for these settings to be
used by the server in the future.

Notably, we are choosing *not* to support `lint.args` and `format.args`
as settings for `ruff server`. This is because we're now interfacing
with Ruff at a lower level than its CLI, and converting CLI arguments
back into configuration is too involved.

We will have support for linter and formatter specific settings in
follow-up PRs. We will also 'hook up' user settings to work with the
server in follow up PRs.

## Test Plan

### Snapshot Tests

Tests have been created in
`crates/ruff_server/src/session/settings/tests.rs` to ensure that
deserialization and settings resolution works as expected.

### Manual Testing

Since we aren't using the resolved settings anywhere yet, we'll have to
add a few printing statements.

We want to capture what the resolved settings look like when sent as
part of a snapshot, so modify `Session::take_snapshot` to be the
following:

```rust
    pub(crate) fn take_snapshot(&self, url: &Url) -> Option<DocumentSnapshot> {
        let resolved_settings = self.workspaces.client_settings(url, &self.global_settings);
        tracing::info!("Resolved settings for document {url}: {resolved_settings:?}");
        Some(DocumentSnapshot {
            configuration: self.workspaces.configuration(url)?.clone(),
            resolved_client_capabilities: self.resolved_client_capabilities.clone(),
            client_settings: resolved_settings,
            document_ref: self.workspaces.snapshot(url)?,
            position_encoding: self.position_encoding,
            url: url.clone(),
        })
    }
```

Once you've done that, build the server and start up your extension
testing environment.

1. Set up a workspace in VS Code with two workspace folders, each one
having some variant of Ruff file-based configuration (`pyproject.toml`,
`ruff.toml`, etc.). We'll call these folders `folder_a` and `folder_b`.
2. In each folder, open up `.vscode/settings.json`.
3. In folder A, use these settings:
```json
{
    "ruff.codeAction.disableRuleComment": {
        "enable": true
    }
}
```
4. In folder B, use these settings:
```json
{
    "ruff.codeAction.disableRuleComment": {
        "enable": false
    }
}
```
5. Finally, open up your VS Code User Settings and un-check the `Ruff >
Code Action: Disable Rule Comment` setting.
6. When opening files in `folder_a`, you should see logs that look like
this:
```
Resolved settings for document <file>: ResolvedClientSettings { fix_all: true, organize_imports: true, lint_enable: true, disable_rule_comment_enable: true, fix_violation_enable: true }
```
7. When opening files in `folder_b`, you should see logs that look like
this:
```
Resolved settings for document <file>: ResolvedClientSettings { fix_all: true, organize_imports: true, lint_enable: true, disable_rule_comment_enable: false, fix_violation_enable: true }
```
8. To test invalid configuration, change `.vscode/settings.json` in
either folder to be this:
```json
{
    "ruff.codeAction.disableRuleComment": {
        "enable": "invalid"
    },
}
```
9. You should now see these error logs:
```
<time> [info]    <duration> ERROR ruff_server::session::settings Failed to deserialize initialization options: data did not match any variant of untagged enum InitializationOptions. Falling back to default client settings...

<time> [info]    <duration> WARN ruff_server::server No workspace settings found for file:///Users/jane/testbed/pandas
   <duration> WARN ruff_server::server No workspace settings found for file:///Users/jane/foss/scipy
```
10. Opening files in either folder should now print the following
configuration:
```
Resolved settings for document <file>: ResolvedClientSettings { fix_all: true, organize_imports: true, lint_enable: true, disable_rule_comment_enable: true, fix_violation_enable: true }
```
2024-04-05 22:41:50 +00:00
buhtz
a4ee9c1978 doc(FAQ): More precise PyLint comparision (#10756)
The section comparing Ruff to Pylint is now more precise about the
following two points:
- Ruff counts branches differently and therefore emits the
too-many-branches warning earlier.
- Activating all Pylint rules in Ruff also activates Pylint rules that
are not active by default in Pylint itself, because they are implemented
via Pylint plugins.
2024-04-05 22:12:33 +00:00
Auguste Lalande
c2790f912b [pylint] Implement bad-staticmethod-argument (PLW0211) (#10781)
## Summary

Implement `bad-staticmethod-argument` from pylint, part of #970.
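The pattern the rule flags can be sketched as (an illustrative example; the class and method names are made up):

```python
class Widget:
    @staticmethod
    def render(self, scale):  # flagged: first argument of a staticmethod named "self"
        return scale * 2

# A staticmethod receives no implicit first argument, so naming one
# "self" (or "cls") is misleading; the call passes it explicitly:
result = Widget.render(None, 2)
```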

## Test Plan

Test fixture added.
2024-04-05 21:33:39 +00:00
Micha Reiser
2e7a1a4cb1 D403: Require capitalizing single word sentence (#10776) 2024-04-05 08:42:00 +02:00
Jane Lewis
d050d6da2e ruff server now supports the source.organizeImports source action (#10652)
## Summary

This builds on top of the work in
https://github.com/astral-sh/ruff/pull/10597 to support `Ruff: Organize
imports` as an available source action.

To do this, we have to support `Clone`-ing for linter settings, since we
need to modify them in place to select import-related diagnostics
specifically (`I001` and `I002`).

## Test Plan


https://github.com/astral-sh/ruff/assets/19577865/04282d01-dfda-4ac5-aa8f-6a92d5f85bfd
2024-04-04 22:20:50 +00:00
NotWearingPants
fd8da66fcb docs: Lint -> Format in formatter.md (#10777)
## Summary

Since #10217 the [formatter
docs](https://docs.astral.sh/ruff/formatter/) contained

```
ruff format                   # Format all files in the current directory.
ruff format path/to/code/     # Lint all files in `path/to/code` (and any subdirectories).
ruff format path/to/file.py   # Format a single file.
```

I believe the `Lint` here is a copy-paste typo from the [linter
docs](https://docs.astral.sh/ruff/linter/).

## Test Plan

N/A
2024-04-04 16:50:55 +00:00
Dhruv Manilawala
d02b1069b5 Add semantic model flag when inside f-string replacement field (#10766)
## Summary

This PR adds a new semantic model flag to indicate that the checker is
inside an f-string replacement field. This will be used to ignore
certain checks if the target version doesn't support a specific feature
like PEP 701.

fixes: #10761 

## Test Plan

Add a test case from the raised issue.
2024-04-04 09:08:48 +05:30
159 changed files with 4961 additions and 996 deletions


@@ -17,4 +17,4 @@ MD013: false
# MD024/no-duplicate-heading
MD024:
# Allow when nested under different parents e.g. CHANGELOG.md
allow_different_nesting: true
siblings_only: true


@@ -13,7 +13,7 @@ exclude: |
repos:
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.15
rev: v0.16
hooks:
- id: validate-pyproject
@@ -31,7 +31,7 @@ repos:
)$
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.37.0
rev: v0.39.0
hooks:
- id: markdownlint-fix
exclude: |
@@ -41,7 +41,7 @@ repos:
)$
- repo: https://github.com/crate-ci/typos
rev: v1.16.22
rev: v1.20.4
hooks:
- id: typos
@@ -55,7 +55,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.1.4
rev: v0.3.5
hooks:
- id: ruff-format
- id: ruff
@@ -70,7 +70,7 @@ repos:
# Prettier
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v3.0.3
rev: v3.1.0
hooks:
- id: prettier
types: [yaml]


@@ -1,5 +1,50 @@
# Changelog
## 0.3.6
### Preview features
- \[`pylint`\] Implement `bad-staticmethod-argument` (`PLW0211`) ([#10781](https://github.com/astral-sh/ruff/pull/10781))
- \[`pylint`\] Implement `if-stmt-min-max` (`PLR1730`, `PLR1731`) ([#10002](https://github.com/astral-sh/ruff/pull/10002))
- \[`pyupgrade`\] Replace `str,Enum` multiple inheritance with `StrEnum` `UP042` ([#10713](https://github.com/astral-sh/ruff/pull/10713))
- \[`refurb`\] Implement `if-expr-instead-of-or-operator` (`FURB110`) ([#10687](https://github.com/astral-sh/ruff/pull/10687))
- \[`refurb`\] Implement `int-on-sliced-str` (`FURB166`) ([#10650](https://github.com/astral-sh/ruff/pull/10650))
- \[`refurb`\] Implement `write-whole-file` (`FURB103`) ([#10802](https://github.com/astral-sh/ruff/pull/10802))
- \[`refurb`\] Support `itemgetter` in `reimplemented-operator` (`FURB118`) ([#10526](https://github.com/astral-sh/ruff/pull/10526))
- \[`flake8_comprehensions`\] Add `sum`/`min`/`max` to unnecessary comprehension check (`C419`) ([#10759](https://github.com/astral-sh/ruff/pull/10759))
### Rule changes
- \[`pydocstyle`\] Require capitalizing docstrings where the first sentence is a single word (`D403`) ([#10776](https://github.com/astral-sh/ruff/pull/10776))
- \[`pycodestyle`\] Ignore annotated lambdas in class scopes (`E731`) ([#10720](https://github.com/astral-sh/ruff/pull/10720))
- \[`flake8-pyi`\] Various improvements to PYI034 ([#10807](https://github.com/astral-sh/ruff/pull/10807))
- \[`flake8-slots`\] Flag subclasses of call-based `typing.NamedTuple`s as well as subclasses of `collections.namedtuple()` (`SLOT002`) ([#10808](https://github.com/astral-sh/ruff/pull/10808))
- \[`pyflakes`\] Allow forward references in class bases in stub files (`F821`) ([#10779](https://github.com/astral-sh/ruff/pull/10779))
- \[`pygrep-hooks`\] Improve `blanket-noqa` error message (`PGH004`) ([#10851](https://github.com/astral-sh/ruff/pull/10851))
### CLI
- Support `FORCE_COLOR` env var ([#10839](https://github.com/astral-sh/ruff/pull/10839))
### Configuration
- Support negated patterns in `[extend-]per-file-ignores` ([#10852](https://github.com/astral-sh/ruff/pull/10852))
### Bug fixes
- \[`flake8-import-conventions`\] Accept non-aliased (but correct) import in `unconventional-import-alias` (`ICN001`) ([#10729](https://github.com/astral-sh/ruff/pull/10729))
- \[`flake8-quotes`\] Add semantic model flag when inside f-string replacement field ([#10766](https://github.com/astral-sh/ruff/pull/10766))
- \[`pep8-naming`\] Recursively resolve `TypeDicts` for N815 violations ([#10719](https://github.com/astral-sh/ruff/pull/10719))
- \[`flake8-quotes`\] Respect `Q00*` ignores in `flake8-quotes` rules ([#10728](https://github.com/astral-sh/ruff/pull/10728))
- \[`flake8-simplify`\] Show negated condition in `needless-bool` diagnostics (`SIM103`) ([#10854](https://github.com/astral-sh/ruff/pull/10854))
- \[`ruff`\] Use within-scope shadowed bindings in `asyncio-dangling-task` (`RUF006`) ([#10793](https://github.com/astral-sh/ruff/pull/10793))
- \[`flake8-pytest-style`\] Fix single-tuple conversion in `pytest-parametrize-values-wrong-type` (`PT007`) ([#10862](https://github.com/astral-sh/ruff/pull/10862))
- \[`flake8-return`\] Ignore assignments to annotated variables in `unnecessary-assign` (`RET504`) ([#10741](https://github.com/astral-sh/ruff/pull/10741))
- \[`refurb`\] Do not allow any keyword arguments for `read-whole-file` in `rb` mode (`FURB101`) ([#10803](https://github.com/astral-sh/ruff/pull/10803))
- \[`pylint`\] Don't recommend decorating staticmethods with `@singledispatch` (`PLE1519`, `PLE1520`) ([#10637](https://github.com/astral-sh/ruff/pull/10637))
- \[`pydocstyle`\] Use section name range for all section-related docstring diagnostics ([#10740](https://github.com/astral-sh/ruff/pull/10740))
- Respect `# noqa` directives on `__all__` openers ([#10798](https://github.com/astral-sh/ruff/pull/10798))
## 0.3.5
### Preview features

Cargo.lock (generated)

@@ -244,6 +244,12 @@ version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "cfg_aliases"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e"
[[package]]
name = "chic"
version = "1.2.2"
@@ -365,7 +371,7 @@ dependencies = [
"heck 0.5.0",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -376,9 +382,9 @@ checksum = "98cc8fbded0c607b7ba9dd60cd98df59af97e84d24e49c8557331cfc26d301ce"
[[package]]
name = "clearscreen"
version = "2.0.1"
version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72f3f22f1a586604e62efd23f78218f3ccdecf7a33c4500db2d37d85a24fe994"
checksum = "2f8c93eb5f77c9050c7750e14f13ef1033a40a0aac70c6371535b6763a01438c"
dependencies = [
"nix",
"terminfo",
@@ -596,7 +602,7 @@ dependencies = [
"proc-macro2",
"quote",
"strsim 0.10.0",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -607,7 +613,7 @@ checksum = "a668eda54683121533a393014d8692171709ff57a7d61f187b6e782719f8933f"
dependencies = [
"darling_core",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -1108,7 +1114,7 @@ dependencies = [
"Inflector",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -1271,9 +1277,9 @@ checksum = "9c198f91728a82281a64e1f4f9eeb25d82cb32a5de251c6bd1b5154d63a8e7bd"
[[package]]
name = "libcst"
version = "1.2.0"
version = "1.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "890ee958b936e712c6f1c184f208176973e73c2e4f8d3cf499f94eb112f647f9"
checksum = "6f1e25d1b119ab5c2f15a6e081bb94a8d547c5c2ad065f5fd0dbb683f31ced91"
dependencies = [
"chic",
"libcst_derive",
@@ -1286,12 +1292,12 @@ dependencies = [
[[package]]
name = "libcst_derive"
version = "1.2.0"
version = "1.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1dbd2f3cd9346422ebdc3a614aed6969d4e0b3e9c10517f33b30326acf894c11"
checksum = "4a5011f2d59093de14a4a90e01b9d85dee9276e58a25f0107dcee167dd601be0"
dependencies = [
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -1437,20 +1443,15 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e4a24736216ec316047a1fc4252e27dabb04218aa4a3f37c6e7ddbf1f9782b54"
[[package]]
name = "nextest-workspace-hack"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d906846a98739ed9d73d66e62c2641eef8321f1734b7a1156ab045a0248fb2b3"
[[package]]
name = "nix"
version = "0.26.4"
version = "0.28.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "598beaf3cc6fdd9a5dfb1630c2800c7acd31df7aaf0f565796fba2b53ca1af1b"
checksum = "ab2156c4fce2f8df6c499cc1c763e4394b7482525bf2a9701c9d79d215f519e4"
dependencies = [
"bitflags 1.3.2",
"bitflags 2.5.0",
"cfg-if",
"cfg_aliases",
"libc",
]
@@ -1757,7 +1758,7 @@ checksum = "52a40bc70c2c58040d2d8b167ba9a5ff59fc9dab7ad44771cfde3dcfde7a09c6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -1812,13 +1813,12 @@ dependencies = [
[[package]]
name = "quick-junit"
version = "0.3.5"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b9599bffc2cd7511355996e0cfd979266b2cfa3f3ff5247d07a3a6e1ded6158"
checksum = "d1a341ae463320e9f8f34adda49c8a85d81d4e8f34cce4397fb0350481552224"
dependencies = [
"chrono",
"indexmap",
"nextest-workspace-hack",
"quick-xml",
"strip-ansi-escapes",
"thiserror",
@@ -1975,7 +1975,7 @@ dependencies = [
"pmutil",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -1995,7 +1995,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.3.5"
version = "0.3.6"
dependencies = [
"anyhow",
"argfile",
@@ -2157,7 +2157,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.3.5"
version = "0.3.6"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2225,7 +2225,7 @@ dependencies = [
"proc-macro2",
"quote",
"ruff_python_trivia",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2429,7 +2429,7 @@ dependencies = [
[[package]]
name = "ruff_shrinking"
version = "0.3.5"
version = "0.3.6"
dependencies = [
"anyhow",
"clap",
@@ -2671,7 +2671,7 @@ checksum = "7eb0b34b42edc17f6b7cac84a52a1c5f0e1bb2227e997ca9011ea3dd34e8610b"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2704,7 +2704,7 @@ checksum = "0b2e6b945e9d3df726b65d6ee24060aff8e3533d431f677a9695db04eff9dfdb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2745,7 +2745,7 @@ dependencies = [
"darling",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2855,7 +2855,7 @@ dependencies = [
"proc-macro2",
"quote",
"rustversion",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2877,9 +2877,9 @@ dependencies = [
[[package]]
name = "syn"
version = "2.0.57"
version = "2.0.58"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11a6ae1e52eb25aab8f3fb9fca13be982a373b8f1157ca14b897a825ba4a2d35"
checksum = "44cfb93f38070beee36b3fef7d4f5a16f27751d94b187b666a5cc5e9b0d30687"
dependencies = [
"proc-macro2",
"quote",
@@ -2950,7 +2950,7 @@ dependencies = [
"cfg-if",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -2961,7 +2961,7 @@ checksum = "5c89e72a01ed4c579669add59014b9a524d609c0c88c6a585ce37485879f6ffb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
"test-case-core",
]
@@ -2982,7 +2982,7 @@ checksum = "c61f3ba182994efc43764a46c018c347bc492c79f024e705f46567b418f6d4f7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3103,7 +3103,7 @@ checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3339,7 +3339,7 @@ checksum = "9881bea7cbe687e36c9ab3b778c36cd0487402e270304e8b1296d5085303c1a2"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3424,7 +3424,7 @@ dependencies = [
"once_cell",
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
"wasm-bindgen-shared",
]
@@ -3458,7 +3458,7 @@ checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
@@ -3491,7 +3491,7 @@ checksum = "b7f89739351a2e03cb94beb799d47fb2cac01759b40ec441f7de39b00cbf7ef0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]
@@ -3515,14 +3515,14 @@ dependencies = [
[[package]]
name = "which"
version = "4.4.2"
version = "6.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87ba24419a2078cd2b0f2ede2691b6c66d8e47836da3b6db8265ebad47afbfc7"
checksum = "8211e4f58a2b2805adfbefbc07bab82958fc91e3836339b1ab7ae32465dce0d7"
dependencies = [
"either",
"home",
"once_cell",
"rustix",
"winsafe",
]
[[package]]
@@ -3715,6 +3715,12 @@ dependencies = [
"memchr",
]
[[package]]
name = "winsafe"
version = "0.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d135d17ab770252ad95e9a872d365cf3090e3be864a34ab46f48555993efc904"
[[package]]
name = "yansi"
version = "0.5.1"
@@ -3747,7 +3753,7 @@ checksum = "9ce1b18ccd8e73a9321186f97e46f9f04b778851177567b1975109d26a08d2a6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
"syn 2.0.58",
]
[[package]]


@@ -23,7 +23,7 @@ cachedir = { version = "0.3.1" }
chrono = { version = "0.4.35", default-features = false, features = ["clock"] }
clap = { version = "4.5.3", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clearscreen = { version = "2.0.0" }
clearscreen = { version = "3.0.0" }
codspeed-criterion-compat = { version = "2.4.0", default-features = false }
colored = { version = "2.1.0" }
console_error_panic_hook = { version = "0.1.7" }


@@ -151,7 +151,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.3.5
rev: v0.3.6
hooks:
# Run the linter.
- id: ruff


@@ -3,9 +3,11 @@
extend-exclude = ["**/resources/**/*", "**/snapshots/**/*"]
[default.extend-words]
"arange" = "arange" # e.g. `numpy.arange`
hel = "hel"
whos = "whos"
spawnve = "spawnve"
ned = "ned"
pn = "pn" # `import panel as pn` is a thing
poit = "poit"
BA = "BA" # acronym for "Bad Allowed", used in testing.


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.3.5"
version = "0.3.6"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -149,6 +149,13 @@ pub fn run(
#[cfg(windows)]
assert!(colored::control::set_virtual_terminal(true).is_ok());
// support FORCE_COLOR env var
if let Some(force_color) = std::env::var_os("FORCE_COLOR") {
if force_color.len() > 0 {
colored::control::set_override(true);
}
}
set_up_logging(global_options.log_level())?;
if let Some(deprecated_alias_warning) = deprecated_alias_warning {


@@ -1168,3 +1168,83 @@ def func():
Ok(())
}
/// Per-file selects via ! negation in per-file-ignores
#[test]
fn negated_per_file_ignores() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.per-file-ignores]
"!selected.py" = ["RUF"]
"#,
)?;
let selected = tempdir.path().join("selected.py");
fs::write(selected, "")?;
let ignored = tempdir.path().join("ignored.py");
fs::write(ignored, "")?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
success: false
exit_code: 1
----- stdout -----
selected.py:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
Ok(())
}
#[test]
fn negated_per_file_ignores_absolute() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.per-file-ignores]
"!src/**.py" = ["RUF"]
"#,
)?;
let src_dir = tempdir.path().join("src");
fs::create_dir(&src_dir)?;
let selected = src_dir.join("selected.py");
fs::write(selected, "")?;
let ignored = tempdir.path().join("ignored.py");
fs::write(ignored, "")?;
insta::with_settings!({filters => vec![
// Replace windows paths
(r"\\", "/"),
]}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
success: false
exit_code: 1
----- stdout -----
src/selected.py:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
});
Ok(())
}
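As a sketch of the end-user configuration these tests exercise (the file name `selected.py` is just the one used in the fixture above), a `!`-prefixed pattern in `ruff.toml` ignores the listed rules in every file that does *not* match the pattern — effectively restricting those rules to matching files:

```toml
[lint.per-file-ignores]
# Ignore all RUF rules in files that do NOT match `selected.py`,
# i.e. enable them only for selected.py.
"!selected.py" = ["RUF"]
```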


@@ -50,6 +50,7 @@ file_resolver.exclude = [
"venv",
]
file_resolver.extend_exclude = [
"crates/ruff/resources/",
"crates/ruff_linter/resources/",
"crates/ruff_python_formatter/resources/",
]


@@ -71,6 +71,14 @@ impl Diagnostic {
}
}
/// Consumes `self` and returns a new `Diagnostic` with the given parent node.
#[inline]
#[must_use]
pub fn with_parent(mut self, parent: TextSize) -> Self {
self.set_parent(parent);
self
}
/// Set the location of the diagnostic's parent node.
#[inline]
pub fn set_parent(&mut self, parent: TextSize) {


@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.3.5"
version = "0.3.6"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -195,6 +195,13 @@ class BadAsyncIterator(collections.abc.AsyncIterator[str]):
def __aiter__(self) -> typing.AsyncIterator[str]:
... # Y034 "__aiter__" methods in classes like "BadAsyncIterator" usually return "self" at runtime. Consider using "typing_extensions.Self" in "BadAsyncIterator.__aiter__", e.g. "def __aiter__(self) -> Self: ..." # Y022 Use "collections.abc.AsyncIterator[T]" instead of "typing.AsyncIterator[T]" (PEP 585 syntax)
class SubclassOfBadIterator3(BadIterator3):
def __iter__(self) -> Iterator[int]: # Y034
...
class SubclassOfBadAsyncIterator(BadAsyncIterator):
def __aiter__(self) -> collections.abc.AsyncIterator[str]: # Y034
...
class AsyncIteratorReturningAsyncIterable:
def __aiter__(self) -> AsyncIterable[str]:
@@ -225,6 +232,11 @@ class MetaclassInWhichSelfCannotBeUsed4(ABCMeta):
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed4) -> MetaclassInWhichSelfCannotBeUsed4: ...
class SubclassOfMetaclassInWhichSelfCannotBeUsed(MetaclassInWhichSelfCannotBeUsed4):
def __new__(cls) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
def __enter__(self) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
async def __aenter__(self) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
def __isub__(self, other: SubclassOfMetaclassInWhichSelfCannotBeUsed) -> SubclassOfMetaclassInWhichSelfCannotBeUsed: ...
class Abstract(Iterator[str]):
@abstractmethod


@@ -80,5 +80,13 @@ def test_single_list_of_lists(param):
@pytest.mark.parametrize("a", [1, 2])
@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
@pytest.mark.parametrize("d", [3,])
def test_multiple_decorators(a, b, c):
@pytest.mark.parametrize(
"d",
[("3", "4")],
)
@pytest.mark.parametrize(
"e",
[("3", "4"),],
)
def test_multiple_decorators(a, b, c, d, e):
pass


@@ -5,3 +5,5 @@ this_should_be_linted = f'double {"quote"} string'
# https://github.com/astral-sh/ruff/issues/10546
x: "Literal['foo', 'bar']"
# https://github.com/astral-sh/ruff/issues/10761
f"Before {f'x {x}' if y else f'foo {z}'} after"


@@ -52,32 +52,32 @@ def f():
return False
def f():
# SIM103
if a:
return False
else:
return True
def f():
# OK
if a:
return False
else:
return False
def f():
# OK
if a:
return True
else:
return True
def f():
# SIM103 (but not fixable)
if a:
return False
else:
return True
def f():
# OK
if a:
return False
else:
return False
def f():
# OK
if a:
return True
else:
return True
def f():
# OK
def bool():
return False
if a:
@@ -86,6 +86,14 @@ def f():
return False
def f():
# SIM103
if keys is not None and notice.key not in keys:
return False
else:
return True
###
# Positive cases (preview)
###
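A minimal sketch of the SIM103 rewrite shown in these fixtures (function names are illustrative): the diagnostic now shows the negated condition it would return.

```python
def flagged(a):
    # SIM103: returning False/True from the two branches of an if...
    if a:
        return False
    else:
        return True

def suggested(a):
    # ...simplifies to returning the negated condition directly.
    return not a

assert flagged(0) is True and suggested(0) is True
assert flagged(1) is False and suggested(1) is False
```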


@@ -6,6 +6,14 @@ class Bad(namedtuple("foo", ["str", "int"])): # SLOT002
pass
class UnusualButStillBad(NamedTuple("foo", [("x", int, "y", int)])): # SLOT002
pass
class UnusualButOkay(NamedTuple("foo", [("x", int, "y", int)])):
__slots__ = ()
class Good(namedtuple("foo", ["str", "int"])): # OK
__slots__ = ("foo",)
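The behavioral difference SLOT002 guards against can be seen directly (class and field names here are illustrative): without `__slots__`, a `namedtuple` subclass gives every instance a `__dict__`, defeating the memory savings of the tuple base.

```python
from collections import namedtuple

class Bad(namedtuple("Point", ["x", "y"])):  # SLOT002: no __slots__, instances grow a __dict__
    pass

class Good(namedtuple("Point", ["x", "y"])):  # an empty __slots__ keeps instances dict-free
    __slots__ = ()

assert hasattr(Bad(1, 2), "__dict__")
assert not hasattr(Good(1, 2), "__dict__")
```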


@@ -25,3 +25,9 @@ def non_ascii():
def all_caps():
"""th•s is not capitalized."""
def single_word():
"""singleword."""
def single_word_no_dot():
"""singleword"""


@@ -1,6 +1,6 @@
"""Tests for constructs allowed in `.pyi` stub files but not at runtime"""
from typing import Optional, TypeAlias, Union
from typing import Generic, NewType, Optional, TypeAlias, TypeVar, Union
__version__: str
__author__: str
@@ -33,6 +33,19 @@ class Leaf: ...
class Tree(list[Tree | Leaf]): ... # valid in a `.pyi` stub file, not in a `.py` runtime file
class Tree2(list["Tree | Leaf"]): ... # always okay
# Generic bases can have forward references in stubs
class Foo(Generic[T]): ...
T = TypeVar("T")
class Bar(Foo[Baz]): ...
class Baz: ...
# bases in general can be forward references in stubs
class Eggs(Spam): ...
class Spam: ...
# NewType can have forward references
MyNew = NewType("MyNew", MyClass)
# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
foo: int
@@ -42,3 +55,6 @@ class MyClass:
baz: MyClass
eggs = baz # valid in a `.pyi` stub file, not in a `.py` runtime file
eggs = "baz" # always okay
class Blah:
class Blah2(Blah): ...


@@ -0,0 +1,13 @@
"""Respect `# noqa` directives on `__all__` definitions."""
__all__ = [ # noqa: F822
"Bernoulli",
"Beta",
"Binomial",
]
__all__ += [
"ContinuousBernoulli", # noqa: F822
"ExponentialFamily",
]


@@ -9,3 +9,22 @@ x = 1
x = 1 # noqa: F401, W203
# noqa: F401
# noqa: F401, W203
# OK
x = 2 # noqa: X100
x = 2 # noqa:X100
# PGH004
x = 2 # noqa X100
# PGH004
x = 2 # noqa X100, X200
# PGH004
x = 2 # noqa : X300
# PGH004
x = 2 # noqa : X400
# PGH004
x = 2 # noqa :X500


@@ -0,0 +1,44 @@
class Wolf:
@staticmethod
def eat(self): # [bad-staticmethod-argument]
pass
class Wolf:
@staticmethod
def eat(sheep):
pass
class Sheep:
@staticmethod
def eat(cls, x, y, z): # [bad-staticmethod-argument]
pass
@staticmethod
def sleep(self, x, y, z): # [bad-staticmethod-argument]
pass
def grow(self, x, y, z):
pass
@classmethod
def graze(cls, x, y, z):
pass
class Foo:
@staticmethod
def eat(x, self, z):
pass
@staticmethod
def sleep(x, cls, z):
pass
def grow(self, x, y, z):
pass
@classmethod
def graze(cls, x, y, z):
pass


@@ -0,0 +1,136 @@
# pylint: disable=missing-docstring, invalid-name, too-few-public-methods, redefined-outer-name
value = 10
value2 = 0
value3 = 3
# Positive
if value < 10: # [max-instead-of-if]
value = 10
if value <= 10: # [max-instead-of-if]
value = 10
if value < value2: # [max-instead-of-if]
value = value2
if value > 10: # [min-instead-of-if]
value = 10
if value >= 10: # [min-instead-of-if]
value = 10
if value > value2: # [min-instead-of-if]
value = value2
class A:
def __init__(self):
self.value = 13
A1 = A()
if A1.value < 10: # [max-instead-of-if]
A1.value = 10
if A1.value > 10: # [min-instead-of-if]
A1.value = 10
class AA:
def __init__(self, value):
self.value = value
def __gt__(self, b):
return self.value > b
def __ge__(self, b):
return self.value >= b
def __lt__(self, b):
return self.value < b
def __le__(self, b):
return self.value <= b
A1 = AA(0)
A2 = AA(3)
if A2 < A1: # [max-instead-of-if]
A2 = A1
if A2 <= A1: # [max-instead-of-if]
A2 = A1
if A2 > A1: # [min-instead-of-if]
A2 = A1
if A2 >= A1: # [min-instead-of-if]
A2 = A1
# Negative
if value < 10:
value = 2
if value <= 3:
value = 5
if value < 10:
value = 2
value2 = 3
if value < value2:
value = value3
if value < 5:
value = value3
if 2 < value <= 3:
value = 1
if value < 10:
value = 10
else:
value = 3
if value <= 3:
value = 5
elif value == 3:
value = 2
if value > 10:
value = 2
if value >= 3:
value = 5
if value > 10:
value = 2
value2 = 3
if value > value2:
value = value3
if value > 5:
value = value3
if 2 > value >= 3:
value = 1
if value > 10:
value = 10
else:
value = 3
if value >= 3:
value = 5
elif value == 3:
value = 2
# Parenthesized expressions
if value.attr > 3:
(
value.
attr
) = 3
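As a sketch of the rewrite these `[max-instead-of-if]`/`[min-instead-of-if]` cases suggest (the values are illustrative), a conditional clamp is equivalent to a single builtin call:

```python
value = 3

# The statement form flagged as [max-instead-of-if]...
if value < 10:
    value = 10

# ...is equivalent to the builtin the rule suggests:
clamped = max(3, 10)

assert value == clamped == 10
```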


@@ -0,0 +1,13 @@
from enum import Enum
class A(str, Enum): ...
class B(Enum, str): ...
class D(int, str, Enum): ...
class E(str, int, Enum): ...
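These fixtures feed the new `ReplaceStrEnum` check enabled below for `target-version >= py311`; a sketch of the before/after (class names are illustrative, and `enum.StrEnum` only exists on Python 3.11+):

```python
import enum
import sys

class Color(str, enum.Enum):  # flagged: str + Enum mixin
    RED = "red"

if sys.version_info >= (3, 11):
    class Color2(enum.StrEnum):  # the suggested replacement base class
        RED = "red"
    assert Color.RED.value == Color2.RED.value == "red"

assert Color.RED.value == "red"
```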


@@ -28,10 +28,6 @@ with open("file.txt", encoding="utf8") as f:
with open("file.txt", errors="ignore") as f:
x = f.read()
# FURB101
with open("file.txt", errors="ignore", mode="rb") as f:
x = f.read()
# FURB101
with open("file.txt", mode="r") as f: # noqa: FURB120
x = f.read()
@@ -60,6 +56,11 @@ with foo() as a, open("file.txt") as b, foo() as c:
# Non-errors.
# Path.read_bytes does not support any kwargs
with open("file.txt", errors="ignore", mode="rb") as f:
x = f.read()
f2 = open("file2.txt")
with open("file.txt") as f:
x = f2.read()


@@ -0,0 +1,132 @@
def foo():
...
def bar(x):
...
# Errors.
# FURB103
with open("file.txt", "w") as f:
f.write("test")
# FURB103
with open("file.txt", "wb") as f:
f.write(foobar)
# FURB103
with open("file.txt", mode="wb") as f:
f.write(b"abc")
# FURB103
with open("file.txt", "w", encoding="utf8") as f:
f.write(foobar)
# FURB103
with open("file.txt", "w", errors="ignore") as f:
f.write(foobar)
# FURB103
with open("file.txt", mode="w") as f:
f.write(foobar)
# FURB103
with open(foo(), "wb") as f:
# The body of `with` is non-trivial, but the recommendation holds.
bar("pre")
f.write(bar())
bar("post")
print("Done")
# FURB103
with open("a.txt", "w") as a, open("b.txt", "wb") as b:
a.write(x)
b.write(y)
# FURB103
with foo() as a, open("file.txt", "w") as b, foo() as c:
# We have other things in here, multiple with items, but the user
# writes a single time to file and that bit they can replace.
bar(a)
b.write(bar(bar(a + x)))
bar(c)
# FURB103
with open("file.txt", "w", newline="\r\n") as f:
f.write(foobar)
# Non-errors.
with open("file.txt", errors="ignore", mode="wb") as f:
# Path.write_bytes() does not support errors
f.write(foobar)
f2 = open("file2.txt", "w")
with open("file.txt", "w") as f:
f2.write(x)
# mode is dynamic
with open("file.txt", foo()) as f:
f.write(x)
# keyword mode is incorrect
with open("file.txt", mode="a+") as f:
f.write(x)
# enables line buffering, not supported in write_text()
with open("file.txt", buffering=1) as f:
f.write(x)
# don't mistake "newline" for "mode"
with open("file.txt", newline="wb") as f:
f.write(x)
# I guess we can possibly also report this case, but the question
# is why the user would put "w+" here in the first place.
with open("file.txt", "w+") as f:
f.write(x)
# Even though we write the whole file, we do other things.
with open("file.txt", "w") as f:
f.write(x)
f.seek(0)
x += f.read(100)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open(*filename, mode="w") as f:
f.write(x)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open(**kwargs) as f:
f.write(x)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open("file.txt", **kwargs) as f:
f.write(x)
# This shouldn't error, since it could contain unsupported arguments, like `buffering`.
with open("file.txt", mode="w", **kwargs) as f:
f.write(x)
# This could error (but doesn't), since it can't contain unsupported arguments, like
# `buffering`.
with open(*filename, mode="w") as f:
f.write(x)
# This could error (but doesn't), since it can't contain unsupported arguments, like
# `buffering`.
with open(*filename, file="file.txt", mode="w") as f:
f.write(x)
# Loops imply multiple writes
with open("file.txt", "w") as f:
while x < 0:
f.write(foobar)
with open("file.txt", "w") as f:
for line in text:
f.write(line)
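A runnable sketch of the rewrite FURB103 (`write-whole-file`) proposes, mirroring the flagged pattern above (file names are illustrative; `write_bytes()` is the analogue for `"wb"` mode):

```python
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# The pattern FURB103 flags: a `with open(...)` used only to write the whole file once.
with open(tmp / "flagged.txt", "w") as f:
    f.write("hello")

# The pathlib one-liner the rule recommends instead.
(tmp / "suggested.txt").write_text("hello")

assert (tmp / "flagged.txt").read_text() == (tmp / "suggested.txt").read_text() == "hello"
```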


@@ -0,0 +1,40 @@
z = x if x else y # FURB110
z = x \
if x else y # FURB110
z = x if x \
else \
y # FURB110
z = x() if x() else y() # FURB110
# FURB110
z = x if (
# Test for x.
x
) else (
# Test for y.
y
)
# FURB110
z = (
x if (
# Test for x.
x
) else (
# Test for y.
y
)
)
# FURB110
z = (
x if
# If true, use x.
x
# Otherwise, use y.
else
y
)
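A minimal sketch of the FURB110 (`if-exp-instead-of-or-operator`) simplification these fixtures exercise (variable names are illustrative): when the tested value and the true branch are the same expression, `or` is equivalent.

```python
x = 0
y = "fallback"

z_if = x if x else y  # FURB110: condition and true branch are the same expression
z_or = x or y         # suggested replacement

assert z_if == z_or == "fallback"
```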


@@ -10,7 +10,7 @@ op_mult = lambda x, y: x * y
op_matmutl = lambda x, y: x @ y
op_truediv = lambda x, y: x / y
op_mod = lambda x, y: x % y
op_pow = lambda x, y: x ** y
op_pow = lambda x, y: x**y
op_lshift = lambda x, y: x << y
op_rshift = lambda x, y: x >> y
op_bitor = lambda x, y: x | y
@@ -27,6 +27,10 @@ op_gte = lambda x, y: x >= y
op_is = lambda x, y: x is y
op_isnot = lambda x, y: x is not y
op_in = lambda x, y: y in x
op_itemgetter = lambda x: x[0]
op_itemgetter = lambda x: (x[0], x[1], x[2])
op_itemgetter = lambda x: (x[1:], x[2])
op_itemgetter = lambda x: x[:]
def op_not2(x):
@@ -41,21 +45,30 @@ class Adder:
def add(x, y):
return x + y
# OK.
op_add3 = lambda x, y = 1: x + y
op_add3 = lambda x, y=1: x + y
op_neg2 = lambda x, y: y - x
op_notin = lambda x, y: y not in x
op_and = lambda x, y: y and x
op_or = lambda x, y: y or x
op_in = lambda x, y: x in y
op_itemgetter = lambda x: (1, x[1], x[2])
op_itemgetter = lambda x: (x.y, x[1], x[2])
op_itemgetter = lambda x, y: (x[0], y[0])
op_itemgetter = lambda x, y: (x[0], y[0])
op_itemgetter = lambda x: ()
op_itemgetter = lambda x: (*x[0], x[1])
def op_neg3(x, y):
return y - x
def op_add4(x, y = 1):
def op_add4(x, y=1):
return x + y
def op_add5(x, y):
print("op_add5")
return x + y


@@ -15,15 +15,19 @@ use crate::rules::{
pub(crate) fn deferred_scopes(checker: &mut Checker) {
if !checker.any_enabled(&[
Rule::AsyncioDanglingTask,
Rule::BadStaticmethodArgument,
Rule::BuiltinAttributeShadowing,
Rule::GlobalVariableNotAssigned,
Rule::ImportPrivateName,
Rule::ImportShadowedByLoopVar,
Rule::InvalidFirstArgumentNameForMethod,
Rule::InvalidFirstArgumentNameForClassMethod,
Rule::InvalidFirstArgumentNameForMethod,
Rule::NoSelfUse,
Rule::RedefinedArgumentFromLocal,
Rule::RedefinedWhileUnused,
Rule::RuntimeImportInTypeCheckingBlock,
Rule::SingledispatchMethod,
Rule::SingledispatchmethodFunction,
Rule::TooManyLocals,
Rule::TypingOnlyFirstPartyImport,
Rule::TypingOnlyStandardLibraryImport,
@@ -31,19 +35,16 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
Rule::UndefinedLocal,
Rule::UnusedAnnotation,
Rule::UnusedClassMethodArgument,
Rule::BuiltinAttributeShadowing,
Rule::UnusedFunctionArgument,
Rule::UnusedImport,
Rule::UnusedLambdaArgument,
Rule::UnusedMethodArgument,
Rule::UnusedPrivateProtocol,
Rule::UnusedPrivateTypeAlias,
Rule::UnusedPrivateTypeVar,
Rule::UnusedPrivateTypedDict,
Rule::UnusedPrivateTypeVar,
Rule::UnusedStaticMethodArgument,
Rule::UnusedVariable,
Rule::SingledispatchMethod,
Rule::SingledispatchmethodFunction,
]) {
return;
}
@@ -424,6 +425,10 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
pylint::rules::singledispatchmethod_function(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::BadStaticmethodArgument) {
pylint::rules::bad_staticmethod_argument(checker, scope, &mut diagnostics);
}
if checker.any_enabled(&[
Rule::InvalidFirstArgumentNameForClassMethod,
Rule::InvalidFirstArgumentNameForMethod,


@@ -32,10 +32,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if let Some(operator) = typing::to_pep604_operator(value, slice, &checker.semantic)
{
if checker.enabled(Rule::FutureRewritableTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py310
&& checker.settings.target_version >= PythonVersion::Py37
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing
{
@@ -48,7 +47,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.source_type.is_stub()
|| checker.settings.target_version >= PythonVersion::Py310
|| (checker.settings.target_version >= PythonVersion::Py37
&& checker.semantic.future_annotations()
&& checker.semantic.future_annotations_or_stub()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing)
{
@@ -60,9 +59,8 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
// Ex) list[...]
if checker.enabled(Rule::FutureRequiredTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py39
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& typing::is_pep585_generic(value, &checker.semantic)
{
@@ -186,10 +184,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
typing::to_pep585_generic(expr, &checker.semantic)
{
if checker.enabled(Rule::FutureRewritableTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py39
&& checker.settings.target_version >= PythonVersion::Py37
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing
{
@@ -200,7 +197,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.source_type.is_stub()
|| checker.settings.target_version >= PythonVersion::Py39
|| (checker.settings.target_version >= PythonVersion::Py37
&& checker.semantic.future_annotations()
&& checker.semantic.future_annotations_or_stub()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing)
{
@@ -270,10 +267,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
]) {
if let Some(replacement) = typing::to_pep585_generic(expr, &checker.semantic) {
if checker.enabled(Rule::FutureRewritableTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py39
&& checker.settings.target_version >= PythonVersion::Py37
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing
{
@@ -286,7 +282,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.source_type.is_stub()
|| checker.settings.target_version >= PythonVersion::Py39
|| (checker.settings.target_version >= PythonVersion::Py37
&& checker.semantic.future_annotations()
&& checker.semantic.future_annotations_or_stub()
&& checker.semantic.in_annotation()
&& !checker.settings.pyupgrade.keep_runtime_typing)
{
@@ -1176,9 +1172,8 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
}) => {
// Ex) `str | None`
if checker.enabled(Rule::FutureRequiredTypeAnnotation) {
if !checker.source_type.is_stub()
if !checker.semantic.future_annotations_or_stub()
&& checker.settings.target_version < PythonVersion::Py310
&& !checker.semantic.future_annotations()
&& checker.semantic.in_annotation()
{
flake8_future_annotations::rules::future_required_type_annotation(
@@ -1363,6 +1358,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::IfExprMinMax) {
refurb::rules::if_expr_min_max(checker, if_exp);
}
if checker.enabled(Rule::IfExpInsteadOfOrOperator) {
refurb::rules::if_exp_instead_of_or_operator(checker, if_exp);
}
}
Expr::ListComp(
comp @ ast::ExprListComp {


@@ -406,6 +406,11 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::UselessObjectInheritance) {
pyupgrade::rules::useless_object_inheritance(checker, class_def);
}
if checker.enabled(Rule::ReplaceStrEnum) {
if checker.settings.target_version >= PythonVersion::Py311 {
pyupgrade::rules::replace_str_enum(checker, class_def);
}
}
if checker.enabled(Rule::UnnecessaryClassParentheses) {
pyupgrade::rules::unnecessary_class_parentheses(checker, class_def);
}
@@ -1112,6 +1117,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::TooManyBooleanExpressions) {
pylint::rules::too_many_boolean_expressions(checker, if_);
}
if checker.enabled(Rule::IfStmtMinMax) {
pylint::rules::if_stmt_min_max(checker, if_);
}
if checker.source_type.is_stub() {
if checker.any_enabled(&[
Rule::UnrecognizedVersionInfoCheck,
@@ -1217,6 +1225,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::ReadWholeFile) {
refurb::rules::read_whole_file(checker, with_stmt);
}
if checker.enabled(Rule::WriteWholeFile) {
refurb::rules::write_whole_file(checker, with_stmt);
}
if checker.enabled(Rule::UselessWithLock) {
pylint::rules::useless_with_lock(checker, with_stmt);
}


@@ -56,9 +56,10 @@ impl AnnotationContext {
_ => {}
}
// If `__future__` annotations are enabled, then annotations are never evaluated
// at runtime, so we can treat them as typing-only.
if semantic.future_annotations() {
// If `__future__` annotations are enabled or it's a stub file,
// then annotations are never evaluated at runtime,
// so we can treat them as typing-only.
if semantic.future_annotations_or_stub() {
return Self::TypingOnly;
}
@@ -87,7 +88,7 @@ impl AnnotationContext {
semantic,
) {
Self::RuntimeRequired
} else if semantic.future_annotations() {
} else if semantic.future_annotations_or_stub() {
Self::TypingOnly
} else {
Self::RuntimeEvaluated

View File

@@ -12,6 +12,8 @@ pub(crate) struct Visit<'a> {
pub(crate) type_param_definitions: Vec<(&'a Expr, Snapshot)>,
pub(crate) functions: Vec<Snapshot>,
pub(crate) lambdas: Vec<Snapshot>,
/// N.B. This field should always be empty unless it's a stub file
pub(crate) class_bases: Vec<(&'a Expr, Snapshot)>,
}
impl Visit<'_> {
@@ -22,6 +24,7 @@ impl Visit<'_> {
&& self.type_param_definitions.is_empty()
&& self.functions.is_empty()
&& self.lambdas.is_empty()
&& self.class_bases.is_empty()
}
}


@@ -31,15 +31,14 @@ use std::path::Path;
use itertools::Itertools;
use log::debug;
use ruff_python_ast::{
self as ast, all::DunderAllName, Comprehension, ElifElseClause, ExceptHandler, Expr,
ExprContext, Keyword, MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt,
Suite, UnaryOp,
self as ast, Comprehension, ElifElseClause, ExceptHandler, Expr, ExprContext, FStringElement,
Keyword, MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt, Suite, UnaryOp,
};
use ruff_text_size::{Ranged, TextRange, TextSize};
use ruff_diagnostics::{Diagnostic, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::all::{extract_all_names, DunderAllFlags};
use ruff_python_ast::all::{extract_all_names, DunderAllDefinition, DunderAllFlags};
use ruff_python_ast::helpers::{
collect_import_from_member, extract_handled_exceptions, is_docstring_stmt, to_module_path,
};
@@ -572,6 +571,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
match stmt {
Stmt::FunctionDef(
function_def @ ast::StmtFunctionDef {
name,
body,
parameters,
decorator_list,
@@ -691,9 +691,21 @@ impl<'a> Visitor<'a> for Checker<'a> {
if let Some(globals) = Globals::from_body(body) {
self.semantic.set_globals(globals);
}
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Function scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::FunctionDefinition(scope_id),
BindingFlags::empty(),
);
}
Stmt::ClassDef(
class_def @ ast::StmtClassDef {
name,
body,
arguments,
decorator_list,
@@ -712,7 +724,9 @@ impl<'a> Visitor<'a> for Checker<'a> {
}
if let Some(arguments) = arguments {
self.semantic.flags |= SemanticModelFlags::CLASS_BASE;
self.visit_arguments(arguments);
self.semantic.flags -= SemanticModelFlags::CLASS_BASE;
}
let definition = docstrings::extraction::extract_definition(
@@ -732,6 +746,18 @@ impl<'a> Visitor<'a> for Checker<'a> {
// Set the docstring state before visiting the class body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Class scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::ClassDefinition(scope_id),
BindingFlags::empty(),
);
}
Stmt::TypeAlias(ast::StmtTypeAlias {
range: _,
@@ -888,35 +914,6 @@ impl<'a> Visitor<'a> for Checker<'a> {
};
// Step 3: Clean-up
match stmt {
Stmt::FunctionDef(ast::StmtFunctionDef { name, .. }) => {
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Function scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::FunctionDefinition(scope_id),
BindingFlags::empty(),
);
}
Stmt::ClassDef(ast::StmtClassDef { name, .. }) => {
let scope_id = self.semantic.scope_id;
self.analyze.scopes.push(scope_id);
self.semantic.pop_scope(); // Class scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
BindingKind::ClassDefinition(scope_id),
BindingFlags::empty(),
);
}
_ => {}
}
// Step 4: Analysis
analyze::statement(stmt, self);
@@ -935,10 +932,23 @@ impl<'a> Visitor<'a> for Checker<'a> {
fn visit_expr(&mut self, expr: &'a Expr) {
// Step 0: Pre-processing
if self.source_type.is_stub()
&& self.semantic.in_class_base()
&& !self.semantic.in_deferred_class_base()
{
self.visit
.class_bases
.push((expr, self.semantic.snapshot()));
return;
}
if !self.semantic.in_typing_literal()
// `in_deferred_type_definition()` will only be `true` if we're now visiting the deferred nodes
// after having already traversed the source tree once. If we're now visiting the deferred nodes,
// we can't defer again, or we'll infinitely recurse!
&& !self.semantic.in_deferred_type_definition()
&& self.semantic.in_type_definition()
&& self.semantic.future_annotations()
&& self.semantic.future_annotations_or_stub()
&& (self.semantic.in_annotation() || self.source_type.is_stub())
{
if let Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) = expr {
@@ -1580,6 +1590,15 @@ impl<'a> Visitor<'a> for Checker<'a> {
.push((bound, self.semantic.snapshot()));
}
}
fn visit_f_string_element(&mut self, f_string_element: &'a FStringElement) {
let snapshot = self.semantic.flags;
if f_string_element.is_expression() {
self.semantic.flags |= SemanticModelFlags::F_STRING_REPLACEMENT_FIELD;
}
visitor::walk_f_string_element(self, f_string_element);
self.semantic.flags = snapshot;
}
}
impl<'a> Checker<'a> {
@@ -1956,6 +1975,52 @@ impl<'a> Checker<'a> {
scope.add(id, binding_id);
}
/// After initial traversal of the AST, visit all class bases that were deferred.
///
/// This method should only be relevant in stub files, where forward references are
/// legal in class bases. For other kinds of Python files, using a forward reference
/// in a class base is never legal, so `self.visit.class_bases` should always be empty.
///
/// For example, in a stub file:
/// ```python
/// class Foo(list[Bar]): ... # <-- `Bar` is a forward reference in a class base
/// class Bar: ...
/// ```
fn visit_deferred_class_bases(&mut self) {
let snapshot = self.semantic.snapshot();
let deferred_bases = std::mem::take(&mut self.visit.class_bases);
debug_assert!(
self.source_type.is_stub() || deferred_bases.is_empty(),
"Class bases should never be deferred outside of stub files"
);
for (expr, snapshot) in deferred_bases {
self.semantic.restore(snapshot);
// Set this flag to avoid infinite recursion, or we'll just defer it again:
self.semantic.flags |= SemanticModelFlags::DEFERRED_CLASS_BASE;
self.visit_expr(expr);
}
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all "future type definitions".
///
/// A "future type definition" is a type definition where [PEP 563] semantics
/// apply (i.e., an annotation in a module that has `from __future__ import annotations`
/// at the top of the file, or an annotation in a stub file). These type definitions
/// support forward references, so they are deferred on initial traversal
/// of the source tree.
///
/// For example:
/// ```python
/// from __future__ import annotations
///
/// def foo() -> Bar: # <-- return annotation is a "future type definition"
/// return Bar()
///
/// class Bar: pass
/// ```
///
/// [PEP 563]: https://peps.python.org/pep-0563/
fn visit_deferred_future_type_definitions(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.future_type_definitions.is_empty() {
@@ -1963,6 +2028,14 @@ impl<'a> Checker<'a> {
for (expr, snapshot) in type_definitions {
self.semantic.restore(snapshot);
// Type definitions should only be considered "`__future__` type definitions"
// if they are annotations in a module where `from __future__ import
// annotations` is active, or they are type definitions in a stub file.
debug_assert!(
self.semantic.future_annotations_or_stub()
&& (self.source_type.is_stub() || self.semantic.in_annotation())
);
self.semantic.flags |= SemanticModelFlags::TYPE_DEFINITION
| SemanticModelFlags::FUTURE_TYPE_DEFINITION;
self.visit_expr(expr);
@@ -1971,6 +2044,19 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all [type parameter definitions].
///
/// Type parameters natively support forward references,
/// so they are always deferred during initial traversal of the source tree.
///
/// For example:
/// ```python
/// class Foo[T: Bar]: pass # <-- Forward reference used in definition of type parameter `T`
/// type X[T: Bar] = Foo[T] # <-- Ditto
/// class Bar: pass
/// ```
///
/// [type parameter definitions]: https://docs.python.org/3/reference/executionmodel.html#annotation-scopes
fn visit_deferred_type_param_definitions(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.type_param_definitions.is_empty() {
@@ -1986,6 +2072,17 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all "string type definitions",
/// i.e., type definitions that are enclosed within quotes so as to allow
/// the type definition to use forward references.
///
/// For example:
/// ```python
/// def foo() -> "Bar": # <-- return annotation is a "string type definition"
/// return Bar()
///
/// class Bar: pass
/// ```
fn visit_deferred_string_type_definitions(&mut self, allocator: &'a typed_arena::Arena<Expr>) {
let snapshot = self.semantic.snapshot();
while !self.visit.string_type_definitions.is_empty() {
@@ -1998,7 +2095,7 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
if self.semantic.in_annotation() && self.semantic.future_annotations() {
if self.semantic.in_annotation() && self.semantic.future_annotations_or_stub() {
if self.enabled(Rule::QuotedAnnotation) {
pyupgrade::rules::quoted_annotation(self, value, range);
}
@@ -2034,6 +2131,11 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// After initial traversal of the AST, visit all function bodies.
///
/// Function bodies are always deferred on initial traversal of the source tree,
/// as the body of a function may validly contain references to global-scope symbols
/// that were not yet defined at the point when the function was defined.
fn visit_deferred_functions(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.functions.is_empty() {
@@ -2057,8 +2159,9 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
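The deferral of function bodies can be seen directly in Python. This is an illustrative sketch, not Ruff's Rust implementation: a single eager pass over the module would wrongly flag `PI` as undefined inside the body, because names in a function body are resolved at call time, not at definition time.

```python
# Why function bodies are deferred: a body may legally reference a
# global that is bound only later in the module. The reference is
# resolved when the function is *called*, not when it is defined.
def area() -> float:
    return PI * 2.0  # `PI` does not exist yet at definition time

PI = 3.14159  # bound after the function definition

assert abs(area() - 2.0 * PI) < 1e-12
```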
/// Visit all deferred lambdas. Returns a list of snapshots, such that the caller can restore
/// the semantic model to the state it was in before visiting the deferred lambdas.
/// After initial traversal of the source tree has been completed,
/// visit all lambdas. Lambdas are deferred during the initial traversal
/// for the same reason as function bodies.
fn visit_deferred_lambdas(&mut self) {
let snapshot = self.semantic.snapshot();
while !self.visit.lambdas.is_empty() {
@@ -2084,10 +2187,12 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
}
/// Recursively visit all deferred AST nodes, including lambdas, functions, and type
/// annotations.
/// After initial traversal of the source tree has been completed,
/// recursively visit all AST nodes that were deferred on the first pass.
/// This includes lambdas, functions, type parameters, and type annotations.
fn visit_deferred(&mut self, allocator: &'a typed_arena::Arena<Expr>) {
while !self.visit.is_empty() {
self.visit_deferred_class_bases();
self.visit_deferred_functions();
self.visit_deferred_type_param_definitions();
self.visit_deferred_lambdas();
@@ -2100,45 +2205,54 @@ impl<'a> Checker<'a> {
fn visit_exports(&mut self) {
let snapshot = self.semantic.snapshot();
let exports: Vec<DunderAllName> = self
let definitions: Vec<DunderAllDefinition> = self
.semantic
.global_scope()
.get_all("__all__")
.map(|binding_id| &self.semantic.bindings[binding_id])
.filter_map(|binding| match &binding.kind {
BindingKind::Export(Export { names }) => Some(names.iter().copied()),
BindingKind::Export(Export { names }) => {
Some(DunderAllDefinition::new(binding.range(), names.to_vec()))
}
_ => None,
})
.flatten()
.collect();
for export in exports {
let (name, range) = (export.name(), export.range());
if let Some(binding_id) = self.semantic.global_scope().get(name) {
self.semantic.flags |= SemanticModelFlags::DUNDER_ALL_DEFINITION;
// Mark anything referenced in `__all__` as used.
self.semantic
.add_global_reference(binding_id, ExprContext::Load, range);
self.semantic.flags -= SemanticModelFlags::DUNDER_ALL_DEFINITION;
} else {
if self.semantic.global_scope().uses_star_imports() {
if self.enabled(Rule::UndefinedLocalWithImportStarUsage) {
self.diagnostics.push(Diagnostic::new(
pyflakes::rules::UndefinedLocalWithImportStarUsage {
name: name.to_string(),
},
range,
));
}
for definition in definitions {
for export in definition.names() {
let (name, range) = (export.name(), export.range());
if let Some(binding_id) = self.semantic.global_scope().get(name) {
self.semantic.flags |= SemanticModelFlags::DUNDER_ALL_DEFINITION;
// Mark anything referenced in `__all__` as used.
self.semantic
.add_global_reference(binding_id, ExprContext::Load, range);
self.semantic.flags -= SemanticModelFlags::DUNDER_ALL_DEFINITION;
} else {
if self.enabled(Rule::UndefinedExport) {
if !self.path.ends_with("__init__.py") {
self.diagnostics.push(Diagnostic::new(
pyflakes::rules::UndefinedExport {
name: name.to_string(),
},
range,
));
if self.semantic.global_scope().uses_star_imports() {
if self.enabled(Rule::UndefinedLocalWithImportStarUsage) {
self.diagnostics.push(
Diagnostic::new(
pyflakes::rules::UndefinedLocalWithImportStarUsage {
name: name.to_string(),
},
range,
)
.with_parent(definition.start()),
);
}
} else {
if self.enabled(Rule::UndefinedExport) {
if !self.path.ends_with("__init__.py") {
self.diagnostics.push(
Diagnostic::new(
pyflakes::rules::UndefinedExport {
name: name.to_string(),
},
range,
)
.with_parent(definition.start()),
);
}
}
}
}
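The `__all__` handling above can be sketched in Python using the standard `ast` module. This is a minimal illustration of the undefined-export check (F822), not Ruff's implementation: every name listed in `__all__` must have a binding in the module's global scope.

```python
# A minimal sketch of the undefined-export check: collect the
# module's top-level bindings, collect the strings in `__all__`,
# and flag exported names with no corresponding binding.
import ast

source = """
__all__ = ["exists", "missing"]

def exists():
    pass
"""

tree = ast.parse(source)

# Top-level bindings: function/class definitions and simple assignments.
bound: set[str] = set()
for node in tree.body:
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
        bound.add(node.name)
    elif isinstance(node, ast.Assign):
        for target in node.targets:
            if isinstance(target, ast.Name):
                bound.add(target.id)

# String constants assigned to `__all__`.
exported: list[str] = []
for node in tree.body:
    if isinstance(node, ast.Assign) and any(
        isinstance(t, ast.Name) and t.id == "__all__" for t in node.targets
    ):
        for elt in ast.walk(node.value):
            if isinstance(elt, ast.Constant) and isinstance(elt.value, str):
                exported.append(elt.value)

undefined = [name for name in exported if name not in bound]
assert undefined == ["missing"]
```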


@@ -225,12 +225,12 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "C0208") => (RuleGroup::Stable, rules::pylint::rules::IterationOverSet),
(Pylint, "C0414") => (RuleGroup::Stable, rules::pylint::rules::UselessImportAlias),
(Pylint, "C0415") => (RuleGroup::Preview, rules::pylint::rules::ImportOutsideTopLevel),
(Pylint, "C2401") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiName),
(Pylint, "C2403") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiImportName),
(Pylint, "C2801") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryDunderCall),
#[allow(deprecated)]
(Pylint, "C1901") => (RuleGroup::Nursery, rules::pylint::rules::CompareToEmptyString),
(Pylint, "C2401") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiName),
(Pylint, "C2403") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiImportName),
(Pylint, "C2701") => (RuleGroup::Preview, rules::pylint::rules::ImportPrivateName),
(Pylint, "C2801") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryDunderCall),
(Pylint, "C3002") => (RuleGroup::Stable, rules::pylint::rules::UnnecessaryDirectLambdaCall),
(Pylint, "E0100") => (RuleGroup::Stable, rules::pylint::rules::YieldInInit),
(Pylint, "E0101") => (RuleGroup::Stable, rules::pylint::rules::ReturnInInit),
@@ -272,6 +272,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R0203") => (RuleGroup::Preview, rules::pylint::rules::NoStaticmethodDecorator),
(Pylint, "R0206") => (RuleGroup::Stable, rules::pylint::rules::PropertyWithParameters),
(Pylint, "R0402") => (RuleGroup::Stable, rules::pylint::rules::ManualFromImport),
(Pylint, "R0904") => (RuleGroup::Preview, rules::pylint::rules::TooManyPublicMethods),
(Pylint, "R0911") => (RuleGroup::Stable, rules::pylint::rules::TooManyReturnStatements),
(Pylint, "R0912") => (RuleGroup::Stable, rules::pylint::rules::TooManyBranches),
(Pylint, "R0913") => (RuleGroup::Stable, rules::pylint::rules::TooManyArguments),
@@ -282,10 +283,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R1701") => (RuleGroup::Stable, rules::pylint::rules::RepeatedIsinstanceCalls),
(Pylint, "R1702") => (RuleGroup::Preview, rules::pylint::rules::TooManyNestedBlocks),
(Pylint, "R1704") => (RuleGroup::Preview, rules::pylint::rules::RedefinedArgumentFromLocal),
(Pylint, "R1706") => (RuleGroup::Removed, rules::pylint::rules::AndOrTernary),
(Pylint, "R1711") => (RuleGroup::Stable, rules::pylint::rules::UselessReturn),
(Pylint, "R1714") => (RuleGroup::Stable, rules::pylint::rules::RepeatedEqualityComparison),
(Pylint, "R1706") => (RuleGroup::Removed, rules::pylint::rules::AndOrTernary),
(Pylint, "R1722") => (RuleGroup::Stable, rules::pylint::rules::SysExitAlias),
(Pylint, "R1730") => (RuleGroup::Preview, rules::pylint::rules::IfStmtMinMax),
(Pylint, "R1733") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryDictIndexLookup),
(Pylint, "R1736") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryListIndexLookup),
(Pylint, "R2004") => (RuleGroup::Stable, rules::pylint::rules::MagicValueComparison),
@@ -302,11 +304,12 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "W0129") => (RuleGroup::Stable, rules::pylint::rules::AssertOnStringLiteral),
(Pylint, "W0131") => (RuleGroup::Stable, rules::pylint::rules::NamedExprWithoutContext),
(Pylint, "W0133") => (RuleGroup::Preview, rules::pylint::rules::UselessExceptionStatement),
(Pylint, "W0211") => (RuleGroup::Preview, rules::pylint::rules::BadStaticmethodArgument),
(Pylint, "W0245") => (RuleGroup::Preview, rules::pylint::rules::SuperWithoutBrackets),
(Pylint, "W0406") => (RuleGroup::Stable, rules::pylint::rules::ImportSelf),
(Pylint, "W0602") => (RuleGroup::Stable, rules::pylint::rules::GlobalVariableNotAssigned),
(Pylint, "W0604") => (RuleGroup::Preview, rules::pylint::rules::GlobalAtModuleLevel),
(Pylint, "W0603") => (RuleGroup::Stable, rules::pylint::rules::GlobalStatement),
(Pylint, "W0604") => (RuleGroup::Preview, rules::pylint::rules::GlobalAtModuleLevel),
(Pylint, "W0711") => (RuleGroup::Stable, rules::pylint::rules::BinaryOpException),
(Pylint, "W1501") => (RuleGroup::Preview, rules::pylint::rules::BadOpenMode),
(Pylint, "W1508") => (RuleGroup::Stable, rules::pylint::rules::InvalidEnvvarDefault),
@@ -316,7 +319,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
#[allow(deprecated)]
(Pylint, "W1641") => (RuleGroup::Nursery, rules::pylint::rules::EqWithoutHash),
(Pylint, "W2101") => (RuleGroup::Preview, rules::pylint::rules::UselessWithLock),
(Pylint, "R0904") => (RuleGroup::Preview, rules::pylint::rules::TooManyPublicMethods),
(Pylint, "W2901") => (RuleGroup::Stable, rules::pylint::rules::RedefinedLoopName),
#[allow(deprecated)]
(Pylint, "W3201") => (RuleGroup::Nursery, rules::pylint::rules::BadDunderMethodName),
@@ -546,6 +548,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pyupgrade, "039") => (RuleGroup::Stable, rules::pyupgrade::rules::UnnecessaryClassParentheses),
(Pyupgrade, "040") => (RuleGroup::Stable, rules::pyupgrade::rules::NonPEP695TypeAlias),
(Pyupgrade, "041") => (RuleGroup::Stable, rules::pyupgrade::rules::TimeoutErrorAlias),
(Pyupgrade, "042") => (RuleGroup::Preview, rules::pyupgrade::rules::ReplaceStrEnum),
// pydocstyle
(Pydocstyle, "100") => (RuleGroup::Stable, rules::pydocstyle::rules::UndocumentedPublicModule),
@@ -1034,7 +1037,9 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
// refurb
(Refurb, "101") => (RuleGroup::Preview, rules::refurb::rules::ReadWholeFile),
(Refurb, "103") => (RuleGroup::Preview, rules::refurb::rules::WriteWholeFile),
(Refurb, "105") => (RuleGroup::Preview, rules::refurb::rules::PrintEmptyString),
(Refurb, "110") => (RuleGroup::Preview, rules::refurb::rules::IfExpInsteadOfOrOperator),
#[allow(deprecated)]
(Refurb, "113") => (RuleGroup::Nursery, rules::refurb::rules::RepeatedAppend),
(Refurb, "118") => (RuleGroup::Preview, rules::refurb::rules::ReimplementedOperator),


@@ -9,24 +9,37 @@ use crate::registry::RuleSet;
/// Create a set with codes matching the pattern/code pairs.
pub(crate) fn ignores_from_path(
path: &Path,
pattern_code_pairs: &[(GlobMatcher, GlobMatcher, RuleSet)],
pattern_code_pairs: &[(GlobMatcher, GlobMatcher, bool, RuleSet)],
) -> RuleSet {
let file_name = path.file_name().expect("Unable to parse filename");
pattern_code_pairs
.iter()
.filter_map(|(absolute, basename, rules)| {
.filter_map(|(absolute, basename, negated, rules)| {
if basename.is_match(file_name) {
if *negated { None } else {
debug!(
"Adding per-file ignores for {:?} due to basename match on {:?}: {:?}",
path,
basename.glob().regex(),
rules
);
Some(rules)
}
} else if absolute.is_match(path) {
if *negated { None } else {
debug!(
"Adding per-file ignores for {:?} due to absolute match on {:?}: {:?}",
path,
absolute.glob().regex(),
rules
);
Some(rules)
}
} else if *negated {
debug!(
"Adding per-file ignores for {:?} due to basename match on {:?}: {:?}",
"Adding per-file ignores for {:?} due to negated pattern matching neither {:?} nor {:?}: {:?}",
path,
basename.glob().regex(),
rules
);
Some(rules)
} else if absolute.is_match(path) {
debug!(
"Adding per-file ignores for {:?} due to absolute match on {:?}: {:?}",
path,
absolute.glob().regex(),
rules
);
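The negated-pattern semantics can be sketched in Python with `fnmatch` (Ruff itself uses glob matchers with different path semantics, so this is an approximation, not the real matching code): a pattern prefixed with `!` applies its rules to every file that does *not* match.

```python
# A minimal sketch of negated per-file-ignores: for a `!`-prefixed
# pattern, the listed rules are ignored in all files that do NOT
# match the glob.
from fnmatch import fnmatch

def ignores_for(path: str, pattern: str, rules: set[str]) -> set[str]:
    negated = pattern.startswith("!")
    if negated:
        pattern = pattern[1:]
    matched = fnmatch(path, pattern)
    # A plain pattern applies on a match; a negated one on a non-match.
    return rules if matched != negated else set()

# `!tests/*` ignores E501 everywhere *except* under tests/.
assert ignores_for("src/app.py", "!tests/*", {"E501"}) == {"E501"}
assert ignores_for("tests/test_app.py", "!tests/*", {"E501"}) == set()
```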


@@ -147,7 +147,7 @@ impl<'a> Directive<'a> {
/// Lex an individual rule code (e.g., `F401`).
#[inline]
fn lex_code(line: &str) -> Option<&str> {
pub(crate) fn lex_code(line: &str) -> Option<&str> {
// Extract, e.g., the `F` in `F401`.
let prefix = line.chars().take_while(char::is_ascii_uppercase).count();
// Extract, e.g., the `401` in `F401`.


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
#[allow(clippy::struct_excessive_bools)]
pub struct Settings {
pub mypy_init_return: bool,


@@ -10,7 +10,7 @@ pub fn default_tmp_dirs() -> Vec<String> {
.to_vec()
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub hardcoded_tmp_directory: Vec<String>,
pub check_typed_exception: bool,


@@ -6,7 +6,7 @@ use ruff_macros::CacheKey;
use crate::display_settings;
#[derive(Debug, CacheKey, Default)]
#[derive(Debug, Clone, CacheKey, Default)]
pub struct Settings {
pub extend_allowed_calls: Vec<String>,
}


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub extend_immutable_calls: Vec<String>,
}


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub builtins_ignorelist: Vec<String>,
}


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub allow_dict_calls_with_keyword_arguments: bool,
}


@@ -7,7 +7,7 @@ use std::fmt::{Display, Formatter};
use crate::display_settings;
use ruff_macros::CacheKey;
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub notice_rgx: Regex,
pub author: Option<String>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub max_string_length: usize,
}


@@ -2,7 +2,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub functions_names: Vec<String>,
}


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub allow_multiline: bool,
}


@@ -57,7 +57,7 @@ impl FromIterator<String> for BannedAliases {
}
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub aliases: FxHashMap<String, String>,
pub banned_aliases: FxHashMap<String, BannedAliases>,


@@ -1,9 +1,10 @@
use ruff_python_ast::{self as ast, Arguments, Decorator, Expr, Parameters, Stmt};
use ruff_python_ast::{self as ast, Decorator, Expr, Parameters, Stmt};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::map_subscript;
use ruff_python_ast::identifier::Identifier;
use ruff_python_semantic::analyze;
use ruff_python_semantic::analyze::visibility::{is_abstract, is_final, is_overload};
use ruff_python_semantic::{ScopeKind, SemanticModel};
@@ -119,7 +120,9 @@ pub(crate) fn non_self_return_type(
returns: Option<&Expr>,
parameters: &Parameters,
) {
let ScopeKind::Class(class_def) = checker.semantic().current_scope().kind else {
let semantic = checker.semantic();
let ScopeKind::Class(class_def) = semantic.current_scope().kind else {
return;
};
@@ -132,21 +135,19 @@ pub(crate) fn non_self_return_type(
};
// PEP 673 forbids the use of `typing(_extensions).Self` in metaclasses.
if is_metaclass(class_def, checker.semantic()) {
if is_metaclass(class_def, semantic) {
return;
}
// Skip any abstract or overloaded methods.
if is_abstract(decorator_list, checker.semantic())
|| is_overload(decorator_list, checker.semantic())
{
if is_abstract(decorator_list, semantic) || is_overload(decorator_list, semantic) {
return;
}
if is_async {
if name == "__aenter__"
&& is_name(returns, &class_def.name)
&& !is_final(&class_def.decorator_list, checker.semantic())
&& !is_final(&class_def.decorator_list, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -161,7 +162,7 @@ pub(crate) fn non_self_return_type(
// In-place methods that are expected to return `Self`.
if is_inplace_bin_op(name) {
if !is_self(returns, checker.semantic()) {
if !is_self(returns, semantic) {
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
class_name: class_def.name.to_string(),
@@ -174,8 +175,7 @@ pub(crate) fn non_self_return_type(
}
if is_name(returns, &class_def.name) {
if matches!(name, "__enter__" | "__new__")
&& !is_final(&class_def.decorator_list, checker.semantic())
if matches!(name, "__enter__" | "__new__") && !is_final(&class_def.decorator_list, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -190,8 +190,8 @@ pub(crate) fn non_self_return_type(
match name {
"__iter__" => {
if is_iterable(returns, checker.semantic())
&& is_iterator(class_def.arguments.as_deref(), checker.semantic())
if is_iterable_or_iterator(returns, semantic)
&& subclasses_iterator(class_def, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -203,8 +203,8 @@ pub(crate) fn non_self_return_type(
}
}
"__aiter__" => {
if is_async_iterable(returns, checker.semantic())
&& is_async_iterator(class_def.arguments.as_deref(), checker.semantic())
if is_async_iterable_or_iterator(returns, semantic)
&& subclasses_async_iterator(class_def, semantic)
{
checker.diagnostics.push(Diagnostic::new(
NonSelfReturnType {
@@ -221,26 +221,14 @@ pub(crate) fn non_self_return_type(
/// Returns `true` if the given class is a metaclass.
fn is_metaclass(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
class_def.arguments.as_ref().is_some_and(|arguments| {
arguments
.args
.iter()
.any(|expr| is_metaclass_base(expr, semantic))
analyze::class::any_qualified_name(class_def, semantic, &|qualified_name| {
matches!(
qualified_name.segments(),
["" | "builtins", "type"] | ["abc", "ABCMeta"] | ["enum", "EnumMeta" | "EnumType"]
)
})
}
/// Returns `true` if the given expression resolves to a metaclass.
fn is_metaclass_base(base: &Expr, semantic: &SemanticModel) -> bool {
semantic
.resolve_qualified_name(base)
.is_some_and(|qualified_name| {
matches!(
qualified_name.segments(),
["" | "builtins", "type"] | ["abc", "ABCMeta"] | ["enum", "EnumMeta" | "EnumType"]
)
})
}
/// Returns `true` if the method is an in-place binary operator.
fn is_inplace_bin_op(name: &str) -> bool {
matches!(
@@ -275,24 +263,17 @@ fn is_self(expr: &Expr, semantic: &SemanticModel) -> bool {
}
/// Return `true` if the given class extends `collections.abc.Iterator`.
fn is_iterator(arguments: Option<&Arguments>, semantic: &SemanticModel) -> bool {
let Some(Arguments { args: bases, .. }) = arguments else {
return false;
};
bases.iter().any(|expr| {
semantic
.resolve_qualified_name(map_subscript(expr))
.is_some_and(|qualified_name| {
matches!(
qualified_name.segments(),
["typing", "Iterator"] | ["collections", "abc", "Iterator"]
)
})
fn subclasses_iterator(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
analyze::class::any_qualified_name(class_def, semantic, &|qualified_name| {
matches!(
qualified_name.segments(),
["typing", "Iterator"] | ["collections", "abc", "Iterator"]
)
})
}
/// Return `true` if the given expression resolves to `collections.abc.Iterable`.
fn is_iterable(expr: &Expr, semantic: &SemanticModel) -> bool {
/// Return `true` if the given expression resolves to `collections.abc.Iterable` or `collections.abc.Iterator`.
fn is_iterable_or_iterator(expr: &Expr, semantic: &SemanticModel) -> bool {
semantic
.resolve_qualified_name(map_subscript(expr))
.is_some_and(|qualified_name| {
@@ -305,24 +286,17 @@ fn is_iterable(expr: &Expr, semantic: &SemanticModel) -> bool {
}
/// Return `true` if the given class extends `collections.abc.AsyncIterator`.
fn is_async_iterator(arguments: Option<&Arguments>, semantic: &SemanticModel) -> bool {
let Some(Arguments { args: bases, .. }) = arguments else {
return false;
};
bases.iter().any(|expr| {
semantic
.resolve_qualified_name(map_subscript(expr))
.is_some_and(|qualified_name| {
matches!(
qualified_name.segments(),
["typing", "AsyncIterator"] | ["collections", "abc", "AsyncIterator"]
)
})
fn subclasses_async_iterator(class_def: &ast::StmtClassDef, semantic: &SemanticModel) -> bool {
analyze::class::any_qualified_name(class_def, semantic, &|qualified_name| {
matches!(
qualified_name.segments(),
["typing", "AsyncIterator"] | ["collections", "abc", "AsyncIterator"]
)
})
}
/// Return `true` if the given expression resolves to `collections.abc.AsyncIterable`.
fn is_async_iterable(expr: &Expr, semantic: &SemanticModel) -> bool {
/// Return `true` if the given expression resolves to `collections.abc.AsyncIterable` or `collections.abc.AsyncIterator`.
fn is_async_iterable_or_iterator(expr: &Expr, semantic: &SemanticModel) -> bool {
semantic
.resolve_qualified_name(map_subscript(expr))
.is_some_and(|qualified_name| {


@@ -89,4 +89,20 @@ PYI034.py:195:9: PYI034 `__aiter__` methods in classes like `BadAsyncIterator` u
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:199:9: PYI034 `__iter__` methods in classes like `SubclassOfBadIterator3` usually return `self` at runtime
|
198 | class SubclassOfBadIterator3(BadIterator3):
199 | def __iter__(self) -> Iterator[int]: # Y034
| ^^^^^^^^ PYI034
200 | ...
|
= help: Consider using `typing_extensions.Self` as return type
PYI034.py:203:9: PYI034 `__aiter__` methods in classes like `SubclassOfBadAsyncIterator` usually return `self` at runtime
|
202 | class SubclassOfBadAsyncIterator(BadAsyncIterator):
203 | def __aiter__(self) -> collections.abc.AsyncIterator[str]: # Y034
| ^^^^^^^^^ PYI034
204 | ...
|
= help: Consider using `typing_extensions.Self` as return type


@@ -567,7 +567,7 @@ fn check_values(checker: &mut Checker, names: &Expr, values: &Expr) {
// Replace `]` with `)` or `,)`.
let values_end = Edit::replacement(
if needs_trailing_comma {
"),".into()
",)".into()
} else {
")".into()
},


@@ -24,7 +24,7 @@ pub fn default_broad_exceptions() -> Vec<IdentifierPattern> {
.to_vec()
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub fixture_parentheses: bool,
pub parametrize_names_type: types::ParametrizeNameType,


@@ -168,7 +168,7 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^^^^^^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
83 | def test_multiple_decorators(a, b, c):
83 | @pytest.mark.parametrize(
|
= help: Use `list` of `list` for parameter values
@@ -179,8 +179,8 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), [(3, 4), (5, 6)])
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",
PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `list` of `list`
|
@@ -188,7 +188,7 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
83 | def test_multiple_decorators(a, b, c):
83 | @pytest.mark.parametrize(
|
= help: Use `list` of `list` for parameter values
@@ -199,8 +199,8 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ([3, 4], (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",
PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `list` of `list`
|
@@ -208,7 +208,7 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
83 | def test_multiple_decorators(a, b, c):
83 | @pytest.mark.parametrize(
|
= help: Use `list` of `list` for parameter values
@@ -219,5 +219,5 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ((3, 4), [5, 6]))
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",


@@ -210,7 +210,7 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^^^^^^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
83 | def test_multiple_decorators(a, b, c):
83 | @pytest.mark.parametrize(
|
= help: Use `list` of `tuple` for parameter values
@@ -221,5 +221,5 @@ PT007.py:81:38: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), [(3, 4), (5, 6)])
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",


@@ -237,7 +237,7 @@ PT007.py:80:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
80 |+@pytest.mark.parametrize("a", (1, 2))
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
83 83 | @pytest.mark.parametrize(
PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
@@ -245,7 +245,7 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
83 | def test_multiple_decorators(a, b, c):
83 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `list` for parameter values
@@ -256,8 +256,8 @@ PT007.py:81:39: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ([3, 4], (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",
PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
@@ -265,7 +265,7 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
| ^^^^^^ PT007
82 | @pytest.mark.parametrize("d", [3,])
83 | def test_multiple_decorators(a, b, c):
83 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `list` for parameter values
@@ -276,8 +276,8 @@ PT007.py:81:47: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 |-@pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
81 |+@pytest.mark.parametrize(("b", "c"), ((3, 4), [5, 6]))
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",
PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
@@ -285,8 +285,8 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 | @pytest.mark.parametrize("d", [3,])
| ^^^^ PT007
83 | def test_multiple_decorators(a, b, c):
84 | pass
83 | @pytest.mark.parametrize(
84 | "d",
|
= help: Use `tuple` of `list` for parameter values
@@ -296,5 +296,48 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 |-@pytest.mark.parametrize("d", [3,])
82 |+@pytest.mark.parametrize("d", (3,))
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",
85 85 | [("3", "4")],
PT007.py:85:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
83 | @pytest.mark.parametrize(
84 | "d",
85 | [("3", "4")],
| ^^^^^^^^^^^^ PT007
86 | )
87 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `list` for parameter values
Unsafe fix
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | @pytest.mark.parametrize(
84 84 | "d",
85 |- [("3", "4")],
85 |+ (("3", "4"),),
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
PT007.py:89:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `list`
|
87 | @pytest.mark.parametrize(
88 | "e",
89 | [("3", "4"),],
| ^^^^^^^^^^^^^ PT007
90 | )
91 | def test_multiple_decorators(a, b, c, d, e):
|
= help: Use `tuple` of `list` for parameter values
Unsafe fix
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
89 |- [("3", "4"),],
89 |+ (("3", "4"),),
90 90 | )
91 91 | def test_multiple_decorators(a, b, c, d, e):
92 92 | pass


@@ -279,7 +279,7 @@ PT007.py:80:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
80 |+@pytest.mark.parametrize("a", (1, 2))
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | def test_multiple_decorators(a, b, c):
83 83 | @pytest.mark.parametrize(
PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `tuple`
|
@@ -287,8 +287,8 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 | @pytest.mark.parametrize("d", [3,])
| ^^^^ PT007
83 | def test_multiple_decorators(a, b, c):
84 | pass
83 | @pytest.mark.parametrize(
84 | "d",
|
= help: Use `tuple` of `tuple` for parameter values
@@ -298,5 +298,48 @@ PT007.py:82:31: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expect
81 81 | @pytest.mark.parametrize(("b", "c"), ((3, 4), (5, 6)))
82 |-@pytest.mark.parametrize("d", [3,])
82 |+@pytest.mark.parametrize("d", (3,))
83 83 | def test_multiple_decorators(a, b, c):
84 84 | pass
83 83 | @pytest.mark.parametrize(
84 84 | "d",
85 85 | [("3", "4")],
PT007.py:85:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `tuple`
|
83 | @pytest.mark.parametrize(
84 | "d",
85 | [("3", "4")],
| ^^^^^^^^^^^^ PT007
86 | )
87 | @pytest.mark.parametrize(
|
= help: Use `tuple` of `tuple` for parameter values
Unsafe fix
82 82 | @pytest.mark.parametrize("d", [3,])
83 83 | @pytest.mark.parametrize(
84 84 | "d",
85 |- [("3", "4")],
85 |+ (("3", "4"),),
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
PT007.py:89:5: PT007 [*] Wrong values type in `@pytest.mark.parametrize` expected `tuple` of `tuple`
|
87 | @pytest.mark.parametrize(
88 | "e",
89 | [("3", "4"),],
| ^^^^^^^^^^^^^ PT007
90 | )
91 | def test_multiple_decorators(a, b, c, d, e):
|
= help: Use `tuple` of `tuple` for parameter values
Unsafe fix
86 86 | )
87 87 | @pytest.mark.parametrize(
88 88 | "e",
89 |- [("3", "4"),],
89 |+ (("3", "4"),),
90 90 | )
91 91 | def test_multiple_decorators(a, b, c, d, e):
92 92 | pass


@@ -449,13 +449,8 @@ pub(crate) fn check_string_quotes(checker: &mut Checker, string_like: StringLike
return;
}
// If the string is part of a f-string, ignore it.
if checker
.indexer()
.fstring_ranges()
.outermost(string_like.start())
.is_some_and(|outer| outer.start() < string_like.start() && string_like.end() < outer.end())
{
// TODO(dhruvmanila): Support checking for escaped quotes in f-strings.
if checker.semantic().in_f_string_replacement_field() {
return;
}


@@ -31,7 +31,7 @@ impl From<ruff_python_ast::str::Quote> for Quote {
}
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub inline_quotes: Quote,
pub multiline_quotes: Quote,


@@ -17,7 +17,7 @@ pub const IGNORE_NAMES: [&str; 7] = [
"_value_",
];
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub ignore_names: Vec<String>,
}


@@ -41,8 +41,8 @@ use crate::fix::snippet::SourceCodeSnippet;
/// [preview]: https://docs.astral.sh/ruff/preview/
#[violation]
pub struct NeedlessBool {
condition: SourceCodeSnippet,
replacement: Option<SourceCodeSnippet>,
condition: Option<SourceCodeSnippet>,
negate: bool,
}
impl Violation for NeedlessBool {
@@ -50,21 +50,22 @@ impl Violation for NeedlessBool {
#[derive_message_formats]
fn message(&self) -> String {
let NeedlessBool { condition, .. } = self;
if let Some(condition) = condition.full_display() {
let NeedlessBool { condition, negate } = self;
if let Some(condition) = condition.as_ref().and_then(SourceCodeSnippet::full_display) {
format!("Return the condition `{condition}` directly")
} else if *negate {
format!("Return the negated condition directly")
} else {
format!("Return the condition directly")
}
}
fn fix_title(&self) -> Option<String> {
let NeedlessBool { replacement, .. } = self;
if let Some(replacement) = replacement
.as_ref()
.and_then(SourceCodeSnippet::full_display)
{
Some(format!("Replace with `{replacement}`"))
let NeedlessBool { condition, .. } = self;
if let Some(condition) = condition.as_ref().and_then(SourceCodeSnippet::full_display) {
Some(format!("Replace with `return {condition}`"))
} else {
Some(format!("Inline condition"))
}
@@ -191,29 +192,21 @@ pub(crate) fn needless_bool(checker: &mut Checker, stmt: &Stmt) {
return;
}
let condition = checker.locator().slice(if_test);
let replacement = if checker.indexer().has_comments(&range, checker.locator()) {
// Generate the replacement condition.
let condition = if checker.indexer().has_comments(&range, checker.locator()) {
None
} else {
// If the return values are inverted, wrap the condition in a `not`.
if inverted {
let node = ast::StmtReturn {
value: Some(Box::new(Expr::UnaryOp(ast::ExprUnaryOp {
op: ast::UnaryOp::Not,
operand: Box::new(if_test.clone()),
range: TextRange::default(),
}))),
Some(Expr::UnaryOp(ast::ExprUnaryOp {
op: ast::UnaryOp::Not,
operand: Box::new(if_test.clone()),
range: TextRange::default(),
};
Some(checker.generator().stmt(&node.into()))
}))
} else if if_test.is_compare_expr() {
// If the condition is a comparison, we can replace it with the condition, since we
// know it's a boolean.
let node = ast::StmtReturn {
value: Some(Box::new(if_test.clone())),
range: TextRange::default(),
};
Some(checker.generator().stmt(&node.into()))
Some(if_test.clone())
} else if checker.semantic().is_builtin("bool") {
// Otherwise, we need to wrap the condition in a call to `bool`.
let func_node = ast::ExprName {
@@ -221,7 +214,7 @@ pub(crate) fn needless_bool(checker: &mut Checker, stmt: &Stmt) {
ctx: ExprContext::Load,
range: TextRange::default(),
};
let value_node = ast::ExprCall {
let call_node = ast::ExprCall {
func: Box::new(func_node.into()),
arguments: Arguments {
args: Box::from([if_test.clone()]),
@@ -230,20 +223,32 @@ pub(crate) fn needless_bool(checker: &mut Checker, stmt: &Stmt) {
},
range: TextRange::default(),
};
let return_node = ast::StmtReturn {
value: Some(Box::new(value_node.into())),
range: TextRange::default(),
};
Some(checker.generator().stmt(&return_node.into()))
Some(Expr::Call(call_node))
} else {
None
}
};
// Generate the replacement `return` statement.
let replacement = condition.as_ref().map(|expr| {
Stmt::Return(ast::StmtReturn {
value: Some(Box::new(expr.clone())),
range: TextRange::default(),
})
});
// Generate source code.
let replacement = replacement
.as_ref()
.map(|stmt| checker.generator().stmt(stmt));
let condition = condition
.as_ref()
.map(|expr| checker.generator().expr(expr));
let mut diagnostic = Diagnostic::new(
NeedlessBool {
condition: SourceCodeSnippet::from_str(condition),
replacement: replacement.clone().map(SourceCodeSnippet::new),
condition: condition.map(SourceCodeSnippet::new),
negate: inverted,
},
range,
);
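The three rewrite strategies in the `needless_bool` change above can be illustrated from the Python side. This is a hedged sketch: the function names and the variables `a`/`b` are invented, mirroring the SIM103 test fixture rather than any real API.

```python
# Illustrative before/after pairs for the SIM103 (needless-bool) rewrites.

def before_compare(a, b):
    if a == b:
        return True
    return False

def after_compare(a, b):
    # A comparison is already boolean, so return the condition directly.
    return a == b

def before_inverted(a):
    if a:
        return False
    return True

def after_inverted(a):
    # Inverted return values become a negated condition, as the new
    # "Return the negated condition directly" message describes.
    return not a

def before_truthy(a):
    if a:
        return True
    return False

def after_truthy(a):
    # Non-comparison conditions are wrapped in bool() to stay boolean.
    return bool(a)
```

Each pair is behavior-preserving, which is why the diagnostic can offer the rewrite as a fix (unsafe only because comments inside the `if` would be dropped).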


@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_simplify/mod.rs
---
SIM103.py:3:5: SIM103 [*] Return the condition `a` directly
SIM103.py:3:5: SIM103 [*] Return the condition `bool(a)` directly
|
1 | def f():
2 | # SIM103
@@ -52,7 +52,7 @@ SIM103.py:11:5: SIM103 [*] Return the condition `a == b` directly
16 13 |
17 14 | def f():
SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
SIM103.py:21:5: SIM103 [*] Return the condition `bool(b)` directly
|
19 | if a:
20 | return 1
@@ -78,7 +78,7 @@ SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
26 23 |
27 24 | def f():
SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
SIM103.py:32:9: SIM103 [*] Return the condition `bool(b)` directly
|
30 | return 1
31 | else:
@@ -104,10 +104,10 @@ SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
37 34 |
38 35 | def f():
SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
SIM103.py:57:5: SIM103 [*] Return the condition `not a` directly
|
55 | def f():
56 | # SIM103 (but not fixable)
56 | # SIM103
57 | if a:
| _____^
58 | | return False
@@ -120,7 +120,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
Unsafe fix
54 54 |
55 55 | def f():
56 56 | # SIM103 (but not fixable)
56 56 | # SIM103
57 |- if a:
58 |- return False
59 |- else:
@@ -130,7 +130,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
62 59 |
63 60 | def f():
SIM103.py:83:5: SIM103 Return the condition `a` directly
SIM103.py:83:5: SIM103 Return the condition directly
|
81 | def bool():
82 | return False
@@ -142,3 +142,29 @@ SIM103.py:83:5: SIM103 Return the condition `a` directly
| |____________________^ SIM103
|
= help: Inline condition
SIM103.py:91:5: SIM103 [*] Return the condition `not (keys is not None and notice.key not in keys)` directly
|
89 | def f():
90 | # SIM103
91 | if keys is not None and notice.key not in keys:
| _____^
92 | | return False
93 | | else:
94 | | return True
| |___________________^ SIM103
|
= help: Replace with `return not (keys is not None and notice.key not in keys)`
Unsafe fix
88 88 |
89 89 | def f():
90 90 | # SIM103
91 |- if keys is not None and notice.key not in keys:
92 |- return False
93 |- else:
94 |- return True
91 |+ return not (keys is not None and notice.key not in keys)
95 92 |
96 93 |
97 94 | ###


@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_simplify/mod.rs
---
SIM103.py:3:5: SIM103 [*] Return the condition `a` directly
SIM103.py:3:5: SIM103 [*] Return the condition `bool(a)` directly
|
1 | def f():
2 | # SIM103
@@ -52,7 +52,7 @@ SIM103.py:11:5: SIM103 [*] Return the condition `a == b` directly
16 13 |
17 14 | def f():
SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
SIM103.py:21:5: SIM103 [*] Return the condition `bool(b)` directly
|
19 | if a:
20 | return 1
@@ -78,7 +78,7 @@ SIM103.py:21:5: SIM103 [*] Return the condition `b` directly
26 23 |
27 24 | def f():
SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
SIM103.py:32:9: SIM103 [*] Return the condition `bool(b)` directly
|
30 | return 1
31 | else:
@@ -104,10 +104,10 @@ SIM103.py:32:9: SIM103 [*] Return the condition `b` directly
37 34 |
38 35 | def f():
SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
SIM103.py:57:5: SIM103 [*] Return the condition `not a` directly
|
55 | def f():
56 | # SIM103 (but not fixable)
56 | # SIM103
57 | if a:
| _____^
58 | | return False
@@ -120,7 +120,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
Unsafe fix
54 54 |
55 55 | def f():
56 56 | # SIM103 (but not fixable)
56 56 | # SIM103
57 |- if a:
58 |- return False
59 |- else:
@@ -130,7 +130,7 @@ SIM103.py:57:5: SIM103 [*] Return the condition `a` directly
62 59 |
63 60 | def f():
SIM103.py:83:5: SIM103 Return the condition `a` directly
SIM103.py:83:5: SIM103 Return the condition directly
|
81 | def bool():
82 | return False
@@ -143,47 +143,73 @@ SIM103.py:83:5: SIM103 Return the condition `a` directly
|
= help: Inline condition
SIM103.py:96:5: SIM103 [*] Return the condition `a` directly
SIM103.py:91:5: SIM103 [*] Return the condition `not (keys is not None and notice.key not in keys)` directly
|
94 | def f():
95 | # SIM103
96 | if a:
89 | def f():
90 | # SIM103
91 | if keys is not None and notice.key not in keys:
| _____^
97 | | return True
98 | | return False
| |________________^ SIM103
92 | | return False
93 | | else:
94 | | return True
| |___________________^ SIM103
|
= help: Replace with `return bool(a)`
= help: Replace with `return not (keys is not None and notice.key not in keys)`
Unsafe fix
93 93 |
94 94 | def f():
95 95 | # SIM103
96 |- if a:
97 |- return True
98 |- return False
96 |+ return bool(a)
99 97 |
100 98 |
101 99 | def f():
88 88 |
89 89 | def f():
90 90 | # SIM103
91 |- if keys is not None and notice.key not in keys:
92 |- return False
93 |- else:
94 |- return True
91 |+ return not (keys is not None and notice.key not in keys)
95 92 |
96 93 |
97 94 | ###
SIM103.py:103:5: SIM103 [*] Return the condition `a` directly
SIM103.py:104:5: SIM103 [*] Return the condition `bool(a)` directly
|
101 | def f():
102 | # SIM103
103 | if a:
102 | def f():
103 | # SIM103
104 | if a:
| _____^
104 | | return False
105 | | return True
105 | | return True
106 | | return False
| |________________^ SIM103
|
= help: Replace with `return bool(a)`
Unsafe fix
101 101 |
102 102 | def f():
103 103 | # SIM103
104 |- if a:
105 |- return True
106 |- return False
104 |+ return bool(a)
107 105 |
108 106 |
109 107 | def f():
SIM103.py:111:5: SIM103 [*] Return the condition `not a` directly
|
109 | def f():
110 | # SIM103
111 | if a:
| _____^
112 | | return False
113 | | return True
| |_______________^ SIM103
|
= help: Replace with `return not a`
Unsafe fix
100 100 |
101 101 | def f():
102 102 | # SIM103
103 |- if a:
104 |- return False
105 |- return True
103 |+ return not a
108 108 |
109 109 | def f():
110 110 | # SIM103
111 |- if a:
112 |- return False
113 |- return True
111 |+ return not a


@@ -1,17 +1,16 @@
use ruff_python_ast as ast;
use ruff_python_ast::{Arguments, Expr, StmtClassDef};
use std::fmt;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::Stmt;
use ruff_python_ast::{self as ast, identifier::Identifier, Arguments, Expr, Stmt, StmtClassDef};
use ruff_python_semantic::SemanticModel;
use crate::checkers::ast::Checker;
use crate::rules::flake8_slots::rules::helpers::has_slots;
/// ## What it does
/// Checks for subclasses of `collections.namedtuple` that lack a `__slots__`
/// definition.
/// Checks for subclasses of `collections.namedtuple` or `typing.NamedTuple`
/// that lack a `__slots__` definition.
///
/// ## Why is this bad?
/// In Python, the `__slots__` attribute allows you to explicitly define the
@@ -48,12 +47,28 @@ use crate::rules::flake8_slots::rules::helpers::has_slots;
/// ## References
/// - [Python documentation: `__slots__`](https://docs.python.org/3/reference/datamodel.html#slots)
#[violation]
pub struct NoSlotsInNamedtupleSubclass;
pub struct NoSlotsInNamedtupleSubclass(NamedTupleKind);
impl Violation for NoSlotsInNamedtupleSubclass {
#[derive_message_formats]
fn message(&self) -> String {
format!("Subclasses of `collections.namedtuple()` should define `__slots__`")
let NoSlotsInNamedtupleSubclass(namedtuple_kind) = self;
format!("Subclasses of {namedtuple_kind} should define `__slots__`")
}
}
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum NamedTupleKind {
Collections,
Typing,
}
impl fmt::Display for NamedTupleKind {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str(match self {
Self::Collections => "`collections.namedtuple()`",
Self::Typing => "call-based `typing.NamedTuple()`",
})
}
}
@@ -67,22 +82,33 @@ pub(crate) fn no_slots_in_namedtuple_subclass(
return;
};
if bases.iter().any(|base| {
let Expr::Call(ast::ExprCall { func, .. }) = base else {
return false;
};
checker
.semantic()
.resolve_qualified_name(func)
.is_some_and(|qualified_name| {
matches!(qualified_name.segments(), ["collections", "namedtuple"])
})
}) {
if let Some(namedtuple_kind) = namedtuple_base(bases, checker.semantic()) {
if !has_slots(&class.body) {
checker.diagnostics.push(Diagnostic::new(
NoSlotsInNamedtupleSubclass,
NoSlotsInNamedtupleSubclass(namedtuple_kind),
stmt.identifier(),
));
}
}
}
/// If the class has a call-based namedtuple in its bases,
/// return the kind of namedtuple it is
/// (either `collections.namedtuple()`, or `typing.NamedTuple()`).
/// Else, return `None`.
fn namedtuple_base(bases: &[Expr], semantic: &SemanticModel) -> Option<NamedTupleKind> {
for base in bases {
let Expr::Call(ast::ExprCall { func, .. }) = base else {
continue;
};
let Some(qualified_name) = semantic.resolve_qualified_name(func) else {
continue;
};
match qualified_name.segments() {
["collections", "namedtuple"] => return Some(NamedTupleKind::Collections),
["typing", "NamedTuple"] => return Some(NamedTupleKind::Typing),
_ => continue,
}
}
None
}
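The cases that `namedtuple_base` distinguishes above can be shown with a minimal example. The class names here are invented for illustration; only the two base forms (`collections.namedtuple()` and call-based `typing.NamedTuple()`) come from the rule itself.

```python
from collections import namedtuple
from typing import NamedTuple

Point = namedtuple("Point", ["x", "y"])

class BadPoint(Point):  # SLOT002: instances gain a per-object __dict__
    pass

class GoodPoint(Point):  # ok: an empty __slots__ keeps the memory savings
    __slots__ = ()

# Newly covered by this change: the call-based typing.NamedTuple() form.
class AlsoFlagged(NamedTuple("Base", [("x", int)])):  # SLOT002
    pass
```

The `__dict__` difference is what the rule guards against: subclassing a namedtuple without redeclaring `__slots__` silently reintroduces per-instance dictionaries.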


@@ -8,4 +8,9 @@ SLOT002.py:5:7: SLOT002 Subclasses of `collections.namedtuple()` should define `
6 | pass
|
SLOT002.py:9:7: SLOT002 Subclasses of call-based `typing.NamedTuple()` should define `__slots__`
|
9 | class UnusualButStillBad(NamedTuple("foo", [("x", int, "y", int)])): # SLOT002
| ^^^^^^^^^^^^^^^^^^ SLOT002
10 | pass
|


@@ -39,7 +39,7 @@ impl Display for Strictness {
}
}
#[derive(Debug, CacheKey, Default)]
#[derive(Debug, Clone, CacheKey, Default)]
pub struct Settings {
pub ban_relative_imports: Strictness,
pub banned_api: FxHashMap<String, ApiBan>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub strict: bool,
pub exempt_modules: Vec<String>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub ignore_variadic_names: bool,
}


@@ -270,7 +270,7 @@ pub(crate) fn categorize_imports<'a>(
block_by_type
}
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct KnownModules {
/// A map of known modules to their section.
known: Vec<(glob::Pattern, ImportSection)>,


@@ -44,7 +44,7 @@ impl Display for RelativeImportsOrder {
}
}
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
#[allow(clippy::struct_excessive_bools)]
pub struct Settings {
pub required_imports: BTreeSet<String>,


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt::{Display, Formatter};
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub max_complexity: usize,
}


@@ -11,7 +11,7 @@ use ruff_macros::CacheKey;
use crate::display_settings;
#[derive(Debug, CacheKey)]
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub ignore_names: IgnoreNames,
pub classmethod_decorators: Vec<String>,
@@ -85,7 +85,7 @@ static DEFAULTS: &[&str] = &[
"maxDiff",
];
#[derive(Debug)]
#[derive(Debug, Clone)]
pub enum IgnoreNames {
Default,
UserProvided {


@@ -6,7 +6,7 @@ use std::fmt;
use crate::line_width::LineLength;
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub max_line_length: LineLength,
pub max_doc_length: Option<LineLength>,


@@ -59,26 +59,36 @@ pub(crate) fn capitalized(checker: &mut Checker, docstring: &Docstring) {
}
let body = docstring.body();
let Some(first_word) = body.split(' ').next() else {
return;
};
// Like pydocstyle, we only support ASCII for now.
for char in first_word.chars() {
if !char.is_ascii_alphabetic() && char != '\'' {
return;
}
}
let first_word = body.split_once(' ').map_or_else(
|| {
// If the docstring is a single word, trim the punctuation marks because
// it makes the ASCII test below fail.
body.trim_end_matches(['.', '!', '?'])
},
|(first_word, _)| first_word,
);
let mut first_word_chars = first_word.chars();
let Some(first_char) = first_word_chars.next() else {
return;
};
if !first_char.is_ascii() {
return;
}
let uppercase_first_char = first_char.to_ascii_uppercase();
if first_char == uppercase_first_char {
return;
}
// Like pydocstyle, we only support ASCII for now.
for char in first_word.chars().skip(1) {
if !char.is_ascii_alphabetic() && char != '\'' {
return;
}
}
let capitalized_word = uppercase_first_char.to_string() + first_word_chars.as_str();
let mut diagnostic = Diagnostic::new(


@@ -83,7 +83,7 @@ impl fmt::Display for Convention {
}
}
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub convention: Option<Convention>,
pub ignore_decorators: BTreeSet<String>,


@@ -19,4 +19,37 @@ D403.py:2:5: D403 [*] First word of the first line should be capitalized: `this`
4 4 | def good_function():
5 5 | """This docstring is capitalized."""
D403.py:30:5: D403 [*] First word of the first line should be capitalized: `singleword` -> `Singleword`
|
29 | def single_word():
30 | """singleword."""
| ^^^^^^^^^^^^^^^^^ D403
31 |
32 | def single_word_no_dot():
|
= help: Capitalize `singleword` to `Singleword`
Safe fix
27 27 | """th•s is not capitalized."""
28 28 |
29 29 | def single_word():
30 |- """singleword."""
30 |+ """Singleword."""
31 31 |
32 32 | def single_word_no_dot():
33 33 | """singleword"""
D403.py:33:5: D403 [*] First word of the first line should be capitalized: `singleword` -> `Singleword`
|
32 | def single_word_no_dot():
33 | """singleword"""
| ^^^^^^^^^^^^^^^^ D403
|
= help: Capitalize `singleword` to `Singleword`
Safe fix
30 30 | """singleword."""
31 31 |
32 32 | def single_word_no_dot():
33 |- """singleword"""
33 |+ """Singleword"""


@@ -162,6 +162,7 @@ mod tests {
#[test_case(Rule::UndefinedExport, Path::new("F822_0.pyi"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_1.py"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_2.py"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_3.py"))]
#[test_case(Rule::UndefinedLocal, Path::new("F823.py"))]
#[test_case(Rule::UnusedVariable, Path::new("F841_0.py"))]
#[test_case(Rule::UnusedVariable, Path::new("F841_1.py"))]


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt;
#[derive(Debug, Default, CacheKey)]
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub extend_generics: Vec<String>,
}


@@ -0,0 +1,11 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F822_3.py:12:5: F822 Undefined name `ExponentialFamily` in `__all__`
|
10 | __all__ += [
11 | "ContinuousBernoulli", # noqa: F822
12 | "ExponentialFamily",
| ^^^^^^^^^^^^^^^^^^^ F822
13 | ]
|


@@ -1,8 +1,9 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_trivia::Cursor;
use ruff_source_file::Locator;
use ruff_text_size::Ranged;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::noqa::Directive;
@@ -27,15 +28,56 @@ use crate::noqa::Directive;
/// from .base import * # noqa: F403
/// ```
///
/// ## Fix safety
/// This rule will attempt to fix blanket `noqa` annotations that appear to
/// be unintentional. For example, given `# noqa F401`, the rule will suggest
/// inserting a colon, as in `# noqa: F401`.
///
/// While modifying `noqa` comments is generally safe, doing so may introduce
/// additional diagnostics.
///
/// ## References
/// - [Ruff documentation](https://docs.astral.sh/ruff/configuration/#error-suppression)
#[violation]
pub struct BlanketNOQA;
pub struct BlanketNOQA {
missing_colon: bool,
space_before_colon: bool,
}
impl Violation for BlanketNOQA {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
format!("Use specific rule codes when using `noqa`")
let BlanketNOQA {
missing_colon,
space_before_colon,
} = self;
// This awkward branching is necessary to ensure that the generic message is picked up by
// `derive_message_formats`.
if !missing_colon && !space_before_colon {
format!("Use specific rule codes when using `noqa`")
} else if *missing_colon {
format!("Use a colon when specifying `noqa` rule codes")
} else {
format!("Do not add spaces between `noqa` and its colon")
}
}
fn fix_title(&self) -> Option<String> {
let BlanketNOQA {
missing_colon,
space_before_colon,
} = self;
if *missing_colon {
Some("Add missing colon".to_string())
} else if *space_before_colon {
Some("Remove space(s) before colon".to_string())
} else {
None
}
}
}
@@ -47,8 +89,54 @@ pub(crate) fn blanket_noqa(
) {
for range in indexer.comment_ranges() {
let line = locator.slice(*range);
if let Ok(Some(Directive::All(all))) = Directive::try_extract(line, range.start()) {
diagnostics.push(Diagnostic::new(BlanketNOQA, all.range()));
let offset = range.start();
if let Ok(Some(Directive::All(all))) = Directive::try_extract(line, TextSize::new(0)) {
// The `all` range is that of the `noqa` directive in, e.g., `# noqa` or `# noqa F401`.
let noqa_start = offset + all.start();
let noqa_end = offset + all.end();
// Skip the `# noqa`, plus any trailing whitespace.
let mut cursor = Cursor::new(&line[all.end().to_usize()..]);
cursor.eat_while(char::is_whitespace);
// Check for extraneous spaces before the colon.
// Ex) `# noqa : F401`
if cursor.first() == ':' {
let start = offset + all.end();
let end = start + cursor.token_len();
let mut diagnostic = Diagnostic::new(
BlanketNOQA {
missing_colon: false,
space_before_colon: true,
},
TextRange::new(noqa_start, end),
);
diagnostic.set_fix(Fix::unsafe_edit(Edit::deletion(start, end)));
diagnostics.push(diagnostic);
} else if Directive::lex_code(cursor.chars().as_str()).is_some() {
// Check for a missing colon.
// Ex) `# noqa F401`
let start = offset + all.end();
let end = start + TextSize::new(1);
let mut diagnostic = Diagnostic::new(
BlanketNOQA {
missing_colon: true,
space_before_colon: false,
},
TextRange::new(noqa_start, end),
);
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(':'.to_string(), start)));
diagnostics.push(diagnostic);
} else {
// Otherwise, it looks like an intentional blanket `noqa` annotation.
diagnostics.push(Diagnostic::new(
BlanketNOQA {
missing_colon: false,
space_before_colon: false,
},
TextRange::new(noqa_start, noqa_end),
));
}
}
}
}


@@ -29,4 +29,97 @@ PGH004_0.py:4:1: PGH004 Use specific rule codes when using `noqa`
6 | # noqa:F401,W203
|
PGH004_0.py:18:8: PGH004 [*] Use a colon when specifying `noqa` rule codes
|
17 | # PGH004
18 | x = 2 # noqa X100
| ^^^^^^^ PGH004
19 |
20 | # PGH004
|
= help: Add missing colon
Unsafe fix
15 15 | x = 2 # noqa:X100
16 16 |
17 17 | # PGH004
18 |-x = 2 # noqa X100
18 |+x = 2 # noqa: X100
19 19 |
20 20 | # PGH004
21 21 | x = 2 # noqa X100, X200
PGH004_0.py:21:8: PGH004 [*] Use a colon when specifying `noqa` rule codes
|
20 | # PGH004
21 | x = 2 # noqa X100, X200
| ^^^^^^^ PGH004
22 |
23 | # PGH004
|
= help: Add missing colon
Unsafe fix
18 18 | x = 2 # noqa X100
19 19 |
20 20 | # PGH004
21 |-x = 2 # noqa X100, X200
21 |+x = 2 # noqa: X100, X200
22 22 |
23 23 | # PGH004
24 24 | x = 2 # noqa : X300
PGH004_0.py:24:8: PGH004 [*] Do not add spaces between `noqa` and its colon
|
23 | # PGH004
24 | x = 2 # noqa : X300
| ^^^^^^^ PGH004
25 |
26 | # PGH004
|
= help: Remove space(s) before colon
Unsafe fix
21 21 | x = 2 # noqa X100, X200
22 22 |
23 23 | # PGH004
24 |-x = 2 # noqa : X300
24 |+x = 2 # noqa: X300
25 25 |
26 26 | # PGH004
27 27 | x = 2 # noqa : X400
PGH004_0.py:27:8: PGH004 [*] Do not add spaces between `noqa` and its colon
|
26 | # PGH004
27 | x = 2 # noqa : X400
| ^^^^^^^^ PGH004
28 |
29 | # PGH004
|
= help: Remove space(s) before colon
Unsafe fix
24 24 | x = 2 # noqa : X300
25 25 |
26 26 | # PGH004
27 |-x = 2 # noqa : X400
27 |+x = 2 # noqa: X400
28 28 |
29 29 | # PGH004
30 30 | x = 2 # noqa :X500
PGH004_0.py:30:8: PGH004 [*] Do not add spaces between `noqa` and its colon
|
29 | # PGH004
30 | x = 2 # noqa :X500
| ^^^^^^^ PGH004
|
= help: Remove space(s) before colon
Unsafe fix
27 27 | x = 2 # noqa : X400
28 28 |
29 29 | # PGH004
30 |-x = 2 # noqa :X500
30 |+x = 2 # noqa:X500


@@ -47,6 +47,7 @@ mod tests {
#[test_case(Rule::EqWithoutHash, Path::new("eq_without_hash.py"))]
#[test_case(Rule::EmptyComment, Path::new("empty_comment.py"))]
#[test_case(Rule::ManualFromImport, Path::new("import_aliasing.py"))]
#[test_case(Rule::IfStmtMinMax, Path::new("if_stmt_min_max.py"))]
#[test_case(Rule::SingleStringSlots, Path::new("single_string_slots.py"))]
#[test_case(Rule::SysExitAlias, Path::new("sys_exit_alias_0.py"))]
#[test_case(Rule::SysExitAlias, Path::new("sys_exit_alias_1.py"))]
@@ -190,6 +191,10 @@ mod tests {
Path::new("useless_exception_statement.py")
)]
#[test_case(Rule::NanComparison, Path::new("nan_comparison.py"))]
#[test_case(
Rule::BadStaticmethodArgument,
Path::new("bad_staticmethod_argument.py")
)]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(


@@ -0,0 +1,103 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::ParameterWithDefault;
use ruff_python_semantic::analyze::function_type;
use ruff_python_semantic::Scope;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for static methods that use `self` or `cls` as their first argument.
///
/// ## Why is this bad?
/// [PEP 8] recommends the use of `self` and `cls` as the first arguments for
/// instance methods and class methods, respectively. Naming the first argument
/// of a static method as `self` or `cls` can be misleading, as static methods
/// do not receive an instance or class reference as their first argument.
///
/// ## Example
/// ```python
/// class Wolf:
/// @staticmethod
/// def eat(self):
/// pass
/// ```
///
/// Use instead:
/// ```python
/// class Wolf:
/// @staticmethod
/// def eat(sheep):
/// pass
/// ```
///
/// [PEP 8]: https://peps.python.org/pep-0008/#function-and-method-arguments
#[violation]
pub struct BadStaticmethodArgument {
argument_name: String,
}
impl Violation for BadStaticmethodArgument {
#[derive_message_formats]
fn message(&self) -> String {
let Self { argument_name } = self;
format!("First argument of a static method should not be named `{argument_name}`")
}
}
/// PLW0211
pub(crate) fn bad_staticmethod_argument(
checker: &Checker,
scope: &Scope,
diagnostics: &mut Vec<Diagnostic>,
) {
let Some(func) = scope.kind.as_function() else {
return;
};
let ast::StmtFunctionDef {
name,
decorator_list,
parameters,
..
} = func;
let Some(parent) = &checker.semantic().first_non_type_parent_scope(scope) else {
return;
};
let type_ = function_type::classify(
name,
decorator_list,
parent,
checker.semantic(),
&checker.settings.pep8_naming.classmethod_decorators,
&checker.settings.pep8_naming.staticmethod_decorators,
);
if !matches!(type_, function_type::FunctionType::StaticMethod) {
return;
}
let Some(ParameterWithDefault {
parameter: self_or_cls,
..
}) = parameters
.posonlyargs
.first()
.or_else(|| parameters.args.first())
else {
return;
};
if matches!(self_or_cls.name.as_str(), "self" | "cls") {
let diagnostic = Diagnostic::new(
BadStaticmethodArgument {
argument_name: self_or_cls.name.to_string(),
},
self_or_cls.range(),
);
diagnostics.push(diagnostic);
}
}


@@ -0,0 +1,196 @@
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::{self as ast, CmpOp, Stmt};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
/// ## What it does
/// Checks for `if` statements that can be replaced with `min()` or `max()`
/// calls.
///
/// ## Why is this bad?
/// An `if` statement that selects the lesser or greater of two sub-expressions
/// can be replaced with a `min()` or `max()` call respectively. When possible,
/// prefer `min()` and `max()`, as they're more concise and readable than the
/// equivalent `if` statements.
///
/// ## Example
/// ```python
/// if score > highest_score:
/// highest_score = score
/// ```
///
/// Use instead:
/// ```python
/// highest_score = max(highest_score, score)
/// ```
///
/// ## References
/// - [Python documentation: max function](https://docs.python.org/3/library/functions.html#max)
/// - [Python documentation: min function](https://docs.python.org/3/library/functions.html#min)
#[violation]
pub struct IfStmtMinMax {
min_max: MinMax,
replacement: SourceCodeSnippet,
}
impl Violation for IfStmtMinMax {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let Self {
min_max,
replacement,
} = self;
if let Some(replacement) = replacement.full_display() {
format!("Replace `if` statement with `{replacement}`")
} else {
format!("Replace `if` statement with `{min_max}` call")
}
}
fn fix_title(&self) -> Option<String> {
let Self {
min_max,
replacement,
} = self;
if let Some(replacement) = replacement.full_display() {
Some(format!("Replace with `{replacement}`"))
} else {
Some(format!("Replace with `{min_max}` call"))
}
}
}
/// R1730, R1731
pub(crate) fn if_stmt_min_max(checker: &mut Checker, stmt_if: &ast::StmtIf) {
let ast::StmtIf {
test,
body,
elif_else_clauses,
range: _,
} = stmt_if;
if !elif_else_clauses.is_empty() {
return;
}
let [body @ Stmt::Assign(ast::StmtAssign {
targets: body_targets,
value: body_value,
..
})] = body.as_slice()
else {
return;
};
let [body_target] = body_targets.as_slice() else {
return;
};
let Some(ast::ExprCompare {
ops,
left,
comparators,
..
}) = test.as_compare_expr()
else {
return;
};
// Ignore, e.g., `foo < bar < baz`.
let [op] = &**ops else {
return;
};
// Determine whether to use `min()` or `max()`, and whether to flip the
// order of the arguments, which is relevant for breaking ties.
let (min_max, flip_args) = match op {
CmpOp::Gt => (MinMax::Max, true),
CmpOp::GtE => (MinMax::Max, false),
CmpOp::Lt => (MinMax::Min, true),
CmpOp::LtE => (MinMax::Min, false),
_ => return,
};
let [right] = &**comparators else {
return;
};
let left_cmp = ComparableExpr::from(left);
let body_target_cmp = ComparableExpr::from(body_target);
let right_statement_cmp = ComparableExpr::from(right);
let body_value_cmp = ComparableExpr::from(body_value);
if left_cmp != body_target_cmp || right_statement_cmp != body_value_cmp {
return;
}
let (arg1, arg2) = if flip_args {
(left.as_ref(), right)
} else {
(right, left.as_ref())
};
let replacement = format!(
"{} = {min_max}({}, {})",
checker.locator().slice(
parenthesized_range(
body_target.into(),
body.into(),
checker.indexer().comment_ranges(),
checker.locator().contents()
)
.unwrap_or(body_target.range())
),
checker.locator().slice(arg1),
checker.locator().slice(arg2),
);
let mut diagnostic = Diagnostic::new(
IfStmtMinMax {
min_max,
replacement: SourceCodeSnippet::from_str(replacement.as_str()),
},
stmt_if.range(),
);
if checker.semantic().is_builtin(min_max.as_str()) {
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
replacement,
stmt_if.range(),
)));
}
checker.diagnostics.push(diagnostic);
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum MinMax {
Min,
Max,
}
impl MinMax {
fn as_str(self) -> &'static str {
match self {
Self::Min => "min",
Self::Max => "max",
}
}
}
impl std::fmt::Display for MinMax {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "{}", self.as_str())
}
}
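The `flip_args` logic above hinges on a detail of Python's built-ins that is easy to miss: when the inputs compare equal, `min()` and `max()` return their *first* argument. A small demonstration, using `0.0` and `-0.0`, which compare equal but are distinguishable by their string form:

```python
# On equal inputs, min()/max() return the first argument, so the
# argument order picked via `flip_args` decides which value "wins"
# a tie -- mirroring whether the original `if` reassigned on equality.
assert str(max(0.0, -0.0)) == "0.0"   # first argument survives the tie
assert str(max(-0.0, 0.0)) == "-0.0"
assert str(min(0.0, -0.0)) == "0.0"
```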


@@ -3,6 +3,7 @@ pub(crate) use assert_on_string_literal::*;
pub(crate) use await_outside_async::*;
pub(crate) use bad_dunder_method_name::*;
pub(crate) use bad_open_mode::*;
pub(crate) use bad_staticmethod_argument::*;
pub(crate) use bad_str_strip_call::*;
pub(crate) use bad_string_format_character::BadStringFormatCharacter;
pub(crate) use bad_string_format_type::*;
@@ -20,6 +21,7 @@ pub(crate) use eq_without_hash::*;
pub(crate) use global_at_module_level::*;
pub(crate) use global_statement::*;
pub(crate) use global_variable_not_assigned::*;
pub(crate) use if_stmt_min_max::*;
pub(crate) use import_outside_top_level::*;
pub(crate) use import_private_name::*;
pub(crate) use import_self::*;
@@ -97,6 +99,7 @@ mod assert_on_string_literal;
mod await_outside_async;
mod bad_dunder_method_name;
mod bad_open_mode;
mod bad_staticmethod_argument;
mod bad_str_strip_call;
pub(crate) mod bad_string_format_character;
mod bad_string_format_type;
@@ -114,6 +117,7 @@ mod eq_without_hash;
mod global_at_module_level;
mod global_statement;
mod global_variable_not_assigned;
mod if_stmt_min_max;
mod import_outside_top_level;
mod import_private_name;
mod import_self;


@@ -48,7 +48,7 @@ impl fmt::Display for ConstantType {
}
}
-#[derive(Debug, CacheKey)]
+#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub allow_magic_value_types: Vec<ConstantType>,
pub allow_dunder_method_names: FxHashSet<String>,


@@ -0,0 +1,296 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
if_stmt_min_max.py:8:1: PLR1730 [*] Replace `if` statement with `value = min(value, 10)`
|
7 | # Positive
8 | / if value < 10: # [max-instead-of-if]
9 | | value = 10
| |______________^ PLR1730
10 |
11 | if value <= 10: # [max-instead-of-if]
|
= help: Replace with `value = min(value, 10)`
Safe fix
5 5 | value3 = 3
6 6 |
7 7 | # Positive
8 |-if value < 10: # [max-instead-of-if]
9 |- value = 10
8 |+value = min(value, 10)
10 9 |
11 10 | if value <= 10: # [max-instead-of-if]
12 11 | value = 10
if_stmt_min_max.py:11:1: PLR1730 [*] Replace `if` statement with `value = min(10, value)`
|
9 | value = 10
10 |
11 | / if value <= 10: # [max-instead-of-if]
12 | | value = 10
| |______________^ PLR1730
13 |
14 | if value < value2: # [max-instead-of-if]
|
= help: Replace with `value = min(10, value)`
Safe fix
8 8 | if value < 10: # [max-instead-of-if]
9 9 | value = 10
10 10 |
11 |-if value <= 10: # [max-instead-of-if]
12 |- value = 10
11 |+value = min(10, value)
13 12 |
14 13 | if value < value2: # [max-instead-of-if]
15 14 | value = value2
if_stmt_min_max.py:14:1: PLR1730 [*] Replace `if` statement with `value = min(value, value2)`
|
12 | value = 10
13 |
14 | / if value < value2: # [max-instead-of-if]
15 | | value = value2
| |__________________^ PLR1730
16 |
17 | if value > 10: # [min-instead-of-if]
|
= help: Replace with `value = min(value, value2)`
Safe fix
11 11 | if value <= 10: # [max-instead-of-if]
12 12 | value = 10
13 13 |
14 |-if value < value2: # [max-instead-of-if]
15 |- value = value2
14 |+value = min(value, value2)
16 15 |
17 16 | if value > 10: # [min-instead-of-if]
18 17 | value = 10
if_stmt_min_max.py:17:1: PLR1730 [*] Replace `if` statement with `value = max(value, 10)`
|
15 | value = value2
16 |
17 | / if value > 10: # [min-instead-of-if]
18 | | value = 10
| |______________^ PLR1730
19 |
20 | if value >= 10: # [min-instead-of-if]
|
= help: Replace with `value = max(value, 10)`
Safe fix
14 14 | if value < value2: # [max-instead-of-if]
15 15 | value = value2
16 16 |
17 |-if value > 10: # [min-instead-of-if]
18 |- value = 10
17 |+value = max(value, 10)
19 18 |
20 19 | if value >= 10: # [min-instead-of-if]
21 20 | value = 10
if_stmt_min_max.py:20:1: PLR1730 [*] Replace `if` statement with `value = max(10, value)`
|
18 | value = 10
19 |
20 | / if value >= 10: # [min-instead-of-if]
21 | | value = 10
| |______________^ PLR1730
22 |
23 | if value > value2: # [min-instead-of-if]
|
= help: Replace with `value = max(10, value)`
Safe fix
17 17 | if value > 10: # [min-instead-of-if]
18 18 | value = 10
19 19 |
20 |-if value >= 10: # [min-instead-of-if]
21 |- value = 10
20 |+value = max(10, value)
22 21 |
23 22 | if value > value2: # [min-instead-of-if]
24 23 | value = value2
if_stmt_min_max.py:23:1: PLR1730 [*] Replace `if` statement with `value = max(value, value2)`
|
21 | value = 10
22 |
23 | / if value > value2: # [min-instead-of-if]
24 | | value = value2
| |__________________^ PLR1730
|
= help: Replace with `value = max(value, value2)`
Safe fix
20 20 | if value >= 10: # [min-instead-of-if]
21 21 | value = 10
22 22 |
23 |-if value > value2: # [min-instead-of-if]
24 |- value = value2
23 |+value = max(value, value2)
25 24 |
26 25 |
27 26 | class A:
if_stmt_min_max.py:33:1: PLR1730 [*] Replace `if` statement with `A1.value = min(A1.value, 10)`
|
32 | A1 = A()
33 | / if A1.value < 10: # [max-instead-of-if]
34 | | A1.value = 10
| |_________________^ PLR1730
35 |
36 | if A1.value > 10: # [min-instead-of-if]
|
= help: Replace with `A1.value = min(A1.value, 10)`
Safe fix
30 30 |
31 31 |
32 32 | A1 = A()
33 |-if A1.value < 10: # [max-instead-of-if]
34 |- A1.value = 10
33 |+A1.value = min(A1.value, 10)
35 34 |
36 35 | if A1.value > 10: # [min-instead-of-if]
37 36 | A1.value = 10
if_stmt_min_max.py:36:1: PLR1730 [*] Replace `if` statement with `A1.value = max(A1.value, 10)`
|
34 | A1.value = 10
35 |
36 | / if A1.value > 10: # [min-instead-of-if]
37 | | A1.value = 10
| |_________________^ PLR1730
|
= help: Replace with `A1.value = max(A1.value, 10)`
Safe fix
33 33 | if A1.value < 10: # [max-instead-of-if]
34 34 | A1.value = 10
35 35 |
36 |-if A1.value > 10: # [min-instead-of-if]
37 |- A1.value = 10
36 |+A1.value = max(A1.value, 10)
38 37 |
39 38 |
40 39 | class AA:
if_stmt_min_max.py:60:1: PLR1730 [*] Replace `if` statement with `A2 = min(A2, A1)`
|
58 | A2 = AA(3)
59 |
60 | / if A2 < A1: # [max-instead-of-if]
61 | | A2 = A1
| |___________^ PLR1730
62 |
63 | if A2 <= A1: # [max-instead-of-if]
|
= help: Replace with `A2 = min(A2, A1)`
Safe fix
57 57 | A1 = AA(0)
58 58 | A2 = AA(3)
59 59 |
60 |-if A2 < A1: # [max-instead-of-if]
61 |- A2 = A1
60 |+A2 = min(A2, A1)
62 61 |
63 62 | if A2 <= A1: # [max-instead-of-if]
64 63 | A2 = A1
if_stmt_min_max.py:63:1: PLR1730 [*] Replace `if` statement with `A2 = min(A1, A2)`
|
61 | A2 = A1
62 |
63 | / if A2 <= A1: # [max-instead-of-if]
64 | | A2 = A1
| |___________^ PLR1730
65 |
66 | if A2 > A1: # [min-instead-of-if]
|
= help: Replace with `A2 = min(A1, A2)`
Safe fix
60 60 | if A2 < A1: # [max-instead-of-if]
61 61 | A2 = A1
62 62 |
63 |-if A2 <= A1: # [max-instead-of-if]
64 |- A2 = A1
63 |+A2 = min(A1, A2)
65 64 |
66 65 | if A2 > A1: # [min-instead-of-if]
67 66 | A2 = A1
if_stmt_min_max.py:66:1: PLR1730 [*] Replace `if` statement with `A2 = max(A2, A1)`
|
64 | A2 = A1
65 |
66 | / if A2 > A1: # [min-instead-of-if]
67 | | A2 = A1
| |___________^ PLR1730
68 |
69 | if A2 >= A1: # [min-instead-of-if]
|
= help: Replace with `A2 = max(A2, A1)`
Safe fix
63 63 | if A2 <= A1: # [max-instead-of-if]
64 64 | A2 = A1
65 65 |
66 |-if A2 > A1: # [min-instead-of-if]
67 |- A2 = A1
66 |+A2 = max(A2, A1)
68 67 |
69 68 | if A2 >= A1: # [min-instead-of-if]
70 69 | A2 = A1
if_stmt_min_max.py:69:1: PLR1730 [*] Replace `if` statement with `A2 = max(A1, A2)`
|
67 | A2 = A1
68 |
69 | / if A2 >= A1: # [min-instead-of-if]
70 | | A2 = A1
| |___________^ PLR1730
71 |
72 | # Negative
|
= help: Replace with `A2 = max(A1, A2)`
Safe fix
66 66 | if A2 > A1: # [min-instead-of-if]
67 67 | A2 = A1
68 68 |
69 |-if A2 >= A1: # [min-instead-of-if]
70 |- A2 = A1
69 |+A2 = max(A1, A2)
71 70 |
72 71 | # Negative
73 72 | if value < 10:
if_stmt_min_max.py:132:1: PLR1730 [*] Replace `if` statement with `max` call
|
131 | # Parenthesized expressions
132 | / if value.attr > 3:
133 | | (
134 | | value.
135 | | attr
136 | | ) = 3
| |_________^ PLR1730
|
= help: Replace with `max` call
Safe fix
129 129 | value = 2
130 130 |
131 131 | # Parenthesized expressions
132 |-if value.attr > 3:
133 |- (
132 |+(
134 133 | value.
135 134 | attr
136 |- ) = 3
135 |+ ) = max(value.attr, 3)


@@ -0,0 +1,28 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
bad_staticmethod_argument.py:3:13: PLW0211 First argument of a static method should not be named `self`
|
1 | class Wolf:
2 | @staticmethod
3 | def eat(self): # [bad-staticmethod-argument]
| ^^^^ PLW0211
4 | pass
|
bad_staticmethod_argument.py:15:13: PLW0211 First argument of a static method should not be named `cls`
|
13 | class Sheep:
14 | @staticmethod
15 | def eat(cls, x, y, z): # [bad-staticmethod-argument]
| ^^^ PLW0211
16 | pass
|
bad_staticmethod_argument.py:19:15: PLW0211 First argument of a static method should not be named `self`
|
18 | @staticmethod
19 | def sleep(self, x, y, z): # [bad-staticmethod-argument]
| ^^^^ PLW0211
20 | pass
|


@@ -61,6 +61,7 @@ mod tests {
#[test_case(Rule::ReplaceUniversalNewlines, Path::new("UP021.py"))]
#[test_case(Rule::SuperCallWithParameters, Path::new("UP008.py"))]
#[test_case(Rule::TimeoutErrorAlias, Path::new("UP041.py"))]
#[test_case(Rule::ReplaceStrEnum, Path::new("UP042.py"))]
#[test_case(Rule::TypeOfPrimitive, Path::new("UP003.py"))]
#[test_case(Rule::TypingTextStrAlias, Path::new("UP019.py"))]
#[test_case(Rule::UTF8EncodingDeclaration, Path::new("UP009_0.py"))]


@@ -18,6 +18,7 @@ pub(crate) use printf_string_formatting::*;
pub(crate) use quoted_annotation::*;
pub(crate) use redundant_open_modes::*;
pub(crate) use replace_stdout_stderr::*;
pub(crate) use replace_str_enum::*;
pub(crate) use replace_universal_newlines::*;
pub(crate) use super_call_with_parameters::*;
pub(crate) use timeout_error_alias::*;
@@ -58,6 +59,7 @@ mod printf_string_formatting;
mod quoted_annotation;
mod redundant_open_modes;
mod replace_stdout_stderr;
mod replace_str_enum;
mod replace_universal_newlines;
mod super_call_with_parameters;
mod timeout_error_alias;


@@ -0,0 +1,160 @@
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::identifier::Identifier;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::importer::ImportRequest;
/// ## What it does
/// Checks for classes that inherit from both `str` and `enum.Enum`.
///
/// ## Why is this bad?
/// Python 3.11 introduced `enum.StrEnum`, which is preferred over inheriting
/// from both `str` and `enum.Enum`.
///
/// ## Example
///
/// ```python
/// import enum
///
///
/// class Foo(str, enum.Enum):
/// ...
/// ```
///
/// Use instead:
///
/// ```python
/// import enum
///
///
/// class Foo(enum.StrEnum):
/// ...
/// ```
///
/// ## Fix safety
///
/// Python 3.11 introduced a [breaking change] for enums that inherit from both
/// `str` and `enum.Enum`. Consider the following enum:
///
/// ```python
/// from enum import Enum
///
///
/// class Foo(str, Enum):
/// BAR = "bar"
/// ```
///
/// In Python 3.11, the formatted representation of `Foo.BAR` changed as
/// follows:
///
/// ```python
/// # Python 3.10
/// f"{Foo.BAR}" # > bar
/// # Python 3.11
/// f"{Foo.BAR}" # > Foo.BAR
/// ```
///
/// Migrating from `str` and `enum.Enum` to `enum.StrEnum` will restore the
/// previous behavior, such that:
///
/// ```python
/// from enum import StrEnum
///
///
/// class Foo(StrEnum):
/// BAR = "bar"
///
///
/// f"{Foo.BAR}" # > bar
/// ```
///
/// As such, migrating to `enum.StrEnum` will introduce a behavior change for
/// code that relies on the Python 3.11 behavior.
///
/// ## References
/// - [enum.StrEnum](https://docs.python.org/3/library/enum.html#enum.StrEnum)
///
/// [breaking change]: https://blog.pecar.me/python-enum
#[violation]
pub struct ReplaceStrEnum {
name: String,
}
impl Violation for ReplaceStrEnum {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let ReplaceStrEnum { name } = self;
format!("Class {name} inherits from both `str` and `enum.Enum`")
}
fn fix_title(&self) -> Option<String> {
Some("Inherit from `enum.StrEnum`".to_string())
}
}
/// UP042
pub(crate) fn replace_str_enum(checker: &mut Checker, class_def: &ast::StmtClassDef) {
let Some(arguments) = class_def.arguments.as_deref() else {
// class does not inherit anything, exit early
return;
};
// Determine whether the class inherits from both `str` and `enum.Enum`.
let mut inherits_str = false;
let mut inherits_enum = false;
for base in arguments.args.iter() {
if let Some(qualified_name) = checker.semantic().resolve_qualified_name(base) {
match qualified_name.segments() {
["", "str"] => inherits_str = true,
["enum", "Enum"] => inherits_enum = true,
_ => {}
}
}
// Short-circuit if both `str` and `enum.Enum` are found.
if inherits_str && inherits_enum {
break;
}
}
// If the class does not inherit both `str` and `enum.Enum`, exit early.
if !inherits_str || !inherits_enum {
return;
};
let mut diagnostic = Diagnostic::new(
ReplaceStrEnum {
name: class_def.name.to_string(),
},
class_def.identifier(),
);
// If the base classes are _exactly_ `str` and `enum.Enum`, apply a fix.
// TODO(charlie): As an alternative, we could remove both arguments, and replace one of the two
// with `StrEnum`. However, `remove_argument` can't be applied multiple times within a single
// fix; doing so leads to a syntax error.
if arguments.len() == 2 {
diagnostic.try_set_fix(|| {
let (import_edit, binding) = checker.importer().get_or_import_symbol(
&ImportRequest::import("enum", "StrEnum"),
class_def.start(),
checker.semantic(),
)?;
Ok(Fix::unsafe_edits(
import_edit,
[Edit::range_replacement(
format!("({binding})"),
arguments.range(),
)],
))
});
}
checker.diagnostics.push(diagnostic);
}
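For context, a minimal sketch of the pattern UP042 targets: the `str` mix-in is what makes enum members usable as plain strings, which `enum.StrEnum` provides directly on Python 3.11+ (`Mood` is a hypothetical example class):

```python
import enum


class Mood(str, enum.Enum):  # flagged by UP042
    HAPPY = "happy"


# The str mix-in lets members behave like ordinary strings:
assert Mood.HAPPY == "happy"
assert isinstance(Mood.HAPPY, str)
assert Mood.HAPPY.upper() == "HAPPY"
```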


@@ -4,7 +4,7 @@ use crate::display_settings;
use ruff_macros::CacheKey;
use std::fmt;
-#[derive(Debug, Default, CacheKey)]
+#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub keep_runtime_typing: bool,
}


@@ -0,0 +1,55 @@
---
source: crates/ruff_linter/src/rules/pyupgrade/mod.rs
---
UP042.py:4:7: UP042 [*] Class A inherits from both `str` and `enum.Enum`
|
4 | class A(str, Enum): ...
| ^ UP042
|
= help: Inherit from `enum.StrEnum`
Unsafe fix
1 |-from enum import Enum
1 |+from enum import Enum, StrEnum
2 2 |
3 3 |
4 |-class A(str, Enum): ...
4 |+class A(StrEnum): ...
5 5 |
6 6 |
7 7 | class B(Enum, str): ...
UP042.py:7:7: UP042 [*] Class B inherits from both `str` and `enum.Enum`
|
7 | class B(Enum, str): ...
| ^ UP042
|
= help: Inherit from `enum.StrEnum`
Unsafe fix
1 |-from enum import Enum
1 |+from enum import Enum, StrEnum
2 2 |
3 3 |
4 4 | class A(str, Enum): ...
5 5 |
6 6 |
7 |-class B(Enum, str): ...
7 |+class B(StrEnum): ...
8 8 |
9 9 |
10 10 | class D(int, str, Enum): ...
UP042.py:10:7: UP042 Class D inherits from both `str` and `enum.Enum`
|
10 | class D(int, str, Enum): ...
| ^ UP042
|
= help: Inherit from `enum.StrEnum`
UP042.py:13:7: UP042 Class E inherits from both `str` and `enum.Enum`
|
13 | class E(str, int, Enum): ...
| ^ UP042
|
= help: Inherit from `enum.StrEnum`


@@ -1,6 +1,7 @@
-use ruff_python_ast as ast;
+use ruff_python_ast::{self as ast, Expr};
 use ruff_python_codegen::Generator;
-use ruff_text_size::TextRange;
+use ruff_python_semantic::{BindingId, ResolvedReference, SemanticModel};
+use ruff_text_size::{Ranged, TextRange};
/// Format a code snippet to call `name.method()`.
pub(super) fn generate_method_call(name: &str, method: &str, generator: Generator) -> String {
@@ -61,3 +62,217 @@ pub(super) fn generate_none_identity_comparison(
};
generator.expr(&compare.into())
}
// Helpers for read-whole-file and write-whole-file
#[derive(Debug, Copy, Clone)]
pub(super) enum OpenMode {
/// "r"
ReadText,
/// "rb"
ReadBytes,
/// "w"
WriteText,
/// "wb"
WriteBytes,
}
impl OpenMode {
pub(super) fn pathlib_method(self) -> String {
match self {
OpenMode::ReadText => "read_text".to_string(),
OpenMode::ReadBytes => "read_bytes".to_string(),
OpenMode::WriteText => "write_text".to_string(),
OpenMode::WriteBytes => "write_bytes".to_string(),
}
}
}
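The mode-to-method mapping above corresponds directly to `pathlib.Path`. Note that, unlike `read_text`/`write_text`, the bytes variants accept no keyword arguments, which is why the rule later bails out when keywords are present in binary mode. A quick sketch of the four methods (using a temporary file):

```python
from pathlib import Path
import tempfile

path = Path(tempfile.mkdtemp()) / "data"

path.write_text("hi", encoding="utf-8")            # replaces open(p, "w")
assert path.read_text(encoding="utf-8") == "hi"    # replaces open(p, "r")

path.write_bytes(b"\x00\x01")                      # open(p, "wb"); no kwargs
assert path.read_bytes() == b"\x00\x01"            # open(p, "rb"); no kwargs
```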
/// A grab bag struct that joins together every piece of information we need to track
/// about a file open operation.
#[derive(Debug)]
pub(super) struct FileOpen<'a> {
/// The `with` item where the open happens; we use it for the reporting range.
pub(super) item: &'a ast::WithItem,
/// The filename expression used as the first argument of `open`; we use it in the diagnostic message.
pub(super) filename: &'a Expr,
/// The file open mode.
pub(super) mode: OpenMode,
/// The file open keywords.
pub(super) keywords: Vec<&'a ast::Keyword>,
/// We only check `open` operations whose file handles are used exactly once.
pub(super) reference: &'a ResolvedReference,
}
impl<'a> FileOpen<'a> {
/// Determine whether an expression is a reference to the file handle, by comparing
/// their ranges. If two expressions have the same range, they must be the same expression.
pub(super) fn is_ref(&self, expr: &Expr) -> bool {
expr.range() == self.reference.range()
}
}
/// Find and return all `open` operations in the given `with` statement.
pub(super) fn find_file_opens<'a>(
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
read_mode: bool,
) -> Vec<FileOpen<'a>> {
with.items
.iter()
.filter_map(|item| find_file_open(item, with, semantic, read_mode))
.collect()
}
/// Find `open` operation in the given `with` item.
fn find_file_open<'a>(
item: &'a ast::WithItem,
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
read_mode: bool,
) -> Option<FileOpen<'a>> {
// We want to match `open(...) as var`.
let ast::ExprCall {
func,
arguments: ast::Arguments { args, keywords, .. },
..
} = item.context_expr.as_call_expr()?;
if func.as_name_expr()?.id != "open" {
return None;
}
let var = item.optional_vars.as_deref()?.as_name_expr()?;
// Ignore calls with `*args` and `**kwargs`. In the exact case of `open(*filename, mode="w")`,
// it could be a match; but in all other cases, the call _could_ contain unsupported keyword
// arguments, like `buffering`.
if args.iter().any(Expr::is_starred_expr)
|| keywords.iter().any(|keyword| keyword.arg.is_none())
{
return None;
}
// Match positional arguments, get filename and mode.
let (filename, pos_mode) = match_open_args(args)?;
// Match keyword arguments, get keyword arguments to forward and possibly mode.
let (keywords, kw_mode) = match_open_keywords(keywords, read_mode)?;
let mode = kw_mode.unwrap_or(pos_mode);
match mode {
OpenMode::ReadText | OpenMode::ReadBytes => {
if !read_mode {
return None;
}
}
OpenMode::WriteText | OpenMode::WriteBytes => {
if read_mode {
return None;
}
}
}
// Path.read_bytes and Path.write_bytes do not support any kwargs.
if matches!(mode, OpenMode::ReadBytes | OpenMode::WriteBytes) && !keywords.is_empty() {
return None;
}
// Now we need to find what this variable is bound to...
let scope = semantic.current_scope();
let bindings: Vec<BindingId> = scope.get_all(var.id.as_str()).collect();
let binding = bindings
.iter()
.map(|x| semantic.binding(*x))
// We might have many bindings with the same name, but we only care
// about the one we are looking at right now.
.find(|binding| binding.range() == var.range())?;
// Since many references can share the same binding, we can limit our attention span
// exclusively to the body of the current `with` statement.
let references: Vec<&ResolvedReference> = binding
.references
.iter()
.map(|id| semantic.reference(*id))
.filter(|reference| with.range().contains_range(reference.range()))
.collect();
// Even with all these restrictions, if the file handle is used more than
// once (or not at all), it doesn't fit the bill.
let [reference] = references.as_slice() else {
return None;
};
Some(FileOpen {
item,
filename,
mode,
keywords,
reference,
})
}
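The reference-counting logic above means only `with` bodies that use the handle exactly once are candidates. A sketch of what is and isn't matched (hypothetical temporary file):

```python
from pathlib import Path
import tempfile

path = Path(tempfile.mkdtemp()) / "f.txt"
path.write_text("hello world")

# Candidate: the handle `f` is referenced exactly once in the body.
with open(path) as f:
    content = f.read()
assert content == "hello world"

# Not a candidate: `f` is referenced twice, so no single pathlib
# call can replace the whole statement.
with open(path) as f:
    head = f.read(5)
    rest = f.read()
assert head == "hello" and rest == " world"
```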
/// Match positional arguments. Return expression for the file name and open mode.
fn match_open_args(args: &[Expr]) -> Option<(&Expr, OpenMode)> {
match args {
[filename] => Some((filename, OpenMode::ReadText)),
[filename, mode_literal] => match_open_mode(mode_literal).map(|mode| (filename, mode)),
// The third positional argument is `buffering` and the pathlib methods don't support it.
_ => None,
}
}
/// Match keyword arguments. Return keyword arguments to forward and mode.
fn match_open_keywords(
keywords: &[ast::Keyword],
read_mode: bool,
) -> Option<(Vec<&ast::Keyword>, Option<OpenMode>)> {
let mut result: Vec<&ast::Keyword> = vec![];
let mut mode: Option<OpenMode> = None;
for keyword in keywords {
match keyword.arg.as_ref()?.as_str() {
"encoding" | "errors" => result.push(keyword),
// newline is only valid for write_text
"newline" => {
if read_mode {
return None;
}
result.push(keyword);
}
// This might look odd: why do we re-wrap this optional?
//
// The answer is quite simple: in the result of the current function,
// a `mode` of `None` is a possible and correct value, meaning that
// there was NO "mode" keyword argument.
//
// The result of `match_open_mode`, on the other hand, is `None` when
// the mode is not supported by the pathlib methods.
//
// So here we return `None` from this whole function if the mode is
// unsupported.
"mode" => mode = Some(match_open_mode(&keyword.value)?),
// All other keywords cannot be directly forwarded.
_ => return None,
};
}
Some((result, mode))
}
/// Match open mode to see if it is supported.
fn match_open_mode(mode: &Expr) -> Option<OpenMode> {
let ast::ExprStringLiteral { value, .. } = mode.as_string_literal_expr()?;
if value.is_implicit_concatenated() {
return None;
}
match value.to_str() {
"r" => Some(OpenMode::ReadText),
"rb" => Some(OpenMode::ReadBytes),
"w" => Some(OpenMode::WriteText),
"wb" => Some(OpenMode::WriteBytes),
_ => None,
}
}
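Putting the keyword matching together: only `mode`, `encoding`, `errors` (and, for writes, `newline`) are accepted, so the supported keywords can be forwarded verbatim and the rewrite preserves behavior. For example:

```python
from pathlib import Path
import tempfile

path = Path(tempfile.mkdtemp()) / "out.txt"

# Before (the shape FURB103 flags):
with open(path, "w", encoding="utf-8") as f:
    f.write("g\u00fcnther\n")

# After -- the `encoding` keyword is forwarded as-is:
path.write_text("g\u00fcnther\n", encoding="utf-8")

assert path.read_text(encoding="utf-8") == "g\u00fcnther\n"
```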


@@ -16,6 +16,7 @@ mod tests {
#[test_case(Rule::ReadWholeFile, Path::new("FURB101.py"))]
#[test_case(Rule::RepeatedAppend, Path::new("FURB113.py"))]
#[test_case(Rule::IfExpInsteadOfOrOperator, Path::new("FURB110.py"))]
#[test_case(Rule::ReimplementedOperator, Path::new("FURB118.py"))]
#[test_case(Rule::ReadlinesInFor, Path::new("FURB129.py"))]
#[test_case(Rule::DeleteFullSlice, Path::new("FURB131.py"))]
@@ -40,6 +41,7 @@ mod tests {
#[test_case(Rule::MetaClassABCMeta, Path::new("FURB180.py"))]
#[test_case(Rule::HashlibDigestHex, Path::new("FURB181.py"))]
#[test_case(Rule::ListReverseCopy, Path::new("FURB187.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(


@@ -0,0 +1,101 @@
use ruff_diagnostics::{Applicability, Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::helpers::contains_effect;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for ternary `if` expressions that can be replaced with the `or`
/// operator.
///
/// ## Why is this bad?
/// Ternary `if` expressions are more verbose than `or` expressions while
/// providing the same functionality.
///
/// ## Example
/// ```python
/// z = x if x else y
/// ```
///
/// Use instead:
/// ```python
/// z = x or y
/// ```
///
/// ## Fix safety
/// This rule's fix is marked as unsafe in the event that the body of the
/// `if` expression contains side effects.
///
/// For example, `foo` will be called twice in `foo() if foo() else bar()`
/// (assuming `foo()` returns a truthy value), but only once in
/// `foo() or bar()`.
#[violation]
pub struct IfExpInsteadOfOrOperator;
impl Violation for IfExpInsteadOfOrOperator {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
format!("Replace ternary `if` expression with `or` operator")
}
fn fix_title(&self) -> Option<String> {
Some(format!("Replace with `or` operator"))
}
}
/// FURB110
pub(crate) fn if_exp_instead_of_or_operator(checker: &mut Checker, if_expr: &ast::ExprIf) {
let ast::ExprIf {
test,
body,
orelse,
range,
} = if_expr;
if ComparableExpr::from(test) != ComparableExpr::from(body) {
return;
}
let mut diagnostic = Diagnostic::new(IfExpInsteadOfOrOperator, *range);
// Grab the range of the `test` and `orelse` expressions.
let left = parenthesized_range(
test.into(),
if_expr.into(),
checker.indexer().comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(test.range());
let right = parenthesized_range(
orelse.into(),
if_expr.into(),
checker.indexer().comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(orelse.range());
// Replace with `{test} or {orelse}`.
diagnostic.set_fix(Fix::applicable_edit(
Edit::range_replacement(
format!(
"{} or {}",
checker.locator().slice(left),
checker.locator().slice(right),
),
if_expr.range(),
),
if contains_effect(body, |id| checker.semantic().is_builtin(id)) {
Applicability::Unsafe
} else {
Applicability::Safe
},
));
checker.diagnostics.push(diagnostic);
}
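The safety condition above can be seen directly: when the repeated expression has side effects, the ternary evaluates it twice but `or` evaluates it once (`probe` is a hypothetical helper that records its calls):

```python
calls = []


def probe():
    calls.append(1)
    return "value"


# Ternary form: `probe()` runs twice when the test is truthy...
result = probe() if probe() else "fallback"
assert result == "value" and len(calls) == 2

# ...while the `or` form runs it once -- hence the fix is only safe
# when the repeated expression has no side effects.
calls.clear()
result = probe() or "fallback"
assert result == "value" and len(calls) == 1
```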


@@ -3,6 +3,7 @@ pub(crate) use check_and_remove_from_set::*;
pub(crate) use delete_full_slice::*;
pub(crate) use for_loop_set_mutations::*;
pub(crate) use hashlib_digest_hex::*;
pub(crate) use if_exp_instead_of_or_operator::*;
pub(crate) use if_expr_min_max::*;
pub(crate) use implicit_cwd::*;
pub(crate) use int_on_sliced_str::*;
@@ -24,12 +25,14 @@ pub(crate) use type_none_comparison::*;
pub(crate) use unnecessary_enumerate::*;
pub(crate) use unnecessary_from_float::*;
pub(crate) use verbose_decimal_constructor::*;
pub(crate) use write_whole_file::*;
mod bit_count;
mod check_and_remove_from_set;
mod delete_full_slice;
mod for_loop_set_mutations;
mod hashlib_digest_hex;
mod if_exp_instead_of_or_operator;
mod if_expr_min_max;
mod implicit_cwd;
mod int_on_sliced_str;
@@ -51,3 +54,4 @@ mod type_none_comparison;
mod unnecessary_enumerate;
mod unnecessary_from_float;
mod verbose_decimal_constructor;
mod write_whole_file;


@@ -3,12 +3,13 @@ use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::visitor::{self, Visitor};
use ruff_python_ast::{self as ast, Expr};
use ruff_python_codegen::Generator;
use ruff_python_semantic::{BindingId, ResolvedReference, SemanticModel};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
use super::super::helpers::{find_file_opens, FileOpen};
/// ## What it does
/// Checks for uses of `open` and `read` that can be replaced by `pathlib`
/// methods, like `Path.read_text` and `Path.read_bytes`.
@@ -57,7 +58,7 @@ pub(crate) fn read_whole_file(checker: &mut Checker, with: &ast::StmtWith) {
}
// First we go through all the items in the statement and find all `open` operations.
-    let candidates = find_file_opens(with, checker.semantic());
+    let candidates = find_file_opens(with, checker.semantic(), true);
if candidates.is_empty() {
return;
}
@@ -85,176 +86,6 @@ pub(crate) fn read_whole_file(checker: &mut Checker, with: &ast::StmtWith) {
checker.diagnostics.extend(diagnostics);
}
#[derive(Debug)]
enum ReadMode {
/// "r" -> `read_text`
Text,
/// "rb" -> `read_bytes`
Bytes,
}
/// A grab bag struct that joins together every piece of information we need to track
/// about a file open operation.
#[derive(Debug)]
struct FileOpen<'a> {
/// With item where the open happens, we use it for the reporting range.
item: &'a ast::WithItem,
/// Filename expression used as the first argument in `open`, we use it in the diagnostic message.
filename: &'a Expr,
/// The type of read to choose `read_text` or `read_bytes`.
mode: ReadMode,
/// Keywords that can be used in the new read call.
keywords: Vec<&'a ast::Keyword>,
/// We only check `open` operations whose file handles are used exactly once.
reference: &'a ResolvedReference,
}
impl<'a> FileOpen<'a> {
/// Determine whether an expression is a reference to the file handle, by comparing
/// their ranges. If two expressions have the same range, they must be the same expression.
fn is_ref(&self, expr: &Expr) -> bool {
expr.range() == self.reference.range()
}
}
/// Find and return all `open` operations in the given `with` statement.
fn find_file_opens<'a>(
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
) -> Vec<FileOpen<'a>> {
with.items
.iter()
.filter_map(|item| find_file_open(item, with, semantic))
.collect()
}
/// Find `open` operation in the given `with` item.
fn find_file_open<'a>(
item: &'a ast::WithItem,
with: &'a ast::StmtWith,
semantic: &'a SemanticModel<'a>,
) -> Option<FileOpen<'a>> {
// We want to match `open(...) as var`.
let ast::ExprCall {
func,
arguments: ast::Arguments { args, keywords, .. },
..
} = item.context_expr.as_call_expr()?;
if func.as_name_expr()?.id != "open" {
return None;
}
let var = item.optional_vars.as_deref()?.as_name_expr()?;
// Ignore calls with `*args` and `**kwargs`. In the exact case of `open(*filename, mode="r")`,
// it could be a match; but in all other cases, the call _could_ contain unsupported keyword
// arguments, like `buffering`.
if args.iter().any(Expr::is_starred_expr)
|| keywords.iter().any(|keyword| keyword.arg.is_none())
{
return None;
}
// Match positional arguments, get filename and read mode.
let (filename, pos_mode) = match_open_args(args)?;
// Match keyword arguments, get keyword arguments to forward and possibly read mode.
let (keywords, kw_mode) = match_open_keywords(keywords)?;
// `pos_mode` may have been assigned the default value corresponding to "r",
// but an explicit keyword mode should override it.
let mode = kw_mode.unwrap_or(pos_mode);
// Now we need to find what this variable is bound to...
let scope = semantic.current_scope();
let bindings: Vec<BindingId> = scope.get_all(var.id.as_str()).collect();
let binding = bindings
.iter()
.map(|x| semantic.binding(*x))
// We might have many bindings with the same name, but we only care
// about the one we are looking at right now.
.find(|binding| binding.range() == var.range())?;
// Since many references can share the same binding, we can limit our attention span
// exclusively to the body of the current `with` statement.
let references: Vec<&ResolvedReference> = binding
.references
.iter()
.map(|id| semantic.reference(*id))
.filter(|reference| with.range().contains_range(reference.range()))
.collect();
// And even with all these restrictions, if the file handle is used anything
// other than exactly once, it doesn't fit the bill.
let [reference] = references.as_slice() else {
return None;
};
Some(FileOpen {
item,
filename,
mode,
keywords,
reference,
})
}
/// Match positional arguments. Return expression for the file name and read mode.
fn match_open_args(args: &[Expr]) -> Option<(&Expr, ReadMode)> {
match args {
[filename] => Some((filename, ReadMode::Text)),
[filename, mode_literal] => match_open_mode(mode_literal).map(|mode| (filename, mode)),
// The third positional argument is `buffering`, and `read_text` doesn't support it.
_ => None,
}
}
/// Match keyword arguments. Return keyword arguments to forward and read mode.
fn match_open_keywords(
keywords: &[ast::Keyword],
) -> Option<(Vec<&ast::Keyword>, Option<ReadMode>)> {
let mut result: Vec<&ast::Keyword> = vec![];
let mut mode: Option<ReadMode> = None;
for keyword in keywords {
match keyword.arg.as_ref()?.as_str() {
"encoding" | "errors" => result.push(keyword),
// This might look bizarre: why do we re-wrap this optional?
//
// The answer is simple: in the result of the current function,
// `mode` being `None` is a valid outcome, meaning that there
// was no "mode" keyword argument.
//
// The result of `match_open_mode`, on the other hand, is `None`
// when the mode is incompatible with `read_text`/`read_bytes`.
//
// So here we return `None` from the whole function if the mode
// is incompatible.
"mode" => mode = Some(match_open_mode(&keyword.value)?),
// All other keywords cannot be directly forwarded.
_ => return None,
};
}
Some((result, mode))
}
/// Match open mode to see if it is supported.
fn match_open_mode(mode: &Expr) -> Option<ReadMode> {
let ast::ExprStringLiteral { value, .. } = mode.as_string_literal_expr()?;
if value.is_implicit_concatenated() {
return None;
}
match value.to_str() {
"r" => Some(ReadMode::Text),
"rb" => Some(ReadMode::Bytes),
_ => None,
}
}
/// AST visitor that matches `open` operations with the corresponding `read` calls.
#[derive(Debug)]
struct ReadMatcher<'a> {
@@ -305,17 +136,12 @@ fn match_read_call(expr: &Expr) -> Option<&Expr> {
return None;
}
Some(attr.value.as_ref())
Some(&*attr.value)
}
/// Construct the replacement suggestion call.
fn make_suggestion(open: &FileOpen<'_>, generator: Generator) -> SourceCodeSnippet {
let method_name = match open.mode {
ReadMode::Text => "read_text",
ReadMode::Bytes => "read_bytes",
};
let name = ast::ExprName {
id: method_name.to_string(),
id: open.mode.pathlib_method(),
ctx: ast::ExprContext::Load,
range: TextRange::default(),
};

View File

@@ -1,9 +1,13 @@
use std::borrow::Cow;
use std::fmt::{Debug, Display, Formatter};
use anyhow::Result;
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_ast::{self as ast, Expr, ExprSlice, ExprSubscript, ExprTuple, Parameters, Stmt};
use ruff_python_semantic::SemanticModel;
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
@@ -39,7 +43,7 @@ use crate::importer::{ImportRequest, Importer};
/// ## References
#[violation]
pub struct ReimplementedOperator {
operator: &'static str,
operator: Operator,
target: FunctionLikeKind,
}
@@ -71,9 +75,10 @@ pub(crate) fn reimplemented_operator(checker: &mut Checker, target: &FunctionLik
return;
};
let Some(body) = target.body() else { return };
let Some(operator) = get_operator(body, params) else {
let Some(operator) = get_operator(body, params, checker.locator()) else {
return;
};
let fix = target.try_fix(&operator, checker.importer(), checker.semantic());
let mut diagnostic = Diagnostic::new(
ReimplementedOperator {
operator,
@@ -81,8 +86,7 @@ pub(crate) fn reimplemented_operator(checker: &mut Checker, target: &FunctionLik
},
target.range(),
);
diagnostic
.try_set_optional_fix(|| target.try_fix(operator, checker.importer(), checker.semantic()));
diagnostic.try_set_optional_fix(|| fix);
checker.diagnostics.push(diagnostic);
}
@@ -115,8 +119,8 @@ impl Ranged for FunctionLike<'_> {
}
impl FunctionLike<'_> {
/// Return the [`ast::Parameters`] of the function-like node.
fn parameters(&self) -> Option<&ast::Parameters> {
/// Return the [`Parameters`] of the function-like node.
fn parameters(&self) -> Option<&Parameters> {
match self {
Self::Lambda(expr) => expr.parameters.as_deref(),
Self::Function(stmt) => Some(&stmt.parameters),
@@ -149,19 +153,24 @@ impl FunctionLike<'_> {
/// function from `operator` module.
fn try_fix(
&self,
operator: &'static str,
operator: &Operator,
importer: &Importer,
semantic: &SemanticModel,
) -> Result<Option<Fix>> {
match self {
Self::Lambda(_) => {
let (edit, binding) = importer.get_or_import_symbol(
&ImportRequest::import("operator", operator),
&ImportRequest::import("operator", operator.name),
self.start(),
semantic,
)?;
let content = if operator.args.is_empty() {
binding
} else {
format!("{binding}({})", operator.args.join(", "))
};
Ok(Some(Fix::safe_edits(
Edit::range_replacement(binding, self.range()),
Edit::range_replacement(content, self.range()),
[edit],
)))
}
@@ -170,12 +179,112 @@ impl FunctionLike<'_> {
}
}
/// Return the name of the `operator` implemented by the given expression.
fn get_operator(expr: &Expr, params: &ast::Parameters) -> Option<&'static str> {
/// Convert the slice expression to the string representation of `slice` call.
/// For example, the expression `1:2` becomes `slice(1, 2)`, and `:` becomes `slice(None)`.
fn slice_expr_to_slice_call(slice: &ExprSlice, locator: &Locator) -> String {
let stringify = |expr: Option<&Expr>| expr.map_or("None", |expr| locator.slice(expr));
match (
slice.lower.as_deref(),
slice.upper.as_deref(),
slice.step.as_deref(),
) {
(lower, upper, step @ Some(_)) => format!(
"slice({}, {}, {})",
stringify(lower),
stringify(upper),
stringify(step)
),
(None, upper, None) => format!("slice({})", stringify(upper)),
(lower @ Some(_), upper, None) => {
format!("slice({}, {})", stringify(lower), stringify(upper))
}
}
}
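The conversion above relies on subscripting with a slice literal being equivalent to subscripting with an explicit `slice(...)` object, which is what lets the fix spell `x[1:2]` as `itemgetter(slice(1, 2))`. A quick check of the equivalences named in the doc comment:

```python
data = [10, 20, 30, 40]

# `1:2` is equivalent to `slice(1, 2)` ...
assert data[1:2] == data[slice(1, 2)] == [20]
# ... and a bare `:` to `slice(None)`.
assert data[:] == data[slice(None)] == data
# With a step present, all three arguments are spelled out.
assert data[::2] == data[slice(None, None, 2)] == [10, 30]
```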
/// Convert the given expression to a string representation, suitable to be a function argument.
fn subscript_slice_to_string<'a>(expr: &Expr, locator: &Locator<'a>) -> Cow<'a, str> {
if let Expr::Slice(expr_slice) = expr {
Cow::Owned(slice_expr_to_slice_call(expr_slice, locator))
} else {
Cow::Borrowed(locator.slice(expr))
}
}
/// Return the `operator` implemented by the given subscript expression.
fn itemgetter_op(expr: &ExprSubscript, params: &Parameters, locator: &Locator) -> Option<Operator> {
let [arg] = params.args.as_slice() else {
return None;
};
if !is_same_expression(arg, &expr.value) {
return None;
};
Some(Operator {
name: "itemgetter",
args: vec![subscript_slice_to_string(expr.slice.as_ref(), locator).to_string()],
})
}
/// Return the `operator` implemented by the given tuple expression.
fn itemgetter_op_tuple(
expr: &ExprTuple,
params: &Parameters,
locator: &Locator,
) -> Option<Operator> {
let [arg] = params.args.as_slice() else {
return None;
};
if expr.elts.is_empty() {
return None;
}
Some(Operator {
name: "itemgetter",
args: expr
.elts
.iter()
.map(|expr| {
expr.as_subscript_expr()
.filter(|expr| is_same_expression(arg, &expr.value))
.map(|expr| expr.slice.as_ref())
.map(|slice| subscript_slice_to_string(slice, locator).to_string())
})
.collect::<Option<Vec<_>>>()?,
})
}
#[derive(Eq, PartialEq, Debug)]
struct Operator {
name: &'static str,
args: Vec<String>,
}
impl From<&'static str> for Operator {
fn from(value: &'static str) -> Self {
Self {
name: value,
args: vec![],
}
}
}
impl Display for Operator {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.name)?;
if self.args.is_empty() {
Ok(())
} else {
write!(f, "({})", self.args.join(", "))
}
}
}
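An `Operator` value built here renders as a call like `itemgetter(1)` or `itemgetter(0, 2)`. As a reminder of the Python semantics being targeted, `operator.itemgetter` behaves exactly like the subscripting lambdas this rule flags:

```python
from operator import itemgetter

row = ("alice", 30, "nyc")

# `lambda x: x[1]` can be rewritten as `itemgetter(1)`.
assert itemgetter(1)(row) == 30
# `lambda x: (x[0], x[2])` as `itemgetter(0, 2)`.
assert itemgetter(0, 2)(row) == ("alice", "nyc")
# Slices work too: `lambda x: x[1:]` as `itemgetter(slice(1, None))`.
assert itemgetter(slice(1, None))(row) == (30, "nyc")
```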
/// Return the `operator` implemented by the given expression.
fn get_operator(expr: &Expr, params: &Parameters, locator: &Locator) -> Option<Operator> {
match expr {
Expr::UnaryOp(expr) => unary_op(expr, params),
Expr::BinOp(expr) => bin_op(expr, params),
Expr::Compare(expr) => cmp_op(expr, params),
Expr::UnaryOp(expr) => unary_op(expr, params).map(Operator::from),
Expr::BinOp(expr) => bin_op(expr, params).map(Operator::from),
Expr::Compare(expr) => cmp_op(expr, params).map(Operator::from),
Expr::Subscript(expr) => itemgetter_op(expr, params, locator),
Expr::Tuple(expr) => itemgetter_op_tuple(expr, params, locator),
_ => None,
}
}
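For the unary, binary, and comparison arms above, the rule maps each lambda to the like-named `operator` function. A few of the equivalences in plain Python:

```python
import operator

# `lambda x, y: x + y` reimplements `operator.add`:
assert (lambda x, y: x + y)(2, 3) == operator.add(2, 3) == 5
# `lambda x: -x` reimplements `operator.neg`:
assert (lambda x: -x)(4) == operator.neg(4) == -4
# `lambda x, y: x <= y` reimplements `operator.le`:
assert (lambda x, y: x <= y)(1, 2) == operator.le(1, 2)
```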
@@ -187,7 +296,7 @@ enum FunctionLikeKind {
}
/// Return the name of the `operator` implemented by the given unary expression.
fn unary_op(expr: &ast::ExprUnaryOp, params: &ast::Parameters) -> Option<&'static str> {
fn unary_op(expr: &ast::ExprUnaryOp, params: &Parameters) -> Option<&'static str> {
let [arg] = params.args.as_slice() else {
return None;
};
@@ -203,7 +312,7 @@ fn unary_op(expr: &ast::ExprUnaryOp, params: &ast::Parameters) -> Option<&'stati
}
/// Return the name of the `operator` implemented by the given binary expression.
fn bin_op(expr: &ast::ExprBinOp, params: &ast::Parameters) -> Option<&'static str> {
fn bin_op(expr: &ast::ExprBinOp, params: &Parameters) -> Option<&'static str> {
let [arg1, arg2] = params.args.as_slice() else {
return None;
};
@@ -228,7 +337,7 @@ fn bin_op(expr: &ast::ExprBinOp, params: &ast::Parameters) -> Option<&'static st
}
/// Return the name of the `operator` implemented by the given comparison expression.
fn cmp_op(expr: &ast::ExprCompare, params: &ast::Parameters) -> Option<&'static str> {
fn cmp_op(expr: &ast::ExprCompare, params: &Parameters) -> Option<&'static str> {
let [arg1, arg2] = params.args.as_slice() else {
return None;
};
@@ -240,71 +349,19 @@ fn cmp_op(expr: &ast::ExprCompare, params: &ast::Parameters) -> Option<&'static
};
match op {
ast::CmpOp::Eq => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("eq")
} else {
None
}
}
ast::CmpOp::NotEq => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("ne")
} else {
None
}
}
ast::CmpOp::Lt => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("lt")
} else {
None
}
}
ast::CmpOp::LtE => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("le")
} else {
None
}
}
ast::CmpOp::Gt => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("gt")
} else {
None
}
}
ast::CmpOp::GtE => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("ge")
} else {
None
}
}
ast::CmpOp::Is => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("is_")
} else {
None
}
}
ast::CmpOp::IsNot => {
if match_arguments(arg1, arg2, &expr.left, right) {
Some("is_not")
} else {
None
}
}
ast::CmpOp::Eq => match_arguments(arg1, arg2, &expr.left, right).then_some("eq"),
ast::CmpOp::NotEq => match_arguments(arg1, arg2, &expr.left, right).then_some("ne"),
ast::CmpOp::Lt => match_arguments(arg1, arg2, &expr.left, right).then_some("lt"),
ast::CmpOp::LtE => match_arguments(arg1, arg2, &expr.left, right).then_some("le"),
ast::CmpOp::Gt => match_arguments(arg1, arg2, &expr.left, right).then_some("gt"),
ast::CmpOp::GtE => match_arguments(arg1, arg2, &expr.left, right).then_some("ge"),
ast::CmpOp::Is => match_arguments(arg1, arg2, &expr.left, right).then_some("is_"),
ast::CmpOp::IsNot => match_arguments(arg1, arg2, &expr.left, right).then_some("is_not"),
ast::CmpOp::In => {
// Note: `operator.contains` reverses the order of arguments. That is:
// `operator.contains` is equivalent to `lambda x, y: y in x`, rather than
// `lambda x, y: x in y`.
if match_arguments(arg1, arg2, right, &expr.left) {
Some("contains")
} else {
None
}
match_arguments(arg1, arg2, right, &expr.left).then_some("contains")
}
ast::CmpOp::NotIn => None,
}
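As the comment in the `In` arm notes, `operator.contains` reverses the argument order relative to the `in` operator, which is why that arm swaps `right` and `expr.left` when matching arguments:

```python
import operator

# `operator.contains(x, y)` means `y in x`, not `x in y`.
assert operator.contains([1, 2, 3], 2) is True
assert operator.contains("abc", "d") is False

# So it is equivalent to `lambda x, y: y in x`:
assert (lambda x, y: y in x)([1, 2, 3], 2) == operator.contains([1, 2, 3], 2)
```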

View File

@@ -0,0 +1,182 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::relocate::relocate_expr;
use ruff_python_ast::visitor::{self, Visitor};
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_codegen::Generator;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::fix::snippet::SourceCodeSnippet;
use super::super::helpers::{find_file_opens, FileOpen};
/// ## What it does
/// Checks for uses of `open` and `write` that can be replaced by `pathlib`
/// methods, like `Path.write_text` and `Path.write_bytes`.
///
/// ## Why is this bad?
/// When writing a single string to a file, it's simpler and more concise
/// to use `pathlib` methods like `Path.write_text` and `Path.write_bytes`
/// instead of `open` and `write` calls via `with` statements.
///
/// ## Example
/// ```python
/// with open(filename, "w") as f:
/// f.write(contents)
/// ```
///
/// Use instead:
/// ```python
/// from pathlib import Path
///
/// Path(filename).write_text(contents)
/// ```
///
/// ## References
/// - [Python documentation: `Path.write_bytes`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.write_bytes)
/// - [Python documentation: `Path.write_text`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.write_text)
#[violation]
pub struct WriteWholeFile {
filename: SourceCodeSnippet,
suggestion: SourceCodeSnippet,
}
impl Violation for WriteWholeFile {
#[derive_message_formats]
fn message(&self) -> String {
let filename = self.filename.truncated_display();
let suggestion = self.suggestion.truncated_display();
format!("`open` and `write` should be replaced by `Path({filename}).{suggestion}`")
}
}
/// FURB103
pub(crate) fn write_whole_file(checker: &mut Checker, with: &ast::StmtWith) {
// The `async` check here is more of a precaution.
if with.is_async || !checker.semantic().is_builtin("open") {
return;
}
// First we go through all the items in the statement and find all `open` operations.
let candidates = find_file_opens(with, checker.semantic(), false);
if candidates.is_empty() {
return;
}
// Then we need to match each `open` operation with exactly one `write` call.
let (matches, contents) = {
let mut matcher = WriteMatcher::new(candidates);
visitor::walk_body(&mut matcher, &with.body);
matcher.finish()
};
// All the matched operations should be reported.
let diagnostics: Vec<Diagnostic> = matches
.iter()
.zip(contents)
.map(|(open, content)| {
Diagnostic::new(
WriteWholeFile {
filename: SourceCodeSnippet::from_str(&checker.generator().expr(open.filename)),
suggestion: make_suggestion(open, content, checker.generator()),
},
open.item.range(),
)
})
.collect();
checker.diagnostics.extend(diagnostics);
}
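As with the read-side rule, the rewrite FURB103 suggests is behavior-preserving for a single write. A sketch of the before/after shapes (the file paths are hypothetical):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    a = Path(tmp) / "a.txt"  # hypothetical files
    b = Path(tmp) / "b.txt"

    # Pattern flagged by FURB103:
    with open(a, "w") as f:
        f.write("contents")

    # Suggested replacement:
    b.write_text("contents")

    text_a = a.read_text()
    text_b = b.read_text()

assert text_a == text_b == "contents"
```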
/// AST visitor that matches `open` operations with the corresponding `write` calls.
#[derive(Debug)]
struct WriteMatcher<'a> {
candidates: Vec<FileOpen<'a>>,
matches: Vec<FileOpen<'a>>,
contents: Vec<&'a Expr>,
loop_counter: u32,
}
impl<'a> WriteMatcher<'a> {
fn new(candidates: Vec<FileOpen<'a>>) -> Self {
Self {
candidates,
matches: vec![],
contents: vec![],
loop_counter: 0,
}
}
fn finish(self) -> (Vec<FileOpen<'a>>, Vec<&'a Expr>) {
(self.matches, self.contents)
}
}
impl<'a> Visitor<'a> for WriteMatcher<'a> {
fn visit_stmt(&mut self, stmt: &'a Stmt) {
if matches!(stmt, ast::Stmt::While(_) | ast::Stmt::For(_)) {
self.loop_counter += 1;
visitor::walk_stmt(self, stmt);
self.loop_counter -= 1;
} else {
visitor::walk_stmt(self, stmt);
}
}
fn visit_expr(&mut self, expr: &'a Expr) {
if let Some((write_to, content)) = match_write_call(expr) {
if let Some(open) = self
.candidates
.iter()
.position(|open| open.is_ref(write_to))
{
if self.loop_counter == 0 {
self.matches.push(self.candidates.remove(open));
self.contents.push(content);
} else {
self.candidates.remove(open);
}
}
return;
}
visitor::walk_expr(self, expr);
}
}
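The `loop_counter` bookkeeping above exists because a `write` inside a loop is not equivalent to a single `write_text` call: successive `f.write` calls on one open handle accumulate, while each `Path.write_text` call reopens and truncates the file. A quick demonstration (the path is hypothetical):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    p = Path(tmp) / "log.txt"

    # Successive writes on one handle accumulate...
    with open(p, "w") as f:
        for chunk in ("a", "b"):
            f.write(chunk)
    accumulated = p.read_text()

    # ...but each `write_text` call truncates, keeping only the last chunk.
    for chunk in ("a", "b"):
        p.write_text(chunk)
    truncated = p.read_text()

assert accumulated == "ab"
assert truncated == "b"
```

This is why a match found inside a loop removes the candidate without reporting it.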
/// Match an `x.write(foo)` expression and return the expressions `x` and `foo` on success.
fn match_write_call(expr: &Expr) -> Option<(&Expr, &Expr)> {
let call = expr.as_call_expr()?;
let attr = call.func.as_attribute_expr()?;
let method_name = &attr.attr;
if method_name != "write"
|| !attr.value.is_name_expr()
|| call.arguments.args.len() != 1
|| !call.arguments.keywords.is_empty()
{
return None;
}
// `write` only takes in a single positional argument.
Some((&*attr.value, call.arguments.args.first()?))
}
fn make_suggestion(open: &FileOpen<'_>, arg: &Expr, generator: Generator) -> SourceCodeSnippet {
let name = ast::ExprName {
id: open.mode.pathlib_method(),
ctx: ast::ExprContext::Load,
range: TextRange::default(),
};
let mut arg = arg.clone();
relocate_expr(&mut arg, TextRange::default());
let call = ast::ExprCall {
func: Box::new(name.into()),
arguments: ast::Arguments {
args: Box::new([arg]),
keywords: open.keywords.iter().copied().cloned().collect(),
range: TextRange::default(),
},
range: TextRange::default(),
};
SourceCodeSnippet::from_str(&generator.expr(&call.into()))
}
