Compare commits

...

59 Commits

Author SHA1 Message Date
Zanie
d2b0b1ca26 Simplify comment 2023-08-22 10:23:31 -05:00
Zanie
6d88ac4ca2 Restore unrelated snapshot 2023-08-22 10:22:34 -05:00
Zanie
268be0cf8e Add more test cases; check format types that occur after placeholders 2023-08-22 10:08:08 -05:00
Zanie
5df9ba716f Consume all remaining placeholders at end of format spec parsing 2023-08-22 09:57:13 -05:00
Zanie
0e3284244b Add failing test case for bad-string-format-character 2023-08-22 09:37:26 -05:00
Zanie
d272874dfd Clean up implementation 2023-08-22 09:36:48 -05:00
Micha Reiser
ccac9681e1 Preserve yield parentheses (#6766) 2023-08-22 10:27:20 +00:00
Micha Reiser
b52cc84df6 Omit tuple parentheses in for statements except when absolutely necessary (#6765) 2023-08-22 12:18:59 +02:00
Micha Reiser
fec6fc2fab Preserve empty lines between try clause headers (#6759) 2023-08-22 11:50:28 +02:00
konsti
ba4c27598a Document IO Error (#6712)
`IOError` is special: it is not actually a lint, but an error raised before
linting. I'm not entirely sure how to document it, since it does not
match the general lint-rule pattern (`Checks that the file can be read
in its entirety.` is, imho, worse).

I added what are, in my experience, the two most common reasons for IO
errors on Unix systems, and linked two tutorials on how to fix them.

See https://github.com/astral-sh/ruff/issues/2646

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-08-22 11:46:18 +02:00
Victor Hugo Gomes
0f9ccfcad9 Format PatternMatchSingleton (#6741) 2023-08-22 08:23:47 +02:00
Charlie Marsh
fa32cd9b6f Truncate some messages in diagnostics (#6748)
## Summary

I noticed this in the ecosystem CI check from
https://github.com/astral-sh/ruff/pull/6742. If we include source code
directly in a diagnostic, we need to be careful to avoid rendering
multi-line diagnostics or even excessively long diagnostics.

## Test Plan

`cargo test`
2023-08-21 23:46:24 -04:00
Victor Hugo Gomes
0aad0c41f6 [pylint] Implement no-self-use (R6301) (#6574) 2023-08-22 03:44:38 +00:00
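An illustrative sketch (not from the PR itself) of what `no-self-use` flags: an instance method whose body never touches `self`:

```python
class Greeter:
    name = "monty"

    # R6301 (no-self-use): `self` is never used in the body, so this could
    # be a @staticmethod or a plain function.
    def greeting(self):
        return "Hello!"

    # OK: the body reads state through `self`.
    def personal_greeting(self):
        return f"Hello, {self.name}!"
```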
Charlie Marsh
424b8d4ad2 Use a single node hierarchy to track statements and expressions (#6709)
## Summary

This PR is a follow-up to the suggestion in
https://github.com/astral-sh/ruff/pull/6345#discussion_r1285470953 to
use a single stack to store all statements and expressions, rather than
using separate vectors for each, which gives us something closer to a
full-fidelity chain. (We can then generalize this concept to include all
other AST nodes too.)

This is in part made possible by the removal of the hash map from
`&Stmt` to `StatementId` (#6694), which makes it much cheaper to store
these using a single interface (since doing so no longer introduces the
requirement that we hash all expressions).

I'll follow-up with some profiling, but a few notes on how the data
requirements have changed:

- We now store a `BranchId` for every expression, not just every
statement, so that's an extra `u32`.
- We now store a single `NodeId` on every snapshot, rather than separate
`StatementId` and `ExpressionId` IDs, so that's one fewer `u32` for each
snapshot.
- We're probably doing a few more lookups in general, since any calls to
`current_statement()` etc. now have to iterate up the node hierarchy
until they identify the first statement.

## Test Plan

`cargo test`
2023-08-21 21:32:57 -04:00
Charlie Marsh
abc5065fc7 Avoid E231 if comma is at end-of-line (#6747)
## Summary

I don't know how this could come up in valid Python, but anyway...

Closes https://github.com/astral-sh/ruff/issues/6738.
2023-08-21 20:47:20 -04:00
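A hypothetical sketch of the edge case: E231 checks for missing whitespace *after* a comma, which cannot apply when the comma is the last character on its line:

```python
# E231 wants whitespace after ','; when the comma ends the line there is
# nothing after it, so the rule should not fire.
coords = (1,
          2)
```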
Victor Hugo Gomes
37f4920e1e Don't trigger eq-without-hash when __hash__ is explicitly set to None (#6739) 2023-08-21 23:51:21 +00:00
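A minimal sketch of the exempted pattern: defining `__eq__` normally makes instances unhashable unless `__hash__` is also defined, and setting `__hash__ = None` explicitly signals that intent, so the rule no longer fires:

```python
class AlwaysEqual:
    def __eq__(self, other):
        return True

    # Explicitly opting out of hashability; eq-without-hash is satisfied.
    __hash__ = None
```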
Charlie Marsh
c0df99b965 Avoid attempting to fix unconventional submodule imports (#6745)
## Summary

Avoid attempting to rewrite `import matplotlib.pyplot` as `import
matplotlib.pyplot as plt`. We can't support these right now, since we
don't track references at the attribute level (like
`matplotlib.pyplot`).

Closes https://github.com/astral-sh/ruff/issues/6719.
2023-08-21 23:45:32 +00:00
Charlie Marsh
7650c6ee45 Support C419 autofixes for set comprehensions (#6744)
Closes https://github.com/astral-sh/ruff/issues/6713.
2023-08-21 23:41:13 +00:00
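A hedged sketch of what the autofix does here: a set comprehension inside `any`/`all` can become a generator expression, which short-circuits instead of building the whole set first:

```python
nums = [1, 3, 5, 6]

# C419: unnecessary set comprehension inside `any`...
flagged = any({n % 2 == 0 for n in nums})
# ...which the fix rewrites to a generator expression:
fixed = any(n % 2 == 0 for n in nums)
```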
Charlie Marsh
7b14d17e39 Ignore star imports when importing symbols in fixes (#6743)
## Summary

Given:

```python
from sys import *

exit(0)
```

We can't add `exit` to `from sys import *`, so we should just ignore it.
Ideally, we'd just resolve `exit` in the first place (since it's
imported from `from sys import *`), but as long as we don't support
wildcard imports, this is more consistent.

Closes https://github.com/astral-sh/ruff/issues/6718.

## Test Plan

`cargo test`
2023-08-21 23:31:30 +00:00
Charlie Marsh
4678f7dafe Remove parenthesis lexing in RSE102 (#6732)
## Summary

Now that we have an `Arguments` node, we can just use the range of the
arguments directly to find the parentheses in `raise Error()`.
2023-08-21 20:59:06 +00:00
konsti
b182368008 Simplify suite formatting (#6722)
Avoid the nesting in a macro by using the new `WithNodeLevel` to
`PyFormatter` deref. No changes otherwise.

I wanted to follow this up with quickly fixing the typeshed empty-line
rules, but they turned out a lot more complex than I had anticipated.
2023-08-21 21:01:51 +02:00
Charlie Marsh
e032fbd2e7 Remove remove_super_arguments (#6735)
Now that we have an `Arguments` node, we can use it directly to get the
range.
2023-08-21 13:04:07 -04:00
dependabot[bot]
575b77aa52 ci(deps): bump cloudflare/wrangler-action from 3.0.2 to 3.1.0 (#6736)
Bumps
[cloudflare/wrangler-action](https://github.com/cloudflare/wrangler-action)
from 3.0.2 to 3.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/cloudflare/wrangler-action/releases">cloudflare/wrangler-action's
releases</a>.</em></p>
<blockquote>
<h2>v3.1.0</h2>
<h3>Minor Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/154">#154</a>
<a
href="3f40637a1c"><code>3f40637</code></a>
Thanks <a
href="https://github.com/JacobMGEvans"><code>@JacobMGEvans</code></a>!
- feat: Quiet mode
Some of the stderr, stdout, info &amp; groupings can be a little noisy
for some users and use cases.
This feature allows an option, 'quiet: true', to be passed, which would
significantly reduce the noise.</p>
<p>There will still be output that lets the user know Wrangler Installed
and Wrangler Action completed successfully.
Any failure status will still be output to the user as well, to prevent
silent failures.</p>
<p>resolves <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/142">#142</a></p>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c25aadc965"><code>c25aadc</code></a>
Automatic compilation</li>
<li><a
href="fcf648c789"><code>fcf648c</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/158">#158</a>
from cloudflare/changeset-release/main</li>
<li><a
href="fcbabec21e"><code>fcbabec</code></a>
Version Packages</li>
<li><a
href="0aa12f0c2b"><code>0aa12f0</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/154">#154</a>
from cloudflare/jacobmgevans/silence-mode</li>
<li><a
href="ad7441b6ad"><code>ad7441b</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/157">#157</a>
from EstebanBorai/main</li>
<li><a
href="3f40637a1c"><code>3f40637</code></a>
Quiet feature</li>
<li><a
href="4132892387"><code>4132892</code></a>
fix: use <code>wrangler@3.5.1</code> by default</li>
<li><a
href="62ce9d23a3"><code>62ce9d2</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/155">#155</a>
from ethanppl/fix-readme</li>
<li><a
href="f089b0a195"><code>f089b0a</code></a>
Update README.md</li>
<li><a
href="4318a2fb97"><code>4318a2f</code></a>
Fix examples in README.md</li>
<li>Additional commits viewable in <a
href="https://github.com/cloudflare/wrangler-action/compare/v3.0.2...v3.1.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cloudflare/wrangler-action&package-manager=github_actions&previous-version=3.0.2&new-version=3.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-21 16:35:59 +00:00
Micha Reiser
17a26e6ff3 Fix fmt:skip for function with return type (#6733) 2023-08-21 17:45:23 +02:00
Charlie Marsh
d5a51b4e45 Allow ctypes.WinError() in flake8-raise (#6731)
Closes https://github.com/astral-sh/ruff/issues/6730.
2023-08-21 14:57:34 +00:00
Charlie Marsh
83f68891e0 Allow next in FBT exclusions (#6729)
Closes https://github.com/astral-sh/ruff/issues/6711.
2023-08-21 14:56:38 +00:00
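For illustration, the newly-excluded pattern: `next` only accepts its default positionally (it rejects keyword arguments), so a boolean default here is not a boolean trap:

```python
# FBT exclusion: the boolean is a positional default to `next`, not a
# boolean flag parameter, so flagging it would be a false positive.
first = next(iter([]), False)
```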
konsti
aafde6db28 Remove some indexing (#6728)
**Summary** A common pattern in the code used to be
```rust
if statements.len() != 1 {
    return;
}
use_single_entry(statements[0])?;
```
which can be better expressed as
```rust
let [statement] = statements else {
    return;
};
use_single_entry(statement)?;
```

Direct indexing can cause panics if you don't manually take care of
checking the length, while matching (such as if-let or let-else) can
never panic.

This isn't a complete refactor; I've just removed some of the obvious
cases. I've specifically looked for `.len() != 1` and fixed those.

**Test Plan** No functional changes
2023-08-21 16:56:15 +02:00
Charlie Marsh
2405536d03 Remove unnecessary LibCST usage in key-in-dict (#6727)
## Summary

We're using LibCST to ensure that we return the full parenthesized range
of an expression, for display purposes. We can just use
`parenthesized_range` which is more efficient and removes one LibCST
dependency.

## Test Plan

`cargo test`
2023-08-21 10:32:09 -04:00
Micha Reiser
f017555d53 Parenthesize NamedExpr if target breaks (#6714) 2023-08-21 16:29:26 +02:00
Charlie Marsh
be96e0041a Accept empty inner calls in C414 (#6725)
Closes https://github.com/astral-sh/ruff/issues/6716.
2023-08-21 14:05:09 +00:00
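A small sketch of the newly-accepted case: an inner cast with no arguments is still an unnecessary double cast:

```python
# C414: the inner `list()` call is unnecessary even with no arguments;
# `set(list())` is just an empty set.
assert set(list()) == set()
```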
Harutaka Kawamura
3c2dd5e42e Remove confusing comment on get_parametrize_name_range (#6724) 2023-08-21 08:52:48 -04:00
Micha Reiser
8b347cdaa9 Simplify IfRequired needs parentheses condition (#6678) 2023-08-21 07:11:31 +00:00
Tom Kuson
2a8d24dd4b Format function and class definitions into a single line if its body is an ellipsis (#6592) 2023-08-21 09:02:23 +02:00
Charlie Marsh
bb5fbb1b5c Use simple lexer for argument removal (#6710) 2023-08-21 04:16:29 +00:00
Harutaka Kawamura
086e11087f [flake8-pytest-style] Autofix PT014 (#6698) 2023-08-21 03:45:12 +00:00
Charlie Marsh
1b7e4a12a9 Refactor remove_unused_variable to take &Binding (#6707) 2023-08-20 15:50:57 +00:00
Charlie Marsh
da1697121e Add BranchId to the model snapshot (#6706)
This _probably_ never matters given the set of rules we support and in
fact I'm having trouble thinking of a test-case for it, but it's
definitely incorrect _not_ to pass on the `BranchId` here.
2023-08-20 15:35:49 +00:00
Harutaka Kawamura
419615f29b Add docs for E275, E231, E251, and E252 (#6700) 2023-08-20 14:51:50 +00:00
Charlie Marsh
a742a562fd Ignore multi-comparisons in repeated-equality-comparison-target (#6705)
Given `foo == "a" == "b" or foo == "c"`, we were suggesting `foo in
{"a", "b", "c"}`.
2023-08-20 14:41:10 +00:00
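The false positive hinges on Python's comparison chaining; a quick demonstration of why the suggested rewrite was not equivalent:

```python
foo = "a"

# Chained: foo == "a" == "b" means (foo == "a") and ("a" == "b"), which is
# always False, so only the `foo == "c"` arm can ever be True.
chained = foo == "a" == "b" or foo == "c"
membership = foo in {"a", "b", "c"}

# For foo == "a" the two forms disagree, hence multi-comparisons are
# now ignored.
assert chained != membership
```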
Harutaka Kawamura
129b19050a Refactor flake8_pytest_style/rules/parametrize.rs (#6703) 2023-08-20 14:30:26 +00:00
Konrad Listwan-Ciesielski
0dc23da1d0 Add docs for DTZ011 and DTZ012 (#6688) 2023-08-20 10:21:10 -04:00
Harutaka Kawamura
c62e544cba Add doc for E999 (#6699) 2023-08-20 14:14:22 +00:00
Charlie Marsh
7e9023b6f8 Use typing_extensions.TypeAlias for PYI026 fixes on pre-3.10 (#6696)
Closes https://github.com/astral-sh/ruff/issues/6695.
2023-08-19 22:16:44 +00:00
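For context (a sketch, not taken from the PR): `typing.TypeAlias` only exists on Python 3.10+, so fixes targeting older versions must import the backport:

```python
import sys

if sys.version_info >= (3, 10):
    from typing import TypeAlias  # stdlib on 3.10+
else:
    from typing_extensions import TypeAlias  # backport for older versions

Vector: TypeAlias = "list[float]"
```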
Harutaka Kawamura
a489b96a65 [flake8-pie] Implement unnecessary-range-start (PIE808) (#6690) 2023-08-19 21:59:11 +00:00
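An illustrative sketch of the rule: a literal `0` start is redundant, while non-zero or keyword starts are left alone:

```python
# PIE808: range(0, 10) is equivalent to range(10)...
assert list(range(0, 10)) == list(range(10))

# ...but a non-zero start (or a keyword `start`) is not flagged.
assert list(range(-15, 10))[0] == -15
```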
Charlie Marsh
17af12e57c Add branch detection to the semantic model (#6694)
## Summary

We have a few rules that rely on detecting whether two statements are in
different branches -- for example, different arms of an `if`-`else`.
Historically, the way this was implemented is that, given two statement
IDs, we'd find the common parent (by traversing upwards via our
`Statements` abstraction); then identify branches "manually" by matching
the parents against `try`, `if`, and `match`, and returning iterators
over the arms; then check if there's an arm for which one of the
statements is a child, and the other is not.

This has a few drawbacks:

1. First, the code is generally a bit hard to follow (Konsti mentioned
this too when working on the `ElifElseClause` refactor).

2. Second, this is the only place in the codebase where we need to go
from `&Stmt` to `StatementID` -- _everywhere_ else, we only need to go
in the _other_ direction. Supporting these lookups means we need to
maintain a mapping from `&Stmt` to `StatementID` that includes every
`&Stmt` in the program. (We _also_ end up maintaining a `depth` level
for every statement.) I'd like to get rid of these requirements to
improve efficiency, reduce complexity, and enable us to treat AST modes
more generically in the future. (When I looked at adding the `&Expr` to
our existing statement-tracking infrastructure, maintaining a hash map
with all the statements noticeably hurt performance.)

The solution implemented here instead makes branches a first-class
concept in the semantic model. Like with `Statements`, we now have a
`Branches` abstraction, where each branch points to its optional parent.
When we store statements, we store the `BranchID` alongside each
statement. When we need to detect whether two statements are in the same
branch, we just realize each statement's branch path and compare the
two. (Assuming that the two statements are in the same scope, then
they're on the same branch IFF one branch path is a subset of the other,
starting from the top.) We then add some calls to the visitor to push
and pop branches in the appropriate places, for `if`, `try`, and `match`
statements.

Note that a branch is not 1:1 with a statement; instead, each branch is
closer to a suite, but not _every_ suite is a branch. For example, each
arm in an `if`-`elif`-`else` is a branch, but the `else` in a `for` loop
is not considered a branch.

In addition to being much simpler, this should also be more efficient,
since we've shed the entire `&Stmt` hash map, plus the `depth` that we
track on `StatementWithParent` in favor of a single `Option<BranchID>`
on `StatementWithParent` plus a single vector for all branches. The
lookups should be faster too, since instead of doing a bunch of jumps
around with the hash map + repeated recursive calls to find the common
parents, we instead just do a few simple lookups in the `Branches`
vector to realize and compare the branch paths.

## Test Plan

`cargo test` -- we have a lot of coverage for this, which we inherited
from PyFlakes
2023-08-19 21:28:17 +00:00
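A hedged Python sketch (hypothetical names, not the actual Rust implementation) of the branch-path comparison described above: each branch stores an optional parent, and two branches coincide iff one realized path is a prefix of the other:

```python
class Branches:
    """Each branch records its optional parent branch; IDs are indices."""

    def __init__(self):
        self._parents = []

    def push(self, parent=None):
        """Open a new branch (e.g. one arm of an `if`/`try`/`match`)."""
        self._parents.append(parent)
        return len(self._parents) - 1

    def path(self, branch):
        """Realize the branch path from the root down to `branch`."""
        out = []
        while branch is not None:
            out.append(branch)
            branch = self._parents[branch]
        return out[::-1]

    def same_branch(self, a, b):
        """Two branches coincide iff one path is a prefix of the other."""
        pa, pb = self.path(a), self.path(b)
        shorter, longer = sorted((pa, pb), key=len)
        return longer[: len(shorter)] == shorter
```

In this sketch, a statement in a nested branch compares as "same branch" with its ancestor arm, but not with a sibling arm.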
Chris Pryer
648333b8b2 ruff_formatter crate doc comment fixes (#6677) 2023-08-19 17:42:02 +01:00
Charlie Marsh
3849fa0cf1 Rewrite yield-in-for-loop to avoid recursing over body (#6692)
## Summary

This is much simpler and avoids (1) multiple passes over the entire
function body, (2) requiring the rule to do its own binding tracking (we
can just use the semantic model), and (3) a usage of `StatementKey`.

In general, where we can, we should try to remove these kinds of custom
visitors that track name references, and instead rely on the semantic
model.

## Test Plan

`cargo test`
2023-08-19 11:25:29 -04:00
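For reference, a sketch of the pattern the rule targets: a loop whose body does nothing but yield the loop variable can be replaced with `yield from`:

```python
def before(xs):
    # Flagged: the loop only yields its iteration variable.
    for x in xs:
        yield x

def after(xs):
    # Equivalent rewrite.
    yield from xs
```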
Victor Hugo Gomes
59e533047a Fix typo in ruff_python_formatter documentation (#6687)
## Summary

The documentation said `Javascript`, but we are working with
`Python` here :)

## Test Plan

n/a
2023-08-18 19:16:09 -04:00
Charlie Marsh
053b1145f0 Avoid panic in unused arguments rule for parameter-free lambda (#6679)
## Summary

This was just a mistake in pattern-matching with no test coverage.

## Test Plan

`cargo test`
2023-08-18 18:29:31 +00:00
Charlie Marsh
6a5acde226 Make Parameters an optional field on ExprLambda (#6669)
## Summary

If a lambda doesn't contain any parameters, or any parameter _tokens_
(like `*`), we can use `None` for the parameters. This feels like a
better representation to me, since, e.g., what should the `TextRange` be
for a non-existent set of parameters? It also allows us to remove
several sites where we check if the `Parameters` is empty by seeing if
it contains any arguments, so semantically, we're already trying to
detect and model around this elsewhere.

Changing this also fixes a number of issues with dangling comments in
parameter-less lambdas, since those comments are now automatically
marked as dangling on the lambda. (As-is, we were also doing something
not-great whereby the lambda was responsible for formatting dangling
comments on the parameters, which has been removed.)

Closes https://github.com/astral-sh/ruff/issues/6646.

Closes https://github.com/astral-sh/ruff/issues/6647.

## Test Plan

`cargo test`
2023-08-18 15:34:54 +00:00
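As a point of comparison (CPython's `ast`, not ruff's AST): CPython still materializes an `arguments` node with empty lists for a parameter-less lambda, which is exactly the "what `TextRange` would it have?" ambiguity this PR sidesteps by storing `None`:

```python
import ast

# A lambda with no parameters and no parameter tokens.
lam = ast.parse("lambda: 42", mode="eval").body
assert isinstance(lam, ast.Lambda)

# CPython keeps an (empty) arguments node around anyway.
assert lam.args.args == [] and lam.args.posonlyargs == []
```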
Micha Reiser
ea72d5feba Refactor SourceKind to store file content (#6640) 2023-08-18 13:45:38 +00:00
Charlie Marsh
2aeb27334d Avoid cloning source code multiple times (#6629)
## Summary

In working on https://github.com/astral-sh/ruff/pull/6628, I noticed
that we clone the source code contents, potentially multiple times,
prior to linting. The issue is that `SourceKind::Python` takes a
`String`, so we first have to provide it with a `String`. In the stdin
case, that means cloning. However, on top of this, we then have to clone
`source_kind.contents()` because `SourceKind` gets mutated. So for
stdin, we end up cloning twice. For non-stdin, we end up cloning once,
but unnecessarily (since the _contents_ don't get mutated, only the
kind).

This PR removes the `String` from `source_kind`, instead requiring that
we parse it out elsewhere. It reduces the number of clones down to 1 for
Jupyter Notebooks, and zero otherwise.
2023-08-18 09:32:18 -04:00
Micha Reiser
0cea4975fc Rename Comments methods (#6649) 2023-08-18 06:37:01 +00:00
Charlie Marsh
3ceb6fbeb0 Remove some unnecessary ampersands in the formatter (#6667) 2023-08-18 04:18:26 +00:00
Charlie Marsh
8e18f8018f Remove some trailing commas in write calls (#6666) 2023-08-18 00:14:44 -04:00
Charlie Marsh
8228429a70 Convert comment to rustdoc in placement.rs (#6665) 2023-08-18 04:11:38 +00:00
Charlie Marsh
1811312722 Improve with statement comment handling and expression breaking (#6621)
## Summary

The motivating code here was:

```python
with test as (
    # test
foo):
    pass
```

Which we were formatting as:

```python
with test as
# test
(foo):
    pass
```

`with` statements are oddly difficult. This PR makes a bunch of subtle
modifications and adds a more extensive test suite. For example, we now
only preserve parentheses if there's more than one `WithItem` _or_ a
trailing comma; before, we always preserved.

Our formatting is _not_ the same as Black's, but here's a diff of our
formatted code vs. Black's for the `with.py` test suite. The primary
difference is that we tend to break parentheses when they contain
comments rather than move them to the end of the line (this is a
consistent difference that we make across the codebase):

```diff
diff --git a/crates/ruff_python_formatter/foo.py b/crates/ruff_python_formatter/foo.py
index 85e761080..31625c876 100644
--- a/crates/ruff_python_formatter/foo.py
+++ b/crates/ruff_python_formatter/foo.py
@@ -1,6 +1,4 @@
-with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
-), aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:
     ...
     # trailing
 
@@ -16,28 +14,33 @@ with (
     # trailing
 
 
-with a, b:  # a  # comma  # c  # colon
+with (
+    a,  # a  # comma
+    b,  # c
+):  # colon
     ...
 
 
 with (
-    a as  # a  # as
-    # own line
-    b,  # b  # comma
+    a as (  # a  # as
+        # own line
+        b
+    ),  # b  # comma
     c,  # c
 ):  # colon
     ...  # body
     # body trailing own
 
-with (
-    a as  # a  # as
+with a as (  # a  # as
     # own line
-    bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb  # b
-):
+    bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+):  # b
     pass
 
 
-with (a,):  # magic trailing comma
+with (
+    a,
+):  # magic trailing comma
     ...
 
 
@@ -47,6 +50,7 @@ with a:  # should remove brackets
 with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as c:
     ...
 
+
 with (
     # leading comment
     a
@@ -74,8 +78,7 @@ with (
 with (
     a  # trailing same line comment
     # trailing own line comment
-    as b
-):
+) as b:
     ...
 
 with (
@@ -87,7 +90,9 @@ with (
 with (
     a
     # trailing own line comment
-) as b:  # trailing as same line comment  # trailing b same line comment
+) as (  # trailing as same line comment
+    b
+):  # trailing b same line comment
     ...
 
 with (
@@ -124,18 +129,24 @@ with (  # comment
     ...
 
 with (  # outer comment
-    CtxManager1() as example1,  # inner comment
+    (  # inner comment
+        CtxManager1()
+    ) as example1,
     CtxManager2() as example2,
     CtxManager3() as example3,
 ):
     ...
 
-with CtxManager() as example:  # outer comment
+with (  # outer comment
+    CtxManager()
+) as example:
     ...
 
 with (  # outer comment
     CtxManager()
-) as example, CtxManager2() as example2:  # inner comment
+) as example, (  # inner comment
+    CtxManager2()
+) as example2:
     ...
 
 with (  # outer comment
@@ -145,7 +156,9 @@ with (  # outer comment
     ...
 
 with (  # outer comment
-    (CtxManager1()),  # inner comment
+    (  # inner comment
+        CtxManager1()
+    ),
     CtxManager2(),
 ) as example:
     ...
@@ -179,7 +192,9 @@ with (
 ):
     pass
 
-with a as (b):  # foo
+with a as (  # foo
+    b
+):
     pass
 
 with f(
@@ -209,17 +224,13 @@ with f(
 ) as b, c as d:
     pass
 
-with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b:
     pass
 
 with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b:
     pass
 
-with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b, c as d:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b, c as d:
     pass
 
 with (
@@ -230,6 +241,8 @@ with (
     pass
 
 with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b, c as d:
+    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+    + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b,
+    c as d,
+):
     pass
```

Closes https://github.com/astral-sh/ruff/issues/6600.

## Test Plan

Before:

| project      | similarity index |
|--------------|------------------|
| cpython      | 0.75473          |
| django       | 0.99804          |
| transformers | 0.99618          |
| twine        | 0.99876          |
| typeshed     | 0.74292          |
| warehouse    | 0.99601          |
| zulip        | 0.99727          |

After:

| project      | similarity index |
|--------------|------------------|
| cpython      | 0.75473          |
| django       | 0.99804          |
| transformers | 0.99618          |
| twine        | 0.99876          |
| typeshed     | 0.74292          |
| warehouse    | 0.99601          |
| zulip        | 0.99727          |

`cargo test`
2023-08-18 03:30:38 +00:00
Charlie Marsh
26bba11be6 Manually format comments around := in named expressions (#6634)
## Summary

Attaches comments around the `:=` operator in a named expression as
dangling, and formats them manually in the `named_expr.rs` formatter.

Closes https://github.com/astral-sh/ruff/issues/5695.

## Test Plan

`cargo test`
2023-08-18 03:10:45 +00:00
Shantanu
a128fe5148 Apply RUF017 when start is passed via position (#6664)
As discussed in
https://github.com/astral-sh/ruff/pull/6489#discussion_r1297858919.
Linking https://github.com/astral-sh/ruff/issues/5073
2023-08-17 20:10:07 -04:00
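For illustration: RUF017 flags quadratic list summation via `sum`, whether `start` is passed by keyword or, after this PR, by position. A linear alternative (assumed here; the exact fix text may differ) uses `functools.reduce` with in-place concatenation:

```python
import functools
import operator

lists = [[1], [2, 3], [4]]

# Flagged either way: sum(lists, []) and sum(lists, start=[]) are O(n^2).
quadratic = sum(lists, [])

# Linear rewrite: repeatedly extend a fresh accumulator list in place.
flat = functools.reduce(operator.iadd, lists, [])
```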
257 changed files with 5642 additions and 2896 deletions

View File

@@ -40,7 +40,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.generated.yml
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
-        uses: cloudflare/wrangler-action@v3.0.2
+        uses: cloudflare/wrangler-action@v3.1.0
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

View File

@@ -40,7 +40,7 @@ jobs:
working-directory: playground
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
-        uses: cloudflare/wrangler-action@v3.0.2
+        uses: cloudflare/wrangler-action@v3.1.0
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

View File

@@ -69,6 +69,7 @@ g_action.set_enabled(True)
settings.set_enable_developer_extras(True)
foo.is_(True)
bar.is_not(False)
next(iter([]), False)
class Registry:
def __init__(self) -> None:

View File

@@ -22,6 +22,10 @@ tuple(
"o"]
)
)
set(set())
set(list())
set(tuple())
sorted(reversed())
# Nested sorts with differing keyword arguments. Not flagged.
sorted(sorted(x, key=lambda y: y))

View File

@@ -1,22 +1,29 @@
import math # not checked
def not_checked():
import math
import altair # unconventional
import matplotlib.pyplot # unconventional
import numpy # unconventional
import pandas # unconventional
import seaborn # unconventional
import tkinter # unconventional
import altair as altr # unconventional
import matplotlib.pyplot as plot # unconventional
import numpy as nmp # unconventional
import pandas as pdas # unconventional
import seaborn as sbrn # unconventional
import tkinter as tkr # unconventional
def unconventional():
import altair
import matplotlib.pyplot
import numpy
import pandas
import seaborn
import tkinter
import altair as alt # conventional
import matplotlib.pyplot as plt # conventional
import numpy as np # conventional
import pandas as pd # conventional
import seaborn as sns # conventional
import tkinter as tk # conventional
def unconventional_aliases():
import altair as altr
import matplotlib.pyplot as plot
import numpy as nmp
import pandas as pdas
import seaborn as sbrn
import tkinter as tkr
def conventional_aliases():
import altair as alt
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tkinter as tk

View File

@@ -0,0 +1,13 @@
# PIE808
range(0, 10)
# OK
range(x, 10)
range(-15, 10)
range(10)
range(0)
range(0, 10, x)
range(0, 10, 1)
range(0, 10, step=1)
range(start=0, stop=10)
range(0, stop=10)

View File

@@ -64,3 +64,8 @@ def test_implicit_str_concat_no_parens(param1, param2, param3):
@pytest.mark.parametrize((("param1, " "param2, " "param3")), [(1, 2, 3), (4, 5, 6)])
def test_implicit_str_concat_with_multi_parens(param1, param2, param3):
...
@pytest.mark.parametrize(("param1,param2"), [(1, 2), (3, 4)])
def test_csv_with_parens(param1, param2):
...

View File

@@ -16,11 +16,38 @@ def test_error_expr_simple(x):
...
@pytest.mark.parametrize("x", [(a, b), (a, b), (b, c)])
@pytest.mark.parametrize(
"x",
[
(a, b),
# comment
(a, b),
(b, c),
],
)
def test_error_expr_complex(x):
...
@pytest.mark.parametrize("x", [a, b, (a), c, ((a))])
def test_error_parentheses(x):
...
@pytest.mark.parametrize(
"x",
[
a,
b,
(a),
c,
((a)),
],
)
def test_error_parentheses_trailing_comma(x):
...
@pytest.mark.parametrize("x", [1, 2])
def test_ok(x):
...

View File

@@ -19,11 +19,20 @@ raise TypeError ()
raise TypeError \
()
# RSE102
raise TypeError \
();
# RSE102
raise TypeError(
)
# RSE102
raise (TypeError) (
)
# RSE102
raise TypeError(
# Hello, world!
@@ -52,3 +61,10 @@ class Class:
# OK
raise Class.error()
import ctypes
# OK
raise ctypes.WinError(1)

View File

@@ -31,6 +31,8 @@ for key in list(obj.keys()):
key in (obj or {}).keys() # SIM118
(key) in (obj or {}).keys() # SIM118
from typing import KeysView

View File

@@ -27,6 +27,8 @@ def f(cls, x):
###
lambda x: print("Hello, world!")
lambda: print("Hello, world!")
class C:
###

View File

@@ -28,3 +28,6 @@ mdtypes_template = {
'tag_full': [('mdtype', 'u4'), ('byte_count', 'u4')],
'tag_smalldata':[('byte_count_mdtype', 'u4'), ('data', 'S4')],
}
#: Okay
a = (1,

View File

@@ -16,8 +16,10 @@
"{:*^30s}".format("centered") # OK
"{:{s}}".format("hello", s="s") # OK (nested replacement value not checked)
"{:{s:y}}".format("hello", s="s") # [bad-format-character] (nested replacement format spec checked)
"{0:.{prec}g}".format(1.23, prec=15) # OK
"{0:.{foo}x{bar}y{foobar}g}".format(...) # OK (all nested replacements are consumed without considering in between chars)
"{0:.{foo}{bar}{foobar}y}".format(...) # [bad-format-character] (check value after replacements)
## f-strings

View File

@@ -1,10 +1,11 @@
class Person:
class Person: # [eq-without-hash]
def __init__(self):
self.name = "monty"
def __eq__(self, other):
return isinstance(other, Person) and other.name == self.name
# OK
class Language:
def __init__(self):
self.name = "python"
@@ -14,3 +15,9 @@ class Language:
def __hash__(self):
return hash(self.name)
class MyClass:
def __eq__(self, other):
return True
__hash__ = None
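The reason eq-without-hash matters: defining `__eq__` without `__hash__` implicitly sets `__hash__` to `None`, so instances become unhashable and can no longer live in sets or dict keys:

```python
class WithEqOnly:
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, WithEqOnly) and other.name == self.name

p = WithEqOnly("monty")
assert WithEqOnly.__hash__ is None   # disabled automatically by defining __eq__
try:
    {p}                              # sets require hashable elements
    raised = False
except TypeError:
    raised = True
assert raised
```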

View File

@@ -0,0 +1,62 @@
import abc
class Person:
def developer_greeting(self, name): # [no-self-use]
print(f"Greetings {name}!")
def greeting_1(self): # [no-self-use]
print("Hello!")
def greeting_2(self): # [no-self-use]
print("Hi!")
# OK
def developer_greeting():
print("Greetings developer!")
# OK
class Person:
name = "Paris"
def __init__(self):
pass
def __cmp__(self, other):
print(24)
def __repr__(self):
return "Person"
def func(self):
...
def greeting_1(self):
print(f"Hello from {self.name} !")
@staticmethod
def greeting_2():
print("Hi!")
class Base(abc.ABC):
"""abstract class"""
@abc.abstractmethod
def abstract_method(self):
"""abstract method cannot be a function"""
raise NotImplementedError
class Sub(Base):
@override
def abstract_method(self):
print("concrete method")
class Prop:
@property
def count(self):
return 24
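The usual fix for a no-self-use finding is the one the fixture hints at with `greeting_2`: mark the method as a `@staticmethod` so the unused `self` disappears and the method becomes callable without an instance. A minimal sketch:

```python
class Greeter:
    @staticmethod
    def greeting():
        # No `self` parameter: the decorator makes the intent explicit.
        return "Hi!"

    def personal(self, name):
        # This one genuinely uses `self`, so it stays an instance method.
        return f"Hello {name} from {type(self).__name__}!"

g = Greeter()
assert g.greeting() == "Hi!"
assert Greeter.greeting() == "Hi!"   # also callable on the class itself
assert g.personal("Ada") == "Hello Ada from Greeter!"
```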

View File

@@ -32,3 +32,7 @@ foo not in {"a", "b", "c"} # Uses membership test already.
foo == "a" # Single comparison.
foo != "a" # Single comparison.
foo == "a" == "b" or foo == "c" # Multiple comparisons.
foo == bar == "b" or foo == "c" # Multiple comparisons.
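The rewrite this rule suggests is safe because `in` compares elements with `==`, so a chain of equality checks against literals and a membership test are equivalent:

```python
foo = "b"
assert (foo == "a" or foo == "b" or foo == "c") == (foo in ("a", "b", "c"))
# The negated form maps to `not in` the same way:
assert (foo != "a" and foo != "c") == (foo not in ("a", "c"))
```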

View File

@@ -0,0 +1,3 @@
from sys import *
exit(0)

View File

@@ -1,15 +1,15 @@
//! Interface for generating autofix edits from higher-level actions (e.g., "remove an argument").
use anyhow::{bail, Result};
use anyhow::{Context, Result};
use ruff_diagnostics::Edit;
use ruff_python_ast::{
self as ast, Arguments, ExceptHandler, Expr, Keyword, PySourceType, Ranged, Stmt,
};
use ruff_python_ast::{self as ast, Arguments, ExceptHandler, Expr, Keyword, Ranged, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_parser::{lexer, AsMode};
use ruff_python_trivia::{has_leading_content, is_python_whitespace, PythonWhitespace};
use ruff_python_trivia::{
has_leading_content, is_python_whitespace, PythonWhitespace, SimpleTokenKind, SimpleTokenizer,
};
use ruff_source_file::{Locator, NewlineWithTrailingNewline};
use ruff_text_size::{TextLen, TextRange, TextSize};
@@ -89,78 +89,49 @@ pub(crate) fn remove_argument<T: Ranged>(
argument: &T,
arguments: &Arguments,
parentheses: Parentheses,
locator: &Locator,
source_type: PySourceType,
source: &str,
) -> Result<Edit> {
// TODO(sbrugman): Preserve trailing comments.
if arguments.keywords.len() + arguments.args.len() > 1 {
let mut fix_start = None;
let mut fix_end = None;
// Partition into arguments before and after the argument to remove.
let (before, after): (Vec<_>, Vec<_>) = arguments
.args
.iter()
.map(Expr::range)
.chain(arguments.keywords.iter().map(Keyword::range))
.filter(|range| argument.range() != *range)
.partition(|range| range.start() < argument.start());
if arguments
.args
.iter()
.map(Expr::start)
.chain(arguments.keywords.iter().map(Keyword::start))
.any(|location| location > argument.start())
{
// Case 1: argument or keyword is _not_ the last node, so delete from the start of the
// argument to the end of the subsequent comma.
let mut seen_comma = false;
for (tok, range) in lexer::lex_starts_at(
locator.slice(arguments.range()),
source_type.as_mode(),
arguments.start(),
)
.flatten()
{
if seen_comma {
if tok.is_non_logical_newline() {
// Also delete any non-logical newlines after the comma.
continue;
}
fix_end = Some(if tok.is_newline() {
range.end()
} else {
range.start()
});
break;
}
if range.start() == argument.start() {
fix_start = Some(range.start());
}
if fix_start.is_some() && tok.is_comma() {
seen_comma = true;
}
}
} else {
// Case 2: argument or keyword is the last node, so delete from the start of the
// previous comma to the end of the argument.
for (tok, range) in lexer::lex_starts_at(
locator.slice(arguments.range()),
source_type.as_mode(),
arguments.start(),
)
.flatten()
{
if range.start() == argument.start() {
fix_end = Some(argument.end());
break;
}
if tok.is_comma() {
fix_start = Some(range.start());
}
}
}
if !after.is_empty() {
// Case 1: argument or keyword is _not_ the last node, so delete from the start of the
// argument to the end of the subsequent comma.
let mut tokenizer = SimpleTokenizer::starts_at(argument.end(), source);
match (fix_start, fix_end) {
(Some(start), Some(end)) => Ok(Edit::deletion(start, end)),
_ => {
bail!("No fix could be constructed")
}
}
// Find the trailing comma.
tokenizer
.find(|token| token.kind == SimpleTokenKind::Comma)
.context("Unable to find trailing comma")?;
// Find the next non-whitespace token.
let next = tokenizer
.find(|token| {
token.kind != SimpleTokenKind::Whitespace && token.kind != SimpleTokenKind::Newline
})
.context("Unable to find next token")?;
Ok(Edit::deletion(argument.start(), next.start()))
} else if let Some(previous) = before.iter().map(Ranged::end).max() {
// Case 2: argument or keyword is the last node, so delete from the start of the
// previous comma to the end of the argument.
let mut tokenizer = SimpleTokenizer::starts_at(previous, source);
// Find the trailing comma.
let comma = tokenizer
.find(|token| token.kind == SimpleTokenKind::Comma)
.context("Unable to find trailing comma")?;
Ok(Edit::deletion(comma.start(), argument.end()))
} else {
// Only one argument; remove it (but preserve parentheses, if needed).
// Case 3: argument or keyword is the only node, so delete the arguments (but preserve
// parentheses, if needed).
Ok(match parentheses {
Parentheses::Remove => Edit::deletion(arguments.start(), arguments.end()),
Parentheses::Preserve => {

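The three branches of `remove_argument` reduce to simple offset arithmetic once the surrounding comma is accounted for. A hypothetical Python sketch (function name and the flat `(start, end)` representation are illustrative; the real code tokenizes to find the comma and skip whitespace and comments):

```python
def deletion_span(arg_ranges, i):
    """Return the (start, end) span to delete for argument `i`.

    `arg_ranges` holds the (start, end) offsets of each argument in source order.
    """
    start, end = arg_ranges[i]
    if i + 1 < len(arg_ranges):
        # Case 1: not the last argument -- delete up to the next argument's
        # start, which consumes the trailing comma and whitespace.
        return (start, arg_ranges[i + 1][0])
    if i > 0:
        # Case 2: last argument -- delete from the previous argument's end,
        # which consumes the leading comma.
        return (arg_ranges[i - 1][1], end)
    # Case 3: only argument -- delete just the argument itself.
    return (start, end)

source = "f(a, b, c)"
ranges = [(2, 3), (5, 6), (8, 9)]
cut = lambda span: source[:span[0]] + source[span[1]:]
assert cut(deletion_span(ranges, 1)) == "f(a, c)"   # middle argument
assert cut(deletion_span(ranges, 2)) == "f(a, b)"   # last argument
assert cut(deletion_span(ranges, 0)) == "f(b, c)"   # first argument
```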
View File

@@ -13,6 +13,7 @@ use crate::registry::{AsRule, Rule};
pub(crate) mod codemods;
pub(crate) mod edits;
pub(crate) mod snippet;
pub(crate) mod source_map;
pub(crate) struct FixResult {

View File

@@ -0,0 +1,36 @@
use unicode_width::UnicodeWidthStr;
/// A snippet of source code for user-facing display, as in a diagnostic.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct SourceCodeSnippet(String);
impl SourceCodeSnippet {
pub(crate) fn new(source_code: String) -> Self {
Self(source_code)
}
/// Return the full snippet for user-facing display, or `None` if the snippet should be
/// truncated.
pub(crate) fn full_display(&self) -> Option<&str> {
if Self::should_truncate(&self.0) {
None
} else {
Some(&self.0)
}
}
/// Return a truncated snippet for user-facing display.
pub(crate) fn truncated_display(&self) -> &str {
if Self::should_truncate(&self.0) {
"..."
} else {
&self.0
}
}
/// Returns `true` if the source code should be truncated when included in a user-facing
/// diagnostic.
fn should_truncate(source_code: &str) -> bool {
source_code.width() > 50 || source_code.contains(['\r', '\n'])
}
}
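The truncation rule above is easy to mirror in Python for intuition. This sketch uses `len()` where the Rust code measures Unicode display width via `unicode_width`, so it only matches exactly for ASCII input:

```python
def truncated_display(snippet: str) -> str:
    """Simplified analogue of SourceCodeSnippet::truncated_display."""
    # Truncate anything wider than 50 columns or spanning multiple lines.
    if len(snippet) > 50 or "\n" in snippet or "\r" in snippet:
        return "..."
    return snippet

assert truncated_display("x = 1") == "x = 1"
assert truncated_display("a" * 51) == "..."
assert truncated_display("line1\nline2") == "..."
```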

View File

@@ -7,10 +7,6 @@ use crate::rules::flake8_simplify;
/// Run lint rules over a [`Comprehension`] syntax nodes.
pub(crate) fn comprehension(comprehension: &Comprehension, checker: &mut Checker) {
if checker.enabled(Rule::InDictKeys) {
flake8_simplify::rules::key_in_dict_for(
checker,
&comprehension.target,
&comprehension.iter,
);
flake8_simplify::rules::key_in_dict_comprehension(checker, comprehension);
}
}

View File

@@ -1,8 +1,8 @@
use ruff_python_ast::{self as ast, Stmt};
use ruff_python_ast::Stmt;
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::rules::{flake8_bugbear, perflint};
use crate::rules::{flake8_bugbear, perflint, pyupgrade};
/// Run lint rules over all deferred for-loops in the [`SemanticModel`].
pub(crate) fn deferred_for_loops(checker: &mut Checker) {
@@ -11,18 +11,18 @@ pub(crate) fn deferred_for_loops(checker: &mut Checker) {
for snapshot in for_loops {
checker.semantic.restore(snapshot);
let Stmt::For(ast::StmtFor {
target, iter, body, ..
}) = checker.semantic.current_statement()
else {
let Stmt::For(stmt_for) = checker.semantic.current_statement() else {
unreachable!("Expected Stmt::For");
};
if checker.enabled(Rule::UnusedLoopControlVariable) {
flake8_bugbear::rules::unused_loop_control_variable(checker, target, body);
flake8_bugbear::rules::unused_loop_control_variable(checker, stmt_for);
}
if checker.enabled(Rule::IncorrectDictIterator) {
perflint::rules::incorrect_dict_iterator(checker, target, iter);
perflint::rules::incorrect_dict_iterator(checker, stmt_for);
}
if checker.enabled(Rule::YieldInForLoop) {
pyupgrade::rules::yield_in_for_loop(checker, stmt_for);
}
}
}

View File

@@ -1,6 +1,6 @@
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::Ranged;
use ruff_python_semantic::analyze::{branch_detection, visibility};
use ruff_python_semantic::analyze::visibility;
use ruff_python_semantic::{Binding, BindingKind, ScopeKind};
use crate::checkers::ast::Checker;
@@ -30,6 +30,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
Rule::UnusedPrivateTypedDict,
Rule::UnusedStaticMethodArgument,
Rule::UnusedVariable,
Rule::NoSelfUse,
]) {
return;
}
@@ -112,11 +113,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
// If the bindings are in different forks, abort.
if shadowed.source.map_or(true, |left| {
binding.source.map_or(true, |right| {
branch_detection::different_forks(
left,
right,
checker.semantic.statements(),
)
checker.semantic.different_branches(left, right)
})
}) {
continue;
@@ -172,7 +169,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
continue;
}
let Some(statement_id) = shadowed.source else {
let Some(node_id) = shadowed.source else {
continue;
};
@@ -180,7 +177,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
if shadowed.kind.is_function_definition() {
if checker
.semantic
.statement(statement_id)
.statement(node_id)
.as_function_def_stmt()
.is_some_and(|function| {
visibility::is_overload(
@@ -208,11 +205,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
// If the bindings are in different forks, abort.
if shadowed.source.map_or(true, |left| {
binding.source.map_or(true, |right| {
branch_detection::different_forks(
left,
right,
checker.semantic.statements(),
)
checker.semantic.different_branches(left, right)
})
}) {
continue;
@@ -310,6 +303,12 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
pyflakes::rules::unused_import(checker, scope, &mut diagnostics);
}
}
if scope.kind.is_function() {
if checker.enabled(Rule::NoSelfUse) {
pylint::rules::no_self_use(checker, scope, &mut diagnostics);
}
}
}
checker.diagnostics.extend(diagnostics);
}

View File

@@ -432,7 +432,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
pyupgrade::rules::deprecated_unittest_alias(checker, func);
}
if checker.enabled(Rule::SuperCallWithParameters) {
pyupgrade::rules::super_call_with_parameters(checker, expr, func, args);
pyupgrade::rules::super_call_with_parameters(checker, call);
}
if checker.enabled(Rule::UnnecessaryEncodeUTF8) {
pyupgrade::rules::unnecessary_encode_utf8(checker, call);
@@ -531,6 +531,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryDictKwargs) {
flake8_pie::rules::unnecessary_dict_kwargs(checker, expr, keywords);
}
if checker.enabled(Rule::UnnecessaryRangeStart) {
flake8_pie::rules::unnecessary_range_start(checker, call);
}
if checker.enabled(Rule::ExecBuiltin) {
flake8_bandit::rules::exec_used(checker, func);
}
@@ -1175,7 +1178,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
pylint::rules::magic_value_comparison(checker, left, comparators);
}
if checker.enabled(Rule::InDictKeys) {
flake8_simplify::rules::key_in_dict_compare(checker, expr, left, ops, comparators);
flake8_simplify::rules::key_in_dict_compare(checker, compare);
}
if checker.enabled(Rule::YodaConditions) {
flake8_simplify::rules::yoda_conditions(checker, expr, left, ops, comparators);

View File

@@ -338,9 +338,6 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::FStringDocstring) {
flake8_bugbear::rules::f_string_docstring(checker, body);
}
if checker.enabled(Rule::YieldInForLoop) {
pyupgrade::rules::yield_in_for_loop(checker, stmt);
}
if let ScopeKind::Class(class_def) = checker.semantic.current_scope().kind {
if checker.enabled(Rule::BuiltinAttributeShadowing) {
flake8_builtins::rules::builtin_method_shadowing(
@@ -467,17 +464,17 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_pyi::rules::pass_statement_stub_body(checker, body);
}
if checker.enabled(Rule::PassInClassBody) {
flake8_pyi::rules::pass_in_class_body(checker, stmt, body);
flake8_pyi::rules::pass_in_class_body(checker, class_def);
}
}
if checker.enabled(Rule::EllipsisInNonEmptyClassBody) {
flake8_pyi::rules::ellipsis_in_non_empty_class_body(checker, stmt, body);
flake8_pyi::rules::ellipsis_in_non_empty_class_body(checker, body);
}
if checker.enabled(Rule::PytestIncorrectMarkParenthesesStyle) {
flake8_pytest_style::rules::marks(checker, decorator_list);
}
if checker.enabled(Rule::DuplicateClassFieldDefinition) {
flake8_pie::rules::duplicate_class_field_definition(checker, stmt, body);
flake8_pie::rules::duplicate_class_field_definition(checker, body);
}
if checker.enabled(Rule::NonUniqueEnums) {
flake8_pie::rules::non_unique_enums(checker, stmt, body);
@@ -1142,7 +1139,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pygrep_hooks::rules::non_existent_mock_method(checker, test);
}
}
Stmt::With(with_ @ ast::StmtWith { items, body, .. }) => {
Stmt::With(with_stmt @ ast::StmtWith { items, body, .. }) => {
if checker.enabled(Rule::AssertRaisesException) {
flake8_bugbear::rules::assert_raises_exception(checker, items);
}
@@ -1152,7 +1149,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::MultipleWithStatements) {
flake8_simplify::rules::multiple_with_statements(
checker,
with_,
with_stmt,
checker.semantic.current_statement_parent(),
);
}
@@ -1171,15 +1168,21 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
perflint::rules::try_except_in_loop(checker, body);
}
}
Stmt::For(ast::StmtFor {
target,
body,
iter,
orelse,
..
}) => {
if checker.any_enabled(&[Rule::UnusedLoopControlVariable, Rule::IncorrectDictIterator])
{
Stmt::For(
for_stmt @ ast::StmtFor {
target,
body,
iter,
orelse,
is_async,
..
},
) => {
if checker.any_enabled(&[
Rule::UnusedLoopControlVariable,
Rule::IncorrectDictIterator,
Rule::YieldInForLoop,
]) {
checker.deferred.for_loops.push(checker.semantic.snapshot());
}
if checker.enabled(Rule::LoopVariableOverridesIterator) {
@@ -1200,17 +1203,6 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::IterationOverSet) {
pylint::rules::iteration_over_set(checker, iter);
}
if stmt.is_for_stmt() {
if checker.enabled(Rule::ReimplementedBuiltin) {
flake8_simplify::rules::convert_for_loop_to_any_all(checker, stmt);
}
if checker.enabled(Rule::InDictKeys) {
flake8_simplify::rules::key_in_dict_for(checker, target, iter);
}
if checker.enabled(Rule::TryExceptInLoop) {
perflint::rules::try_except_in_loop(checker, body);
}
}
if checker.enabled(Rule::ManualListComprehension) {
perflint::rules::manual_list_comprehension(checker, target, body);
}
@@ -1220,6 +1212,17 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryListCast) {
perflint::rules::unnecessary_list_cast(checker, iter);
}
if !is_async {
if checker.enabled(Rule::ReimplementedBuiltin) {
flake8_simplify::rules::convert_for_loop_to_any_all(checker, stmt);
}
if checker.enabled(Rule::InDictKeys) {
flake8_simplify::rules::key_in_dict_for(checker, for_stmt);
}
if checker.enabled(Rule::TryExceptInLoop) {
perflint::rules::try_except_in_loop(checker, body);
}
}
}
Stmt::Try(ast::StmtTry {
body,

View File

@@ -32,8 +32,8 @@ use itertools::Itertools;
use log::error;
use ruff_python_ast::{
self as ast, Arguments, Comprehension, Constant, ElifElseClause, ExceptHandler, Expr,
ExprContext, Keyword, Parameter, ParameterWithDefault, Parameters, Pattern, Ranged, Stmt,
Suite, UnaryOp,
ExprContext, Keyword, MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Ranged,
Stmt, Suite, UnaryOp,
};
use ruff_text_size::{TextRange, TextSize};
@@ -193,18 +193,22 @@ impl<'a> Checker<'a> {
}
}
/// Returns the [`IsolationLevel`] for fixes in the current context.
/// Returns the [`IsolationLevel`] to isolate fixes for the current statement.
///
/// The primary use-case for fix isolation is to ensure that we don't delete all statements
/// in a given indented block, which would cause a syntax error. We therefore need to ensure
/// that we delete at most one statement per indented block per fixer pass. Fix isolation should
/// thus be applied whenever we delete a statement, but can otherwise be omitted.
pub(crate) fn isolation(&self, parent: Option<&Stmt>) -> IsolationLevel {
parent
.and_then(|stmt| self.semantic.statement_id(stmt))
.map_or(IsolationLevel::default(), |node_id| {
IsolationLevel::Group(node_id.into())
})
pub(crate) fn statement_isolation(&self) -> IsolationLevel {
IsolationLevel::Group(self.semantic.current_statement_id().into())
}
/// Returns the [`IsolationLevel`] to isolate fixes in the current statement's parent.
pub(crate) fn parent_isolation(&self) -> IsolationLevel {
self.semantic
.current_statement_parent_id()
.map(|node_id| IsolationLevel::Group(node_id.into()))
.unwrap_or_default()
}
/// The [`Locator`] for the current file, which enables extraction of source code from byte
@@ -263,7 +267,7 @@ where
{
fn visit_stmt(&mut self, stmt: &'b Stmt) {
// Step 0: Pre-processing
self.semantic.push_statement(stmt);
self.semantic.push_node(stmt);
// Track whether we've seen docstrings, non-imports, etc.
match stmt {
@@ -619,16 +623,28 @@ where
}
}
// Iterate over the `body`, then the `handlers`, then the `orelse`, then the
// `finalbody`, but treat the body and the `orelse` as a single branch for
// flow analysis purposes.
let branch = self.semantic.push_branch();
self.semantic.handled_exceptions.push(handled_exceptions);
self.visit_body(body);
self.semantic.handled_exceptions.pop();
self.semantic.pop_branch();
for except_handler in handlers {
self.semantic.push_branch();
self.visit_except_handler(except_handler);
self.semantic.pop_branch();
}
self.semantic.set_branch(branch);
self.visit_body(orelse);
self.semantic.pop_branch();
self.semantic.push_branch();
self.visit_body(finalbody);
self.semantic.pop_branch();
}
Stmt::AnnAssign(ast::StmtAnnAssign {
target,
@@ -708,6 +724,7 @@ where
) => {
self.visit_boolean_test(test);
self.semantic.push_branch();
if typing::is_type_checking_block(stmt_if, &self.semantic) {
if self.semantic.at_top_level() {
self.importer.visit_type_checking_block(stmt);
@@ -716,9 +733,12 @@ where
} else {
self.visit_body(body);
}
self.semantic.pop_branch();
for clause in elif_else_clauses {
self.semantic.push_branch();
self.visit_elif_else_clause(clause);
self.semantic.pop_branch();
}
}
_ => visitor::walk_stmt(self, stmt),
@@ -759,7 +779,7 @@ where
analyze::statement(stmt, self);
self.semantic.flags = flags_snapshot;
self.semantic.pop_statement();
self.semantic.pop_node();
}
fn visit_annotation(&mut self, expr: &'b Expr) {
@@ -795,7 +815,7 @@ where
return;
}
self.semantic.push_expression(expr);
self.semantic.push_node(expr);
// Store the flags prior to any further descent, so that we can restore them after visiting
// the node.
@@ -874,18 +894,20 @@ where
},
) => {
// Visit the default arguments, but avoid the body, which will be deferred.
for ParameterWithDefault {
default,
parameter: _,
range: _,
} in parameters
.posonlyargs
.iter()
.chain(&parameters.args)
.chain(&parameters.kwonlyargs)
{
if let Some(expr) = &default {
self.visit_expr(expr);
if let Some(parameters) = parameters {
for ParameterWithDefault {
default,
parameter: _,
range: _,
} in parameters
.posonlyargs
.iter()
.chain(&parameters.args)
.chain(&parameters.kwonlyargs)
{
if let Some(expr) = &default {
self.visit_expr(expr);
}
}
}
@@ -1213,7 +1235,7 @@ where
analyze::expression(expr, self);
self.semantic.flags = flags_snapshot;
self.semantic.pop_expression();
self.semantic.pop_node();
}
fn visit_except_handler(&mut self, except_handler: &'b ExceptHandler) {
@@ -1351,6 +1373,17 @@ where
}
}
fn visit_match_case(&mut self, match_case: &'b MatchCase) {
self.visit_pattern(&match_case.pattern);
if let Some(expr) = &match_case.guard {
self.visit_expr(expr);
}
self.semantic.push_branch();
self.visit_body(&match_case.body);
self.semantic.pop_branch();
}
fn visit_type_param(&mut self, type_param: &'b ast::TypeParam) {
// Step 1: Binding
match type_param {
@@ -1834,7 +1867,9 @@ impl<'a> Checker<'a> {
range: _,
}) = expr
{
self.visit_parameters(parameters);
if let Some(parameters) = parameters {
self.visit_parameters(parameters);
}
self.visit_expr(body);
} else {
unreachable!("Expected Expr::Lambda");

View File

@@ -216,6 +216,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R1722") => (RuleGroup::Unspecified, rules::pylint::rules::SysExitAlias),
(Pylint, "R2004") => (RuleGroup::Unspecified, rules::pylint::rules::MagicValueComparison),
(Pylint, "R5501") => (RuleGroup::Unspecified, rules::pylint::rules::CollapsibleElseIf),
(Pylint, "R6301") => (RuleGroup::Nursery, rules::pylint::rules::NoSelfUse),
(Pylint, "W0120") => (RuleGroup::Unspecified, rules::pylint::rules::UselessElseOnLoop),
(Pylint, "W0127") => (RuleGroup::Unspecified, rules::pylint::rules::SelfAssigningVariable),
(Pylint, "W0129") => (RuleGroup::Unspecified, rules::pylint::rules::AssertOnStringLiteral),
@@ -707,6 +708,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Pie, "800") => (RuleGroup::Unspecified, rules::flake8_pie::rules::UnnecessarySpread),
(Flake8Pie, "804") => (RuleGroup::Unspecified, rules::flake8_pie::rules::UnnecessaryDictKwargs),
(Flake8Pie, "807") => (RuleGroup::Unspecified, rules::flake8_pie::rules::ReimplementedListBuiltin),
(Flake8Pie, "808") => (RuleGroup::Unspecified, rules::flake8_pie::rules::UnnecessaryRangeStart),
(Flake8Pie, "810") => (RuleGroup::Unspecified, rules::flake8_pie::rules::MultipleStartsEndsWith),
// flake8-commas

View File

@@ -405,7 +405,7 @@ y = 2
z = x + 1";
assert_eq!(
noqa_mappings(contents),
NoqaMapping::from_iter([TextRange::new(TextSize::from(0), TextSize::from(22)),])
NoqaMapping::from_iter([TextRange::new(TextSize::from(0), TextSize::from(22))])
);
let contents = "x = 1

View File

@@ -301,12 +301,14 @@ impl<'a> Importer<'a> {
}
if let Stmt::ImportFrom(ast::StmtImportFrom {
module: name,
names,
level,
..
range: _,
}) = stmt
{
if level.map_or(true, |level| level.to_u32() == 0)
&& name.as_ref().is_some_and(|name| name == module)
&& names.iter().all(|alias| alias.name.as_str() != "*")
{
import_from = Some(*stmt);
}

View File

@@ -33,7 +33,7 @@ pub fn round_trip(path: &Path) -> anyhow::Result<String> {
err
)
})?;
let code = notebook.content().to_string();
let code = notebook.source_code().to_string();
notebook.update_cell_content(&code);
let mut writer = Vec::new();
notebook.write_inner(&mut writer)?;
@@ -103,7 +103,7 @@ pub struct Notebook {
/// separated by a newline and a trailing newline. The trailing newline
/// is added to make sure that each cell ends with a newline which will
/// be removed when updating the cell content.
content: String,
source_code: String,
/// The index of the notebook. This is used to map between the concatenated
/// source code and the original notebook.
index: OnceCell<JupyterIndex>,
@@ -132,8 +132,8 @@ impl Notebook {
}
/// Read the Jupyter Notebook from its JSON string.
pub fn from_contents(contents: &str) -> Result<Self, Box<Diagnostic>> {
Self::from_reader(Cursor::new(contents))
pub fn from_source_code(source_code: &str) -> Result<Self, Box<Diagnostic>> {
Self::from_reader(Cursor::new(source_code))
}
/// Read a Jupyter Notebook from a [`Read`] implementor.
@@ -268,7 +268,7 @@ impl Notebook {
// The additional newline at the end is to maintain consistency for
// all cells. These newlines will be removed before updating the
source code with the transformed content. Refer to `update_cell_content`.
content: contents.join("\n") + "\n",
source_code: contents.join("\n") + "\n",
cell_offsets,
valid_code_cells,
trailing_newline,
@@ -404,8 +404,8 @@ impl Notebook {
/// Return the notebook content.
///
/// This is the concatenation of all Python code cells.
pub(crate) fn content(&self) -> &str {
&self.content
pub fn source_code(&self) -> &str {
&self.source_code
}
/// Return the Jupyter notebook index.
@@ -424,12 +424,13 @@ impl Notebook {
}
/// Update the notebook with the given sourcemap and transformed content.
pub(crate) fn update(&mut self, source_map: &SourceMap, transformed: &str) {
pub(crate) fn update(&mut self, source_map: &SourceMap, transformed: String) {
// Cell offsets must be updated before updating the cell content as
// it depends on the offsets to extract the cell content.
self.index.take();
self.update_cell_offsets(source_map);
self.update_cell_content(transformed);
self.content = transformed.to_string();
self.update_cell_content(&transformed);
self.source_code = transformed;
}
/// Return a slice of [`Cell`] in the Jupyter notebook.
@@ -476,14 +477,16 @@ mod tests {
use crate::jupyter::schema::Cell;
use crate::jupyter::Notebook;
use crate::registry::Rule;
use crate::test::{read_jupyter_notebook, test_notebook_path, test_resource_path};
use crate::test::{
read_jupyter_notebook, test_notebook_path, test_resource_path, TestedNotebook,
};
use crate::{assert_messages, settings};
/// Read a Jupyter cell from the `resources/test/fixtures/jupyter/cell` directory.
fn read_jupyter_cell(path: impl AsRef<Path>) -> Result<Cell> {
let path = test_resource_path("fixtures/jupyter/cell").join(path);
let contents = std::fs::read_to_string(path)?;
Ok(serde_json::from_str(&contents)?)
let source_code = std::fs::read_to_string(path)?;
Ok(serde_json::from_str(&source_code)?)
}
#[test]
@@ -536,7 +539,7 @@ mod tests {
fn test_concat_notebook() -> Result<()> {
let notebook = read_jupyter_notebook(Path::new("valid.ipynb"))?;
assert_eq!(
notebook.content,
notebook.source_code,
r#"def unused_variable():
x = 1
y = 2
@@ -578,49 +581,64 @@ print("after empty cells")
#[test]
fn test_import_sorting() -> Result<()> {
let path = "isort.ipynb".to_string();
let (diagnostics, source_kind, _) = test_notebook_path(
let TestedNotebook {
messages,
source_notebook,
..
} = test_notebook_path(
&path,
Path::new("isort_expected.ipynb"),
&settings::Settings::for_rule(Rule::UnsortedImports),
)?;
assert_messages!(diagnostics, path, source_kind);
assert_messages!(messages, path, source_notebook);
Ok(())
}
#[test]
fn test_ipy_escape_command() -> Result<()> {
let path = "ipy_escape_command.ipynb".to_string();
let (diagnostics, source_kind, _) = test_notebook_path(
let TestedNotebook {
messages,
source_notebook,
..
} = test_notebook_path(
&path,
Path::new("ipy_escape_command_expected.ipynb"),
&settings::Settings::for_rule(Rule::UnusedImport),
)?;
assert_messages!(diagnostics, path, source_kind);
assert_messages!(messages, path, source_notebook);
Ok(())
}
#[test]
fn test_unused_variable() -> Result<()> {
let path = "unused_variable.ipynb".to_string();
let (diagnostics, source_kind, _) = test_notebook_path(
let TestedNotebook {
messages,
source_notebook,
..
} = test_notebook_path(
&path,
Path::new("unused_variable_expected.ipynb"),
&settings::Settings::for_rule(Rule::UnusedVariable),
)?;
assert_messages!(diagnostics, path, source_kind);
assert_messages!(messages, path, source_notebook);
Ok(())
}
#[test]
fn test_json_consistency() -> Result<()> {
let path = "before_fix.ipynb".to_string();
let (_, _, source_kind) = test_notebook_path(
let TestedNotebook {
linted_notebook: fixed_notebook,
..
} = test_notebook_path(
path,
Path::new("after_fix.ipynb"),
&settings::Settings::for_rule(Rule::UnusedImport),
)?;
let mut writer = Vec::new();
source_kind.expect_jupyter().write_inner(&mut writer)?;
fixed_notebook.write_inner(&mut writer)?;
let actual = String::from_utf8(writer)?;
let expected =
std::fs::read_to_string(test_resource_path("fixtures/jupyter/after_fix.ipynb"))?;

View File

@@ -63,7 +63,7 @@ pub struct FixerResult<'a> {
/// The result returned by the linter, after applying any fixes.
pub result: LinterResult<(Vec<Message>, Option<ImportMap>)>,
/// The resulting source code, after applying any fixes.
pub transformed: Cow<'a, str>,
pub transformed: Cow<'a, SourceKind>,
/// The number of fixes applied for each [`Rule`].
pub fixed: FixTable,
}
@@ -335,19 +335,19 @@ pub fn add_noqa_to_path(path: &Path, package: Option<&Path>, settings: &Settings
/// Generate a [`Message`] for each [`Diagnostic`] triggered by the given source
/// code.
pub fn lint_only(
contents: &str,
path: &Path,
package: Option<&Path>,
settings: &Settings,
noqa: flags::Noqa,
source_kind: Option<&SourceKind>,
source_kind: &SourceKind,
source_type: PySourceType,
) -> LinterResult<(Vec<Message>, Option<ImportMap>)> {
// Tokenize once.
let tokens: Vec<LexResult> = ruff_python_parser::tokenize(contents, source_type.as_mode());
let tokens: Vec<LexResult> =
ruff_python_parser::tokenize(source_kind.source_code(), source_type.as_mode());
// Map row and column locations to byte slices (lazily).
let locator = Locator::new(contents);
let locator = Locator::new(source_kind.source_code());
// Detect the current code style (lazily).
let stylist = Stylist::from_tokens(&tokens, &locator);
@@ -374,7 +374,7 @@ pub fn lint_only(
&directives,
settings,
noqa,
source_kind,
Some(source_kind),
source_type,
);
@@ -416,15 +416,14 @@ fn diagnostics_to_messages(
/// Generate `Diagnostic`s from source code content, iteratively autofixing
/// until stable.
pub fn lint_fix<'a>(
contents: &'a str,
path: &Path,
package: Option<&Path>,
noqa: flags::Noqa,
settings: &Settings,
source_kind: &mut SourceKind,
source_kind: &'a SourceKind,
source_type: PySourceType,
) -> Result<FixerResult<'a>> {
let mut transformed = Cow::Borrowed(contents);
let mut transformed = Cow::Borrowed(source_kind);
// Track the number of fixed errors across iterations.
let mut fixed = FxHashMap::default();
@@ -439,10 +438,10 @@ pub fn lint_fix<'a>(
loop {
// Tokenize once.
let tokens: Vec<LexResult> =
ruff_python_parser::tokenize(&transformed, source_type.as_mode());
ruff_python_parser::tokenize(transformed.source_code(), source_type.as_mode());
// Map row and column locations to byte slices (lazily).
let locator = Locator::new(&transformed);
let locator = Locator::new(transformed.source_code());
// Detect the current code style (lazily).
let stylist = Stylist::from_tokens(&tokens, &locator);
@@ -482,7 +481,7 @@ pub fn lint_fix<'a>(
if parseable && result.error.is_some() {
report_autofix_syntax_error(
path,
&transformed,
transformed.source_code(),
&result.error.unwrap(),
fixed.keys().copied(),
);
@@ -503,12 +502,7 @@ pub fn lint_fix<'a>(
*fixed.entry(rule).or_default() += count;
}
if let SourceKind::Jupyter(notebook) = source_kind {
notebook.update(&source_map, &fixed_contents);
}
// Store the fixed contents.
transformed = Cow::Owned(fixed_contents);
transformed = Cow::Owned(transformed.updated(fixed_contents, &source_map));
// Increment the iteration count.
iterations += 1;
@@ -517,7 +511,7 @@ pub fn lint_fix<'a>(
continue;
}
report_failed_to_converge_error(path, &transformed, &result.data.0);
report_failed_to_converge_error(path, transformed.source_code(), &result.data.0);
}
return Ok(FixerResult {

View File

@@ -18,7 +18,7 @@ impl Emitter for AzureEmitter {
context: &EmitterContext,
) -> anyhow::Result<()> {
for message in messages {
let location = if context.is_jupyter_notebook(message.filename()) {
let location = if context.is_notebook(message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
SourceLocation::default()

View File

@@ -20,7 +20,7 @@ impl Emitter for GithubEmitter {
) -> anyhow::Result<()> {
for message in messages {
let source_location = message.compute_start_location();
let location = if context.is_jupyter_notebook(message.filename()) {
let location = if context.is_notebook(message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
SourceLocation::default()

View File

@@ -63,7 +63,7 @@ impl Serialize for SerializedMessages<'_> {
let start_location = message.compute_start_location();
let end_location = message.compute_end_location();
let lines = if self.context.is_jupyter_notebook(message.filename()) {
let lines = if self.context.is_notebook(message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
json!({

View File

@@ -13,7 +13,6 @@ use crate::message::text::{MessageCodeFrame, RuleCodeAndBody};
use crate::message::{
group_messages_by_filename, Emitter, EmitterContext, Message, MessageWithLocation,
};
use crate::source_kind::SourceKind;
#[derive(Default)]
pub struct GroupedEmitter {
@@ -66,10 +65,7 @@ impl Emitter for GroupedEmitter {
writer,
"{}",
DisplayGroupedMessage {
jupyter_index: context
.source_kind(message.filename())
.and_then(SourceKind::notebook)
.map(Notebook::index),
jupyter_index: context.notebook(message.filename()).map(Notebook::index),
message,
show_fix_status: self.show_fix_status,
show_source: self.show_source,

View File

@@ -45,7 +45,7 @@ impl Emitter for JunitEmitter {
} = message;
let mut status = TestCaseStatus::non_success(NonSuccessKind::Failure);
status.set_message(message.kind.body.clone());
let location = if context.is_jupyter_notebook(message.filename()) {
let location = if context.is_notebook(message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
SourceLocation::default()

View File

@@ -3,10 +3,8 @@ use std::collections::BTreeMap;
use std::io::Write;
use std::ops::Deref;
use ruff_text_size::{TextRange, TextSize};
use rustc_hash::FxHashMap;
use crate::source_kind::SourceKind;
pub use azure::AzureEmitter;
pub use github::GithubEmitter;
pub use gitlab::GitlabEmitter;
@@ -17,8 +15,11 @@ pub use junit::JunitEmitter;
pub use pylint::PylintEmitter;
use ruff_diagnostics::{Diagnostic, DiagnosticKind, Fix};
use ruff_source_file::{SourceFile, SourceLocation};
use ruff_text_size::{TextRange, TextSize};
pub use text::TextEmitter;
use crate::jupyter::Notebook;
mod azure;
mod diff;
mod github;
@@ -129,33 +130,31 @@ pub trait Emitter {
/// Context passed to [`Emitter`].
pub struct EmitterContext<'a> {
source_kind: &'a FxHashMap<String, SourceKind>,
notebooks: &'a FxHashMap<String, Notebook>,
}
impl<'a> EmitterContext<'a> {
pub fn new(source_kind: &'a FxHashMap<String, SourceKind>) -> Self {
Self { source_kind }
pub fn new(notebooks: &'a FxHashMap<String, Notebook>) -> Self {
Self { notebooks }
}
/// Tests if the file with `name` is a Jupyter notebook.
pub fn is_jupyter_notebook(&self, name: &str) -> bool {
self.source_kind
.get(name)
.is_some_and(SourceKind::is_jupyter)
pub fn is_notebook(&self, name: &str) -> bool {
self.notebooks.contains_key(name)
}
pub fn source_kind(&self, name: &str) -> Option<&SourceKind> {
self.source_kind.get(name)
pub fn notebook(&self, name: &str) -> Option<&Notebook> {
self.notebooks.get(name)
}
}
#[cfg(test)]
mod tests {
use ruff_text_size::{TextRange, TextSize};
use rustc_hash::FxHashMap;
use ruff_diagnostics::{Diagnostic, DiagnosticKind, Edit, Fix};
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::{TextRange, TextSize};
use crate::message::{Emitter, EmitterContext, Message};

View File

@@ -19,7 +19,7 @@ impl Emitter for PylintEmitter {
context: &EmitterContext,
) -> anyhow::Result<()> {
for message in messages {
let row = if context.is_jupyter_notebook(message.filename()) {
let row = if context.is_notebook(message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
OneIndexed::from_zero_indexed(0)

View File

@@ -6,9 +6,9 @@ use annotate_snippets::display_list::{DisplayList, FormatOptions};
use annotate_snippets::snippet::{Annotation, AnnotationType, Slice, Snippet, SourceAnnotation};
use bitflags::bitflags;
use colored::Colorize;
use ruff_text_size::{TextRange, TextSize};
use ruff_source_file::{OneIndexed, SourceLocation};
use ruff_text_size::{TextRange, TextSize};
use crate::fs::relativize_path;
use crate::jupyter::{JupyterIndex, Notebook};
@@ -16,7 +16,6 @@ use crate::line_width::{LineWidth, TabSize};
use crate::message::diff::Diff;
use crate::message::{Emitter, EmitterContext, Message};
use crate::registry::AsRule;
use crate::source_kind::SourceKind;
bitflags! {
#[derive(Default)]
@@ -72,10 +71,7 @@ impl Emitter for TextEmitter {
)?;
let start_location = message.compute_start_location();
let jupyter_index = context
.source_kind(message.filename())
.and_then(SourceKind::notebook)
.map(Notebook::index);
let jupyter_index = context.notebook(message.filename()).map(Notebook::index);
// Check if we're working on a Jupyter notebook and, if so, translate positions within the cell accordingly
let diagnostic_location = if let Some(jupyter_index) = jupyter_index {

View File

@@ -4,7 +4,8 @@ use ruff_python_ast::{self as ast, Constant, Expr};
pub(super) fn is_allowed_func_call(name: &str) -> bool {
matches!(
name,
"append"
"__setattr__"
| "append"
| "assertEqual"
| "assertEquals"
| "assertNotEqual"
@@ -26,13 +27,13 @@ pub(super) fn is_allowed_func_call(name: &str) -> bool {
| "int"
| "is_"
| "is_not"
| "next"
| "param"
| "pop"
| "remove"
| "set_blocking"
| "set_enabled"
| "setattr"
| "__setattr__"
| "setdefault"
| "str"
)

View File

@@ -81,12 +81,12 @@ FBT.py:19:5: FBT001 Boolean-typed positional argument in function definition
21 | kwonly_nonvalued_nohint,
|
FBT.py:86:19: FBT001 Boolean-typed positional argument in function definition
FBT.py:87:19: FBT001 Boolean-typed positional argument in function definition
|
85 | # FBT001: Boolean positional arg in function definition
86 | def foo(self, value: bool) -> None:
86 | # FBT001: Boolean positional arg in function definition
87 | def foo(self, value: bool) -> None:
| ^^^^^ FBT001
87 | pass
88 | pass
|

View File

@@ -90,8 +90,7 @@ impl AlwaysAutofixableViolation for DuplicateHandlerException {
#[derive_message_formats]
fn message(&self) -> String {
let DuplicateHandlerException { names } = self;
if names.len() == 1 {
let name = &names[0];
if let [name] = names.as_slice() {
format!("Exception handler with duplicate exception: `{name}`")
} else {
let names = names.iter().map(|name| format!("`{name}`")).join(", ");

View File

@@ -184,7 +184,10 @@ impl<'a> Visitor<'a> for SuspiciousVariablesVisitor<'a> {
return false;
}
if parameters.includes(&loaded.id) {
if parameters
.as_ref()
.is_some_and(|parameters| parameters.includes(&loaded.id))
{
return false;
}

View File

@@ -76,17 +76,20 @@ where
range: _,
}) => {
visitor::walk_expr(self, body);
for ParameterWithDefault {
parameter,
default: _,
range: _,
} in parameters
.posonlyargs
.iter()
.chain(&parameters.args)
.chain(&parameters.kwonlyargs)
{
self.names.remove(parameter.name.as_str());
if let Some(parameters) = parameters {
for ParameterWithDefault {
parameter,
default: _,
range: _,
} in parameters
.posonlyargs
.iter()
.chain(&parameters.args)
.chain(&parameters.kwonlyargs)
{
self.names.remove(parameter.name.as_str());
}
}
}
_ => visitor::walk_expr(self, expr),

View File

@@ -1,9 +1,9 @@
use ruff_python_ast::{self as ast, Expr, Ranged, Stmt};
use rustc_hash::FxHashMap;
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{self as ast, Expr, Ranged};
use ruff_python_ast::{helpers, visitor};
use crate::checkers::ast::Checker;
@@ -105,16 +105,16 @@ where
}
/// B007
pub(crate) fn unused_loop_control_variable(checker: &mut Checker, target: &Expr, body: &[Stmt]) {
pub(crate) fn unused_loop_control_variable(checker: &mut Checker, stmt_for: &ast::StmtFor) {
let control_names = {
let mut finder = NameFinder::new();
finder.visit_expr(target);
finder.visit_expr(stmt_for.target.as_ref());
finder.names
};
let used_names = {
let mut finder = NameFinder::new();
for stmt in body {
for stmt in &stmt_for.body {
finder.visit_stmt(stmt);
}
finder.names
@@ -132,9 +132,10 @@ pub(crate) fn unused_loop_control_variable(checker: &mut Checker, target: &Expr,
}
// Avoid fixing any variables that _may_ be used, but undetectably so.
let certainty = Certainty::from(!helpers::uses_magic_variable_access(body, |id| {
checker.semantic().is_builtin(id)
}));
let certainty =
Certainty::from(!helpers::uses_magic_variable_access(&stmt_for.body, |id| {
checker.semantic().is_builtin(id)
}));
// Attempt to rename the variable by prepending an underscore, but avoid
// applying the fix if doing so wouldn't actually cause us to ignore the

View File
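For context, B007 flags a `for`-loop control variable that is never read in the loop body; the conventional fix, which the rule attempts automatically, is an underscore prefix. A minimal illustration (the variable names are mine):

```python
# B007 would flag `i`: the loop body never reads it.
total = 0
for i in range(3):
    total += 1

# The conventional fix: an underscore prefix signals "intentionally unused".
for _i in range(3):
    total += 1

print(total)  # 6
```

The `uses_magic_variable_access` guard above exists because constructs like `locals()` can read the variable without naming it, making the rename unsafe.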

@@ -765,13 +765,14 @@ pub(crate) fn fix_unnecessary_double_cast_or_process(
outer_call.args = match outer_call.args.split_first() {
Some((first, rest)) => {
let inner_call = match_call(&first.value)?;
if let Some(iterable) = inner_call.args.first() {
let mut args = vec![iterable.clone()];
args.extend_from_slice(rest);
args
} else {
bail!("Expected at least one argument in inner function call");
}
inner_call
.args
.iter()
.filter(|argument| argument.keyword.is_none())
.take(1)
.chain(rest.iter())
.cloned()
.collect::<Vec<_>>()
}
None => bail!("Expected at least one argument in outer function call"),
};
@@ -1044,8 +1045,26 @@ pub(crate) fn fix_unnecessary_comprehension_any_all(
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let Expression::ListComp(list_comp) = &call.args[0].value else {
bail!("Expected Expression::ListComp");
let (whitespace_after, whitespace_before, elt, for_in, lpar, rpar) = match &call.args[0].value {
Expression::ListComp(list_comp) => (
&list_comp.lbracket.whitespace_after,
&list_comp.rbracket.whitespace_before,
&list_comp.elt,
&list_comp.for_in,
&list_comp.lpar,
&list_comp.rpar,
),
Expression::SetComp(set_comp) => (
&set_comp.lbrace.whitespace_after,
&set_comp.rbrace.whitespace_before,
&set_comp.elt,
&set_comp.for_in,
&set_comp.lpar,
&set_comp.rpar,
),
_ => {
bail!("Expected Expression::ListComp | Expression::SetComp");
}
};
let mut new_empty_lines = vec![];
@@ -1054,7 +1073,7 @@ pub(crate) fn fix_unnecessary_comprehension_any_all(
first_line,
empty_lines,
..
}) = &list_comp.lbracket.whitespace_after
}) = &whitespace_after
{
// If there's a comment on the line after the opening bracket, we need
// to preserve it. The way we do this is by adding a new empty line
@@ -1143,7 +1162,7 @@ pub(crate) fn fix_unnecessary_comprehension_any_all(
..
},
..
}) = &list_comp.rbracket.whitespace_before
}) = &whitespace_before
{
Some(format!("{}{}", whitespace.0, comment.0))
} else {
@@ -1151,10 +1170,10 @@ pub(crate) fn fix_unnecessary_comprehension_any_all(
};
call.args[0].value = Expression::GeneratorExp(Box::new(GeneratorExp {
elt: list_comp.elt.clone(),
for_in: list_comp.for_in.clone(),
lpar: list_comp.lpar.clone(),
rpar: list_comp.rpar.clone(),
elt: elt.clone(),
for_in: for_in.clone(),
lpar: lpar.clone(),
rpar: rpar.clone(),
}));
let whitespace_after_arg = match &call.args[0].comma {

View File

@@ -69,30 +69,31 @@ pub(crate) fn unnecessary_comprehension_any_all(
let Expr::Name(ast::ExprName { id, .. }) = func else {
return;
};
if (matches!(id.as_str(), "all" | "any")) && args.len() == 1 {
let (Expr::ListComp(ast::ExprListComp { elt, .. })
| Expr::SetComp(ast::ExprSetComp { elt, .. })) = &args[0]
else {
return;
};
if contains_await(elt) {
return;
}
if !checker.semantic().is_builtin(id) {
return;
}
let mut diagnostic = Diagnostic::new(UnnecessaryComprehensionAnyAll, args[0].range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
fixes::fix_unnecessary_comprehension_any_all(
checker.locator(),
checker.stylist(),
expr,
)
});
}
checker.diagnostics.push(diagnostic);
if !matches!(id.as_str(), "all" | "any") {
return;
}
let [arg] = args else {
return;
};
let (Expr::ListComp(ast::ExprListComp { elt, .. })
| Expr::SetComp(ast::ExprSetComp { elt, .. })) = arg
else {
return;
};
if contains_await(elt) {
return;
}
if !checker.semantic().is_builtin(id) {
return;
}
let mut diagnostic = Diagnostic::new(UnnecessaryComprehensionAnyAll, arg.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
fixes::fix_unnecessary_comprehension_any_all(checker.locator(), checker.stylist(), expr)
});
}
checker.diagnostics.push(diagnostic);
}
/// Return `true` if the [`Expr`] contains an `await` expression.

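The C419 rewrite above turns `any([... for ...])` into `any(... for ...)`. Beyond style, the generator form lets `any()` short-circuit, whereas a list comprehension evaluates every element before `any()` sees the first one. A small sketch (the tracking helper is illustrative):

```python
evaluated = []

def check(x):
    evaluated.append(x)
    return x > 0

# List comprehension: all three elements are evaluated up front.
any([check(x) for x in [1, 2, 3]])
list_count = len(evaluated)

# Generator expression: any() stops after the first truthy result.
evaluated.clear()
any(check(x) for x in [1, 2, 3])
gen_count = len(evaluated)

print(list_count, gen_count)  # 3 1
```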
View File

@@ -103,7 +103,10 @@ pub(crate) fn unnecessary_map(
return;
};
if late_binding(parameters, body) {
if parameters
.as_ref()
.is_some_and(|parameters| late_binding(parameters, body))
{
return;
}
}
@@ -134,7 +137,10 @@ pub(crate) fn unnecessary_map(
return;
};
if late_binding(parameters, body) {
if parameters
.as_ref()
.is_some_and(|parameters| late_binding(parameters, body))
{
return;
}
}
@@ -171,7 +177,10 @@ pub(crate) fn unnecessary_map(
return;
}
if late_binding(parameters, body) {
if parameters
.as_ref()
.is_some_and(|parameters| late_binding(parameters, body))
{
return;
}
}
@@ -240,7 +249,7 @@ struct LateBindingVisitor<'a> {
/// The arguments to the current lambda.
parameters: &'a Parameters,
/// The arguments to any lambdas within the current lambda body.
lambdas: Vec<&'a Parameters>,
lambdas: Vec<Option<&'a Parameters>>,
/// Whether any names within the current lambda body are late-bound within nested lambdas.
late_bound: bool,
}
@@ -261,7 +270,7 @@ impl<'a> Visitor<'a> for LateBindingVisitor<'a> {
fn visit_expr(&mut self, expr: &'a Expr) {
match expr {
Expr::Lambda(ast::ExprLambda { parameters, .. }) => {
self.lambdas.push(parameters);
self.lambdas.push(parameters.as_deref());
visitor::walk_expr(self, expr);
self.lambdas.pop();
}
@@ -275,11 +284,11 @@ impl<'a> Visitor<'a> for LateBindingVisitor<'a> {
// If the name is defined in the current lambda...
if self.parameters.includes(id) {
// And isn't overridden by any nested lambdas...
if !self
.lambdas
.iter()
.any(|parameters| parameters.includes(id))
{
if !self.lambdas.iter().any(|parameters| {
parameters
.as_ref()
.is_some_and(|parameters| parameters.includes(id))
}) {
// Then it's late-bound.
self.late_bound = true;
}

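The `LateBindingVisitor` changes above guard the `map()`-to-comprehension rewrite against Python's late binding: a free variable inside a nested lambda is resolved when the lambda is *called*, not when it is defined. A standalone sketch of the behavior being detected:

```python
# Late binding: every lambda closes over the same `i`, which is 2 by call time.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2]

# Binding `i` as a default parameter captures each value at definition time.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]
```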
View File

@@ -365,8 +365,8 @@ C414.py:19:1: C414 [*] Unnecessary `list` call within `tuple()`
23 | | )
24 | | )
| |_^ C414
25 |
26 | # Nested sorts with differing keyword arguments. Not flagged.
25 | set(set())
26 | set(list())
|
= help: Remove the inner `list` call
@@ -380,8 +380,91 @@ C414.py:19:1: C414 [*] Unnecessary `list` call within `tuple()`
22 21 | "o"]
23 22 | )
24 |-)
25 23 |
26 24 | # Nested sorts with differing keyword arguments. Not flagged.
27 25 | sorted(sorted(x, key=lambda y: y))
25 23 | set(set())
26 24 | set(list())
27 25 | set(tuple())
C414.py:25:1: C414 [*] Unnecessary `set` call within `set()`
|
23 | )
24 | )
25 | set(set())
| ^^^^^^^^^^ C414
26 | set(list())
27 | set(tuple())
|
= help: Remove the inner `set` call
Suggested fix
22 22 | "o"]
23 23 | )
24 24 | )
25 |-set(set())
25 |+set()
26 26 | set(list())
27 27 | set(tuple())
28 28 | sorted(reversed())
C414.py:26:1: C414 [*] Unnecessary `list` call within `set()`
|
24 | )
25 | set(set())
26 | set(list())
| ^^^^^^^^^^^ C414
27 | set(tuple())
28 | sorted(reversed())
|
= help: Remove the inner `list` call
Suggested fix
23 23 | )
24 24 | )
25 25 | set(set())
26 |-set(list())
26 |+set()
27 27 | set(tuple())
28 28 | sorted(reversed())
29 29 |
C414.py:27:1: C414 [*] Unnecessary `tuple` call within `set()`
|
25 | set(set())
26 | set(list())
27 | set(tuple())
| ^^^^^^^^^^^^ C414
28 | sorted(reversed())
|
= help: Remove the inner `tuple` call
Suggested fix
24 24 | )
25 25 | set(set())
26 26 | set(list())
27 |-set(tuple())
27 |+set()
28 28 | sorted(reversed())
29 29 |
30 30 | # Nested sorts with differing keyword arguments. Not flagged.
C414.py:28:1: C414 [*] Unnecessary `reversed` call within `sorted()`
|
26 | set(list())
27 | set(tuple())
28 | sorted(reversed())
| ^^^^^^^^^^^^^^^^^^ C414
29 |
30 | # Nested sorts with differing keyword arguments. Not flagged.
|
= help: Remove the inner `reversed` call
Suggested fix
25 25 | set(set())
26 26 | set(list())
27 27 | set(tuple())
28 |-sorted(reversed())
28 |+sorted()
29 29 |
30 30 | # Nested sorts with differing keyword arguments. Not flagged.
31 31 | sorted(sorted(x, key=lambda y: y))

View File

@@ -77,7 +77,7 @@ C419.py:7:5: C419 [*] Unnecessary list comprehension.
9 9 | any({x.id for x in bar})
10 10 |
C419.py:9:5: C419 Unnecessary list comprehension.
C419.py:9:5: C419 [*] Unnecessary list comprehension.
|
7 | [x.id for x in bar], # second comment
8 | ) # third comment
@@ -88,6 +88,16 @@ C419.py:9:5: C419 Unnecessary list comprehension.
|
= help: Remove unnecessary list comprehension
Suggested fix
6 6 | all( # first comment
7 7 | [x.id for x in bar], # second comment
8 8 | ) # third comment
9 |-any({x.id for x in bar})
9 |+any(x.id for x in bar)
10 10 |
11 11 | # OK
12 12 | all(x.id for x in bar)
C419.py:24:5: C419 [*] Unnecessary list comprehension.
|
22 | # Special comment handling

View File

@@ -6,6 +6,43 @@ use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for usage of `datetime.date.fromtimestamp()`.
///
/// ## Why is this bad?
/// Python datetime objects can be naive or timezone-aware. While an aware
/// object represents a specific moment in time, a naive object does not
/// contain enough information to unambiguously locate itself relative to other
/// datetime objects. Since this can lead to errors, it is recommended to
/// always use timezone-aware objects.
///
/// `datetime.date.fromtimestamp(ts)` returns a naive `date` object.
/// Instead, use `datetime.datetime.fromtimestamp(ts, tz=)` to return a
/// timezone-aware object.
///
/// ## Example
/// ```python
/// import datetime
///
/// datetime.date.fromtimestamp(946684800)
/// ```
///
/// Use instead:
/// ```python
/// import datetime
///
/// datetime.datetime.fromtimestamp(946684800, tz=datetime.timezone.utc)
/// ```
///
/// Or, for Python 3.11 and later:
/// ```python
/// import datetime
///
/// datetime.datetime.fromtimestamp(946684800, tz=datetime.UTC)
/// ```
///
/// ## References
/// - [Python documentation: Aware and Naive Objects](https://docs.python.org/3/library/datetime.html#aware-and-naive-objects)
#[violation]
pub struct CallDateFromtimestamp;
@@ -19,12 +56,6 @@ impl Violation for CallDateFromtimestamp {
}
}
/// Checks for `datetime.date.fromtimestamp()`. (DTZ012)
///
/// ## Why is this bad?
///
/// It uses the system local timezone.
/// Use `datetime.datetime.fromtimestamp(, tz=).date()` instead.
pub(crate) fn call_date_fromtimestamp(checker: &mut Checker, func: &Expr, location: TextRange) {
if checker
.semantic()

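The naive-versus-aware distinction the new docstrings describe is directly observable: a naive object carries no `tzinfo`, and ordering comparisons between naive and aware datetimes raise `TypeError`, since the naive one cannot be placed on the timeline.

```python
import datetime

naive = datetime.datetime.fromtimestamp(946684800)  # no tz argument: naive
aware = datetime.datetime.fromtimestamp(946684800, tz=datetime.timezone.utc)

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC

try:
    naive < aware
except TypeError as exc:
    print(f"cannot compare: {exc}")
```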
View File

@@ -6,6 +6,42 @@ use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for usage of `datetime.date.today()`.
///
/// ## Why is this bad?
/// Python datetime objects can be naive or timezone-aware. While an aware
/// object represents a specific moment in time, a naive object does not
/// contain enough information to unambiguously locate itself relative to other
/// datetime objects. Since this can lead to errors, it is recommended to
/// always use timezone-aware objects.
///
/// `datetime.date.today()` returns a naive `date` object. Instead, use
/// `datetime.datetime.now(tz=).date()` to return a timezone-aware object.
///
/// ## Example
/// ```python
/// import datetime
///
/// datetime.date.today()
/// ```
///
/// Use instead:
/// ```python
/// import datetime
///
/// datetime.datetime.now(tz=datetime.timezone.utc).date()
/// ```
///
/// Or, for Python 3.11 and later:
/// ```python
/// import datetime
///
/// datetime.datetime.now(tz=datetime.UTC).date()
/// ```
///
/// ## References
/// - [Python documentation: Aware and Naive Objects](https://docs.python.org/3/library/datetime.html#aware-and-naive-objects)
#[violation]
pub struct CallDateToday;
@@ -19,12 +55,6 @@ impl Violation for CallDateToday {
}
}
/// Checks for `datetime.date.today()`. (DTZ011)
///
/// ## Why is this bad?
///
/// It uses the system local timezone.
/// Use `datetime.datetime.now(tz=).date()` instead.
pub(crate) fn call_date_today(checker: &mut Checker, func: &Expr, location: TextRange) {
if checker
.semantic()

View File

@@ -80,13 +80,15 @@ pub(crate) fn unconventional_import_alias(
binding.range(),
);
if checker.patch(diagnostic.kind.rule()) {
if checker.semantic().is_available(expected_alias) {
diagnostic.try_set_fix(|| {
let scope = &checker.semantic().scopes[binding.scope];
let (edit, rest) =
Renamer::rename(name, expected_alias, scope, checker.semantic())?;
Ok(Fix::suggested_edits(edit, rest))
});
if !import.is_submodule_import() {
if checker.semantic().is_available(expected_alias) {
diagnostic.try_set_fix(|| {
let scope = &checker.semantic().scopes[binding.scope];
let (edit, rest) =
Renamer::rename(name, expected_alias, scope, checker.semantic())?;
Ok(Fix::suggested_edits(edit, rest))
});
}
}
}
Some(diagnostic)

View File

@@ -1,132 +1,238 @@
---
source: crates/ruff/src/rules/flake8_import_conventions/mod.rs
---
defaults.py:3:8: ICN001 `altair` should be imported as `alt`
defaults.py:6:12: ICN001 [*] `altair` should be imported as `alt`
|
1 | import math # not checked
2 |
3 | import altair # unconventional
| ^^^^^^ ICN001
4 | import matplotlib.pyplot # unconventional
5 | import numpy # unconventional
5 | def unconventional():
6 | import altair
| ^^^^^^ ICN001
7 | import matplotlib.pyplot
8 | import numpy
|
= help: Alias `altair` to `alt`
defaults.py:4:8: ICN001 `matplotlib.pyplot` should be imported as `plt`
Suggested fix
3 3 |
4 4 |
5 5 | def unconventional():
6 |- import altair
6 |+ import altair as alt
7 7 | import matplotlib.pyplot
8 8 | import numpy
9 9 | import pandas
defaults.py:7:12: ICN001 `matplotlib.pyplot` should be imported as `plt`
|
3 | import altair # unconventional
4 | import matplotlib.pyplot # unconventional
| ^^^^^^^^^^^^^^^^^ ICN001
5 | import numpy # unconventional
6 | import pandas # unconventional
5 | def unconventional():
6 | import altair
7 | import matplotlib.pyplot
| ^^^^^^^^^^^^^^^^^ ICN001
8 | import numpy
9 | import pandas
|
= help: Alias `matplotlib.pyplot` to `plt`
defaults.py:5:8: ICN001 `numpy` should be imported as `np`
|
3 | import altair # unconventional
4 | import matplotlib.pyplot # unconventional
5 | import numpy # unconventional
| ^^^^^ ICN001
6 | import pandas # unconventional
7 | import seaborn # unconventional
|
= help: Alias `numpy` to `np`
defaults.py:6:8: ICN001 `pandas` should be imported as `pd`
|
4 | import matplotlib.pyplot # unconventional
5 | import numpy # unconventional
6 | import pandas # unconventional
| ^^^^^^ ICN001
7 | import seaborn # unconventional
8 | import tkinter # unconventional
|
= help: Alias `pandas` to `pd`
defaults.py:7:8: ICN001 `seaborn` should be imported as `sns`
|
5 | import numpy # unconventional
6 | import pandas # unconventional
7 | import seaborn # unconventional
| ^^^^^^^ ICN001
8 | import tkinter # unconventional
|
= help: Alias `seaborn` to `sns`
defaults.py:8:8: ICN001 `tkinter` should be imported as `tk`
defaults.py:8:12: ICN001 [*] `numpy` should be imported as `np`
|
6 | import pandas # unconventional
7 | import seaborn # unconventional
8 | import tkinter # unconventional
| ^^^^^^^ ICN001
9 |
10 | import altair as altr # unconventional
|
= help: Alias `tkinter` to `tk`
defaults.py:10:18: ICN001 `altair` should be imported as `alt`
|
8 | import tkinter # unconventional
9 |
10 | import altair as altr # unconventional
| ^^^^ ICN001
11 | import matplotlib.pyplot as plot # unconventional
12 | import numpy as nmp # unconventional
|
= help: Alias `altair` to `alt`
defaults.py:11:29: ICN001 `matplotlib.pyplot` should be imported as `plt`
|
10 | import altair as altr # unconventional
11 | import matplotlib.pyplot as plot # unconventional
| ^^^^ ICN001
12 | import numpy as nmp # unconventional
13 | import pandas as pdas # unconventional
|
= help: Alias `matplotlib.pyplot` to `plt`
defaults.py:12:17: ICN001 `numpy` should be imported as `np`
|
10 | import altair as altr # unconventional
11 | import matplotlib.pyplot as plot # unconventional
12 | import numpy as nmp # unconventional
| ^^^ ICN001
13 | import pandas as pdas # unconventional
14 | import seaborn as sbrn # unconventional
6 | import altair
7 | import matplotlib.pyplot
8 | import numpy
| ^^^^^ ICN001
9 | import pandas
10 | import seaborn
|
= help: Alias `numpy` to `np`
defaults.py:13:18: ICN001 `pandas` should be imported as `pd`
Suggested fix
5 5 | def unconventional():
6 6 | import altair
7 7 | import matplotlib.pyplot
8 |- import numpy
8 |+ import numpy as np
9 9 | import pandas
10 10 | import seaborn
11 11 | import tkinter
defaults.py:9:12: ICN001 [*] `pandas` should be imported as `pd`
|
11 | import matplotlib.pyplot as plot # unconventional
12 | import numpy as nmp # unconventional
13 | import pandas as pdas # unconventional
| ^^^^ ICN001
14 | import seaborn as sbrn # unconventional
15 | import tkinter as tkr # unconventional
7 | import matplotlib.pyplot
8 | import numpy
9 | import pandas
| ^^^^^^ ICN001
10 | import seaborn
11 | import tkinter
|
= help: Alias `pandas` to `pd`
defaults.py:14:19: ICN001 `seaborn` should be imported as `sns`
Suggested fix
6 6 | import altair
7 7 | import matplotlib.pyplot
8 8 | import numpy
9 |- import pandas
9 |+ import pandas as pd
10 10 | import seaborn
11 11 | import tkinter
12 12 |
defaults.py:10:12: ICN001 [*] `seaborn` should be imported as `sns`
|
12 | import numpy as nmp # unconventional
13 | import pandas as pdas # unconventional
14 | import seaborn as sbrn # unconventional
| ^^^^ ICN001
15 | import tkinter as tkr # unconventional
8 | import numpy
9 | import pandas
10 | import seaborn
| ^^^^^^^ ICN001
11 | import tkinter
|
= help: Alias `seaborn` to `sns`
defaults.py:15:19: ICN001 `tkinter` should be imported as `tk`
Suggested fix
7 7 | import matplotlib.pyplot
8 8 | import numpy
9 9 | import pandas
10 |- import seaborn
10 |+ import seaborn as sns
11 11 | import tkinter
12 12 |
13 13 |
defaults.py:11:12: ICN001 [*] `tkinter` should be imported as `tk`
|
13 | import pandas as pdas # unconventional
14 | import seaborn as sbrn # unconventional
15 | import tkinter as tkr # unconventional
| ^^^ ICN001
16 |
17 | import altair as alt # conventional
9 | import pandas
10 | import seaborn
11 | import tkinter
| ^^^^^^^ ICN001
|
= help: Alias `tkinter` to `tk`
Suggested fix
8 8 | import numpy
9 9 | import pandas
10 10 | import seaborn
11 |- import tkinter
11 |+ import tkinter as tk
12 12 |
13 13 |
14 14 | def unconventional_aliases():
defaults.py:15:22: ICN001 [*] `altair` should be imported as `alt`
|
14 | def unconventional_aliases():
15 | import altair as altr
| ^^^^ ICN001
16 | import matplotlib.pyplot as plot
17 | import numpy as nmp
|
= help: Alias `altair` to `alt`
Suggested fix
12 12 |
13 13 |
14 14 | def unconventional_aliases():
15 |- import altair as altr
15 |+ import altair as alt
16 16 | import matplotlib.pyplot as plot
17 17 | import numpy as nmp
18 18 | import pandas as pdas
defaults.py:16:33: ICN001 [*] `matplotlib.pyplot` should be imported as `plt`
|
14 | def unconventional_aliases():
15 | import altair as altr
16 | import matplotlib.pyplot as plot
| ^^^^ ICN001
17 | import numpy as nmp
18 | import pandas as pdas
|
= help: Alias `matplotlib.pyplot` to `plt`
Suggested fix
13 13 |
14 14 | def unconventional_aliases():
15 15 | import altair as altr
16 |- import matplotlib.pyplot as plot
16 |+ import matplotlib.pyplot as plt
17 17 | import numpy as nmp
18 18 | import pandas as pdas
19 19 | import seaborn as sbrn
defaults.py:17:21: ICN001 [*] `numpy` should be imported as `np`
|
15 | import altair as altr
16 | import matplotlib.pyplot as plot
17 | import numpy as nmp
| ^^^ ICN001
18 | import pandas as pdas
19 | import seaborn as sbrn
|
= help: Alias `numpy` to `np`
Suggested fix
14 14 | def unconventional_aliases():
15 15 | import altair as altr
16 16 | import matplotlib.pyplot as plot
17 |- import numpy as nmp
17 |+ import numpy as np
18 18 | import pandas as pdas
19 19 | import seaborn as sbrn
20 20 | import tkinter as tkr
defaults.py:18:22: ICN001 [*] `pandas` should be imported as `pd`
|
16 | import matplotlib.pyplot as plot
17 | import numpy as nmp
18 | import pandas as pdas
| ^^^^ ICN001
19 | import seaborn as sbrn
20 | import tkinter as tkr
|
= help: Alias `pandas` to `pd`
Suggested fix
15 15 | import altair as altr
16 16 | import matplotlib.pyplot as plot
17 17 | import numpy as nmp
18 |- import pandas as pdas
18 |+ import pandas as pd
19 19 | import seaborn as sbrn
20 20 | import tkinter as tkr
21 21 |
defaults.py:19:23: ICN001 [*] `seaborn` should be imported as `sns`
|
17 | import numpy as nmp
18 | import pandas as pdas
19 | import seaborn as sbrn
| ^^^^ ICN001
20 | import tkinter as tkr
|
= help: Alias `seaborn` to `sns`
Suggested fix
16 16 | import matplotlib.pyplot as plot
17 17 | import numpy as nmp
18 18 | import pandas as pdas
19 |- import seaborn as sbrn
19 |+ import seaborn as sns
20 20 | import tkinter as tkr
21 21 |
22 22 |
defaults.py:20:23: ICN001 [*] `tkinter` should be imported as `tk`
|
18 | import pandas as pdas
19 | import seaborn as sbrn
20 | import tkinter as tkr
| ^^^ ICN001
|
= help: Alias `tkinter` to `tk`
Suggested fix
17 17 | import numpy as nmp
18 18 | import pandas as pdas
19 19 | import seaborn as sbrn
20 |- import tkinter as tkr
20 |+ import tkinter as tk
21 21 |
22 22 |
23 23 | def conventional_aliases():

View File

@@ -15,6 +15,7 @@ mod tests {
#[test_case(Rule::DuplicateClassFieldDefinition, Path::new("PIE794.py"))]
#[test_case(Rule::UnnecessaryDictKwargs, Path::new("PIE804.py"))]
#[test_case(Rule::MultipleStartsEndsWith, Path::new("PIE810.py"))]
#[test_case(Rule::UnnecessaryRangeStart, Path::new("PIE808.py"))]
#[test_case(Rule::UnnecessaryPass, Path::new("PIE790.py"))]
#[test_case(Rule::UnnecessarySpread, Path::new("PIE800.py"))]
#[test_case(Rule::ReimplementedListBuiltin, Path::new("PIE807.py"))]

View File

@@ -1,9 +1,9 @@
use ruff_python_ast::{self as ast, Expr, Ranged, Stmt};
use rustc_hash::FxHashSet;
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::{AlwaysAutofixableViolation, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, Ranged, Stmt};
use crate::autofix;
use crate::checkers::ast::Checker;
@@ -49,11 +49,7 @@ impl AlwaysAutofixableViolation for DuplicateClassFieldDefinition {
}
/// PIE794
pub(crate) fn duplicate_class_field_definition(
checker: &mut Checker,
parent: &Stmt,
body: &[Stmt],
) {
pub(crate) fn duplicate_class_field_definition(checker: &mut Checker, body: &[Stmt]) {
let mut seen_targets: FxHashSet<&str> = FxHashSet::default();
for stmt in body {
// Extract the property name from the assignment statement.
@@ -85,11 +81,11 @@ pub(crate) fn duplicate_class_field_definition(
if checker.patch(diagnostic.kind.rule()) {
let edit = autofix::edits::delete_stmt(
stmt,
Some(parent),
Some(stmt),
checker.locator(),
checker.indexer(),
);
diagnostic.set_fix(Fix::suggested(edit).isolate(checker.isolation(Some(parent))));
diagnostic.set_fix(Fix::suggested(edit).isolate(checker.statement_isolation()));
}
checker.diagnostics.push(diagnostic);
}

View File

@@ -4,6 +4,7 @@ pub(crate) use no_unnecessary_pass::*;
pub(crate) use non_unique_enums::*;
pub(crate) use reimplemented_list_builtin::*;
pub(crate) use unnecessary_dict_kwargs::*;
pub(crate) use unnecessary_range_start::*;
pub(crate) use unnecessary_spread::*;
mod duplicate_class_field_definition;
@@ -12,4 +13,5 @@ mod no_unnecessary_pass;
mod non_unique_enums;
mod reimplemented_list_builtin;
mod unnecessary_dict_kwargs;
mod unnecessary_range_start;
mod unnecessary_spread;

View File

@@ -50,28 +50,28 @@ impl AlwaysAutofixableViolation for UnnecessaryPass {
/// PIE790
pub(crate) fn no_unnecessary_pass(checker: &mut Checker, body: &[Stmt]) {
if body.len() > 1 {
// This only catches the case in which a docstring makes a `pass` statement
// redundant. Consider removing all `pass` statements instead.
if !is_docstring_stmt(&body[0]) {
return;
}
// The second statement must be a `pass` statement.
let stmt = &body[1];
if !stmt.is_pass_stmt() {
return;
}
let mut diagnostic = Diagnostic::new(UnnecessaryPass, stmt.range());
if checker.patch(diagnostic.kind.rule()) {
let edit = if let Some(index) = trailing_comment_start_offset(stmt, checker.locator()) {
Edit::range_deletion(stmt.range().add_end(index))
} else {
autofix::edits::delete_stmt(stmt, None, checker.locator(), checker.indexer())
};
diagnostic.set_fix(Fix::automatic(edit));
}
checker.diagnostics.push(diagnostic);
let [first, second, ..] = body else {
return;
};
// This only catches the case in which a docstring makes a `pass` statement
// redundant. Consider removing all `pass` statements instead.
if !is_docstring_stmt(first) {
return;
}
// The second statement must be a `pass` statement.
if !second.is_pass_stmt() {
return;
}
let mut diagnostic = Diagnostic::new(UnnecessaryPass, second.range());
if checker.patch(diagnostic.kind.rule()) {
let edit = if let Some(index) = trailing_comment_start_offset(second, checker.locator()) {
Edit::range_deletion(second.range().add_end(index))
} else {
autofix::edits::delete_stmt(second, None, checker.locator(), checker.indexer())
};
diagnostic.set_fix(Fix::automatic(edit));
}
checker.diagnostics.push(diagnostic);
}
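The refactor above replaces index-based access with a `let [first, second, ..] = body else { return }` slice pattern. The check itself stays the same: a `pass` is redundant only when it immediately follows a docstring. A minimal Python sketch of that predicate (names are illustrative):

```python
import ast


def unnecessary_pass(body: list[ast.stmt]) -> bool:
    """PIE790-style check: a `pass` is redundant right after a docstring."""
    # Mirror the slice pattern: require at least two leading statements.
    if len(body) < 2:
        return False
    first, second = body[0], body[1]
    # The first statement must be a docstring expression.
    is_docstring = (
        isinstance(first, ast.Expr)
        and isinstance(first.value, ast.Constant)
        and isinstance(first.value.value, str)
    )
    # The second statement must be a `pass` statement.
    return is_docstring and isinstance(second, ast.Pass)
```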

View File

@@ -59,12 +59,7 @@ pub(crate) fn reimplemented_list_builtin(checker: &mut Checker, expr: &ExprLambd
range: _,
} = expr;
if parameters.args.is_empty()
&& parameters.kwonlyargs.is_empty()
&& parameters.posonlyargs.is_empty()
&& parameters.vararg.is_none()
&& parameters.kwarg.is_none()
{
if parameters.is_none() {
if let Expr::List(ast::ExprList { elts, .. }) = body.as_ref() {
if elts.is_empty() {
let mut diagnostic = Diagnostic::new(ReimplementedListBuiltin, expr.range());
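The diff above collapses the five per-kind emptiness checks into a single `parameters.is_none()` call; the semantics are unchanged: `lambda: []` only reimplements the `list` builtin when the lambda takes no parameters of any kind. A Python sketch of the same condition (illustrative names, not ruff's code):

```python
import ast


def reimplements_list(lambda_src: str) -> bool:
    """PIE807-style check: `lambda: []` reimplements the `list` builtin."""
    expr = ast.parse(lambda_src, mode="eval").body
    if not isinstance(expr, ast.Lambda):
        return False
    args = expr.args
    # The lambda must take no parameters of any kind.
    takes_nothing = not (
        args.args or args.posonlyargs or args.kwonlyargs or args.vararg or args.kwarg
    )
    # The body must be an empty list literal.
    return takes_nothing and isinstance(expr.body, ast.List) and not expr.body.elts
```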

View File

@@ -0,0 +1,94 @@
use num_bigint::BigInt;
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::{AlwaysAutofixableViolation, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Constant, Expr, Ranged};
use crate::autofix::edits::{remove_argument, Parentheses};
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
/// ## What it does
/// Checks for `range` calls with an unnecessary `start` argument.
///
/// ## Why is this bad?
/// `range(0, x)` is equivalent to `range(x)`, as `0` is the default value for
/// the `start` argument. Omitting the `start` argument makes the code more
/// concise and idiomatic.
///
/// ## Example
/// ```python
/// range(0, 3)
/// ```
///
/// Use instead:
/// ```python
/// range(3)
/// ```
///
/// ## References
/// - [Python documentation: `range`](https://docs.python.org/3/library/stdtypes.html#range)
#[violation]
pub struct UnnecessaryRangeStart;
impl AlwaysAutofixableViolation for UnnecessaryRangeStart {
#[derive_message_formats]
fn message(&self) -> String {
format!("Unnecessary `start` argument in `range`")
}
fn autofix_title(&self) -> String {
format!("Remove `start` argument")
}
}
/// PIE808
pub(crate) fn unnecessary_range_start(checker: &mut Checker, call: &ast::ExprCall) {
// Verify that the call is to the `range` builtin.
let Expr::Name(ast::ExprName { id, .. }) = call.func.as_ref() else {
return;
};
if id != "range" {
return;
};
if !checker.semantic().is_builtin("range") {
return;
};
// `range` doesn't accept keyword arguments.
if !call.arguments.keywords.is_empty() {
return;
}
// Verify that the call has exactly two arguments (no `step`).
let [start, _] = call.arguments.args.as_slice() else {
return;
};
// Verify that the `start` argument is the literal `0`.
let Expr::Constant(ast::ExprConstant {
value: Constant::Int(value),
..
}) = start
else {
return;
};
if *value != BigInt::from(0) {
return;
};
let mut diagnostic = Diagnostic::new(UnnecessaryRangeStart, start.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
remove_argument(
&start,
&call.arguments,
Parentheses::Preserve,
checker.locator().contents(),
)
.map(Fix::automatic)
});
}
checker.diagnostics.push(diagnostic);
}
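The new PIE808 rule above walks a precise chain of guards: the callee must be the `range` builtin, there must be no keywords, exactly two positional arguments (so no `step`), and the first must be the literal `0`. The same guard chain can be sketched in Python (the function name is illustrative):

```python
import ast


def unnecessary_range_start(call_src: str) -> bool:
    """PIE808-style check: flag `range(0, x)` where the start is a literal 0."""
    call = ast.parse(call_src, mode="eval").body
    # Verify that the call is to a name called `range`.
    if not (isinstance(call, ast.Call) and isinstance(call.func, ast.Name)):
        return False
    # `range` doesn't accept keyword arguments.
    if call.func.id != "range" or call.keywords:
        return False
    # Exactly two positional arguments, i.e. no `step`.
    if len(call.args) != 2:
        return False
    # The `start` argument must be the integer literal `0` (not `False`).
    start = call.args[0]
    return (
        isinstance(start, ast.Constant)
        and type(start.value) is int
        and start.value == 0
    )
```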

View File

@@ -0,0 +1,22 @@
---
source: crates/ruff/src/rules/flake8_pie/mod.rs
---
PIE808.py:2:7: PIE808 [*] Unnecessary `start` argument in `range`
|
1 | # PIE808
2 | range(0, 10)
| ^ PIE808
3 |
4 | # OK
|
= help: Remove `start` argument
Fix
1 1 | # PIE808
2 |-range(0, 10)
2 |+range(10)
3 3 |
4 4 | # OK
5 5 | range(x, 10)

View File

@@ -30,46 +30,50 @@ mod tests {
#[test_case(Rule::ComplexAssignmentInStub, Path::new("PYI017.pyi"))]
#[test_case(Rule::ComplexIfStatementInStub, Path::new("PYI002.py"))]
#[test_case(Rule::ComplexIfStatementInStub, Path::new("PYI002.pyi"))]
#[test_case(Rule::CustomTypeVarReturnType, Path::new("PYI019.py"))]
#[test_case(Rule::CustomTypeVarReturnType, Path::new("PYI019.pyi"))]
#[test_case(Rule::DocstringInStub, Path::new("PYI021.py"))]
#[test_case(Rule::DocstringInStub, Path::new("PYI021.pyi"))]
#[test_case(Rule::DuplicateUnionMember, Path::new("PYI016.py"))]
#[test_case(Rule::DuplicateUnionMember, Path::new("PYI016.pyi"))]
#[test_case(Rule::EllipsisInNonEmptyClassBody, Path::new("PYI013.py"))]
#[test_case(Rule::EllipsisInNonEmptyClassBody, Path::new("PYI013.pyi"))]
#[test_case(Rule::NonSelfReturnType, Path::new("PYI034.py"))]
#[test_case(Rule::NonSelfReturnType, Path::new("PYI034.pyi"))]
#[test_case(Rule::FutureAnnotationsInStub, Path::new("PYI044.py"))]
#[test_case(Rule::FutureAnnotationsInStub, Path::new("PYI044.pyi"))]
#[test_case(Rule::IterMethodReturnIterable, Path::new("PYI045.py"))]
#[test_case(Rule::IterMethodReturnIterable, Path::new("PYI045.pyi"))]
#[test_case(Rule::NoReturnArgumentAnnotationInStub, Path::new("PYI050.py"))]
#[test_case(Rule::NoReturnArgumentAnnotationInStub, Path::new("PYI050.pyi"))]
#[test_case(Rule::NumericLiteralTooLong, Path::new("PYI054.py"))]
#[test_case(Rule::NumericLiteralTooLong, Path::new("PYI054.pyi"))]
#[test_case(Rule::NonEmptyStubBody, Path::new("PYI010.py"))]
#[test_case(Rule::NonEmptyStubBody, Path::new("PYI010.pyi"))]
#[test_case(Rule::NonSelfReturnType, Path::new("PYI034.py"))]
#[test_case(Rule::NonSelfReturnType, Path::new("PYI034.pyi"))]
#[test_case(Rule::NumericLiteralTooLong, Path::new("PYI054.py"))]
#[test_case(Rule::NumericLiteralTooLong, Path::new("PYI054.pyi"))]
#[test_case(Rule::PassInClassBody, Path::new("PYI012.py"))]
#[test_case(Rule::PassInClassBody, Path::new("PYI012.pyi"))]
#[test_case(Rule::PassStatementStubBody, Path::new("PYI009.py"))]
#[test_case(Rule::PassStatementStubBody, Path::new("PYI009.pyi"))]
#[test_case(Rule::PatchVersionComparison, Path::new("PYI004.py"))]
#[test_case(Rule::PatchVersionComparison, Path::new("PYI004.pyi"))]
#[test_case(Rule::QuotedAnnotationInStub, Path::new("PYI020.py"))]
#[test_case(Rule::QuotedAnnotationInStub, Path::new("PYI020.pyi"))]
#[test_case(Rule::RedundantLiteralUnion, Path::new("PYI051.py"))]
#[test_case(Rule::RedundantLiteralUnion, Path::new("PYI051.pyi"))]
#[test_case(Rule::RedundantNumericUnion, Path::new("PYI041.py"))]
#[test_case(Rule::RedundantNumericUnion, Path::new("PYI041.pyi"))]
#[test_case(Rule::SnakeCaseTypeAlias, Path::new("PYI042.py"))]
#[test_case(Rule::SnakeCaseTypeAlias, Path::new("PYI042.pyi"))]
#[test_case(Rule::UnassignedSpecialVariableInStub, Path::new("PYI035.py"))]
#[test_case(Rule::UnassignedSpecialVariableInStub, Path::new("PYI035.pyi"))]
#[test_case(Rule::StrOrReprDefinedInStub, Path::new("PYI029.py"))]
#[test_case(Rule::StrOrReprDefinedInStub, Path::new("PYI029.pyi"))]
#[test_case(Rule::UnnecessaryLiteralUnion, Path::new("PYI030.py"))]
#[test_case(Rule::UnnecessaryLiteralUnion, Path::new("PYI030.pyi"))]
#[test_case(Rule::StringOrBytesTooLong, Path::new("PYI053.py"))]
#[test_case(Rule::StringOrBytesTooLong, Path::new("PYI053.pyi"))]
#[test_case(Rule::StubBodyMultipleStatements, Path::new("PYI048.py"))]
#[test_case(Rule::StubBodyMultipleStatements, Path::new("PYI048.pyi"))]
#[test_case(Rule::TSuffixedTypeAlias, Path::new("PYI043.py"))]
#[test_case(Rule::TSuffixedTypeAlias, Path::new("PYI043.pyi"))]
#[test_case(Rule::FutureAnnotationsInStub, Path::new("PYI044.py"))]
#[test_case(Rule::FutureAnnotationsInStub, Path::new("PYI044.pyi"))]
#[test_case(Rule::PatchVersionComparison, Path::new("PYI004.py"))]
#[test_case(Rule::PatchVersionComparison, Path::new("PYI004.pyi"))]
#[test_case(Rule::TypeAliasWithoutAnnotation, Path::new("PYI026.py"))]
#[test_case(Rule::TypeAliasWithoutAnnotation, Path::new("PYI026.pyi"))]
#[test_case(Rule::TypeCommentInStub, Path::new("PYI033.py"))]
#[test_case(Rule::TypeCommentInStub, Path::new("PYI033.pyi"))]
#[test_case(Rule::TypedArgumentDefaultInStub, Path::new("PYI011.py"))]
@@ -78,8 +82,12 @@ mod tests {
#[test_case(Rule::UnaliasedCollectionsAbcSetImport, Path::new("PYI025.pyi"))]
#[test_case(Rule::UnannotatedAssignmentInStub, Path::new("PYI052.py"))]
#[test_case(Rule::UnannotatedAssignmentInStub, Path::new("PYI052.pyi"))]
#[test_case(Rule::StringOrBytesTooLong, Path::new("PYI053.py"))]
#[test_case(Rule::StringOrBytesTooLong, Path::new("PYI053.pyi"))]
#[test_case(Rule::UnassignedSpecialVariableInStub, Path::new("PYI035.py"))]
#[test_case(Rule::UnassignedSpecialVariableInStub, Path::new("PYI035.pyi"))]
#[test_case(Rule::UnnecessaryLiteralUnion, Path::new("PYI030.py"))]
#[test_case(Rule::UnnecessaryLiteralUnion, Path::new("PYI030.pyi"))]
#[test_case(Rule::UnnecessaryTypeUnion, Path::new("PYI055.py"))]
#[test_case(Rule::UnnecessaryTypeUnion, Path::new("PYI055.pyi"))]
#[test_case(Rule::UnprefixedTypeParam, Path::new("PYI001.py"))]
#[test_case(Rule::UnprefixedTypeParam, Path::new("PYI001.pyi"))]
#[test_case(Rule::UnrecognizedPlatformCheck, Path::new("PYI007.py"))]
@@ -88,24 +96,18 @@ mod tests {
#[test_case(Rule::UnrecognizedPlatformName, Path::new("PYI008.pyi"))]
#[test_case(Rule::UnrecognizedVersionInfoCheck, Path::new("PYI003.py"))]
#[test_case(Rule::UnrecognizedVersionInfoCheck, Path::new("PYI003.pyi"))]
#[test_case(Rule::WrongTupleLengthVersionComparison, Path::new("PYI005.py"))]
#[test_case(Rule::WrongTupleLengthVersionComparison, Path::new("PYI005.pyi"))]
#[test_case(Rule::TypeAliasWithoutAnnotation, Path::new("PYI026.py"))]
#[test_case(Rule::TypeAliasWithoutAnnotation, Path::new("PYI026.pyi"))]
#[test_case(Rule::UnsupportedMethodCallOnAll, Path::new("PYI056.py"))]
#[test_case(Rule::UnsupportedMethodCallOnAll, Path::new("PYI056.pyi"))]
#[test_case(Rule::UnusedPrivateTypeVar, Path::new("PYI018.py"))]
#[test_case(Rule::UnusedPrivateTypeVar, Path::new("PYI018.pyi"))]
#[test_case(Rule::UnusedPrivateProtocol, Path::new("PYI046.py"))]
#[test_case(Rule::UnusedPrivateProtocol, Path::new("PYI046.pyi"))]
#[test_case(Rule::UnusedPrivateTypeAlias, Path::new("PYI047.py"))]
#[test_case(Rule::UnusedPrivateTypeAlias, Path::new("PYI047.pyi"))]
#[test_case(Rule::UnusedPrivateTypeVar, Path::new("PYI018.py"))]
#[test_case(Rule::UnusedPrivateTypeVar, Path::new("PYI018.pyi"))]
#[test_case(Rule::UnusedPrivateTypedDict, Path::new("PYI049.py"))]
#[test_case(Rule::UnusedPrivateTypedDict, Path::new("PYI049.pyi"))]
#[test_case(Rule::RedundantLiteralUnion, Path::new("PYI051.py"))]
#[test_case(Rule::RedundantLiteralUnion, Path::new("PYI051.pyi"))]
#[test_case(Rule::UnnecessaryTypeUnion, Path::new("PYI055.py"))]
#[test_case(Rule::UnnecessaryTypeUnion, Path::new("PYI055.pyi"))]
#[test_case(Rule::WrongTupleLengthVersionComparison, Path::new("PYI005.py"))]
#[test_case(Rule::WrongTupleLengthVersionComparison, Path::new("PYI005.pyi"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(
@@ -116,15 +118,15 @@ mod tests {
Ok(())
}
#[test_case(Path::new("PYI019.py"))]
#[test_case(Path::new("PYI019.pyi"))]
fn custom_type_var_return_type(path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", "PYI019", path.to_string_lossy());
#[test_case(Rule::TypeAliasWithoutAnnotation, Path::new("PYI026.py"))]
#[test_case(Rule::TypeAliasWithoutAnnotation, Path::new("PYI026.pyi"))]
fn py38(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("py38_{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_pyi").join(path).as_path(),
&settings::Settings {
target_version: PythonVersion::Py312,
..settings::Settings::for_rules(vec![Rule::CustomTypeVarReturnType])
target_version: PythonVersion::Py38,
..settings::Settings::for_rule(rule_code)
},
)?;
assert_messages!(snapshot, diagnostics);

View File

@@ -1,7 +1,6 @@
use ruff_python_ast::{Expr, ExprConstant, Ranged, Stmt, StmtExpr};
use ruff_diagnostics::{AutofixKind, Diagnostic, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{Constant, Expr, ExprConstant, Ranged, Stmt, StmtExpr};
use crate::autofix;
use crate::checkers::ast::Checker;
@@ -44,11 +43,7 @@ impl Violation for EllipsisInNonEmptyClassBody {
}
/// PYI013
pub(crate) fn ellipsis_in_non_empty_class_body(
checker: &mut Checker,
parent: &Stmt,
body: &[Stmt],
) {
pub(crate) fn ellipsis_in_non_empty_class_body(checker: &mut Checker, body: &[Stmt]) {
// If the class body contains a single statement, then it's fine for it to be an ellipsis.
if body.len() == 1 {
return;
@@ -59,24 +54,24 @@ pub(crate) fn ellipsis_in_non_empty_class_body(
continue;
};
let Expr::Constant(ExprConstant { value, .. }) = value.as_ref() else {
continue;
};
if !value.is_ellipsis() {
continue;
if matches!(
value.as_ref(),
Expr::Constant(ExprConstant {
value: Constant::Ellipsis,
..
})
) {
let mut diagnostic = Diagnostic::new(EllipsisInNonEmptyClassBody, stmt.range());
if checker.patch(diagnostic.kind.rule()) {
let edit = autofix::edits::delete_stmt(
stmt,
Some(stmt),
checker.locator(),
checker.indexer(),
);
diagnostic.set_fix(Fix::automatic(edit).isolate(checker.statement_isolation()));
}
checker.diagnostics.push(diagnostic);
}
let mut diagnostic = Diagnostic::new(EllipsisInNonEmptyClassBody, stmt.range());
if checker.patch(diagnostic.kind.rule()) {
let edit = autofix::edits::delete_stmt(
stmt,
Some(parent),
checker.locator(),
checker.indexer(),
);
diagnostic.set_fix(Fix::automatic(edit).isolate(checker.isolation(Some(parent))));
}
checker.diagnostics.push(diagnostic);
}
}

View File

@@ -21,6 +21,7 @@ pub(crate) use quoted_annotation_in_stub::*;
pub(crate) use redundant_literal_union::*;
pub(crate) use redundant_numeric_union::*;
pub(crate) use simple_defaults::*;
use std::fmt;
pub(crate) use str_or_repr_defined_in_stub::*;
pub(crate) use string_or_bytes_too_long::*;
pub(crate) use stub_body_multiple_statements::*;
@@ -69,3 +70,26 @@ mod unrecognized_platform;
mod unrecognized_version_info;
mod unsupported_method_call_on_all;
mod unused_private_type_definition;
// TODO(charlie): Replace this with a common utility for selecting the appropriate source
// module for a given `typing` member.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TypingModule {
Typing,
TypingExtensions,
}
impl TypingModule {
fn as_str(self) -> &'static str {
match self {
TypingModule::Typing => "typing",
TypingModule::TypingExtensions => "typing_extensions",
}
}
}
impl fmt::Display for TypingModule {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str(self.as_str())
}
}

View File

@@ -22,10 +22,7 @@ impl AlwaysAutofixableViolation for NonEmptyStubBody {
/// PYI010
pub(crate) fn non_empty_stub_body(checker: &mut Checker, body: &[Stmt]) {
if body.len() != 1 {
return;
}
if let Stmt::Expr(ast::StmtExpr { value, range: _ }) = &body[0] {
if let [Stmt::Expr(ast::StmtExpr { value, range: _ })] = body {
if let Expr::Constant(ast::ExprConstant { value, .. }) = value.as_ref() {
if matches!(value, Constant::Ellipsis | Constant::Str(_)) {
return;

View File

@@ -1,7 +1,6 @@
use ruff_python_ast::{Ranged, Stmt};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Ranged};
use crate::autofix;
use crate::checkers::ast::Checker;
@@ -22,26 +21,22 @@ impl AlwaysAutofixableViolation for PassInClassBody {
}
/// PYI012
pub(crate) fn pass_in_class_body(checker: &mut Checker, parent: &Stmt, body: &[Stmt]) {
pub(crate) fn pass_in_class_body(checker: &mut Checker, class_def: &ast::StmtClassDef) {
// `pass` is required in these situations (or handled by `pass_statement_stub_body`).
if body.len() < 2 {
if class_def.body.len() < 2 {
return;
}
for stmt in body {
for stmt in &class_def.body {
if !stmt.is_pass_stmt() {
continue;
}
let mut diagnostic = Diagnostic::new(PassInClassBody, stmt.range());
if checker.patch(diagnostic.kind.rule()) {
let edit = autofix::edits::delete_stmt(
stmt,
Some(parent),
checker.locator(),
checker.indexer(),
);
diagnostic.set_fix(Fix::automatic(edit).isolate(checker.isolation(Some(parent))));
let edit =
autofix::edits::delete_stmt(stmt, Some(stmt), checker.locator(), checker.indexer());
diagnostic.set_fix(Fix::automatic(edit).isolate(checker.statement_isolation()));
}
checker.diagnostics.push(diagnostic);
}

View File

@@ -1,7 +1,6 @@
use ruff_python_ast::{Ranged, Stmt};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{Ranged, Stmt};
use crate::checkers::ast::Checker;
use crate::registry::Rule;
@@ -22,15 +21,15 @@ impl AlwaysAutofixableViolation for PassStatementStubBody {
/// PYI009
pub(crate) fn pass_statement_stub_body(checker: &mut Checker, body: &[Stmt]) {
if body.len() != 1 {
let [stmt] = body else {
return;
}
if body[0].is_pass_stmt() {
let mut diagnostic = Diagnostic::new(PassStatementStubBody, body[0].range());
};
if stmt.is_pass_stmt() {
let mut diagnostic = Diagnostic::new(PassStatementStubBody, stmt.range());
if checker.patch(Rule::PassStatementStubBody) {
diagnostic.set_fix(Fix::automatic(Edit::range_replacement(
format!("..."),
body[0].range(),
stmt.range(),
)));
};
checker.diagnostics.push(diagnostic);

View File

@@ -1,17 +1,18 @@
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::CallPath;
use ruff_python_ast::{
self as ast, Arguments, Constant, Expr, Operator, ParameterWithDefault, Parameters, Ranged,
Stmt, UnaryOp,
};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::CallPath;
use ruff_python_semantic::{ScopeKind, SemanticModel};
use ruff_source_file::Locator;
use crate::checkers::ast::Checker;
use crate::importer::ImportRequest;
use crate::registry::AsRule;
use crate::rules::flake8_pyi::rules::TypingModule;
use crate::settings::types::PythonVersion;
#[violation]
pub struct TypedArgumentDefaultInStub;
@@ -124,6 +125,7 @@ impl Violation for UnassignedSpecialVariableInStub {
/// ```
#[violation]
pub struct TypeAliasWithoutAnnotation {
module: TypingModule,
name: String,
value: String,
}
@@ -131,12 +133,16 @@ pub struct TypeAliasWithoutAnnotation {
impl AlwaysAutofixableViolation for TypeAliasWithoutAnnotation {
#[derive_message_formats]
fn message(&self) -> String {
let TypeAliasWithoutAnnotation { name, value } = self;
format!("Use `typing.TypeAlias` for type alias, e.g., `{name}: typing.TypeAlias = {value}`")
let TypeAliasWithoutAnnotation {
module,
name,
value,
} = self;
format!("Use `{module}.TypeAlias` for type alias, e.g., `{name}: TypeAlias = {value}`")
}
fn autofix_title(&self) -> String {
"Add `typing.TypeAlias` annotation".to_string()
"Add `TypeAlias` annotation".to_string()
}
}
@@ -606,7 +612,7 @@ pub(crate) fn unassigned_special_variable_in_stub(
));
}
/// PIY026
/// PYI026
pub(crate) fn type_alias_without_annotation(checker: &mut Checker, value: &Expr, targets: &[Expr]) {
let [target] = targets else {
return;
@@ -620,8 +626,15 @@ pub(crate) fn type_alias_without_annotation(checker: &mut Checker, value: &Expr,
return;
}
let module = if checker.settings.target_version >= PythonVersion::Py310 {
TypingModule::Typing
} else {
TypingModule::TypingExtensions
};
let mut diagnostic = Diagnostic::new(
TypeAliasWithoutAnnotation {
module,
name: id.to_string(),
value: checker.generator().expr(value),
},
@@ -630,7 +643,7 @@ pub(crate) fn type_alias_without_annotation(checker: &mut Checker, value: &Expr,
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
let (import_edit, binding) = checker.importer().get_or_import_symbol(
&ImportRequest::import("typing", "TypeAlias"),
&ImportRequest::import(module.as_str(), "TypeAlias"),
target.start(),
checker.semantic(),
)?;
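The PYI026 change above makes the import target version-dependent: `typing.TypeAlias` only exists from Python 3.10 onward, so earlier target versions fall back to `typing_extensions`. The selection logic reduces to a one-liner in Python (illustrative name):

```python
def typing_module(target_version: tuple[int, int]) -> str:
    """Pick the source module for `TypeAlias`: `typing` on Python >= 3.10,
    else `typing_extensions`, mirroring the `TypingModule` enum above."""
    return "typing" if target_version >= (3, 10) else "typing_extensions"
```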

View File

@@ -99,10 +99,7 @@ pub(crate) fn str_or_repr_defined_in_stub(checker: &mut Checker, stmt: &Stmt) {
let stmt = checker.semantic().current_statement();
let parent = checker.semantic().current_statement_parent();
let edit = delete_stmt(stmt, parent, checker.locator(), checker.indexer());
diagnostic.set_fix(
Fix::automatic(edit)
.isolate(checker.isolation(checker.semantic().current_statement_parent())),
);
diagnostic.set_fix(Fix::automatic(edit).isolate(checker.parent_isolation()));
}
checker.diagnostics.push(diagnostic);
}

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---
PYI026.pyi:3:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `NewAny: typing.TypeAlias = Any`
PYI026.pyi:3:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `NewAny: TypeAlias = Any`
|
1 | from typing import Literal, Any
2 |
@@ -10,7 +10,7 @@ PYI026.pyi:3:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `NewAny:
4 | OptionalStr = typing.Optional[str]
5 | Foo = Literal["foo"]
|
= help: Add `typing.TypeAlias` annotation
= help: Add `TypeAlias` annotation
Suggested fix
1 |-from typing import Literal, Any
@@ -22,7 +22,7 @@ PYI026.pyi:3:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `NewAny:
5 5 | Foo = Literal["foo"]
6 6 | IntOrStr = int | str
PYI026.pyi:4:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `OptionalStr: typing.TypeAlias = typing.Optional[str]`
PYI026.pyi:4:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `OptionalStr: TypeAlias = typing.Optional[str]`
|
3 | NewAny = Any
4 | OptionalStr = typing.Optional[str]
@@ -30,7 +30,7 @@ PYI026.pyi:4:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `Optiona
5 | Foo = Literal["foo"]
6 | IntOrStr = int | str
|
= help: Add `typing.TypeAlias` annotation
= help: Add `TypeAlias` annotation
Suggested fix
1 |-from typing import Literal, Any
@@ -43,7 +43,7 @@ PYI026.pyi:4:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `Optiona
6 6 | IntOrStr = int | str
7 7 | AliasNone = None
PYI026.pyi:5:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `Foo: typing.TypeAlias = Literal["foo"]`
PYI026.pyi:5:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `Foo: TypeAlias = Literal["foo"]`
|
3 | NewAny = Any
4 | OptionalStr = typing.Optional[str]
@@ -52,7 +52,7 @@ PYI026.pyi:5:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `Foo: ty
6 | IntOrStr = int | str
7 | AliasNone = None
|
= help: Add `typing.TypeAlias` annotation
= help: Add `TypeAlias` annotation
Suggested fix
1 |-from typing import Literal, Any
@@ -66,7 +66,7 @@ PYI026.pyi:5:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `Foo: ty
7 7 | AliasNone = None
8 8 |
PYI026.pyi:6:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `IntOrStr: typing.TypeAlias = int | str`
PYI026.pyi:6:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `IntOrStr: TypeAlias = int | str`
|
4 | OptionalStr = typing.Optional[str]
5 | Foo = Literal["foo"]
@@ -74,7 +74,7 @@ PYI026.pyi:6:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `IntOrSt
| ^^^^^^^^ PYI026
7 | AliasNone = None
|
= help: Add `typing.TypeAlias` annotation
= help: Add `TypeAlias` annotation
Suggested fix
1 |-from typing import Literal, Any
@@ -89,7 +89,7 @@ PYI026.pyi:6:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `IntOrSt
8 8 |
9 9 | NewAny: typing.TypeAlias = Any
PYI026.pyi:7:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `AliasNone: typing.TypeAlias = None`
PYI026.pyi:7:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `AliasNone: TypeAlias = None`
|
5 | Foo = Literal["foo"]
6 | IntOrStr = int | str
@@ -98,7 +98,7 @@ PYI026.pyi:7:1: PYI026 [*] Use `typing.TypeAlias` for type alias, e.g., `AliasNo
8 |
9 | NewAny: typing.TypeAlias = Any
|
= help: Add `typing.TypeAlias` annotation
= help: Add `TypeAlias` annotation
Suggested fix
1 |-from typing import Literal, Any

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---

View File

@@ -0,0 +1,117 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---
PYI026.pyi:3:1: PYI026 [*] Use `typing_extensions.TypeAlias` for type alias, e.g., `NewAny: TypeAlias = Any`
|
1 | from typing import Literal, Any
2 |
3 | NewAny = Any
| ^^^^^^ PYI026
4 | OptionalStr = typing.Optional[str]
5 | Foo = Literal["foo"]
|
= help: Add `TypeAlias` annotation
Suggested fix
1 1 | from typing import Literal, Any
2 |+import typing_extensions
2 3 |
3 |-NewAny = Any
4 |+NewAny: typing_extensions.TypeAlias = Any
4 5 | OptionalStr = typing.Optional[str]
5 6 | Foo = Literal["foo"]
6 7 | IntOrStr = int | str
PYI026.pyi:4:1: PYI026 [*] Use `typing_extensions.TypeAlias` for type alias, e.g., `OptionalStr: TypeAlias = typing.Optional[str]`
|
3 | NewAny = Any
4 | OptionalStr = typing.Optional[str]
| ^^^^^^^^^^^ PYI026
5 | Foo = Literal["foo"]
6 | IntOrStr = int | str
|
= help: Add `TypeAlias` annotation
Suggested fix
1 1 | from typing import Literal, Any
2 |+import typing_extensions
2 3 |
3 4 | NewAny = Any
4 |-OptionalStr = typing.Optional[str]
5 |+OptionalStr: typing_extensions.TypeAlias = typing.Optional[str]
5 6 | Foo = Literal["foo"]
6 7 | IntOrStr = int | str
7 8 | AliasNone = None
PYI026.pyi:5:1: PYI026 [*] Use `typing_extensions.TypeAlias` for type alias, e.g., `Foo: TypeAlias = Literal["foo"]`
|
3 | NewAny = Any
4 | OptionalStr = typing.Optional[str]
5 | Foo = Literal["foo"]
| ^^^ PYI026
6 | IntOrStr = int | str
7 | AliasNone = None
|
= help: Add `TypeAlias` annotation
Suggested fix
1 1 | from typing import Literal, Any
2 |+import typing_extensions
2 3 |
3 4 | NewAny = Any
4 5 | OptionalStr = typing.Optional[str]
5 |-Foo = Literal["foo"]
6 |+Foo: typing_extensions.TypeAlias = Literal["foo"]
6 7 | IntOrStr = int | str
7 8 | AliasNone = None
8 9 |
PYI026.pyi:6:1: PYI026 [*] Use `typing_extensions.TypeAlias` for type alias, e.g., `IntOrStr: TypeAlias = int | str`
|
4 | OptionalStr = typing.Optional[str]
5 | Foo = Literal["foo"]
6 | IntOrStr = int | str
| ^^^^^^^^ PYI026
7 | AliasNone = None
|
= help: Add `TypeAlias` annotation
Suggested fix
1 1 | from typing import Literal, Any
2 |+import typing_extensions
2 3 |
3 4 | NewAny = Any
4 5 | OptionalStr = typing.Optional[str]
5 6 | Foo = Literal["foo"]
6 |-IntOrStr = int | str
7 |+IntOrStr: typing_extensions.TypeAlias = int | str
7 8 | AliasNone = None
8 9 |
9 10 | NewAny: typing.TypeAlias = Any
PYI026.pyi:7:1: PYI026 [*] Use `typing_extensions.TypeAlias` for type alias, e.g., `AliasNone: TypeAlias = None`
|
5 | Foo = Literal["foo"]
6 | IntOrStr = int | str
7 | AliasNone = None
| ^^^^^^^^^ PYI026
8 |
9 | NewAny: typing.TypeAlias = Any
|
= help: Add `TypeAlias` annotation
Suggested fix
1 1 | from typing import Literal, Any
2 |+import typing_extensions
2 3 |
3 4 | NewAny = Any
4 5 | OptionalStr = typing.Optional[str]
5 6 | Foo = Literal["foo"]
6 7 | IntOrStr = int | str
7 |-AliasNone = None
8 |+AliasNone: typing_extensions.TypeAlias = None
8 9 |
9 10 | NewAny: typing.TypeAlias = Any
10 11 | OptionalStr: TypeAlias = typing.Optional[str]

View File

@@ -732,8 +732,7 @@ fn check_fixture_decorator(checker: &mut Checker, func_name: &str, decorator: &D
keyword,
arguments,
edits::Parentheses::Preserve,
checker.locator(),
checker.source_type,
checker.locator().contents(),
)
.map(Fix::suggested)
});

View File

@@ -5,12 +5,13 @@ use ruff_python_ast::{
self as ast, Arguments, Constant, Decorator, Expr, ExprContext, PySourceType, Ranged,
};
use ruff_python_parser::{lexer, AsMode, Tok};
use ruff_text_size::TextRange;
use ruff_text_size::{TextRange, TextSize};
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_codegen::Generator;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_source_file::Locator;
use crate::checkers::ast::Checker;
@@ -215,11 +216,17 @@ pub struct PytestDuplicateParametrizeTestCases {
}
impl Violation for PytestDuplicateParametrizeTestCases {
const AUTOFIX: AutofixKind = AutofixKind::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let PytestDuplicateParametrizeTestCases { index } = self;
format!("Duplicate of test case at index {index} in `@pytest_mark.parametrize`")
}
fn autofix_title(&self) -> Option<String> {
Some("Remove duplicate test case".to_string())
}
}
fn elts_to_csv(elts: &[Expr], generator: Generator) -> Option<String> {
@@ -263,12 +270,11 @@ fn elts_to_csv(elts: &[Expr], generator: Generator) -> Option<String> {
/// Returns the range of the `name` argument of `@pytest.mark.parametrize`.
///
/// This accounts for implicit string concatenation with parenthesis.
/// For example, the following code will return the range marked with `^`:
/// This accounts for parenthesized expressions. For example, the following code
/// will return the range marked with `^`:
/// ```python
/// @pytest.mark.parametrize(("a, " "b"), [(1, 2)])
/// # ^^^^^^^^^^^
/// # implicit string concatenation with parenthesis
/// @pytest.mark.parametrize(("x"), [(1, 2)])
/// # ^^^^^
/// def test(a, b):
/// ...
/// ```
@@ -281,7 +287,7 @@ fn get_parametrize_name_range(
source_type: PySourceType,
) -> TextRange {
let mut locations = Vec::new();
let mut implicit_concat = None;
let mut name_range = None;
// The parenthesis are not part of the AST, so we need to tokenize the
// decorator to find them.
@@ -296,7 +302,7 @@ fn get_parametrize_name_range(
Tok::Lpar => locations.push(range.start()),
Tok::Rpar => {
if let Some(start) = locations.pop() {
implicit_concat = Some(TextRange::new(start, range.end()));
name_range = Some(TextRange::new(start, range.end()));
}
}
// Stop after the first argument.
@@ -304,12 +310,7 @@ fn get_parametrize_name_range(
_ => (),
}
}
if let Some(range) = implicit_concat {
range
} else {
expr.range()
}
name_range.unwrap_or_else(|| expr.range())
}
/// PT006
@@ -551,6 +552,21 @@ fn check_values(checker: &mut Checker, names: &Expr, values: &Expr) {
}
}
/// Given an element in a list, return the comma that follows it:
/// ```python
/// @pytest.mark.parametrize(
/// "x",
/// [.., (elt), ..],
/// ^^^^^
/// Tokenize this range to locate the comma.
/// )
/// ```
fn trailing_comma(element: &Expr, source: &str) -> Option<TextSize> {
SimpleTokenizer::starts_at(element.end(), source)
.find(|token| token.kind == SimpleTokenKind::Comma)
.map(|token| token.start())
}
/// PT014
fn check_duplicates(checker: &mut Checker, values: &Expr) {
let (Expr::List(ast::ExprList { elts, .. }) | Expr::Tuple(ast::ExprTuple { elts, .. })) =
@@ -561,16 +577,37 @@ fn check_duplicates(checker: &mut Checker, values: &Expr) {
let mut seen: FxHashMap<ComparableExpr, usize> =
FxHashMap::with_capacity_and_hasher(elts.len(), BuildHasherDefault::default());
for (index, elt) in elts.iter().enumerate() {
let expr = ComparableExpr::from(elt);
let mut prev = None;
for (index, element) in elts.iter().enumerate() {
let expr = ComparableExpr::from(element);
seen.entry(expr)
.and_modify(|index| {
checker.diagnostics.push(Diagnostic::new(
let mut diagnostic = Diagnostic::new(
PytestDuplicateParametrizeTestCases { index: *index },
elt.range(),
));
element.range(),
);
if checker.patch(diagnostic.kind.rule()) {
if let Some(prev) = prev {
let values_end = values.range().end() - TextSize::new(1);
let previous_end = trailing_comma(prev, checker.locator().contents())
.unwrap_or(values_end);
let element_end = trailing_comma(element, checker.locator().contents())
.unwrap_or(values_end);
let deletion_range = TextRange::new(previous_end, element_end);
if !checker
.indexer()
.comment_ranges()
.intersects(deletion_range)
{
diagnostic
.set_fix(Fix::suggested(Edit::range_deletion(deletion_range)));
}
}
}
checker.diagnostics.push(diagnostic);
})
.or_insert(index);
prev = Some(element);
}
}
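The duplicate check above records the index of the first occurrence of each comparable expression and flags every later occurrence against it. The same first-seen-index bookkeeping in plain Python (hashable values stand in for `ComparableExpr`; the function name is illustrative):

```python
def duplicate_test_cases(values):
    """Yield (duplicate_index, first_occurrence_index) pairs, mirroring
    the `seen` map in `check_duplicates`."""
    seen = {}
    for index, value in enumerate(values):
        if value in seen:
            yield index, seen[value]
        else:
            seen[value] = index
```

For `[a, a, b, b, b, c]` this reports indices 1, 3, and 4 as duplicates of 0, 2, and 2 respectively, matching the PT014 diagnostics in the snapshots below.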
@@ -636,19 +673,17 @@ pub(crate) fn parametrize(checker: &mut Checker, decorators: &[Decorator]) {
}) = &decorator.expression
{
if checker.enabled(Rule::PytestParametrizeNamesWrongType) {
if let Some(names) = args.get(0) {
if let [names, ..] = args.as_slice() {
check_names(checker, decorator, names);
}
}
if checker.enabled(Rule::PytestParametrizeValuesWrongType) {
if let Some(names) = args.get(0) {
if let Some(values) = args.get(1) {
check_values(checker, names, values);
}
if let [names, values, ..] = args.as_slice() {
check_values(checker, names, values);
}
}
if checker.enabled(Rule::PytestDuplicateParametrizeTestCases) {
if let [_, values, ..] = &args[..] {
if let [_, values, ..] = args.as_slice() {
check_duplicates(checker, values);
}
}


@@ -89,18 +89,19 @@ fn check_patch_call(call: &ast::ExprCall, index: usize) -> Option<Diagnostic> {
.find_argument("new", index)?
.as_lambda_expr()?;
// Walk the lambda body.
let mut visitor = LambdaBodyVisitor {
parameters,
uses_args: false,
};
visitor.visit_expr(body);
if visitor.uses_args {
None
} else {
Some(Diagnostic::new(PytestPatchWithLambda, call.func.range()))
// Walk the lambda body. If the lambda uses the arguments, then it's valid.
if let Some(parameters) = parameters {
let mut visitor = LambdaBodyVisitor {
parameters,
uses_args: false,
};
visitor.visit_expr(body);
if visitor.uses_args {
return None;
}
}
Some(Diagnostic::new(PytestPatchWithLambda, call.func.range()))
}
/// PT008


@@ -208,5 +208,24 @@ PT006.py:64:26: PT006 [*] Wrong name(s) type in `@pytest.mark.parametrize`, expe
64 |+@pytest.mark.parametrize(("param1", "param2", "param3"), [(1, 2, 3), (4, 5, 6)])
65 65 | def test_implicit_str_concat_with_multi_parens(param1, param2, param3):
66 66 | ...
67 67 |
PT006.py:69:26: PT006 [*] Wrong name(s) type in `@pytest.mark.parametrize`, expected `tuple`
|
69 | @pytest.mark.parametrize(("param1,param2"), [(1, 2), (3, 4)])
| ^^^^^^^^^^^^^^^^^ PT006
70 | def test_csv_with_parens(param1, param2):
71 | ...
|
= help: Use a `tuple` for parameter names
Suggested fix
66 66 | ...
67 67 |
68 68 |
69 |-@pytest.mark.parametrize(("param1,param2"), [(1, 2), (3, 4)])
69 |+@pytest.mark.parametrize(("param1", "param2"), [(1, 2), (3, 4)])
70 70 | def test_csv_with_parens(param1, param2):
71 71 | ...


@@ -170,5 +170,24 @@ PT006.py:64:26: PT006 [*] Wrong name(s) type in `@pytest.mark.parametrize`, expe
64 |+@pytest.mark.parametrize(["param1", "param2", "param3"], [(1, 2, 3), (4, 5, 6)])
65 65 | def test_implicit_str_concat_with_multi_parens(param1, param2, param3):
66 66 | ...
67 67 |
PT006.py:69:26: PT006 [*] Wrong name(s) type in `@pytest.mark.parametrize`, expected `list`
|
69 | @pytest.mark.parametrize(("param1,param2"), [(1, 2), (3, 4)])
| ^^^^^^^^^^^^^^^^^ PT006
70 | def test_csv_with_parens(param1, param2):
71 | ...
|
= help: Use a `list` for parameter names
Suggested fix
66 66 | ...
67 67 |
68 68 |
69 |-@pytest.mark.parametrize(("param1,param2"), [(1, 2), (3, 4)])
69 |+@pytest.mark.parametrize(["param1", "param2"], [(1, 2), (3, 4)])
70 70 | def test_csv_with_parens(param1, param2):
71 71 | ...


@@ -1,44 +1,169 @@
---
source: crates/ruff/src/rules/flake8_pytest_style/mod.rs
---
PT014.py:4:35: PT014 Duplicate of test case at index 0 in `@pytest_mark.parametrize`
PT014.py:4:35: PT014 [*] Duplicate of test case at index 0 in `@pytest_mark.parametrize`
|
4 | @pytest.mark.parametrize("x", [1, 1, 2])
| ^ PT014
5 | def test_error_literal(x):
6 | ...
|
= help: Remove duplicate test case
PT014.py:14:35: PT014 Duplicate of test case at index 0 in `@pytest_mark.parametrize`
Suggested fix
1 1 | import pytest
2 2 |
3 3 |
4 |-@pytest.mark.parametrize("x", [1, 1, 2])
4 |+@pytest.mark.parametrize("x", [1, 2])
5 5 | def test_error_literal(x):
6 6 | ...
7 7 |
PT014.py:14:35: PT014 [*] Duplicate of test case at index 0 in `@pytest_mark.parametrize`
|
14 | @pytest.mark.parametrize("x", [a, a, b, b, b, c])
| ^ PT014
15 | def test_error_expr_simple(x):
16 | ...
|
= help: Remove duplicate test case
PT014.py:14:41: PT014 Duplicate of test case at index 2 in `@pytest_mark.parametrize`
Suggested fix
11 11 | c = 3
12 12 |
13 13 |
14 |-@pytest.mark.parametrize("x", [a, a, b, b, b, c])
14 |+@pytest.mark.parametrize("x", [a, b, b, b, c])
15 15 | def test_error_expr_simple(x):
16 16 | ...
17 17 |
PT014.py:14:41: PT014 [*] Duplicate of test case at index 2 in `@pytest_mark.parametrize`
|
14 | @pytest.mark.parametrize("x", [a, a, b, b, b, c])
| ^ PT014
15 | def test_error_expr_simple(x):
16 | ...
|
= help: Remove duplicate test case
PT014.py:14:44: PT014 Duplicate of test case at index 2 in `@pytest_mark.parametrize`
Suggested fix
11 11 | c = 3
12 12 |
13 13 |
14 |-@pytest.mark.parametrize("x", [a, a, b, b, b, c])
14 |+@pytest.mark.parametrize("x", [a, a, b, b, c])
15 15 | def test_error_expr_simple(x):
16 16 | ...
17 17 |
PT014.py:14:44: PT014 [*] Duplicate of test case at index 2 in `@pytest_mark.parametrize`
|
14 | @pytest.mark.parametrize("x", [a, a, b, b, b, c])
| ^ PT014
15 | def test_error_expr_simple(x):
16 | ...
|
= help: Remove duplicate test case
PT014.py:19:40: PT014 Duplicate of test case at index 0 in `@pytest_mark.parametrize`
Suggested fix
11 11 | c = 3
12 12 |
13 13 |
14 |-@pytest.mark.parametrize("x", [a, a, b, b, b, c])
14 |+@pytest.mark.parametrize("x", [a, a, b, b, c])
15 15 | def test_error_expr_simple(x):
16 16 | ...
17 17 |
PT014.py:24:9: PT014 Duplicate of test case at index 0 in `@pytest_mark.parametrize`
|
19 | @pytest.mark.parametrize("x", [(a, b), (a, b), (b, c)])
| ^^^^^^ PT014
20 | def test_error_expr_complex(x):
21 | ...
22 | (a, b),
23 | # comment
24 | (a, b),
| ^^^^^^ PT014
25 | (b, c),
26 | ],
|
= help: Remove duplicate test case
PT014.py:32:39: PT014 [*] Duplicate of test case at index 0 in `@pytest_mark.parametrize`
|
32 | @pytest.mark.parametrize("x", [a, b, (a), c, ((a))])
| ^ PT014
33 | def test_error_parentheses(x):
34 | ...
|
= help: Remove duplicate test case
Suggested fix
29 29 | ...
30 30 |
31 31 |
32 |-@pytest.mark.parametrize("x", [a, b, (a), c, ((a))])
32 |+@pytest.mark.parametrize("x", [a, b, c, ((a))])
33 33 | def test_error_parentheses(x):
34 34 | ...
35 35 |
PT014.py:32:48: PT014 [*] Duplicate of test case at index 0 in `@pytest_mark.parametrize`
|
32 | @pytest.mark.parametrize("x", [a, b, (a), c, ((a))])
| ^ PT014
33 | def test_error_parentheses(x):
34 | ...
|
= help: Remove duplicate test case
Suggested fix
29 29 | ...
30 30 |
31 31 |
32 |-@pytest.mark.parametrize("x", [a, b, (a), c, ((a))])
32 |+@pytest.mark.parametrize("x", [a, b, (a), c])
33 33 | def test_error_parentheses(x):
34 34 | ...
35 35 |
PT014.py:42:10: PT014 [*] Duplicate of test case at index 0 in `@pytest_mark.parametrize`
|
40 | a,
41 | b,
42 | (a),
| ^ PT014
43 | c,
44 | ((a)),
|
= help: Remove duplicate test case
Suggested fix
39 39 | [
40 40 | a,
41 41 | b,
42 |- (a),
43 42 | c,
44 43 | ((a)),
45 44 | ],
PT014.py:44:11: PT014 [*] Duplicate of test case at index 0 in `@pytest_mark.parametrize`
|
42 | (a),
43 | c,
44 | ((a)),
| ^ PT014
45 | ],
46 | )
|
= help: Remove duplicate test case
Suggested fix
41 41 | b,
42 42 | (a),
43 43 | c,
44 |- ((a)),
45 44 | ],
46 45 | )
47 46 | def test_error_parentheses_trailing_comma(x):


@@ -1,10 +1,6 @@
use ruff_python_ast::{self as ast, Arguments, Expr, PySourceType, Ranged};
use ruff_python_parser::{lexer, AsMode, Tok};
use ruff_text_size::{TextRange, TextSize};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_source_file::Locator;
use ruff_python_ast::{self as ast, Expr, Ranged};
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
@@ -49,19 +45,14 @@ impl AlwaysAutofixableViolation for UnnecessaryParenOnRaiseException {
pub(crate) fn unnecessary_paren_on_raise_exception(checker: &mut Checker, expr: &Expr) {
let Expr::Call(ast::ExprCall {
func,
arguments:
Arguments {
args,
keywords,
range: _,
},
arguments,
range: _,
}) = expr
else {
return;
};
if args.is_empty() && keywords.is_empty() {
if arguments.is_empty() {
// `raise func()` still requires parentheses; only `raise Class()` does not.
if checker
.semantic()
@@ -71,49 +62,20 @@ pub(crate) fn unnecessary_paren_on_raise_exception(checker: &mut Checker, expr:
return;
}
let range = match_parens(func.end(), checker.locator(), checker.source_type)
.expect("Expected call to include parentheses");
let mut diagnostic = Diagnostic::new(UnnecessaryParenOnRaiseException, range);
// `ctypes.WinError()` is a function, not a class. It's part of the standard library, so
// we might as well get it right.
if checker
.semantic()
.resolve_call_path(func)
.is_some_and(|call_path| matches!(call_path.as_slice(), ["ctypes", "WinError"]))
{
return;
}
let mut diagnostic = Diagnostic::new(UnnecessaryParenOnRaiseException, arguments.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.set_fix(Fix::automatic(Edit::deletion(func.end(), range.end())));
diagnostic.set_fix(Fix::automatic(Edit::range_deletion(arguments.range())));
}
checker.diagnostics.push(diagnostic);
}
}
/// Return the range of the first parenthesis pair after a given [`TextSize`].
fn match_parens(
start: TextSize,
locator: &Locator,
source_type: PySourceType,
) -> Option<TextRange> {
let contents = &locator.contents()[usize::from(start)..];
let mut fix_start = None;
let mut fix_end = None;
let mut count = 0u32;
for (tok, range) in lexer::lex_starts_at(contents, source_type.as_mode(), start).flatten() {
match tok {
Tok::Lpar => {
if count == 0 {
fix_start = Some(range.start());
}
count = count.saturating_add(1);
}
Tok::Rpar => {
count = count.saturating_sub(1);
if count == 0 {
fix_end = Some(range.end());
break;
}
}
_ => {}
}
}
match (fix_start, fix_end) {
(Some(start), Some(end)) => Some(TextRange::new(start, end)),
_ => None,
}
}


@@ -57,7 +57,7 @@ RSE102.py:16:17: RSE102 [*] Unnecessary parentheses on raised exception
14 14 |
15 15 | # RSE102
16 |-raise TypeError ()
16 |+raise TypeError
17 17 |
18 18 | # RSE102
19 19 | raise TypeError \
@@ -74,64 +74,109 @@ RSE102.py:20:5: RSE102 [*] Unnecessary parentheses on raised exception
= help: Remove unnecessary parentheses
Fix
16 16 | raise TypeError ()
17 17 |
18 18 | # RSE102
19 |-raise TypeError \
19 19 | raise TypeError \
20 |- ()
19 |+raise TypeError
21 20 |
22 21 | # RSE102
23 22 | raise TypeError(
RSE102.py:23:16: RSE102 [*] Unnecessary parentheses on raised exception
|
22 | # RSE102
23 | raise TypeError(
| ________________^
24 | |
25 | | )
| |_^ RSE102
26 |
27 | # RSE102
|
= help: Remove unnecessary parentheses
Fix
20 20 | ()
20 |+
21 21 |
22 22 | # RSE102
23 |-raise TypeError(
24 |-
25 |-)
23 |+raise TypeError
26 24 |
27 25 | # RSE102
28 26 | raise TypeError(
23 23 | raise TypeError \
RSE102.py:28:16: RSE102 [*] Unnecessary parentheses on raised exception
RSE102.py:24:5: RSE102 [*] Unnecessary parentheses on raised exception
|
27 | # RSE102
28 | raise TypeError(
| ________________^
29 | | # Hello, world!
30 | | )
| |_^ RSE102
31 |
32 | # OK
22 | # RSE102
23 | raise TypeError \
24 | ();
| ^^ RSE102
25 |
26 | # RSE102
|
= help: Remove unnecessary parentheses
Fix
25 25 | )
26 26 |
27 27 | # RSE102
28 |-raise TypeError(
29 |- # Hello, world!
30 |-)
28 |+raise TypeError
31 29 |
32 30 | # OK
33 31 | raise AssertionError
21 21 |
22 22 | # RSE102
23 23 | raise TypeError \
24 |- ();
24 |+ ;
25 25 |
26 26 | # RSE102
27 27 | raise TypeError(
RSE102.py:27:16: RSE102 [*] Unnecessary parentheses on raised exception
|
26 | # RSE102
27 | raise TypeError(
| ________________^
28 | |
29 | | )
| |_^ RSE102
30 |
31 | # RSE102
|
= help: Remove unnecessary parentheses
Fix
24 24 | ();
25 25 |
26 26 | # RSE102
27 |-raise TypeError(
28 |-
29 |-)
27 |+raise TypeError
30 28 |
31 29 | # RSE102
32 30 | raise (TypeError) (
RSE102.py:32:19: RSE102 [*] Unnecessary parentheses on raised exception
|
31 | # RSE102
32 | raise (TypeError) (
| ___________________^
33 | |
34 | | )
| |_^ RSE102
35 |
36 | # RSE102
|
= help: Remove unnecessary parentheses
Fix
29 29 | )
30 30 |
31 31 | # RSE102
32 |-raise (TypeError) (
33 |-
34 |-)
32 |+raise (TypeError)
35 33 |
36 34 | # RSE102
37 35 | raise TypeError(
RSE102.py:37:16: RSE102 [*] Unnecessary parentheses on raised exception
|
36 | # RSE102
37 | raise TypeError(
| ________________^
38 | | # Hello, world!
39 | | )
| |_^ RSE102
40 |
41 | # OK
|
= help: Remove unnecessary parentheses
Fix
34 34 | )
35 35 |
36 36 | # RSE102
37 |-raise TypeError(
38 |- # Hello, world!
39 |-)
37 |+raise TypeError
40 38 |
41 39 | # OK
42 40 | raise AssertionError


@@ -465,16 +465,15 @@ fn match_eq_target(expr: &Expr) -> Option<(&str, &Expr)> {
else {
return None;
};
if ops.len() != 1 || comparators.len() != 1 {
return None;
}
if !matches!(&ops[0], CmpOp::Eq) {
if ops != &[CmpOp::Eq] {
return None;
}
let Expr::Name(ast::ExprName { id, .. }) = left.as_ref() else {
return None;
};
let comparator = &comparators[0];
let [comparator] = comparators.as_slice() else {
return None;
};
if !matches!(&comparator, Expr::Name(_)) {
return None;
}


@@ -878,7 +878,7 @@ pub(crate) fn use_dict_get_with_default(checker: &mut Checker, stmt_if: &ast::St
else {
return;
};
if body_var.len() != 1 {
let [body_var] = body_var.as_slice() else {
return;
};
let Stmt::Assign(ast::StmtAssign {
@@ -889,7 +889,7 @@ pub(crate) fn use_dict_get_with_default(checker: &mut Checker, stmt_if: &ast::St
else {
return;
};
if orelse_var.len() != 1 {
let [orelse_var] = orelse_var.as_slice() else {
return;
};
let Expr::Compare(ast::ExprCompare {
@@ -901,27 +901,16 @@ pub(crate) fn use_dict_get_with_default(checker: &mut Checker, stmt_if: &ast::St
else {
return;
};
if test_dict.len() != 1 {
let [test_dict] = test_dict.as_slice() else {
return;
}
};
let (expected_var, expected_value, default_var, default_value) = match ops[..] {
[CmpOp::In] => (
&body_var[0],
body_value,
&orelse_var[0],
orelse_value.as_ref(),
),
[CmpOp::NotIn] => (
&orelse_var[0],
orelse_value,
&body_var[0],
body_value.as_ref(),
),
[CmpOp::In] => (body_var, body_value, orelse_var, orelse_value.as_ref()),
[CmpOp::NotIn] => (orelse_var, orelse_value, body_var, body_value.as_ref()),
_ => {
return;
}
};
let test_dict = &test_dict[0];
let Expr::Subscript(ast::ExprSubscript {
value: expected_subscript,
slice: expected_slice,


@@ -1,16 +1,12 @@
use anyhow::Result;
use ruff_python_ast::{self as ast, Arguments, CmpOp, Expr, Ranged};
use ruff_text_size::TextRange;
use ruff_diagnostics::Edit;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_codegen::Stylist;
use ruff_source_file::Locator;
use ruff_python_ast::node::AnyNodeRef;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::{self as ast, Arguments, CmpOp, Comprehension, Expr, Ranged};
use ruff_text_size::TextRange;
use crate::autofix::codemods::CodegenStylist;
use crate::checkers::ast::Checker;
use crate::cst::matchers::{match_attribute, match_call_mut, match_expression};
use crate::registry::AsRule;
/// ## What it does
@@ -67,7 +63,7 @@ fn key_in_dict(
left: &Expr,
right: &Expr,
operator: CmpOp,
range: TextRange,
parent: AnyNodeRef,
) {
let Expr::Call(ast::ExprCall {
func,
@@ -100,13 +96,16 @@ fn key_in_dict(
return;
}
// Slice exact content to preserve formatting.
let left_content = checker.locator().slice(left.range());
let Ok(value_content) =
value_content_for_key_in_dict(checker.locator(), checker.stylist(), right)
else {
return;
};
// Extract the exact range of the left and right expressions.
let left_range = parenthesized_range(left.into(), parent, checker.locator().contents())
.unwrap_or(left.range());
let right_range = parenthesized_range(right.into(), parent, checker.locator().contents())
.unwrap_or(right.range());
let value_range = parenthesized_range(value.into(), parent, checker.locator().contents())
.unwrap_or(value.range());
let left_content = checker.locator().slice(left_range);
let value_content = checker.locator().slice(value_range);
let mut diagnostic = Diagnostic::new(
InDictKeys {
@@ -114,37 +113,42 @@ fn key_in_dict(
dict: value_content.to_string(),
operator: operator.as_str().to_string(),
},
range,
TextRange::new(left_range.start(), right_range.end()),
);
if checker.patch(diagnostic.kind.rule()) {
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
value_content,
right.range(),
value_content.to_string(),
right_range,
)));
}
checker.diagnostics.push(diagnostic);
}
/// SIM118 in a for loop
pub(crate) fn key_in_dict_for(checker: &mut Checker, target: &Expr, iter: &Expr) {
/// SIM118 in a `for` loop.
pub(crate) fn key_in_dict_for(checker: &mut Checker, for_stmt: &ast::StmtFor) {
key_in_dict(
checker,
target,
iter,
&for_stmt.target,
&for_stmt.iter,
CmpOp::In,
TextRange::new(target.start(), iter.end()),
for_stmt.into(),
);
}
/// SIM118 in a comparison
pub(crate) fn key_in_dict_compare(
checker: &mut Checker,
expr: &Expr,
left: &Expr,
ops: &[CmpOp],
comparators: &[Expr],
) {
let [op] = ops else {
/// SIM118 in a comprehension.
pub(crate) fn key_in_dict_comprehension(checker: &mut Checker, comprehension: &Comprehension) {
key_in_dict(
checker,
&comprehension.target,
&comprehension.iter,
CmpOp::In,
comprehension.into(),
);
}
/// SIM118 in a comparison.
pub(crate) fn key_in_dict_compare(checker: &mut Checker, compare: &ast::ExprCompare) {
let [op] = compare.ops.as_slice() else {
return;
};
@@ -152,21 +156,9 @@ pub(crate) fn key_in_dict_compare(
return;
}
let [right] = comparators else {
let [right] = compare.comparators.as_slice() else {
return;
};
key_in_dict(checker, left, right, *op, expr.range());
}
fn value_content_for_key_in_dict(
locator: &Locator,
stylist: &Stylist,
expr: &Expr,
) -> Result<String> {
let content = locator.slice(expr.range());
let mut expression = match_expression(content)?;
let call = match_call_mut(&mut expression)?;
let attribute = match_attribute(&mut call.func)?;
Ok(attribute.value.codegen_stylist(stylist))
key_in_dict(checker, &compare.left, right, *op, compare.into());
}
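SIM118 rewrites membership tests and iteration over `.keys()` to operate on the dict directly, since `in` and iteration on a dict already go through its keys. A quick demonstration of the equivalence the fix relies on:

```python
d = {"a": 1, "b": 2}

# SIM118: `in` on a dict already tests keys, so `.keys()` is redundant.
assert ("a" in d.keys()) == ("a" in d)

# The same holds for iteration in `for` loops and comprehensions,
# which the refactor above now reaches via `StmtFor` and `Comprehension`.
assert [k for k in d.keys()] == [k for k in d]
```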


@@ -267,13 +267,10 @@ fn match_loop(stmt: &Stmt) -> Option<Loop> {
else {
return None;
};
if nested_body.len() != 1 {
return None;
}
if !nested_elif_else_clauses.is_empty() {
return None;
}
let Stmt::Return(ast::StmtReturn { value, range: _ }) = &nested_body[0] else {
let [Stmt::Return(ast::StmtReturn { value, range: _ })] = nested_body.as_slice() else {
return None;
};
let Some(value) = value else {


@@ -1,14 +1,15 @@
use anyhow::Result;
use libcst_native::CompOp;
use ruff_python_ast::{self as ast, CmpOp, Expr, Ranged, UnaryOp};
use crate::autofix::codemods::CodegenStylist;
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, CmpOp, Expr, Ranged, UnaryOp};
use ruff_python_codegen::Stylist;
use ruff_python_stdlib::str::{self};
use ruff_source_file::Locator;
use crate::autofix::codemods::CodegenStylist;
use crate::autofix::snippet::SourceCodeSnippet;
use crate::checkers::ast::Checker;
use crate::cst::matchers::{match_comparison, match_expression};
use crate::registry::AsRule;
@@ -45,7 +46,7 @@ use crate::registry::AsRule;
/// - [Python documentation: Assignment statements](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements)
#[violation]
pub struct YodaConditions {
pub suggestion: Option<String>,
suggestion: Option<SourceCodeSnippet>,
}
impl Violation for YodaConditions {
@@ -54,7 +55,10 @@ impl Violation for YodaConditions {
#[derive_message_formats]
fn message(&self) -> String {
let YodaConditions { suggestion } = self;
if let Some(suggestion) = suggestion {
if let Some(suggestion) = suggestion
.as_ref()
.and_then(SourceCodeSnippet::full_display)
{
format!("Yoda conditions are discouraged, use `{suggestion}` instead")
} else {
format!("Yoda conditions are discouraged")
@@ -63,9 +67,13 @@ impl Violation for YodaConditions {
fn autofix_title(&self) -> Option<String> {
let YodaConditions { suggestion } = self;
suggestion
.as_ref()
.map(|suggestion| format!("Replace Yoda condition with `{suggestion}`"))
suggestion.as_ref().map(|suggestion| {
if let Some(suggestion) = suggestion.full_display() {
format!("Replace Yoda condition with `{suggestion}`")
} else {
format!("Replace Yoda condition")
}
})
}
}
@@ -178,7 +186,7 @@ pub(crate) fn yoda_conditions(
if let Ok(suggestion) = reverse_comparison(expr, checker.locator(), checker.stylist()) {
let mut diagnostic = Diagnostic::new(
YodaConditions {
suggestion: Some(suggestion.to_string()),
suggestion: Some(SourceCodeSnippet::new(suggestion.clone())),
},
expr.range(),
);


@@ -274,7 +274,7 @@ SIM118.py:32:1: SIM118 [*] Use `key in (obj or {})` instead of `key in (obj or {
32 | key in (obj or {}).keys() # SIM118
| ^^^^^^^^^^^^^^^^^^^^^^^^^ SIM118
33 |
34 | from typing import KeysView
34 | (key) in (obj or {}).keys() # SIM118
|
= help: Convert to `key in (obj or {})`
@@ -285,7 +285,28 @@ SIM118.py:32:1: SIM118 [*] Use `key in (obj or {})` instead of `key in (obj or {
32 |-key in (obj or {}).keys() # SIM118
32 |+key in (obj or {}) # SIM118
33 33 |
34 34 | from typing import KeysView
34 34 | (key) in (obj or {}).keys() # SIM118
35 35 |
SIM118.py:34:1: SIM118 [*] Use `(key) in (obj or {})` instead of `(key) in (obj or {}).keys()`
|
32 | key in (obj or {}).keys() # SIM118
33 |
34 | (key) in (obj or {}).keys() # SIM118
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM118
35 |
36 | from typing import KeysView
|
= help: Convert to `(key) in (obj or {})`
Suggested fix
31 31 |
32 32 | key in (obj or {}).keys() # SIM118
33 33 |
34 |-(key) in (obj or {}).keys() # SIM118
34 |+(key) in (obj or {}) # SIM118
35 35 |
36 36 | from typing import KeysView
37 37 |

View File

@@ -290,7 +290,7 @@ SIM300.py:15:1: SIM300 [*] Yoda conditions are discouraged, use `(number - 100)
17 17 |
18 18 | # OK
SIM300.py:16:1: SIM300 [*] Yoda conditions are discouraged, use `(60 * 60) < SomeClass().settings.SOME_CONSTANT_VALUE` instead
SIM300.py:16:1: SIM300 [*] Yoda conditions are discouraged
|
14 | JediOrder.YODA == age # SIM300
15 | 0 < (number - 100) # SIM300
@@ -299,7 +299,7 @@ SIM300.py:16:1: SIM300 [*] Yoda conditions are discouraged, use `(60 * 60) < Som
17 |
18 | # OK
|
= help: Replace Yoda condition with `(60 * 60) < SomeClass().settings.SOME_CONSTANT_VALUE`
= help: Replace Yoda condition
Fix
13 13 | YODA >= age # SIM300


@@ -61,7 +61,7 @@ pub(crate) fn empty_type_checking_block(checker: &mut Checker, stmt: &ast::StmtI
let stmt = checker.semantic().current_statement();
let parent = checker.semantic().current_statement_parent();
let edit = autofix::edits::delete_stmt(stmt, parent, checker.locator(), checker.indexer());
diagnostic.set_fix(Fix::automatic(edit).isolate(checker.isolation(parent)));
diagnostic.set_fix(Fix::automatic(edit).isolate(checker.parent_isolation()));
}
checker.diagnostics.push(diagnostic);
}


@@ -6,7 +6,7 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::{AutofixKind, Diagnostic, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::{AnyImport, Imported, ResolvedReferenceId, Scope, StatementId};
use ruff_python_semantic::{AnyImport, Imported, NodeId, ResolvedReferenceId, Scope};
use ruff_text_size::TextRange;
use crate::autofix;
@@ -72,8 +72,8 @@ pub(crate) fn runtime_import_in_type_checking_block(
diagnostics: &mut Vec<Diagnostic>,
) {
// Collect all runtime imports by statement.
let mut errors_by_statement: FxHashMap<StatementId, Vec<ImportBinding>> = FxHashMap::default();
let mut ignores_by_statement: FxHashMap<StatementId, Vec<ImportBinding>> = FxHashMap::default();
let mut errors_by_statement: FxHashMap<NodeId, Vec<ImportBinding>> = FxHashMap::default();
let mut ignores_by_statement: FxHashMap<NodeId, Vec<ImportBinding>> = FxHashMap::default();
for binding_id in scope.binding_ids() {
let binding = checker.semantic().binding(binding_id);
@@ -95,7 +95,7 @@ pub(crate) fn runtime_import_in_type_checking_block(
.is_runtime()
})
{
let Some(statement_id) = binding.source else {
let Some(node_id) = binding.source else {
continue;
};
@@ -115,23 +115,20 @@ pub(crate) fn runtime_import_in_type_checking_block(
})
{
ignores_by_statement
.entry(statement_id)
.entry(node_id)
.or_default()
.push(import);
} else {
errors_by_statement
.entry(statement_id)
.or_default()
.push(import);
errors_by_statement.entry(node_id).or_default().push(import);
}
}
}
// Generate a diagnostic for every import, but share a fix across all imports within the same
// statement (excluding those that are ignored).
for (statement_id, imports) in errors_by_statement {
for (node_id, imports) in errors_by_statement {
let fix = if checker.patch(Rule::RuntimeImportInTypeCheckingBlock) {
fix_imports(checker, statement_id, &imports).ok()
fix_imports(checker, node_id, &imports).ok()
} else {
None
};
@@ -200,13 +197,9 @@ impl Ranged for ImportBinding<'_> {
}
/// Generate a [`Fix`] to remove runtime imports from a type-checking block.
fn fix_imports(
checker: &Checker,
statement_id: StatementId,
imports: &[ImportBinding],
) -> Result<Fix> {
let statement = checker.semantic().statement(statement_id);
let parent = checker.semantic().parent_statement(statement_id);
fn fix_imports(checker: &Checker, node_id: NodeId, imports: &[ImportBinding]) -> Result<Fix> {
let statement = checker.semantic().statement(node_id);
let parent = checker.semantic().parent_statement(node_id);
let member_names: Vec<Cow<'_, str>> = imports
.iter()
@@ -244,6 +237,6 @@ fn fix_imports(
Ok(
Fix::suggested_edits(remove_import_edit, add_import_edit.into_edits())
.isolate(checker.isolation(parent)),
.isolate(checker.parent_isolation()),
)
}


@@ -6,7 +6,7 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::{AutofixKind, Diagnostic, DiagnosticKind, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::{AnyImport, Binding, Imported, ResolvedReferenceId, Scope, StatementId};
use ruff_python_semantic::{AnyImport, Binding, Imported, NodeId, ResolvedReferenceId, Scope};
use ruff_text_size::TextRange;
use crate::autofix;
@@ -227,9 +227,9 @@ pub(crate) fn typing_only_runtime_import(
diagnostics: &mut Vec<Diagnostic>,
) {
// Collect all typing-only imports by statement and import type.
let mut errors_by_statement: FxHashMap<(StatementId, ImportType), Vec<ImportBinding>> =
let mut errors_by_statement: FxHashMap<(NodeId, ImportType), Vec<ImportBinding>> =
FxHashMap::default();
let mut ignores_by_statement: FxHashMap<(StatementId, ImportType), Vec<ImportBinding>> =
let mut ignores_by_statement: FxHashMap<(NodeId, ImportType), Vec<ImportBinding>> =
FxHashMap::default();
for binding_id in scope.binding_ids() {
@@ -302,7 +302,7 @@ pub(crate) fn typing_only_runtime_import(
continue;
}
let Some(statement_id) = binding.source else {
let Some(node_id) = binding.source else {
continue;
};
@@ -319,12 +319,12 @@ pub(crate) fn typing_only_runtime_import(
})
{
ignores_by_statement
.entry((statement_id, import_type))
.entry((node_id, import_type))
.or_default()
.push(import);
} else {
errors_by_statement
.entry((statement_id, import_type))
.entry((node_id, import_type))
.or_default()
.push(import);
}
@@ -333,9 +333,9 @@ pub(crate) fn typing_only_runtime_import(
// Generate a diagnostic for every import, but share a fix across all imports within the same
// statement (excluding those that are ignored).
for ((statement_id, import_type), imports) in errors_by_statement {
for ((node_id, import_type), imports) in errors_by_statement {
let fix = if checker.patch(rule_for(import_type)) {
fix_imports(checker, statement_id, &imports).ok()
fix_imports(checker, node_id, &imports).ok()
} else {
None
};
@@ -445,13 +445,9 @@ fn is_exempt(name: &str, exempt_modules: &[&str]) -> bool {
}
/// Generate a [`Fix`] to remove typing-only imports from a runtime context.
fn fix_imports(
checker: &Checker,
statement_id: StatementId,
imports: &[ImportBinding],
) -> Result<Fix> {
let statement = checker.semantic().statement(statement_id);
let parent = checker.semantic().parent_statement(statement_id);
fn fix_imports(checker: &Checker, node_id: NodeId, imports: &[ImportBinding]) -> Result<Fix> {
let statement = checker.semantic().statement(node_id);
let parent = checker.semantic().parent_statement(node_id);
let member_names: Vec<Cow<'_, str>> = imports
.iter()
@@ -491,6 +487,6 @@ fn fix_imports(
Ok(
Fix::suggested_edits(remove_import_edit, add_import_edit.into_edits())
.isolate(checker.isolation(parent)),
.isolate(checker.parent_isolation()),
)
}


@@ -1,5 +1,5 @@
//! Rules from [flake8-unused-arguments](https://pypi.org/project/flake8-unused-arguments/).
mod helpers;
pub(crate) mod helpers;
pub(crate) mod rules;
pub mod settings;


@@ -437,19 +437,21 @@ pub(crate) fn unused_arguments(
}
}
ScopeKind::Lambda(ast::ExprLambda { parameters, .. }) => {
if checker.enabled(Argumentable::Lambda.rule_code()) {
function(
Argumentable::Lambda,
parameters,
scope,
checker.semantic(),
&checker.settings.dummy_variable_rgx,
checker
.settings
.flake8_unused_arguments
.ignore_variadic_names,
diagnostics,
);
if let Some(parameters) = parameters {
if checker.enabled(Argumentable::Lambda.rule_code()) {
function(
Argumentable::Lambda,
parameters,
scope,
checker.semantic(),
&checker.settings.dummy_variable_rgx,
checker
.settings
.flake8_unused_arguments
.ignore_variadic_names,
diagnostics,
);
}
}
}
_ => panic!("Expected ScopeKind::Function | ScopeKind::Lambda"),


@@ -1,40 +1,40 @@
---
source: crates/ruff/src/rules/flake8_unused_arguments/mod.rs
---
ARG.py:35:17: ARG002 Unused method argument: `x`
ARG.py:37:17: ARG002 Unused method argument: `x`
|
33 | # Unused arguments.
34 | ###
35 | def f(self, x):
35 | # Unused arguments.
36 | ###
37 | def f(self, x):
| ^ ARG002
36 | print("Hello, world!")
38 | print("Hello, world!")
|
ARG.py:38:20: ARG002 Unused method argument: `x`
ARG.py:40:20: ARG002 Unused method argument: `x`
|
36 | print("Hello, world!")
37 |
38 | def f(self, /, x):
38 | print("Hello, world!")
39 |
40 | def f(self, /, x):
| ^ ARG002
39 | print("Hello, world!")
41 | print("Hello, world!")
|
ARG.py:41:16: ARG002 Unused method argument: `x`
ARG.py:43:16: ARG002 Unused method argument: `x`
|
39 | print("Hello, world!")
40 |
41 | def f(cls, x):
41 | print("Hello, world!")
42 |
43 | def f(cls, x):
| ^ ARG002
42 | print("Hello, world!")
44 | print("Hello, world!")
|
ARG.py:190:24: ARG002 Unused method argument: `x`
ARG.py:192:24: ARG002 Unused method argument: `x`
|
188 | ###
189 | class C:
190 | def __init__(self, x) -> None:
190 | ###
191 | class C:
192 | def __init__(self, x) -> None:
| ^ ARG002
191 | print("Hello, world!")
193 | print("Hello, world!")
|


@@ -1,12 +1,12 @@
---
source: crates/ruff/src/rules/flake8_unused_arguments/mod.rs
---
ARG.py:45:16: ARG003 Unused class method argument: `x`
ARG.py:47:16: ARG003 Unused class method argument: `x`
|
44 | @classmethod
45 | def f(cls, x):
46 | @classmethod
47 | def f(cls, x):
| ^ ARG003
46 | print("Hello, world!")
48 | print("Hello, world!")
|


@@ -1,28 +1,28 @@
---
source: crates/ruff/src/rules/flake8_unused_arguments/mod.rs
---
-ARG.py:49:11: ARG004 Unused static method argument: `cls`
+ARG.py:51:11: ARG004 Unused static method argument: `cls`
    |
-48 |     @staticmethod
-49 |     def f(cls, x):
+50 |     @staticmethod
+51 |     def f(cls, x):
    |           ^^^ ARG004
-50 |         print("Hello, world!")
+52 |         print("Hello, world!")
    |
-ARG.py:49:16: ARG004 Unused static method argument: `x`
+ARG.py:51:16: ARG004 Unused static method argument: `x`
    |
-48 |     @staticmethod
-49 |     def f(cls, x):
+50 |     @staticmethod
+51 |     def f(cls, x):
    |                ^ ARG004
-50 |         print("Hello, world!")
+52 |         print("Hello, world!")
    |
-ARG.py:53:11: ARG004 Unused static method argument: `x`
+ARG.py:55:11: ARG004 Unused static method argument: `x`
    |
-52 |     @staticmethod
-53 |     def f(x):
+54 |     @staticmethod
+55 |     def f(x):
    |           ^ ARG004
-54 |         print("Hello, world!")
+56 |         print("Hello, world!")
    |


@@ -7,6 +7,8 @@ ARG.py:28:8: ARG005 Unused lambda argument: `x`
 27 | ###
 28 | lambda x: print("Hello, world!")
    |        ^ ARG005
+29 |
+30 | lambda: print("Hello, world!")
    |


@@ -1,10 +1,11 @@
@@ -1,10 +1,11 @@
 use itertools::Itertools;
-use ruff_python_ast::{self as ast, Arguments, Constant, Expr, Ranged};
-use ruff_text_size::TextRange;
 use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
 use ruff_macros::{derive_message_formats, violation};
+use ruff_python_ast::{self as ast, Arguments, Constant, Expr, Ranged};
+use ruff_text_size::TextRange;

+use crate::autofix::snippet::SourceCodeSnippet;
 use crate::checkers::ast::Checker;
 use crate::registry::AsRule;
 use crate::rules::flynt::helpers;
@@ -29,19 +30,27 @@ use crate::rules::flynt::helpers;
 /// - [Python documentation: f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings)
 #[violation]
 pub struct StaticJoinToFString {
-    expr: String,
+    expression: SourceCodeSnippet,
 }

 impl AlwaysAutofixableViolation for StaticJoinToFString {
     #[derive_message_formats]
     fn message(&self) -> String {
-        let StaticJoinToFString { expr } = self;
-        format!("Consider `{expr}` instead of string join")
+        let StaticJoinToFString { expression } = self;
+        if let Some(expression) = expression.full_display() {
+            format!("Consider `{expression}` instead of string join")
+        } else {
+            format!("Consider f-string instead of string join")
+        }
     }

     fn autofix_title(&self) -> String {
-        let StaticJoinToFString { expr } = self;
-        format!("Replace with `{expr}`")
+        let StaticJoinToFString { expression } = self;
+        if let Some(expression) = expression.full_display() {
+            format!("Replace with `{expression}`")
+        } else {
+            format!("Replace with f-string")
+        }
     }
 }
@@ -114,14 +123,17 @@ pub(crate) fn static_join_to_fstring(checker: &mut Checker, expr: &Expr, joiner:
         return;
     };

-    if !keywords.is_empty() || args.len() != 1 {
-        // If there are kwargs or more than one argument, this is some non-standard
-        // string join call.
+    // If there are kwargs or more than one argument, this is some non-standard
+    // string join call.
+    if !keywords.is_empty() {
         return;
     }
+    let [arg] = args.as_slice() else {
+        return;
+    };

     // Get the elements to join; skip (e.g.) generators, sets, etc.
-    let joinees = match &args[0] {
+    let joinees = match &arg {
         Expr::List(ast::ExprList { elts, .. }) if is_static_length(elts) => elts,
         Expr::Tuple(ast::ExprTuple { elts, .. }) if is_static_length(elts) => elts,
         _ => return,
@@ -137,7 +149,7 @@ pub(crate) fn static_join_to_fstring(checker: &mut Checker, expr: &Expr, joiner:
     let mut diagnostic = Diagnostic::new(
         StaticJoinToFString {
-            expr: contents.clone(),
+            expression: SourceCodeSnippet::new(contents.clone()),
         },
         expr.range(),
     );
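For context, the rule being changed here (FLY002, `static-join-to-f-string`) rewrites a `str.join` over a fixed-length list as an f-string; with this change the diagnostic message truncates overly long expressions (falling back to a generic "Consider f-string" wording when `full_display()` returns `None`). A hedged sketch of the before/after the rule suggests (`user` is an invented variable, not part of the rule's fixtures):

```python
# Before: a static join over a fixed-length list (the pattern FLY002 flags).
user = "world"
joined = " ".join(["hello", user, "!"])

# After: the equivalent f-string the fix produces.
fstring = f"hello {user} !"
```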


@@ -1,7 +1,7 @@
 use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
 use ruff_macros::{derive_message_formats, violation};
 use ruff_python_ast::helpers::is_const_true;
-use ruff_python_ast::{self as ast, Keyword, PySourceType, Ranged};
+use ruff_python_ast::{self as ast, Keyword, Ranged};
 use ruff_source_file::Locator;

 use crate::autofix::edits::{remove_argument, Parentheses};
@@ -78,12 +78,9 @@ pub(crate) fn inplace_argument(checker: &mut Checker, call: &ast::ExprCall) {
                 && checker.semantic().current_statement().is_expr_stmt()
                 && checker.semantic().current_expression_parent().is_none()
             {
-                if let Some(fix) = convert_inplace_argument_to_assignment(
-                    call,
-                    keyword,
-                    checker.source_type,
-                    checker.locator(),
-                ) {
+                if let Some(fix) =
+                    convert_inplace_argument_to_assignment(call, keyword, checker.locator())
+                {
                     diagnostic.set_fix(fix);
                 }
             }
@@ -103,7 +100,6 @@ pub(crate) fn inplace_argument(checker: &mut Checker, call: &ast::ExprCall) {
 fn convert_inplace_argument_to_assignment(
     call: &ast::ExprCall,
     keyword: &Keyword,
-    source_type: PySourceType,
     locator: &Locator,
 ) -> Option<Fix> {
     // Add the assignment.
@@ -118,8 +114,7 @@ fn convert_inplace_argument_to_assignment(
         keyword,
         &call.arguments,
         Parentheses::Preserve,
-        locator,
-        source_type,
+        locator.contents(),
     )
     .ok()?;
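The helper refactored above builds the fix for the `inplace=True` rule, which rewrites a mutating call into an assignment. An illustrative, pandas-free analogue of that rewrite (plain lists stand in for the DataFrame calls the real rule targets, such as `df.drop(..., inplace=True)`):

```python
# "inplace" style: mutates in place and returns None.
data = [3, 1, 2]
data.sort()

# Rewritten style the fix produces: x = x.method(...)
result = sorted([3, 1, 2])
assert data == result
```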


@@ -1,11 +1,10 @@
 use std::fmt;

-use ruff_python_ast as ast;
-use ruff_python_ast::Ranged;
-use ruff_python_ast::{Arguments, Expr};
 use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
 use ruff_macros::{derive_message_formats, violation};
+use ruff_python_ast as ast;
+use ruff_python_ast::Ranged;
+use ruff_python_ast::{Arguments, Expr};
 use ruff_python_semantic::SemanticModel;

 use crate::checkers::ast::Checker;
@@ -58,8 +57,8 @@ impl AlwaysAutofixableViolation for IncorrectDictIterator {
 }

 /// PERF102
-pub(crate) fn incorrect_dict_iterator(checker: &mut Checker, target: &Expr, iter: &Expr) {
-    let Expr::Tuple(ast::ExprTuple { elts, .. }) = target else {
+pub(crate) fn incorrect_dict_iterator(checker: &mut Checker, stmt_for: &ast::StmtFor) {
+    let Expr::Tuple(ast::ExprTuple { elts, .. }) = stmt_for.target.as_ref() else {
         return;
     };
     let [key, value] = elts.as_slice() else {
@@ -69,7 +68,7 @@ pub(crate) fn incorrect_dict_iterator(checker: &mut Checker, target: &Expr, iter
         func,
         arguments: Arguments { args, .. },
         ..
-    }) = iter
+    }) = stmt_for.iter.as_ref()
     else {
         return;
     };
@@ -105,7 +104,7 @@ pub(crate) fn incorrect_dict_iterator(checker: &mut Checker, target: &Expr, iter
         let replace_attribute = Edit::range_replacement("values".to_string(), attr.range());
         let replace_target = Edit::range_replacement(
             checker.locator().slice(value.range()).to_string(),
-            target.range(),
+            stmt_for.target.range(),
         );
         diagnostic.set_fix(Fix::suggested_edits(replace_attribute, [replace_target]));
     }
@@ -123,7 +122,7 @@ pub(crate) fn incorrect_dict_iterator(checker: &mut Checker, target: &Expr, iter
         let replace_attribute = Edit::range_replacement("keys".to_string(), attr.range());
         let replace_target = Edit::range_replacement(
             checker.locator().slice(key.range()).to_string(),
-            target.range(),
+            stmt_for.target.range(),
        );
         diagnostic.set_fix(Fix::suggested_edits(replace_attribute, [replace_target]));
     }
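PERF102 (`incorrect-dict-iterator`) flags `for` loops over `dict.items()` where only the key or only the value is actually used, and rewrites them to `.keys()` or `.values()`. A short sketch of the pattern and the suggested rewrite (the dictionary contents are illustrative):

```python
# Flagged: the key is unpacked but never used, so .items() is wasteful.
d = {"a": 1, "b": 2}
via_items = [value for _key, value in d.items()]

# Suggested fix: iterate over .values() (or .keys() in the mirror case).
via_values = [value for value in d.values()]
assert via_items == via_values
```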


@@ -7,6 +7,28 @@ use ruff_source_file::Locator;
 use crate::logging::DisplayParseErrorType;

+/// ## What it does
+/// This is not a regular diagnostic; instead, it's raised when a file cannot be read
+/// from disk.
+///
+/// ## Why is this bad?
+/// An `IOError` indicates an error in the development setup. For example, the user may
+/// not have permissions to read a given file, or the filesystem may contain a broken
+/// symlink.
+///
+/// ## Example
+/// On Linux or macOS:
+/// ```shell
+/// $ echo 'print("hello world!")' > a.py
+/// $ chmod 000 a.py
+/// $ ruff a.py
+/// a.py:1:1: E902 Permission denied (os error 13)
+/// Found 1 error.
+/// ```
+///
+/// ## References
+/// - [UNIX Permissions introduction](https://mason.gmu.edu/~montecin/UNIXpermiss.htm)
+/// - [Command Line Basics: Symbolic Links](https://www.digitalocean.com/community/tutorials/workflow-symbolic-links)
 #[violation]
 pub struct IOError {
     pub message: String,
@@ -21,6 +43,25 @@ impl Violation for IOError {
     }
 }

+/// ## What it does
+/// Checks for code that contains syntax errors.
+///
+/// ## Why is this bad?
+/// Code with syntax errors cannot be executed. Such errors are likely a
+/// mistake.
+///
+/// ## Example
+/// ```python
+/// x =
+/// ```
+///
+/// Use instead:
+/// ```python
+/// x = 1
+/// ```
+///
+/// ## References
+/// - [Python documentation: Syntax Errors](https://docs.python.org/3/tutorial/errors.html#syntax-errors)
 #[violation]
 pub struct SyntaxError {
     pub message: String,

Some files were not shown because too many files have changed in this diff.