Compare commits


43 Commits

Author SHA1 Message Date
Charlie Marsh
1c41789c2a Bump version to 0.0.252 (#3142) 2023-02-22 14:50:14 -05:00
Charlie Marsh
2f9de335db Upgrade RustPython to match new flattened exports (#3141) 2023-02-22 19:36:13 +00:00
Ran Benita
ba61bb6a6c Fix isort no-lines-before preceded by an empty section (#3139)
Fix isort no-lines-before preceded by an empty section

Fix #3138.
2023-02-22 14:35:53 -05:00
Charlie Marsh
17ab71ff75 Include match in nested block check (#3137) 2023-02-22 14:32:08 -05:00
Charlie Marsh
4ad4e3e091 Avoid useless-else-on-loop for break within match (#3136) 2023-02-22 19:12:44 +00:00
Florian Best
6ced5122e4 refactor(use-from-import): build fixed variant via AST (#3132) 2023-02-22 13:17:37 -05:00
Marijn Valk
7d55b417f7 add delta-rs to list of users (#3133) 2023-02-22 13:07:58 -05:00
Charlie Marsh
f0e0efc46f Upgrade RustPython to handle trailing commas in map patterns (#3130) 2023-02-22 11:17:13 -05:00
Charlie Marsh
1efa2e07ad Avoid match statement misidentification in token rules (#3129) 2023-02-22 15:44:45 +00:00
Charlie Marsh
df3932f750 Use file-specific quote for C408 (#3128) 2023-02-22 15:26:46 +00:00
Rupert Tombs
817d0b4902 Fix =/== error in ManualDictLookup (#3117) 2023-02-22 15:14:30 +00:00
Micha Reiser
ffd8e958fc chore: Upgrade Rust to 1.67.0 (#3125) 2023-02-22 10:03:17 -05:00
Micha Reiser
ed33b75bad test(ruff_python_formatter): Run all Black tests (#2993)
This PR changes the testing infrastructure to run all black tests and:

* Pass if Ruff and Black generate the same formatting
* Fail and write a markdown snapshot that shows the input code, the differences between Black and Ruff, Ruff's output, and Black's output

This is achieved by introducing a new `fixture` macro (open to better name suggestions) that "duplicates" the attributed test for every file matching the specified glob pattern. Creating a new test per file, rather than a single test that iterates over all files, has the advantage that you can run an individual case, and that a failure indicates exactly which case is failing.

The `fixture` macro also makes it straightforward to, for example, set up our own spec tests that exercise very specific formatting: create a new folder and use insta to assert the formatted output.
2023-02-22 09:25:06 -05:00
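The per-file test generation described above can be sketched in Python (the Rust `fixture` macro works analogously; `make_test` and the `fixtures/*.py` glob here are hypothetical illustration, not the actual API):

```python
# Sketch of "one generated test per fixture file" (hypothetical names).
# Each file gets its own named test, so a single case can be run in
# isolation and a failure names the offending file.
import glob


def make_test(path):
    """Build one test function bound to a single fixture file."""

    def test():
        with open(path) as f:
            source = f.read()
        # ... here: format `source` with both tools and compare ...
        return source

    # Derive a unique, identifier-safe test name from the file path.
    test.__name__ = "test_" + path.replace("/", "_").replace(".", "_")
    return test


# One test per matching file, instead of one test looping over all files.
tests = [make_test(p) for p in glob.glob("fixtures/*.py")]
```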
Micha Reiser
262e768fd3 refactor(ruff): Implement doc_lines_from_tokens as iterator (#3124)
This is a small refactor: it implements the extraction of doc lines as an iterator instead of collecting into a vector, avoiding the extra allocation.
2023-02-22 09:22:06 -05:00
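The idea behind the change, illustrated in Python rather than Rust (function names here are hypothetical): the eager variant materializes a full list, while the lazy variant yields each line as the caller consumes it.

```python
# Illustrative only; the real change is in Rust. The eager variant
# allocates an intermediate list, while the lazy variant yields lines
# one at a time, so no intermediate collection is built.
def doc_lines_eager(tokens):
    lines = []
    for lineno, is_comment in tokens:
        if is_comment:
            lines.append(lineno)
    return lines


def doc_lines_lazy(tokens):
    for lineno, is_comment in tokens:
        if is_comment:
            yield lineno
```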
Ran Benita
bc3a9ce003 Mark typing.assert_never as no return (#3121)
This function always raises, so RET503 shouldn't trigger for it.
2023-02-22 09:15:39 -05:00
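For illustration, a hedged sketch of why such a call is safe to treat as "no return" (`sign` is a made-up example; the fallback definition only stands in for `typing.assert_never` on Pythons older than 3.11):

```python
try:
    from typing import assert_never  # available on Python 3.11+
except ImportError:
    # Stand-in with the same runtime behavior for older interpreters.
    def assert_never(arg):
        raise AssertionError(f"Expected code to be unreachable, got: {arg!r}")


def sign(x: int) -> str:
    if x > 0:
        return "positive"
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    # This call always raises, so the function can never implicitly
    # return None here, which is why RET503 should stay quiet.
    assert_never(x)
```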
Charlie Marsh
48005d87f8 Add missing backticks from rustdoc (#3112) 2023-02-22 05:03:06 +00:00
Charlie Marsh
e37e9c2ca3 Skip EXE001 and EXE002 rules on Windows (#3111) 2023-02-21 23:39:56 -05:00
Matthieu Devlin
8fde63b323 [pylint] Implement E1205 and E1206 (#3084) 2023-02-21 22:53:11 -05:00
Matthew Lloyd
97338e4cd6 [pylint] redefined-loop-name (W2901) (#3022)
Slightly broadens W2901 to cover `with` statements too.

Closes #2972.
2023-02-22 03:23:47 +00:00
Charlie Marsh
9645790a8b Support shell expansion for --config argument (#3107) 2023-02-21 23:33:41 +00:00
Charlie Marsh
18800c6884 Include file permissions in cache key (#3104) 2023-02-21 18:20:06 -05:00
Charlie Marsh
fd638a2e54 Bump version to 0.0.251 (#3105) 2023-02-21 18:13:59 -05:00
Charlie Marsh
fa1459d56e Avoid prefer-list-builtin for lambdas with *args or **kwargs (#3102) 2023-02-21 17:44:37 -05:00
Charlie Marsh
d93c5811ea Create bindings for MatchAs patterns (#3098) 2023-02-21 22:04:09 +00:00
Charlie Marsh
06e426f509 Bump version to 0.0.250 (#3095) 2023-02-21 15:20:46 -05:00
Carlos Gonçalves
6eb014b3b2 feat(B032): add b032 flake8_bugbear (#3085) 2023-02-21 19:53:29 +00:00
Charlie Marsh
d9fd78d907 Ignore setters in flake8-boolean-trap (#3092) 2023-02-21 19:31:00 +00:00
Charlie Marsh
37df07d2e0 Re-add compatibility to README (#3091) 2023-02-21 18:57:47 +00:00
Charlie Marsh
d5c65b5f1b Add support for structural pattern matching (#3047) 2023-02-21 18:52:10 +00:00
Charlie Marsh
cdc4e86158 Add support for TryStar (#3089) 2023-02-21 13:42:20 -05:00
Charlie Marsh
50ec6d3b0f Use LibCST to fix chained assertions (#3087) 2023-02-21 13:10:31 -05:00
Charlie Marsh
a6eb60cdd5 Enable function2 test (#3083) 2023-02-21 04:37:50 +00:00
Charlie Marsh
90c04b9cff Enable tupleassign test (#3080) 2023-02-21 00:42:23 +00:00
Charlie Marsh
b701cca779 Enable some already-passing Black tests (#3079) 2023-02-21 00:10:35 +00:00
Charlie Marsh
ce8953442d Add support for trailing colons in slice expressions (#3077) 2023-02-20 23:24:32 +00:00
Charlie Marsh
7d4e513a82 Omit while-True loops from implicit return enforcement (#3076) 2023-02-20 18:22:28 -05:00
Charlie Marsh
35f7f7b66d Avoid boolean-trap rules for positional-only builtin calls (#3075) 2023-02-20 23:08:18 +00:00
Charlie Marsh
6e02405bd6 Add StmtKind::Try; fix trailing newlines (#3074) 2023-02-20 22:55:32 +00:00
Carlos Gonçalves
b657468346 feat(B029): Add B029 from flake8-bugbear (#3068) 2023-02-20 15:57:13 -05:00
Micha Reiser
f72ed255e5 chore: Use LF on all platforms (#3005)
I worked on #2993 and ran into issues that the formatter tests are failing on Windows because `writeln!` emits `\n` as line terminator on all platforms, but `git` on Windows converted the line endings in the snapshots to `\r\n`.

I then tried to replicate the issue on my Windows machine and was surprised that all linter snapshot tests are failing on my machine. I figured out after some time that it is due to my global git config keeping the input line endings rather than converting to `\r\n`. 

Luckily, I've been made aware of #2033, which introduced an "override" for the `assert_yaml_snapshot` macro that normalizes newlines by splitting the formatted string on the platform-specific newline character. This is a clever approach and gives nice diffs for multiline fixes, but it makes assumptions about the setup contributors use and requires special care whenever we use line endings inside tests.

I recommend that we remove the special newline handling and use `.gitattributes` to enforce `LF` on all platforms (see this [guide](https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings)). This gives us platform-agnostic tests without having to worry about line endings in our tests or different git configurations.

## Note

It may be necessary for Windows contributors to run the following commands to update the line endings of their files:

```bash
git rm --cached -r .
git reset --hard
```
2023-02-20 20:13:37 +00:00
Charlie Marsh
7e9dea0027 Change contributing suggestion (#3067) 2023-02-20 20:05:38 +00:00
Colin Delahunty
9545958ad8 [flake8-simplify]: Implement manual-dict-lookup (#2767) 2023-02-20 20:00:59 +00:00
Colin Delahunty
41faa335d1 [tryceratops]: Verbose Log Messages (#3036) 2023-02-20 18:21:04 +00:00
592 changed files with 18788 additions and 5589 deletions

.gitattributes vendored Normal file (6 changed lines)

@@ -0,0 +1,6 @@
* text=auto eol=lf
crates/ruff/resources/test/fixtures/isort/line_ending_crlf.py text eol=crlf
crates/ruff/resources/test/fixtures/pycodestyle/W605_1.py text eol=crlf
ruff.schema.json linguist-generated=true text=auto eol=lf


@@ -29,13 +29,10 @@ If you're looking for a place to start, we recommend implementing a new lint rul
pattern-match against the examples in the existing codebase. Many lint rules are inspired by
existing Python plugins, which can be used as a reference implementation.
As a concrete example: consider taking on one of the rules from the [`tryceratops`](https://github.com/charliermarsh/ruff/issues/2056)
plugin, and looking to the originating [Python source](https://github.com/guilatrova/tryceratops)
As a concrete example: consider taking on one of the rules from the [`flake8-pyi`](https://github.com/charliermarsh/ruff/issues/848)
plugin, and looking to the originating [Python source](https://github.com/PyCQA/flake8-pyi)
for guidance.
Alternatively, we've started work on the [`flake8-pyi`](https://github.com/charliermarsh/ruff/issues/848)
plugin (see the [Python source](https://github.com/PyCQA/flake8-pyi)) -- another good place to start.
### Prerequisites
Ruff is written in Rust. You'll need to install the

Cargo.lock generated (38 changed lines)

@@ -753,7 +753,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.249"
version = "0.0.252"
dependencies = [
"anyhow",
"clap 4.1.6",
@@ -1035,15 +1035,6 @@ dependencies = [
"windows-sys 0.45.0",
]
[[package]]
name = "is_executable"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa9acdc6d67b75e626ad644734e8bc6df893d9cd2a834129065d3dd6158ea9c8"
dependencies = [
"winapi",
]
[[package]]
name = "itertools"
version = "0.10.5"
@@ -1927,7 +1918,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.249"
version = "0.0.252"
dependencies = [
"anyhow",
"bisection",
@@ -1947,7 +1938,6 @@ dependencies = [
"ignore",
"imperative",
"insta",
"is_executable",
"itertools",
"js-sys",
"libcst",
@@ -1983,7 +1973,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.249"
version = "0.0.252"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2012,6 +2002,7 @@ dependencies = [
"rustc-hash",
"serde",
"serde_json",
"shellexpand",
"similar",
"strum",
"textwrap",
@@ -2084,13 +2075,26 @@ dependencies = [
"insta",
"once_cell",
"ruff_formatter",
"ruff_testing_macros",
"ruff_text_size",
"rustc-hash",
"rustpython-common",
"rustpython-parser",
"similar",
"test-case",
]
[[package]]
name = "ruff_testing_macros"
version = "0.0.0"
dependencies = [
"glob",
"proc-macro-error",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "ruff_text_size"
version = "0.0.0"
@@ -2146,7 +2150,7 @@ dependencies = [
[[package]]
name = "rustpython-ast"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=61b48f108982d865524f86624a9d5bc2ae3bccef#61b48f108982d865524f86624a9d5bc2ae3bccef"
source = "git+https://github.com/RustPython/RustPython.git?rev=edf5995a1e4c366976304ca05432dd27c913054e#edf5995a1e4c366976304ca05432dd27c913054e"
dependencies = [
"num-bigint",
"rustpython-compiler-core",
@@ -2155,7 +2159,7 @@ dependencies = [
[[package]]
name = "rustpython-common"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=61b48f108982d865524f86624a9d5bc2ae3bccef#61b48f108982d865524f86624a9d5bc2ae3bccef"
source = "git+https://github.com/RustPython/RustPython.git?rev=edf5995a1e4c366976304ca05432dd27c913054e#edf5995a1e4c366976304ca05432dd27c913054e"
dependencies = [
"ascii",
"bitflags",
@@ -2180,7 +2184,7 @@ dependencies = [
[[package]]
name = "rustpython-compiler-core"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=61b48f108982d865524f86624a9d5bc2ae3bccef#61b48f108982d865524f86624a9d5bc2ae3bccef"
source = "git+https://github.com/RustPython/RustPython.git?rev=edf5995a1e4c366976304ca05432dd27c913054e#edf5995a1e4c366976304ca05432dd27c913054e"
dependencies = [
"bincode",
"bitflags",
@@ -2197,7 +2201,7 @@ dependencies = [
[[package]]
name = "rustpython-parser"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=61b48f108982d865524f86624a9d5bc2ae3bccef#61b48f108982d865524f86624a9d5bc2ae3bccef"
source = "git+https://github.com/RustPython/RustPython.git?rev=edf5995a1e4c366976304ca05432dd27c913054e#edf5995a1e4c366976304ca05432dd27c913054e"
dependencies = [
"ahash",
"anyhow",


@@ -3,7 +3,7 @@ members = ["crates/*"]
[workspace.package]
edition = "2021"
rust-version = "1.65.0"
rust-version = "1.67.0"
[workspace.dependencies]
anyhow = { version = "1.0.66" }
@@ -13,8 +13,8 @@ libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "f2f0b7a487a87
once_cell = { version = "1.16.0" }
regex = { version = "1.6.0" }
rustc-hash = { version = "1.1.0" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "61b48f108982d865524f86624a9d5bc2ae3bccef" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "61b48f108982d865524f86624a9d5bc2ae3bccef" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "edf5995a1e4c366976304ca05432dd27c913054e" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "edf5995a1e4c366976304ca05432dd27c913054e" }
schemars = { version = "0.8.11" }
serde = { version = "1.0.147", features = ["derive"] }
serde_json = { version = "1.0.87" }


@@ -27,6 +27,7 @@ An extremely fast Python linter, written in Rust.
* ⚡️ 10-100x faster than existing linters
* 🐍 Installable via `pip`
* 🛠️ `pyproject.toml` support
* 🤝 Python 3.11 compatibility
* 📦 Built-in caching, to avoid re-analyzing unchanged files
* 🔧 Autofix support, for automatic error correction (e.g., automatically remove unused imports)
* 📏 Over [400 built-in rules](https://beta.ruff.rs/docs/rules/) (and growing)
@@ -167,7 +168,7 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.249'
rev: 'v0.0.252'
hooks:
- id: ruff
```
@@ -177,7 +178,7 @@ Or, to enable autofix:
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.249'
rev: 'v0.0.252'
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
@@ -741,6 +742,7 @@ Ruff is used in a number of major open-source projects, including:
* [featuretools](https://github.com/alteryx/featuretools)
* [meson-python](https://github.com/mesonbuild/meson-python)
* [ZenML](https://github.com/zenml-io/zenml)
* [delta-rs](https://github.com/delta-io/delta-rs)
## License


@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.249"
version = "0.0.252"
edition = { workspace = true }
rust-version = { workspace = true }


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.0.249"
version = "0.0.252"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = { workspace = true }
rust-version = { workspace = true }
@@ -57,7 +57,6 @@ titlecase = { version = "2.2.1" }
toml = { workspace = true }
# https://docs.rs/getrandom/0.2.7/getrandom/#webassembly-support
# For (future) wasm-pack support
[target.'cfg(all(target_family = "wasm", target_os = "unknown"))'.dependencies]
getrandom = { version = "0.2.7", features = ["js"] }
console_error_panic_hook = { version = "0.1.7" }
@@ -66,9 +65,6 @@ serde-wasm-bindgen = { version = "0.4" }
js-sys = { version = "0.3.60" }
wasm-bindgen = { version = "0.2.83" }
[target.'cfg(not(target_family = "wasm"))'.dependencies]
is_executable = "1.0.1"
[dev-dependencies]
insta = { version = "1.19.0", features = ["yaml", "redactions"] }
test-case = { version = "2.2.2" }


@@ -57,6 +57,8 @@ dict.fromkeys(("world",), True)
{}.deploy(True, False)
getattr(someobj, attrname, False)
mylist.index(True)
int(True)
str(int(False))
class Registry:
@@ -66,3 +68,11 @@ class Registry:
# FBT001: Boolean positional arg in function definition
def __setitem__(self, switch: Switch, value: bool) -> None:
self._switches[switch.value] = value
@foo.setter
def foo(self, value: bool) -> None:
pass
# FBT001: Boolean positional arg in function definition
def foo(self, value: bool) -> None:
pass


@@ -105,3 +105,25 @@ while True:
pass
finally:
break # warning
while True:
try:
pass
finally:
match *0, 1, *2:
case 0,:
y = 0
case 0, *x:
break # warning
while True:
try:
pass
finally:
match *0, 1, *2:
case 0,:
y = 0
case 0, *x:
pass # no warning


@@ -0,0 +1,14 @@
"""
Should emit:
B029 - on lines 8 and 13
"""
try:
pass
except ():
pass
try:
pass
except () as e:
pass


@@ -0,0 +1,29 @@
"""
Should emit:
B032 - on lines 9, 10, 12, 13, 16-19
"""
# Flag these
dct = {"a": 1}
dct["b"]: 2
dct.b: 2
dct["b"]: "test"
dct.b: "test"
test = "test"
dct["b"]: test
dct["b"]: test.lower()
dct.b: test
dct.b: test.lower()
# Do not flag below
typed_dct: dict[str, int] = {"a": 1}
typed_dct["b"] = 2
typed_dct.b = 2
class TestClass:
def test_self(self):
self.test: int


@@ -62,3 +62,11 @@ except Exception as e:
raise RuntimeError("boom!")
else:
raise RuntimeError("bang!")
try:
...
except Exception as e:
match 0:
case 0:
raise RuntimeError("boom!")


@@ -18,3 +18,10 @@ class Foo:
class FooTable(BaseTable):
bar = fields.ListField(list)
lambda *args, **kwargs: []
lambda *args: []
lambda **kwargs: []


@@ -9,6 +9,7 @@ def test_ok():
assert something, "something message"
assert something or something_else and something_third, "another message"
def test_error():
assert something and something_else
assert something and something_else and something_third
@@ -17,13 +18,24 @@ def test_error():
assert not something and something_else
assert not (something or something_else)
assert not (something or something_else or something_third)
assert something and something_else == """error
message
"""
# recursive case
assert not (a or not (b or c))
assert not (a or not (b and c)) # note that we only reduce once here
assert not (a or not (b or c)) # note that we only reduce once here
assert not (a or not (b and c))
# detected, but no autofix for messages
assert something and something_else, "error message"
assert not (something or something_else and something_third), "with message"
# detected, but no autofix for mixed conditions (e.g. `a or b and c`)
assert not (something or something_else and something_third)
# detected, but no autofix for parenthesized conditions
assert (
something
and something_else
== """error
message
"""
)


@@ -3,6 +3,8 @@ import os
import posix
from posix import abort
import sys as std_sys
import typing
import typing_extensions
import _thread
import _winapi
@@ -77,7 +79,7 @@ def x(y):
# last line in while loop
def x(y):
while True:
while i > 0:
if y > 0:
return 1
y += 1
@@ -211,6 +213,18 @@ def noreturn_sys_exit():
std_sys.exit(0)
def noreturn_typing_assert_never():
if x > 0:
return 1
typing.assert_never(0)
def noreturn_typing_extensions_assert_never():
if x > 0:
return 1
typing_extensions.assert_never(0)
def noreturn__thread_exit():
if x > 0:
return 1
@@ -259,3 +273,23 @@ def nested(values):
for value in values:
print(value)
def while_true():
while True:
if y > 0:
return 1
y += 1
# match
def x(y):
match y:
case 0:
return 1
case 1:
print() # error
def foo(baz: str) -> str:
return baz


@@ -0,0 +1,76 @@
# Errors
a = "hello"
# SIM116
if a == "foo":
return "bar"
elif a == "bar":
return "baz"
elif a == "boo":
return "ooh"
else:
return 42
# SIM116
if a == 1:
return (1, 2, 3)
elif a == 2:
return (4, 5, 6)
elif a == 3:
return (7, 8, 9)
else:
return (10, 11, 12)
# SIM116
if a == 1:
return (1, 2, 3)
elif a == 2:
return (4, 5, 6)
elif a == 3:
return (7, 8, 9)
# SIM116
if a == "hello 'sir'":
return (1, 2, 3)
elif a == 'goodbye "mam"':
return (4, 5, 6)
elif a == """Fairwell 'mister'""":
return (7, 8, 9)
else:
return (10, 11, 12)
# SIM116
if a == b"one":
return 1
elif a == b"two":
return 2
elif a == b"three":
return 3
# SIM116
if a == "hello 'sir'":
return ("hello'", 'hi"', 3)
elif a == 'goodbye "mam"':
return (4, 5, 6)
elif a == """Fairwell 'mister'""":
return (7, 8, 9)
else:
return (10, 11, 12)
# OK
if a == "foo":
return "bar"
elif a == "bar":
return baz()
elif a == "boo":
return "ooh"
else:
return 42
# OK
if a == b"one":
return 1
elif b == b"two":
return 2
elif a == b"three":
return 3


@@ -1,2 +1,2 @@
from long_module_name import member_one, member_two, member_three, member_four, member_five
from long_module_name import member_one, member_two, member_three, member_four, member_five


@@ -0,0 +1,3 @@
from __future__ import annotations
from typing import Any
from . import my_local_folder_object


@@ -54,3 +54,9 @@ def f(): ...
class C: ...; x = 1
#: E701:1:8 E702:1:13
class C: ...; ...
#: E701:2:12
match *0, 1, *2:
case 0,: y = 0
#:
class Foo:
match: Optional[Match] = None


@@ -1,35 +1,35 @@
#: W605:1:10
regex = '\.png$'
#: W605:2:1
regex = '''
\.png$
'''
#: W605:2:6
f(
'\_'
)
#: W605:4:6
"""
multi-line
literal
with \_ somewhere
in the middle
"""
#: Okay
regex = r'\.png$'
regex = '\\.png$'
regex = r'''
\.png$
'''
regex = r'''
\\.png$
'''
s = '\\'
regex = '\w' # noqa
regex = '''
\w
''' # noqa
#: W605:1:10
regex = '\.png$'
#: W605:2:1
regex = '''
\.png$
'''
#: W605:2:6
f(
'\_'
)
#: W605:4:6
"""
multi-line
literal
with \_ somewhere
in the middle
"""
#: Okay
regex = r'\.png$'
regex = '\\.png$'
regex = r'''
\.png$
'''
regex = r'''
\\.png$
'''
s = '\\'
regex = '\w' # noqa
regex = '''
\w
''' # noqa


@@ -85,3 +85,10 @@ else:
CustomInt: TypeAlias = "np.int8 | np.int16"
# Test: match statements.
match *0, 1, *2:
case 0,:
import x
import y


@@ -1,7 +1,7 @@
"""
Test that shadowing a global with a class attribute does not produce a
warning.
"""
Test that shadowing a global with a class attribute does not produce a
warning.
"""
import fu


@@ -0,0 +1,26 @@
"""Test: match statements."""
from dataclasses import dataclass
@dataclass
class Car:
make: str
model: str
def f():
match Car("Toyota", "Corolla"):
case Car("Toyota", model):
print(model)
case Car(make, "Corolla"):
print(make)
def f(provided: int) -> int:
match provided:
case True:
return captured # F821
case [captured, *_]:
return captured
case captured:
return captured


@@ -71,7 +71,7 @@ def f():
def connect():
return None, None
with (connect() as (connection, cursor)):
with connect() as (connection, cursor):
cursor.execute("SELECT * FROM users")
@@ -94,3 +94,30 @@ def f():
(exponential := (exponential * base_multiplier) % 3): i + 1 for i in range(2)
}
return hash_map
def f(x: int):
msg1 = "Hello, world!"
msg2 = "Hello, world!"
msg3 = "Hello, world!"
match x:
case 1:
print(msg1)
case 2:
print(msg2)
def f(x: int):
import enum
Foo = enum.Enum("Foo", "A B")
Bar = enum.Enum("Bar", "A B")
Baz = enum.Enum("Baz", "A B")
match x:
case (Foo.A):
print("A")
case [Bar.A, *_]:
print("A")
case y:
pass


@@ -0,0 +1,12 @@
import logging
logging.warning("Hello %s %s", "World!") # [logging-too-few-args]
# do not handle calls with kwargs (like pylint)
logging.warning("Hello %s", "World!", "again", something="else")
logging.warning("Hello %s", "World!")
import warning
warning.warning("Hello %s %s", "World!")


@@ -0,0 +1,12 @@
import logging
logging.warning("Hello %s", "World!", "again") # [logging-too-many-args]
# do not handle calls with kwargs (like pylint)
logging.warning("Hello %s", "World!", "again", something="else")
logging.warning("Hello %s", "World!")
import warning
warning.warning("Hello %s", "World!", "again")


@@ -0,0 +1,155 @@
# For -> for, variable reused
for i in []:
for i in []: # error
pass
# With -> for, variable reused
with None as i:
for i in []: # error
pass
# For -> with, variable reused
for i in []:
with None as i: # error
pass
# With -> with, variable reused
with None as i:
with None as i: # error
pass
# For -> for, different variable
for i in []:
for j in []: # ok
pass
# With -> with, different variable
with None as i:
with None as j: # ok
pass
# For -> for -> for, doubly nested variable reuse
for i in []:
for j in []:
for i in []: # error
pass
# For -> for -> for -> for, doubly nested variable reuse x2
for i in []:
for j in []:
for i in []: # error
for j in []: # error
pass
# For -> assignment
for i in []:
i = 5 # error
# For -> augmented assignment
for i in []:
i += 5 # error
# For -> annotated assignment
for i in []:
i: int = 5 # error
# Async for -> for, variable reused
async for i in []:
for i in []: # error
pass
# For -> async for, variable reused
for i in []:
async for i in []: # error
pass
# For -> for, outer loop unpacks tuple
for i, j in enumerate([]):
for i in []: # error
pass
# For -> for, inner loop unpacks tuple
for i in []:
for i, j in enumerate([]): # error
pass
# For -> for, both loops unpack tuple
for (i, (j, k)) in []:
for i, j in enumerate([]): # two errors
pass
# For else -> for, variable reused in else
for i in []:
pass
else:
for i in []: # no error
pass
# For -> for, ignore dummy variables
for _ in []:
for _ in []: # no error
pass
# For -> for, outer loop unpacks with asterisk
for i, *j in []:
for j in []: # error
pass
# For -> function definition
for i in []:
def f():
i = 2 # no error
# For -> class definition
for i in []:
class A:
i = 2 # no error
# For -> function definition -> for -> assignment
for i in []:
def f():
for i in []: # no error
i = 2 # error
# For -> class definition -> for -> for
for i in []:
class A:
for i in []: # no error
for i in []: # error
pass
# For -> use in assignment target without actual assignment; subscript
for i in []:
a[i] = 2 # no error
i[a] = 2 # no error
# For -> use in assignment target without actual assignment; attribute
for i in []:
a.i = 2 # no error
i.a = 2 # no error
# For target with subscript -> assignment
for a[0] in []:
a[0] = 2 # error
a[1] = 2 # no error
# For target with subscript -> assignment
for a['i'] in []:
a['i'] = 2 # error
a['j'] = 2 # no error
# For target with attribute -> assignment
for a.i in []:
a.i = 2 # error
a.j = 2 # no error
# For target with double nested attribute -> assignment
for a.i.j in []:
a.i.j = 2 # error
a.j.i = 2 # no error
# For target with attribute -> assignment with different spacing
for a.i in []:
a. i = 2 # error
for a. i in []:
a.i = 2 # error


@@ -124,3 +124,14 @@ def test_break_in_with():
else:
return True
return False
def test_break_in_match():
"""no false positive for break in match"""
for name in ["demo"]:
match name:
case "demo":
break
else:
return True
return False


@@ -20,6 +20,24 @@ def bad():
logger.exception("something failed")
def bad():
try:
a = process()
if not a:
raise MyException(a)
raise MyException(a)
try:
b = process()
if not b:
raise MyException(b)
except* Exception:
logger.exception("something failed")
except* Exception:
logger.exception("something failed")
def good():
try:
a = process() # This throws the exception now


@@ -0,0 +1,64 @@
# Errors
def main_function():
try:
process()
handle()
finish()
except Exception as ex:
logger.exception(f"Found an error: {ex}") # TRY401
def main_function():
try:
process()
handle()
finish()
except ValueError as bad:
if True is False:
for i in range(10):
logger.exception(f"Found an error: {bad} {good}") # TRY401
except IndexError as bad:
logger.exception(f"Found an error: {bad} {bad}") # TRY401
except Exception as bad:
logger.exception(f"Found an error: {bad}") # TRY401
logger.exception(f"Found an error: {bad}") # TRY401
if True:
logger.exception(f"Found an error: {bad}") # TRY401
import logging
logger = logging.getLogger(__name__)
def func_fstr():
try:
...
except Exception as ex:
logger.exception(f"Logging an error: {ex}") # TRY401
def func_concat():
try:
...
except Exception as ex:
logger.exception("Logging an error: " + str(ex)) # TRY401
def func_comma():
try:
...
except Exception as ex:
logger.exception("Logging an error:", ex) # TRY401
# OK
def main_function():
try:
process()
handle()
finish()
except Exception as ex:
logger.exception(f"Found an error: {er}")
logger.exception(f"Found an error: {ex.field}")


@@ -1,18 +0,0 @@
/// Platform-independent snapshot assertion
#[macro_export]
macro_rules! assert_yaml_snapshot {
( $($args: expr),+) => {
let line_sep = if cfg!(windows) { "\r\n" } else { "\n" };
// adjust snapshot file for platform
let mut settings = insta::Settings::clone_current();
settings.add_redaction("[].fix.content", insta::dynamic_redaction(move |value, _path| {
insta::internals::Content::Seq(
value.as_str().unwrap().split(line_sep).map(|line| line.into()).collect()
)
}));
settings.bind(|| {
insta::assert_yaml_snapshot!($($args),+);
});
};
}


@@ -59,6 +59,12 @@ fn alternatives<'a>(stmt: &'a RefEquality<'a, Stmt>) -> Vec<Vec<RefEquality<'a,
handlers,
orelse,
..
}
| StmtKind::TryStar {
body,
handlers,
orelse,
..
} => vec![body.iter().chain(orelse.iter()).map(RefEquality).collect()]
.into_iter()
.chain(handlers.iter().map(|handler| {


@@ -4,8 +4,8 @@
use num_bigint::BigInt;
use rustpython_parser::ast::{
Alias, Arg, Arguments, Boolop, Cmpop, Comprehension, Constant, Excepthandler,
ExcepthandlerKind, Expr, ExprContext, ExprKind, Keyword, Operator, Stmt, StmtKind, Unaryop,
Withitem,
ExcepthandlerKind, Expr, ExprContext, ExprKind, Keyword, MatchCase, Operator, Pattern,
PatternKind, Stmt, StmtKind, Unaryop, Withitem,
};
#[derive(Debug, PartialEq, Eq, Hash)]
@@ -157,6 +157,110 @@ impl<'a> From<&'a Withitem> for ComparableWithitem<'a> {
}
}
#[allow(clippy::enum_variant_names)]
#[derive(Debug, PartialEq, Eq, Hash)]
pub enum ComparablePattern<'a> {
MatchValue {
value: ComparableExpr<'a>,
},
MatchSingleton {
value: ComparableConstant<'a>,
},
MatchSequence {
patterns: Vec<ComparablePattern<'a>>,
},
MatchMapping {
keys: Vec<ComparableExpr<'a>>,
patterns: Vec<ComparablePattern<'a>>,
rest: Option<&'a str>,
},
MatchClass {
cls: ComparableExpr<'a>,
patterns: Vec<ComparablePattern<'a>>,
kwd_attrs: Vec<&'a str>,
kwd_patterns: Vec<ComparablePattern<'a>>,
},
MatchStar {
name: Option<&'a str>,
},
MatchAs {
pattern: Option<Box<ComparablePattern<'a>>>,
name: Option<&'a str>,
},
MatchOr {
patterns: Vec<ComparablePattern<'a>>,
},
}
impl<'a> From<&'a Pattern> for ComparablePattern<'a> {
fn from(pattern: &'a Pattern) -> Self {
match &pattern.node {
PatternKind::MatchValue { value } => Self::MatchValue {
value: value.into(),
},
PatternKind::MatchSingleton { value } => Self::MatchSingleton {
value: value.into(),
},
PatternKind::MatchSequence { patterns } => Self::MatchSequence {
patterns: patterns.iter().map(Into::into).collect(),
},
PatternKind::MatchMapping {
keys,
patterns,
rest,
} => Self::MatchMapping {
keys: keys.iter().map(Into::into).collect(),
patterns: patterns.iter().map(Into::into).collect(),
rest: rest.as_deref(),
},
PatternKind::MatchClass {
cls,
patterns,
kwd_attrs,
kwd_patterns,
} => Self::MatchClass {
cls: cls.into(),
patterns: patterns.iter().map(Into::into).collect(),
kwd_attrs: kwd_attrs.iter().map(String::as_str).collect(),
kwd_patterns: kwd_patterns.iter().map(Into::into).collect(),
},
PatternKind::MatchStar { name } => Self::MatchStar {
name: name.as_deref(),
},
PatternKind::MatchAs { pattern, name } => Self::MatchAs {
pattern: pattern.as_ref().map(Into::into),
name: name.as_deref(),
},
PatternKind::MatchOr { patterns } => Self::MatchOr {
patterns: patterns.iter().map(Into::into).collect(),
},
}
}
}
impl<'a> From<&'a Box<Pattern>> for Box<ComparablePattern<'a>> {
fn from(pattern: &'a Box<Pattern>) -> Self {
Box::new((&**pattern).into())
}
}
#[derive(Debug, PartialEq, Eq, Hash)]
pub struct ComparableMatchCase<'a> {
pub pattern: ComparablePattern<'a>,
pub guard: Option<ComparableExpr<'a>>,
pub body: Vec<ComparableStmt<'a>>,
}
impl<'a> From<&'a MatchCase> for ComparableMatchCase<'a> {
fn from(match_case: &'a MatchCase) -> Self {
Self {
pattern: (&match_case.pattern).into(),
guard: match_case.guard.as_ref().map(Into::into),
body: match_case.body.iter().map(Into::into).collect(),
}
}
}
#[derive(Debug, PartialEq, Eq, Hash)]
pub enum ComparableConstant<'a> {
None,
@@ -644,6 +748,10 @@ pub enum ComparableStmt<'a> {
body: Vec<ComparableStmt<'a>>,
type_comment: Option<&'a str>,
},
Match {
subject: ComparableExpr<'a>,
cases: Vec<ComparableMatchCase<'a>>,
},
Raise {
exc: Option<ComparableExpr<'a>>,
cause: Option<ComparableExpr<'a>>,
@@ -654,6 +762,12 @@ pub enum ComparableStmt<'a> {
orelse: Vec<ComparableStmt<'a>>,
finalbody: Vec<ComparableStmt<'a>>,
},
TryStar {
body: Vec<ComparableStmt<'a>>,
handlers: Vec<ComparableExcepthandler<'a>>,
orelse: Vec<ComparableStmt<'a>>,
finalbody: Vec<ComparableStmt<'a>>,
},
Assert {
test: ComparableExpr<'a>,
msg: Option<ComparableExpr<'a>>,
@@ -811,7 +925,10 @@ impl<'a> From<&'a Stmt> for ComparableStmt<'a> {
body: body.iter().map(Into::into).collect(),
type_comment: type_comment.as_ref().map(String::as_str),
},
StmtKind::Match { .. } => unreachable!("StmtKind::Match is not supported"),
StmtKind::Match { subject, cases } => Self::Match {
subject: subject.into(),
cases: cases.iter().map(Into::into).collect(),
},
StmtKind::Raise { exc, cause } => Self::Raise {
exc: exc.as_ref().map(Into::into),
cause: cause.as_ref().map(Into::into),
@@ -827,6 +944,17 @@ impl<'a> From<&'a Stmt> for ComparableStmt<'a> {
orelse: orelse.iter().map(Into::into).collect(),
finalbody: finalbody.iter().map(Into::into).collect(),
},
StmtKind::TryStar {
body,
handlers,
orelse,
finalbody,
} => Self::TryStar {
body: body.iter().map(Into::into).collect(),
handlers: handlers.iter().map(Into::into).collect(),
orelse: orelse.iter().map(Into::into).collect(),
finalbody: finalbody.iter().map(Into::into).collect(),
},
StmtKind::Assert { test, msg } => Self::Assert {
test: test.into(),
msg: msg.as_ref().map(Into::into),

View File

@@ -7,11 +7,9 @@ use regex::Regex;
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_parser::ast::{
Arguments, Constant, Excepthandler, ExcepthandlerKind, Expr, ExprKind, Keyword, KeywordData,
Located, Location, Stmt, StmtKind,
Located, Location, MatchCase, Pattern, PatternKind, Stmt, StmtKind,
};
use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use rustpython_parser::token::StringKind;
use rustpython_parser::{lexer, Mode, StringKind, Tok};
use smallvec::{smallvec, SmallVec};
use crate::ast::types::{Binding, BindingKind, CallPath, Range};
@@ -249,6 +247,46 @@ where
}
}
pub fn any_over_pattern<F>(pattern: &Pattern, func: &F) -> bool
where
F: Fn(&Expr) -> bool,
{
match &pattern.node {
PatternKind::MatchValue { value } => any_over_expr(value, func),
PatternKind::MatchSingleton { .. } => false,
PatternKind::MatchSequence { patterns } => patterns
.iter()
.any(|pattern| any_over_pattern(pattern, func)),
PatternKind::MatchMapping { keys, patterns, .. } => {
keys.iter().any(|key| any_over_expr(key, func))
|| patterns
.iter()
.any(|pattern| any_over_pattern(pattern, func))
}
PatternKind::MatchClass {
cls,
patterns,
kwd_patterns,
..
} => {
any_over_expr(cls, func)
|| patterns
.iter()
.any(|pattern| any_over_pattern(pattern, func))
|| kwd_patterns
.iter()
.any(|pattern| any_over_pattern(pattern, func))
}
PatternKind::MatchStar { .. } => false,
PatternKind::MatchAs { pattern, .. } => pattern
.as_ref()
.map_or(false, |pattern| any_over_pattern(pattern, func)),
PatternKind::MatchOr { patterns } => patterns
.iter()
.any(|pattern| any_over_pattern(pattern, func)),
}
}
pub fn any_over_stmt<F>(stmt: &Stmt, func: &F) -> bool
where
F: Fn(&Expr) -> bool,
@@ -391,6 +429,12 @@ where
handlers,
orelse,
finalbody,
}
| StmtKind::TryStar {
body,
handlers,
orelse,
finalbody,
} => {
any_over_body(body, func)
|| handlers.iter().any(|handler| {
@@ -409,8 +453,21 @@ where
.as_ref()
.map_or(false, |value| any_over_expr(value, func))
}
// TODO(charlie): Handle match statements.
StmtKind::Match { .. } => false,
StmtKind::Match { subject, cases } => {
any_over_expr(subject, func)
|| cases.iter().any(|case| {
let MatchCase {
pattern,
guard,
body,
} = case;
any_over_pattern(pattern, func)
|| guard
.as_ref()
.map_or(false, |expr| any_over_expr(expr, func))
|| any_over_body(body, func)
})
}
StmtKind::Import { .. } => false,
StmtKind::ImportFrom { .. } => false,
StmtKind::Global { .. } => false,
@@ -596,7 +653,7 @@ pub fn has_comments<T>(located: &Located<T>, locator: &Locator) -> bool {
/// Returns `true` if a [`Range`] includes at least one comment.
pub fn has_comments_in(range: Range, locator: &Locator) -> bool {
for tok in lexer::make_tokenizer(locator.slice(&range)) {
for tok in lexer::lex_located(locator.slice(&range), Mode::Module, range.location) {
match tok {
Ok((_, tok, _)) => {
if matches!(tok, Tok::Comment(..)) {
@@ -811,7 +868,7 @@ pub fn match_parens(start: Location, locator: &Locator) -> Option<Range> {
let mut fix_start = None;
let mut fix_end = None;
let mut count: usize = 0;
for (start, tok, end) in lexer::make_tokenizer_located(contents, start).flatten() {
for (start, tok, end) in lexer::lex_located(contents, Mode::Module, start).flatten() {
if matches!(tok, Tok::Lpar) {
if count == 0 {
fix_start = Some(start);
@@ -843,7 +900,8 @@ pub fn identifier_range(stmt: &Stmt, locator: &Locator) -> Range {
| StmtKind::AsyncFunctionDef { .. }
) {
let contents = locator.slice(&Range::from_located(stmt));
for (start, tok, end) in lexer::make_tokenizer_located(contents, stmt.location).flatten() {
for (start, tok, end) in lexer::lex_located(contents, Mode::Module, stmt.location).flatten()
{
if matches!(tok, Tok::Name { .. }) {
return Range::new(start, end);
}
@@ -869,16 +927,18 @@ pub fn binding_range(binding: &Binding, locator: &Locator) -> Range {
}
// Return the ranges of `Name` tokens within a specified node.
pub fn find_names<T>(located: &Located<T>, locator: &Locator) -> Vec<Range> {
pub fn find_names<'a, T, U>(
located: &'a Located<T, U>,
locator: &'a Locator,
) -> impl Iterator<Item = Range> + 'a {
let contents = locator.slice(&Range::from_located(located));
lexer::make_tokenizer_located(contents, located.location)
lexer::lex_located(contents, Mode::Module, located.location)
.flatten()
.filter(|(_, tok, _)| matches!(tok, Tok::Name { .. }))
.map(|(start, _, end)| Range {
location: start,
end_location: end,
})
.collect()
}
/// Return the `Range` of `name` in `Excepthandler`.
@@ -890,7 +950,7 @@ pub fn excepthandler_name_range(handler: &Excepthandler, locator: &Locator) -> O
(Some(_), Some(type_)) => {
let type_end_location = type_.end_location.unwrap();
let contents = locator.slice(&Range::new(type_end_location, body[0].location));
let range = lexer::make_tokenizer_located(contents, type_end_location)
let range = lexer::lex_located(contents, Mode::Module, type_end_location)
.flatten()
.tuple_windows()
.find(|(tok, next_tok)| {
@@ -917,7 +977,7 @@ pub fn except_range(handler: &Excepthandler, locator: &Locator) -> Range {
location: handler.location,
end_location: end,
});
let range = lexer::make_tokenizer_located(contents, handler.location)
let range = lexer::lex_located(contents, Mode::Module, handler.location)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Except { .. }))
.map(|(location, _, end_location)| Range {
@@ -931,7 +991,7 @@ pub fn except_range(handler: &Excepthandler, locator: &Locator) -> Range {
/// Find f-strings that don't contain any formatted values in a `JoinedStr`.
pub fn find_useless_f_strings(expr: &Expr, locator: &Locator) -> Vec<(Range, Range)> {
let contents = locator.slice(&Range::from_located(expr));
lexer::make_tokenizer_located(contents, expr.location)
lexer::lex_located(contents, Mode::Module, expr.location)
.flatten()
.filter_map(|(location, tok, end_location)| match tok {
Tok::String {
@@ -985,7 +1045,7 @@ pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
.expect("Expected orelse to be non-empty")
.location,
});
let range = lexer::make_tokenizer_located(contents, body_end)
let range = lexer::lex_located(contents, Mode::Module, body_end)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Else))
.map(|(location, _, end_location)| Range {
@@ -1001,7 +1061,7 @@ pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
/// Return the `Range` of the first `Tok::Colon` token in a `Range`.
pub fn first_colon_range(range: Range, locator: &Locator) -> Option<Range> {
let contents = locator.slice(&range);
let range = lexer::make_tokenizer_located(contents, range.location)
let range = lexer::lex_located(contents, Mode::Module, range.location)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Colon))
.map(|(location, _, end_location)| Range {
@@ -1031,7 +1091,7 @@ pub fn elif_else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
_ => return None,
};
let contents = locator.slice(&Range::new(start, end));
let range = lexer::make_tokenizer_located(contents, start)
let range = lexer::lex_located(contents, Mode::Module, start)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Elif | Tok::Else))
.map(|(location, _, end_location)| Range {
@@ -1147,8 +1207,8 @@ pub fn is_logger_candidate(func: &Expr) -> bool {
#[cfg(test)]
mod tests {
use anyhow::Result;
use rustpython_parser as parser;
use rustpython_parser::ast::Location;
use rustpython_parser::parser;
use crate::ast::helpers::{
elif_else_range, else_range, first_colon_range, identifier_range, match_trailing_content,

View File

@@ -0,0 +1,24 @@
pub enum LoggingLevel {
Debug,
Critical,
Error,
Exception,
Info,
Warn,
Warning,
}
impl LoggingLevel {
pub fn from_str(level: &str) -> Option<Self> {
match level {
"debug" => Some(LoggingLevel::Debug),
"critical" => Some(LoggingLevel::Critical),
"error" => Some(LoggingLevel::Error),
"exception" => Some(LoggingLevel::Exception),
"info" => Some(LoggingLevel::Info),
"warn" => Some(LoggingLevel::Warn),
"warning" => Some(LoggingLevel::Warning),
_ => None,
}
}
}
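The changeset wires this enum into new pylint checks for logging calls (`PLE1205` logging-too-many-args and `PLE1206` logging-too-few-args), which flag `%`-style templates whose placeholder count disagrees with the supplied arguments. The failure mode is easy to reproduce with plain `%`-formatting, which is what `logging` applies lazily (a hedged sketch, not code from the changeset):

```python
# Two placeholders require exactly two arguments; logging defers this
# formatting to log-record time, so the mismatch only surfaces at runtime.
template = "%s connected from %s"

ok = template % ("alice", "10.0.0.1")  # arity matches

try:
    template % ("alice",)  # too few arguments -> TypeError
    too_few_raised = False
except TypeError:
    too_few_raised = True

print(ok, too_few_raised)
```

Catching this statically is the point of the new rules: the `TypeError` otherwise fires only on the (often rare) code path that actually emits the log record.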

View File

@@ -4,6 +4,7 @@ pub mod comparable;
pub mod function_type;
pub mod hashable;
pub mod helpers;
pub mod logging;
pub mod operations;
pub mod relocate;
pub mod types;

View File

@@ -1,8 +1,7 @@
use bitflags::bitflags;
use rustc_hash::FxHashMap;
use rustpython_parser::ast::{Cmpop, Constant, Expr, ExprKind, Located, Stmt, StmtKind};
use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use rustpython_parser::{lexer, Mode, Tok};
use crate::ast::helpers::any_over_expr;
use crate::ast::types::{BindingKind, Scope};
@@ -181,7 +180,10 @@ pub fn extract_globals(body: &[Stmt]) -> FxHashMap<&str, &Stmt> {
/// Check if a node is parent of a conditional branch.
pub fn on_conditional_branch<'a>(parents: &mut impl Iterator<Item = &'a Stmt>) -> bool {
parents.any(|parent| {
if matches!(parent.node, StmtKind::If { .. } | StmtKind::While { .. }) {
if matches!(
parent.node,
StmtKind::If { .. } | StmtKind::While { .. } | StmtKind::Match { .. }
) {
return true;
}
if let StmtKind::Expr { value } = &parent.node {
@@ -198,7 +200,11 @@ pub fn in_nested_block<'a>(mut parents: impl Iterator<Item = &'a Stmt>) -> bool
parents.any(|parent| {
matches!(
parent.node,
StmtKind::Try { .. } | StmtKind::If { .. } | StmtKind::With { .. }
StmtKind::Try { .. }
| StmtKind::TryStar { .. }
| StmtKind::If { .. }
| StmtKind::With { .. }
| StmtKind::Match { .. }
)
})
}
@@ -277,7 +283,7 @@ pub type LocatedCmpop<U = ()> = Located<Cmpop, U>;
/// `CPython` doesn't either. This method iterates over the token stream and
/// re-identifies [`Cmpop`] nodes, annotating them with valid ranges.
pub fn locate_cmpops(contents: &str) -> Vec<LocatedCmpop> {
let mut tok_iter = lexer::make_tokenizer(contents).flatten().peekable();
let mut tok_iter = lexer::lex(contents, Mode::Module).flatten().peekable();
let mut ops: Vec<LocatedCmpop> = vec![];
let mut count: usize = 0;
loop {

View File

@@ -29,7 +29,7 @@ impl Range {
}
}
pub fn from_located<T>(located: &Located<T>) -> Self {
pub fn from_located<T, U>(located: &Located<T, U>) -> Self {
Range::new(located.location, located.end_location.unwrap())
}

View File

@@ -205,7 +205,6 @@ pub fn walk_stmt<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, stmt: &'a Stmt) {
visitor.visit_body(body);
}
StmtKind::Match { subject, cases } => {
// TODO(charlie): Handle `cases`.
visitor.visit_expr(subject);
for match_case in cases {
visitor.visit_match_case(match_case);
@@ -232,6 +231,19 @@ pub fn walk_stmt<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, stmt: &'a Stmt) {
visitor.visit_body(orelse);
visitor.visit_body(finalbody);
}
StmtKind::TryStar {
body,
handlers,
orelse,
finalbody,
} => {
visitor.visit_body(body);
for excepthandler in handlers {
visitor.visit_excepthandler(excepthandler);
}
visitor.visit_body(orelse);
visitor.visit_body(finalbody);
}
StmtKind::Assert { test, msg } => {
visitor.visit_expr(test);
if let Some(expr) = msg {

View File

@@ -4,8 +4,7 @@ use libcst_native::{
Codegen, CodegenState, ImportNames, ParenthesizableWhitespace, SmallStatement, Statement,
};
use rustpython_parser::ast::{ExcepthandlerKind, Expr, Keyword, Location, Stmt, StmtKind};
use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use rustpython_parser::{lexer, Mode, Tok};
use crate::ast::helpers;
use crate::ast::helpers::to_absolute;
@@ -53,6 +52,12 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
handlers,
orelse,
finalbody,
}
| StmtKind::TryStar {
body,
handlers,
orelse,
finalbody,
} => {
if body.iter().contains(child) {
Ok(has_single_child(body, deleted))
@@ -74,6 +79,19 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
bail!("Unable to find child in parent body")
}
}
StmtKind::Match { cases, .. } => {
if let Some(body) = cases.iter().find_map(|case| {
if case.body.iter().contains(child) {
Some(&case.body)
} else {
None
}
}) {
Ok(has_single_child(body, deleted))
} else {
bail!("Unable to find child in parent body")
}
}
_ => bail!("Unable to find child in parent body"),
}
}
@@ -352,7 +370,7 @@ pub fn remove_argument(
if n_arguments == 1 {
// Case 1: there is only one argument.
let mut count: usize = 0;
for (start, tok, end) in lexer::make_tokenizer_located(contents, stmt_at).flatten() {
for (start, tok, end) in lexer::lex_located(contents, Mode::Module, stmt_at).flatten() {
if matches!(tok, Tok::Lpar) {
if count == 0 {
fix_start = Some(if remove_parentheses {
@@ -384,7 +402,7 @@ pub fn remove_argument(
{
// Case 2: argument or keyword is _not_ the last node.
let mut seen_comma = false;
for (start, tok, end) in lexer::make_tokenizer_located(contents, stmt_at).flatten() {
for (start, tok, end) in lexer::lex_located(contents, Mode::Module, stmt_at).flatten() {
if seen_comma {
if matches!(tok, Tok::NonLogicalNewline) {
// Also delete any non-logical newlines after the comma.
@@ -407,7 +425,7 @@ pub fn remove_argument(
} else {
// Case 3: argument or keyword is the last node, so we have to find the last
// comma in the stmt.
for (start, tok, _) in lexer::make_tokenizer_located(contents, stmt_at).flatten() {
for (start, tok, _) in lexer::lex_located(contents, Mode::Module, stmt_at).flatten() {
if start == expr_at {
fix_end = Some(expr_end);
break;
@@ -429,8 +447,8 @@ pub fn remove_argument(
#[cfg(test)]
mod tests {
use anyhow::Result;
use rustpython_parser as parser;
use rustpython_parser::ast::Location;
use rustpython_parser::parser;
use crate::autofix::helpers::{next_stmt_break, trailing_semicolon};
use crate::source_code::Locator;

View File

@@ -6,17 +6,17 @@ use std::path::Path;
use itertools::Itertools;
use log::error;
use nohash_hasher::IntMap;
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_common::cformat::{CFormatError, CFormatErrorType};
use rustpython_parser::ast::{
Arg, Arguments, Comprehension, Constant, Excepthandler, ExcepthandlerKind, Expr, ExprContext,
ExprKind, KeywordData, Located, Location, Operator, Stmt, StmtKind, Suite,
};
use rustpython_parser::parser;
use smallvec::smallvec;
use ruff_python::builtins::{BUILTINS, MAGIC_GLOBALS};
use ruff_python::typing::TYPING_EXTENSIONS;
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_common::cformat::{CFormatError, CFormatErrorType};
use rustpython_parser as parser;
use rustpython_parser::ast::{
Arg, Arguments, Comprehension, Constant, Excepthandler, ExcepthandlerKind, Expr, ExprContext,
ExprKind, KeywordData, Located, Location, Operator, Pattern, PatternKind, Stmt, StmtKind,
Suite,
};
use smallvec::smallvec;
use crate::ast::helpers::{
binding_range, collect_call_path, extract_handler_names, from_relative_import, to_module_path,
@@ -28,7 +28,7 @@ use crate::ast::types::{
RefEquality, Scope, ScopeKind,
};
use crate::ast::typing::{match_annotated_subscript, Callable, SubscriptKind};
use crate::ast::visitor::{walk_excepthandler, Visitor};
use crate::ast::visitor::{walk_excepthandler, walk_pattern, Visitor};
use crate::ast::{branch_detection, cast, helpers, operations, typing, visitor};
use crate::docstrings::definition::{Definition, DefinitionKind, Docstring, Documentable};
use crate::registry::{Diagnostic, Rule};
@@ -341,7 +341,7 @@ where
match &stmt.node {
StmtKind::Global { names } => {
let scope_index = *self.scope_stack.last().expect("No current scope found");
let ranges = helpers::find_names(stmt, self.locator);
let ranges: Vec<Range> = helpers::find_names(stmt, self.locator).collect();
if scope_index != GLOBAL_SCOPE_INDEX {
// Add the binding to the current scope.
let context = self.execution_context();
@@ -371,7 +371,7 @@ where
}
StmtKind::Nonlocal { names } => {
let scope_index = *self.scope_stack.last().expect("No current scope found");
let ranges = helpers::find_names(stmt, self.locator);
let ranges: Vec<Range> = helpers::find_names(stmt, self.locator).collect();
if scope_index != GLOBAL_SCOPE_INDEX {
let context = self.execution_context();
let scope = &mut self.scopes[scope_index];
@@ -706,7 +706,12 @@ where
.rules
.enabled(&Rule::BooleanPositionalArgInFunctionDefinition)
{
flake8_boolean_trap::rules::check_positional_boolean_in_def(self, name, args);
flake8_boolean_trap::rules::check_positional_boolean_in_def(
self,
name,
decorator_list,
args,
);
}
if self
@@ -715,7 +720,10 @@ where
.enabled(&Rule::BooleanDefaultValueInFunctionDefinition)
{
flake8_boolean_trap::rules::check_boolean_default_value_in_function_definition(
self, name, args,
self,
name,
decorator_list,
args,
);
}
@@ -1565,6 +1573,9 @@ where
if self.settings.rules.enabled(&Rule::NeedlessBool) {
flake8_simplify::rules::return_bool_condition_directly(self, stmt);
}
if self.settings.rules.enabled(&Rule::ManualDictLookup) {
flake8_simplify::rules::manual_dict_lookup(self, stmt, test, body, orelse);
}
if self.settings.rules.enabled(&Rule::UseTernaryOperator) {
flake8_simplify::rules::use_ternary_operator(
self,
@@ -1639,6 +1650,9 @@ where
self.current_stmt_parent().map(Into::into),
);
}
if self.settings.rules.enabled(&Rule::RedefinedLoopName) {
pylint::rules::redefined_loop_name(self, &Node::Stmt(stmt));
}
}
StmtKind::While { body, orelse, .. } => {
if self.settings.rules.enabled(&Rule::FunctionUsesLoopVariable) {
@@ -1683,6 +1697,9 @@ where
if self.settings.rules.enabled(&Rule::UselessElseOnLoop) {
pylint::rules::useless_else_on_loop(self, stmt, body, orelse);
}
if self.settings.rules.enabled(&Rule::RedefinedLoopName) {
pylint::rules::redefined_loop_name(self, &Node::Stmt(stmt));
}
if matches!(stmt.node, StmtKind::For { .. }) {
if self.settings.rules.enabled(&Rule::ReimplementedBuiltin) {
flake8_simplify::rules::convert_for_loop_to_any_all(
@@ -1702,6 +1719,13 @@ where
orelse,
finalbody,
..
}
| StmtKind::TryStar {
body,
handlers,
orelse,
finalbody,
..
} => {
if self.settings.rules.enabled(&Rule::DefaultExceptNotLast) {
if let Some(diagnostic) =
@@ -1752,6 +1776,9 @@ where
if self.settings.rules.enabled(&Rule::VerboseRaise) {
tryceratops::rules::verbose_raise(self, handlers);
}
if self.settings.rules.enabled(&Rule::VerboseLogMessage) {
tryceratops::rules::verbose_log_message(self, handlers);
}
if self.settings.rules.enabled(&Rule::RaiseWithinTry) {
tryceratops::rules::raise_within_try(self, body);
}
@@ -1821,6 +1848,18 @@ where
pycodestyle::rules::lambda_assignment(self, target, value, stmt);
}
}
if self
.settings
.rules
.enabled(&Rule::UnintentionalTypeAnnotation)
{
flake8_bugbear::rules::unintentional_type_annotation(
self,
target,
value.as_deref(),
stmt,
);
}
}
StmtKind::Delete { .. } => {}
StmtKind::Expr { value, .. } => {
@@ -1986,6 +2025,12 @@ where
handlers,
orelse,
finalbody,
}
| StmtKind::TryStar {
body,
handlers,
orelse,
finalbody,
} => {
// TODO(charlie): The use of `smallvec` here leads to a lifetime issue.
let handler_names = extract_handler_names(handlers)
@@ -2014,8 +2059,8 @@ where
value,
..
} => {
// If we're in a class or module scope, then the annotation needs to be available
// at runtime.
// If we're in a class or module scope, then the annotation needs to be
// available at runtime.
// See: https://docs.python.org/3/reference/simple_stmts.html#annotated-assignment-statements
if !self.annotations_future_enabled
&& matches!(
@@ -2872,6 +2917,13 @@ where
{
flake8_logging_format::rules::logging_call(self, func, args, keywords);
}
// pylint logging checker
if self.settings.rules.enabled(&Rule::LoggingTooFewArgs)
|| self.settings.rules.enabled(&Rule::LoggingTooManyArgs)
{
pylint::rules::logging_call(self, func, args, keywords);
}
}
ExprKind::Dict { keys, values } => {
if self
@@ -3702,6 +3754,9 @@ where
self.settings.flake8_bandit.check_typed_exception,
);
}
if self.settings.rules.enabled(&Rule::ExceptWithEmptyTuple) {
flake8_bugbear::rules::except_with_empty_tuple(self, excepthandler);
}
if self.settings.rules.enabled(&Rule::ReraiseNoCause) {
tryceratops::rules::reraise_no_cause(self, body);
}
@@ -3798,6 +3853,28 @@ where
}
}
fn visit_pattern(&mut self, pattern: &'b Pattern) {
if let PatternKind::MatchAs {
name: Some(name), ..
} = &pattern.node
{
self.add_binding(
name,
Binding {
kind: BindingKind::Assignment,
runtime_usage: None,
synthetic_usage: None,
typing_usage: None,
range: Range::from_located(pattern),
source: Some(self.current_stmt().clone()),
context: self.execution_context(),
},
);
}
walk_pattern(self, pattern);
}
fn visit_format_spec(&mut self, format_spec: &'b Expr) {
match &format_spec.node {
ExprKind::JoinedStr { values } => {

View File

@@ -152,8 +152,8 @@ pub fn check_logical_lines(
#[cfg(test)]
mod tests {
use rustpython_parser::lexer;
use rustpython_parser::lexer::LexResult;
use rustpython_parser::{lexer, Mode};
use crate::checkers::logical_lines::iter_logical_lines;
use crate::source_code::Locator;
@@ -164,7 +164,7 @@ mod tests {
x = 1
y = 2
z = x + 1"#;
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
.into_iter()
@@ -185,7 +185,7 @@ x = [
]
y = 2
z = x + 1"#;
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
.into_iter()
@@ -199,7 +199,7 @@ z = x + 1"#;
assert_eq!(actual, expected);
let contents = "x = 'abc'";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
.into_iter()
@@ -212,7 +212,7 @@ z = x + 1"#;
def f():
x = 1
f()"#;
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
.into_iter()
@@ -227,7 +227,7 @@ def f():
# Comment goes here.
x = 1
f()"#;
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
.into_iter()

View File

@@ -1,6 +1,7 @@
//! Lint rules based on token traversal.
use rustpython_parser::lexer::{LexResult, Tok};
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{Diagnostic, Rule};

View File

@@ -126,6 +126,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Pylint, "E0101") => Rule::ReturnInInit,
(Pylint, "E0604") => Rule::InvalidAllObject,
(Pylint, "E0605") => Rule::InvalidAllFormat,
(Pylint, "E1205") => Rule::LoggingTooManyArgs,
(Pylint, "E1206") => Rule::LoggingTooFewArgs,
(Pylint, "E1307") => Rule::BadStringFormatType,
(Pylint, "E2502") => Rule::BidirectionalUnicode,
(Pylint, "E1310") => Rule::BadStrStripCall,
@@ -146,6 +148,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Pylint, "R0913") => Rule::TooManyArguments,
(Pylint, "R0912") => Rule::TooManyBranches,
(Pylint, "R0915") => Rule::TooManyStatements,
(Pylint, "W2901") => Rule::RedefinedLoopName,
// flake8-builtins
(Flake8Builtins, "001") => Rule::BuiltinVariableShadowing,
@@ -179,6 +182,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Bugbear, "025") => Rule::DuplicateTryBlockException,
(Flake8Bugbear, "026") => Rule::StarArgUnpackingAfterKeywordArg,
(Flake8Bugbear, "027") => Rule::EmptyMethodWithoutAbstractDecorator,
(Flake8Bugbear, "029") => Rule::ExceptWithEmptyTuple,
(Flake8Bugbear, "032") => Rule::UnintentionalTypeAnnotation,
(Flake8Bugbear, "904") => Rule::RaiseWithoutFromInsideExcept,
(Flake8Bugbear, "905") => Rule::ZipWithoutExplicitStrict,
@@ -276,6 +281,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Simplify, "112") => Rule::UseCapitalEnvironmentVariables,
(Flake8Simplify, "114") => Rule::IfWithSameArms,
(Flake8Simplify, "115") => Rule::OpenFileWithContextHandler,
(Flake8Simplify, "116") => Rule::ManualDictLookup,
(Flake8Simplify, "117") => Rule::MultipleWithStatements,
(Flake8Simplify, "118") => Rule::KeyInDict,
(Flake8Simplify, "201") => Rule::NegateEqualOp,
@@ -545,6 +551,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Tryceratops, "300") => Rule::TryConsiderElse,
(Tryceratops, "301") => Rule::RaiseWithinTry,
(Tryceratops, "400") => Rule::ErrorInsteadOfException,
(Tryceratops, "401") => Rule::VerboseLogMessage,
// flake8-use-pathlib
(Flake8UsePathlib, "100") => Rule::PathlibAbspath,

View File

@@ -3,7 +3,8 @@
use bitflags::bitflags;
use nohash_hasher::{IntMap, IntSet};
use rustpython_parser::ast::Location;
use rustpython_parser::lexer::{LexResult, Tok};
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use crate::registry::LintSource;
use crate::settings::Settings;
@@ -150,56 +151,61 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
#[cfg(test)]
mod tests {
use nohash_hasher::{IntMap, IntSet};
use rustpython_parser::lexer;
use rustpython_parser::lexer::LexResult;
use rustpython_parser::{lexer, Mode};
use crate::directives::{extract_isort_directives, extract_noqa_line_for};
#[test]
fn noqa_extraction() {
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
"x = 1
y = 2
z = x + 1",
Mode::Module,
)
.collect();
assert_eq!(extract_noqa_line_for(&lxr), IntMap::default());
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
"
x = 1
y = 2
z = x + 1",
Mode::Module,
)
.collect();
assert_eq!(extract_noqa_line_for(&lxr), IntMap::default());
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
"x = 1
y = 2
z = x + 1
",
Mode::Module,
)
.collect();
assert_eq!(extract_noqa_line_for(&lxr), IntMap::default());
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
"x = 1
y = 2
z = x + 1
",
Mode::Module,
)
.collect();
assert_eq!(extract_noqa_line_for(&lxr), IntMap::default());
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
"x = '''abc
def
ghi
'''
y = 2
z = x + 1",
Mode::Module,
)
.collect();
assert_eq!(
@@ -207,13 +213,14 @@ z = x + 1",
IntMap::from_iter([(1, 4), (2, 4), (3, 4)])
);
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
"x = 1
y = '''abc
def
ghi
'''
z = 2",
Mode::Module,
)
.collect();
assert_eq!(
@@ -221,12 +228,13 @@ z = 2",
IntMap::from_iter([(2, 5), (3, 5), (4, 5)])
);
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
"x = 1
y = '''abc
def
ghi
'''",
Mode::Module,
)
.collect();
assert_eq!(
@@ -234,17 +242,19 @@ ghi
IntMap::from_iter([(2, 5), (3, 5), (4, 5)])
);
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
r#"x = \
1"#,
Mode::Module,
)
.collect();
assert_eq!(extract_noqa_line_for(&lxr), IntMap::from_iter([(1, 2)]));
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
r#"from foo import \
bar as baz, \
qux as quux"#,
Mode::Module,
)
.collect();
assert_eq!(
@@ -252,7 +262,7 @@ ghi
IntMap::from_iter([(1, 3), (2, 3)])
);
let lxr: Vec<LexResult> = lexer::make_tokenizer(
let lxr: Vec<LexResult> = lexer::lex(
r#"
# Foo
from foo import \
@@ -262,6 +272,7 @@ x = \
1
y = \
2"#,
Mode::Module,
)
.collect();
assert_eq!(
@@ -275,7 +286,7 @@ y = \
let contents = "x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
let contents = "# isort: off
@@ -283,7 +294,7 @@ x = 1
y = 2
# isort: on
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([2, 3, 4])
@@ -296,7 +307,7 @@ y = 2
# isort: on
z = x + 1
# isort: on";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([2, 3, 4, 5])
@@ -306,7 +317,7 @@ z = x + 1
x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([2, 3, 4])
@@ -316,7 +327,7 @@ z = x + 1";
x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
let contents = "# isort: off
@@ -325,7 +336,7 @@ x = 1
y = 2
# isort: skip_file
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
}
@@ -334,20 +345,20 @@ z = x + 1";
let contents = "x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(extract_isort_directives(&lxr).splits, Vec::<usize>::new());
let contents = "x = 1
y = 2
# isort: split
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(extract_isort_directives(&lxr).splits, vec![3]);
let contents = "x = 1
y = 2 # isort: split
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
assert_eq!(extract_isort_directives(&lxr).splits, vec![2]);
}
}

View File

@@ -1,34 +1,62 @@
//! Doc line extraction. In this context, a doc line is a line consisting of a
//! standalone comment or a constant string statement.
use std::iter::FusedIterator;
use rustpython_parser::ast::{Constant, ExprKind, Stmt, StmtKind, Suite};
use rustpython_parser::lexer::{LexResult, Tok};
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
/// Extract doc lines (standalone comments) from a token sequence.
pub fn doc_lines_from_tokens(lxr: &[LexResult]) -> Vec<usize> {
let mut doc_lines: Vec<usize> = Vec::default();
let mut prev: Option<usize> = None;
for (start, tok, end) in lxr.iter().flatten() {
if matches!(tok, Tok::Indent | Tok::Dedent | Tok::Newline) {
continue;
}
if matches!(tok, Tok::Comment(..)) {
if let Some(prev) = prev {
if start.row() > prev {
doc_lines.push(start.row());
}
} else {
doc_lines.push(start.row());
}
}
prev = Some(end.row());
}
doc_lines
pub fn doc_lines_from_tokens(lxr: &[LexResult]) -> DocLines {
DocLines::new(lxr)
}
pub struct DocLines<'a> {
inner: std::iter::Flatten<core::slice::Iter<'a, LexResult>>,
prev: Option<usize>,
}
impl<'a> DocLines<'a> {
fn new(lxr: &'a [LexResult]) -> Self {
Self {
inner: lxr.iter().flatten(),
prev: None,
}
}
}
impl Iterator for DocLines<'_> {
type Item = usize;
fn next(&mut self) -> Option<Self::Item> {
loop {
let (start, tok, end) = self.inner.next()?;
match tok {
Tok::Indent | Tok::Dedent | Tok::Newline => continue,
Tok::Comment(..) => {
if let Some(prev) = self.prev {
if start.row() > prev {
break Some(start.row());
}
} else {
break Some(start.row());
}
}
_ => {}
}
self.prev = Some(end.row());
}
}
}
impl FusedIterator for DocLines<'_> {}
#[derive(Default)]
struct StringLinesVisitor {
string_lines: Vec<usize>,
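The iterator above yields the rows of standalone comments: a comment counts as a doc line only if it starts on a line after the previous token ended. A rough Python equivalent of that logic, using the standard `tokenize` module (helper name invented for illustration):

```python
import io
import tokenize

def doc_lines(source: str):
    """Yield 1-based rows of standalone (non-trailing) comments."""
    prev_end = None
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        # Skip the same structural tokens the Rust iterator skips.
        if tok.type in (tokenize.INDENT, tokenize.DEDENT, tokenize.NEWLINE):
            continue
        if tok.type == tokenize.COMMENT:
            if prev_end is None or tok.start[0] > prev_end:
                yield tok.start[0]
        prev_end = tok.end[0]

src = "# standalone\nx = 1  # trailing\n# another\n"
assert list(doc_lines(src)) == [1, 3]  # the trailing comment on line 2 is excluded
```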

View File

@@ -1,5 +1,3 @@
use std::fs::File;
use std::io::{BufReader, Read};
use std::ops::Deref;
use std::path::{Path, PathBuf};
@@ -88,12 +86,3 @@ pub fn relativize_path(path: impl AsRef<Path>) -> String {
}
format!("{}", path.display())
}
/// Read a file's contents from disk.
pub fn read_file<P: AsRef<Path>>(path: P) -> Result<String> {
let file = File::open(path)?;
let mut buf_reader = BufReader::new(file);
let mut contents = String::new();
buf_reader.read_to_string(&mut contents)?;
Ok(contents)
}

View File

@@ -4,7 +4,7 @@
//!
//! TODO(charlie): Consolidate with the existing AST-based docstring extraction.
use rustpython_parser::lexer::Tok;
use rustpython_parser::Tok;
#[derive(Default)]
enum State {

View File

@@ -11,7 +11,6 @@ pub use rule_selector::RuleSelector;
pub use rules::pycodestyle::rules::IOError;
pub use violation::{AutofixKind, Availability as AutofixAvailability};
mod assert_yaml_snapshot;
mod ast;
mod autofix;
pub mod cache;

View File

@@ -5,8 +5,8 @@ use anyhow::{anyhow, Result};
use colored::Colorize;
use log::error;
use rustc_hash::FxHashMap;
use rustpython_parser::error::ParseError;
use rustpython_parser::lexer::LexResult;
use rustpython_parser::ParseError;
use crate::autofix::fix_file;
use crate::checkers::ast::check_ast;
@@ -223,7 +223,7 @@ const MAX_ITERATIONS: usize = 100;
/// Add any missing `# noqa` pragmas to the source code at the given `Path`.
pub fn add_noqa_to_path(path: &Path, package: Option<&Path>, settings: &Settings) -> Result<usize> {
// Read the file from disk.
let contents = fs::read_file(path)?;
let contents = std::fs::read_to_string(path)?;
// Tokenize once.
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);

View File

@@ -149,6 +149,9 @@ ruff_macros::register_rules!(
rules::pylint::rules::TooManyArguments,
rules::pylint::rules::TooManyBranches,
rules::pylint::rules::TooManyStatements,
rules::pylint::rules::RedefinedLoopName,
rules::pylint::rules::LoggingTooFewArgs,
rules::pylint::rules::LoggingTooManyArgs,
// flake8-builtins
rules::flake8_builtins::rules::BuiltinVariableShadowing,
rules::flake8_builtins::rules::BuiltinArgumentShadowing,
@@ -182,6 +185,8 @@ ruff_macros::register_rules!(
rules::flake8_bugbear::rules::EmptyMethodWithoutAbstractDecorator,
rules::flake8_bugbear::rules::RaiseWithoutFromInsideExcept,
rules::flake8_bugbear::rules::ZipWithoutExplicitStrict,
rules::flake8_bugbear::rules::ExceptWithEmptyTuple,
rules::flake8_bugbear::rules::UnintentionalTypeAnnotation,
// flake8-blind-except
rules::flake8_blind_except::rules::BlindExcept,
// flake8-comprehensions
@@ -253,6 +258,7 @@ ruff_macros::register_rules!(
rules::flake8_2020::rules::SysVersionCmpStr10,
rules::flake8_2020::rules::SysVersionSlice1Referenced,
// flake8-simplify
rules::flake8_simplify::rules::ManualDictLookup,
rules::flake8_simplify::rules::DuplicateIsinstanceCall,
rules::flake8_simplify::rules::CollapsibleIf,
rules::flake8_simplify::rules::NeedlessBool,
@@ -512,6 +518,7 @@ ruff_macros::register_rules!(
rules::tryceratops::rules::TryConsiderElse,
rules::tryceratops::rules::RaiseWithinTry,
rules::tryceratops::rules::ErrorInsteadOfException,
rules::tryceratops::rules::VerboseLogMessage,
// flake8-use-pathlib
rules::flake8_use_pathlib::violations::PathlibAbspath,
rules::flake8_use_pathlib::violations::PathlibChmod,

View File

@@ -1,6 +1,7 @@
/// See: [eradicate.py](https://github.com/myint/eradicate/blob/98f199940979c94447a461d50d27862b118b282d/eradicate.py)
use once_cell::sync::Lazy;
use regex::Regex;
use rustpython_parser as parser;
static ALLOWLIST_REGEX: Lazy<Regex> = Lazy::new(|| {
Regex::new(
@@ -77,7 +78,7 @@ pub fn comment_contains_code(line: &str, task_tags: &[String]) -> bool {
}
// Finally, compile the source code.
rustpython_parser::parser::parse_program(&line, "<filename>").is_ok()
parser::parse_program(&line, "<filename>").is_ok()
}
/// Returns `true` if a line is probably part of some multiline code.
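The eradicate heuristic above ultimately asks whether the comment body parses as Python. A simplified Python sketch of that final check (ignoring the allowlist and task-tag handling that precede it):

```python
def comment_contains_code(line: str) -> bool:
    # Strip the leading '#' and see whether the remainder compiles.
    code = line.lstrip().lstrip("#").strip()
    try:
        compile(code, "<comment>", "exec")
    except SyntaxError:
        return False
    return True

assert comment_contains_code("# x = 1") is True        # commented-out code
assert comment_contains_code("# just a note") is False  # ordinary prose
```

This is only the last step of the real rule; plain-English comments that happen to be a single valid expression would need the earlier regex filters to avoid false positives.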

View File

@@ -7,11 +7,12 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::registry::Rule;
use crate::settings;
use crate::test::test_path;
use crate::{assert_yaml_snapshot, settings};
#[test_case(Rule::CommentedOutCode, Path::new("ERA001.py"); "ERA001")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {

View File

@@ -1,5 +1,5 @@
---
source: src/rules/eradicate/mod.rs
source: crates/ruff/src/rules/eradicate/mod.rs
expression: diagnostics
---
- kind:
@@ -11,8 +11,7 @@ expression: diagnostics
row: 1
column: 10
fix:
content:
- ""
content: ""
location:
row: 1
column: 0
@@ -29,8 +28,7 @@ expression: diagnostics
row: 2
column: 22
fix:
content:
- ""
content: ""
location:
row: 2
column: 0
@@ -47,8 +45,7 @@ expression: diagnostics
row: 3
column: 6
fix:
content:
- ""
content: ""
location:
row: 3
column: 0
@@ -65,8 +62,7 @@ expression: diagnostics
row: 5
column: 13
fix:
content:
- ""
content: ""
location:
row: 5
column: 0
@@ -83,8 +79,7 @@ expression: diagnostics
row: 12
column: 16
fix:
content:
- ""
content: ""
location:
row: 12
column: 0

View File

@@ -6,11 +6,12 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::registry::Rule;
use crate::settings;
use crate::test::test_path;
use crate::{assert_yaml_snapshot, settings};
#[test_case(Rule::SysVersionSlice3Referenced, Path::new("YTT101.py"); "YTT101")]
#[test_case(Rule::SysVersion2Referenced, Path::new("YTT102.py"); "YTT102")]

View File

@@ -1,7 +1,6 @@
use anyhow::{bail, Result};
use rustpython_parser::ast::Stmt;
use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use rustpython_parser::{lexer, Mode, Tok};
use crate::ast::types::Range;
use crate::fix::Fix;
@@ -16,7 +15,7 @@ pub fn add_return_none_annotation(locator: &Locator, stmt: &Stmt) -> Result<Fix>
let mut seen_lpar = false;
let mut seen_rpar = false;
let mut count: usize = 0;
for (start, tok, ..) in lexer::make_tokenizer_located(contents, range.location).flatten() {
for (start, tok, ..) in lexer::lex_located(contents, Mode::Module, range.location).flatten() {
if seen_lpar && seen_rpar {
if matches!(tok, Tok::Colon) {
return Ok(Fix::insertion(" -> None".to_string(), start));
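The fix above lexes past the signature's balanced parentheses and inserts ` -> None` just before the closing colon. A minimal Python sketch of that scan over a single-line signature (helper name invented; the real fix works on tokens, not characters):

```python
def annotation_insert_offset(def_line: str) -> int:
    """Find the signature-closing `:` where ` -> None` should be inserted."""
    depth = 0
    for i, ch in enumerate(def_line):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == ":" and depth == 0 and "(" in def_line[:i]:
            # Colons inside the parens (annotations, defaults) have depth > 0.
            return i
    raise ValueError("no signature colon found")

line = "def greet(name):"
i = annotation_insert_offset(line)
assert line[:i] + " -> None" + line[i:] == "def greet(name) -> None:"
```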

View File

@@ -9,8 +9,8 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use crate::assert_yaml_snapshot;
use crate::registry::Rule;
use crate::settings::Settings;
use crate::test::test_path;

View File

@@ -8,9 +8,9 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::assert_yaml_snapshot;
use crate::registry::Rule;
use crate::settings::Settings;
use crate::test::test_path;

View File

@@ -6,11 +6,12 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::registry::Rule;
use crate::settings;
use crate::test::test_path;
use crate::{assert_yaml_snapshot, settings};
#[test_case(Rule::BlindExcept, Path::new("BLE.py"); "BLE001")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {

View File

@@ -6,11 +6,12 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::registry::Rule;
use crate::settings;
use crate::test::test_path;
use crate::{assert_yaml_snapshot, settings};
#[test_case(Rule::BooleanPositionalArgInFunctionDefinition, Path::new("FBT.py"); "FBT001")]
#[test_case(Rule::BooleanDefaultValueInFunctionDefinition, Path::new("FBT.py"); "FBT002")]

View File

@@ -1,6 +1,8 @@
use ruff_macros::{define_violation, derive_message_formats};
use rustpython_parser::ast::{Arguments, Constant, Expr, ExprKind};
use ruff_macros::{define_violation, derive_message_formats};
use crate::ast::helpers::collect_call_path;
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::{Diagnostic, DiagnosticKind};
@@ -50,6 +52,10 @@ const FUNC_CALL_NAME_ALLOWLIST: &[&str] = &[
"pop",
"setattr",
"setdefault",
"str",
"bytes",
"int",
"float",
];
const FUNC_DEF_NAME_ALLOWLIST: &[&str] = &["__setitem__"];
@@ -87,10 +93,23 @@ fn add_if_boolean(checker: &mut Checker, arg: &Expr, kind: DiagnosticKind) {
}
}
pub fn check_positional_boolean_in_def(checker: &mut Checker, name: &str, arguments: &Arguments) {
pub fn check_positional_boolean_in_def(
checker: &mut Checker,
name: &str,
decorator_list: &[Expr],
arguments: &Arguments,
) {
if FUNC_DEF_NAME_ALLOWLIST.contains(&name) {
return;
}
if decorator_list.iter().any(|expr| {
let call_path = collect_call_path(expr);
call_path.as_slice() == [name, "setter"]
}) {
return;
}
for arg in arguments.posonlyargs.iter().chain(arguments.args.iter()) {
if arg.node.annotation.is_none() {
continue;
@@ -121,11 +140,20 @@ pub fn check_positional_boolean_in_def(checker: &mut Checker, name: &str, argume
pub fn check_boolean_default_value_in_function_definition(
checker: &mut Checker,
name: &str,
decorator_list: &[Expr],
arguments: &Arguments,
) {
if FUNC_DEF_NAME_ALLOWLIST.contains(&name) {
return;
}
if decorator_list.iter().any(|expr| {
let call_path = collect_call_path(expr);
call_path.as_slice() == [name, "setter"]
}) {
return;
}
for arg in &arguments.defaults {
add_if_boolean(checker, arg, BooleanDefaultValueInFunctionDefinition.into());
}
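The change above exempts property setters from the boolean-trap rules (FBT001/FBT002): a `@x.setter` necessarily takes a positional value, so flagging it would be noise. A sketch of the exempted pattern next to the usual keyword-only workaround:

```python
class Widget:
    def __init__(self) -> None:
        self._visible = False

    @property
    def visible(self) -> bool:
        return self._visible

    @visible.setter
    def visible(self, value: bool) -> None:  # exempt: decorator is `visible.setter`
        self._visible = value

def set_state(*, enabled: bool = False) -> bool:  # keyword-only avoids the trap
    return enabled

w = Widget()
w.visible = True
assert w.visible is True
assert set_state(enabled=True) is True
```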

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_boolean_trap/mod.rs
source: crates/ruff/src/rules/flake8_boolean_trap/mod.rs
expression: diagnostics
---
- kind:
@@ -82,4 +82,14 @@ expression: diagnostics
column: 45
fix: ~
parent: ~
- kind:
BooleanPositionalArgInFunctionDefinition: ~
location:
row: 77
column: 18
end_location:
row: 77
column: 29
fix: ~
parent: ~

View File

@@ -7,9 +7,9 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::assert_yaml_snapshot;
use crate::registry::Rule;
use crate::settings::Settings;
use crate::test::test_path;
@@ -41,6 +41,8 @@ mod tests {
#[test_case(Rule::StarArgUnpackingAfterKeywordArg, Path::new("B026.py"); "B026")]
#[test_case(Rule::EmptyMethodWithoutAbstractDecorator, Path::new("B027.py"); "B027")]
#[test_case(Rule::EmptyMethodWithoutAbstractDecorator, Path::new("B027.pyi"); "B027_pyi")]
#[test_case(Rule::ExceptWithEmptyTuple, Path::new("B029.py"); "B029")]
#[test_case(Rule::UnintentionalTypeAnnotation, Path::new("B032.py"); "B032")]
#[test_case(Rule::RaiseWithoutFromInsideExcept, Path::new("B904.py"); "B904")]
#[test_case(Rule::ZipWithoutExplicitStrict, Path::new("B905.py"); "B905")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {

View File

@@ -0,0 +1,36 @@
use ruff_macros::{define_violation, derive_message_formats};
use rustpython_parser::ast::Excepthandler;
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
use crate::violation::Violation;
use rustpython_parser::ast::{ExcepthandlerKind, ExprKind};
define_violation!(
pub struct ExceptWithEmptyTuple;
);
impl Violation for ExceptWithEmptyTuple {
#[derive_message_formats]
fn message(&self) -> String {
format!("Using except (): with an empty tuple does not handle/catch anything. Add exceptions to handle.")
}
}
/// B029
pub fn except_with_empty_tuple(checker: &mut Checker, excepthandler: &Excepthandler) {
let ExcepthandlerKind::ExceptHandler { type_, .. } = &excepthandler.node;
if type_.is_none() {
return;
}
let ExprKind::Tuple { elts, .. } = &type_.as_ref().unwrap().node else {
return;
};
if elts.is_empty() {
checker.diagnostics.push(Diagnostic::new(
ExceptWithEmptyTuple,
Range::from_located(excepthandler),
));
}
}
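The rule above (B029) flags `except ():`, which is legal Python but matches nothing at runtime, so the handler body can never run:

```python
caught = None
try:
    try:
        raise ValueError("boom")
    except ():  # B029: an empty tuple catches nothing
        caught = "inner"
except ValueError:
    caught = "outer"  # the exception propagates past the empty tuple

assert caught == "outer"
```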

View File

@@ -46,10 +46,16 @@ fn walk_stmt(checker: &mut Checker, body: &[Stmt], f: fn(&Stmt) -> bool) {
}
StmtKind::If { body, .. }
| StmtKind::Try { body, .. }
| StmtKind::TryStar { body, .. }
| StmtKind::With { body, .. }
| StmtKind::AsyncWith { body, .. } => {
walk_stmt(checker, body, f);
}
StmtKind::Match { cases, .. } => {
for case in cases {
walk_stmt(checker, &case.body, f);
}
}
_ => {}
}
}

View File

@@ -10,6 +10,7 @@ pub use cannot_raise_literal::{cannot_raise_literal, CannotRaiseLiteral};
pub use duplicate_exceptions::{
duplicate_exceptions, DuplicateHandlerException, DuplicateTryBlockException,
};
pub use except_with_empty_tuple::{except_with_empty_tuple, ExceptWithEmptyTuple};
pub use f_string_docstring::{f_string_docstring, FStringDocstring};
pub use function_call_argument_default::{
function_call_argument_default, FunctionCallArgumentDefault,
@@ -40,6 +41,10 @@ pub use useless_contextlib_suppress::{useless_contextlib_suppress, UselessContex
pub use useless_expression::{useless_expression, UselessExpression};
pub use zip_without_explicit_strict::{zip_without_explicit_strict, ZipWithoutExplicitStrict};
pub use unintentional_type_annotation::{
unintentional_type_annotation, UnintentionalTypeAnnotation,
};
mod abstract_base_class;
mod assert_false;
mod assert_raises_exception;
@@ -47,6 +52,7 @@ mod assignment_to_os_environ;
mod cached_instance_method;
mod cannot_raise_literal;
mod duplicate_exceptions;
mod except_with_empty_tuple;
mod f_string_docstring;
mod function_call_argument_default;
mod function_uses_loop_variable;
@@ -60,6 +66,7 @@ mod setattr_with_constant;
mod star_arg_unpacking_after_keyword_arg;
mod strip_with_multi_characters;
mod unary_prefix_increment;
mod unintentional_type_annotation;
mod unreliable_callable_check;
mod unused_loop_control_variable;
mod useless_comparison;

View File

@@ -44,7 +44,8 @@ impl<'a> Visitor<'a> for RaiseVisitor {
StmtKind::ClassDef { .. }
| StmtKind::FunctionDef { .. }
| StmtKind::AsyncFunctionDef { .. }
| StmtKind::Try { .. } => {}
| StmtKind::Try { .. }
| StmtKind::TryStar { .. } => {}
StmtKind::If { body, orelse, .. } => {
visitor::walk_body(self, body);
visitor::walk_body(self, orelse);
@@ -56,11 +57,17 @@ impl<'a> Visitor<'a> for RaiseVisitor {
| StmtKind::AsyncFor { body, .. } => {
visitor::walk_body(self, body);
}
StmtKind::Match { cases, .. } => {
for case in cases {
visitor::walk_body(self, &case.body);
}
}
_ => {}
}
}
}
/// B904
pub fn raise_without_from_inside_except(checker: &mut Checker, body: &[Stmt]) {
let mut visitor = RaiseVisitor {
diagnostics: vec![],

View File

@@ -0,0 +1,69 @@
use rustpython_parser::ast::{Expr, ExprKind, Stmt};
use ruff_macros::{define_violation, derive_message_formats};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
use crate::violation::Violation;
define_violation!(
/// ## What it does
/// Checks for the unintentional use of type annotations.
///
/// ## Why is this bad?
/// The use of a colon (`:`) in lieu of an assignment (`=`) can be syntactically valid, but
/// is almost certainly a mistake when used in a subscript or attribute assignment.
///
/// ## Example
/// ```python
/// a["b"]: 1
/// ```
///
/// Use instead:
/// ```python
/// a["b"] = 1
/// ```
pub struct UnintentionalTypeAnnotation;
);
impl Violation for UnintentionalTypeAnnotation {
#[derive_message_formats]
fn message(&self) -> String {
format!(
"Possible unintentional type annotation (using `:`). Did you mean to assign (using `=`)?"
)
}
}
/// B032
pub fn unintentional_type_annotation(
checker: &mut Checker,
target: &Expr,
value: Option<&Expr>,
stmt: &Stmt,
) {
if value.is_some() {
return;
}
match &target.node {
ExprKind::Subscript { value, .. } => {
if matches!(&value.node, ExprKind::Name { .. }) {
checker.diagnostics.push(Diagnostic::new(
UnintentionalTypeAnnotation,
Range::from_located(stmt),
));
}
}
ExprKind::Attribute { value, .. } => {
if let ExprKind::Name { id, .. } = &value.node {
if id != "self" {
checker.diagnostics.push(Diagnostic::new(
UnintentionalTypeAnnotation,
Range::from_located(stmt),
));
}
}
}
_ => {}
};
}
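B032, defined above, targets statements like `d["x"]: 1`: the colon makes them parse as annotations rather than assignments, so at runtime nothing is stored:

```python
d = {}
d["x"]: 1  # B032: looks like an assignment, but it is only an annotation
assert "x" not in d  # nothing was written

d["x"] = 1  # the intended spelling
assert d["x"] == 1
```

The `self` exception in the rule exists because `self.x: int` is the conventional way to annotate an instance attribute.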

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_bugbear/mod.rs
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
@@ -27,8 +27,7 @@ expression: diagnostics
row: 18
column: 13
fix:
content:
- _k
content: _k
location:
row: 18
column: 12
@@ -61,8 +60,7 @@ expression: diagnostics
row: 30
column: 13
fix:
content:
- _k
content: _k
location:
row: 30
column: 12
@@ -134,8 +132,7 @@ expression: diagnostics
row: 52
column: 16
fix:
content:
- _bar
content: _bar
location:
row: 52
column: 13
@@ -168,8 +165,7 @@ expression: diagnostics
row: 68
column: 16
fix:
content:
- _bar
content: _bar
location:
row: 68
column: 13
@@ -189,8 +185,7 @@ expression: diagnostics
row: 77
column: 16
fix:
content:
- _bar
content: _bar
location:
row: 77
column: 13

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_bugbear/mod.rs
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
@@ -11,8 +11,7 @@ expression: diagnostics
row: 19
column: 19
fix:
content:
- foo.bar
content: foo.bar
location:
row: 19
column: 0
@@ -29,8 +28,7 @@ expression: diagnostics
row: 20
column: 23
fix:
content:
- foo._123abc
content: foo._123abc
location:
row: 20
column: 0
@@ -47,8 +45,7 @@ expression: diagnostics
row: 21
column: 26
fix:
content:
- foo.__123abc__
content: foo.__123abc__
location:
row: 21
column: 0
@@ -65,8 +62,7 @@ expression: diagnostics
row: 22
column: 22
fix:
content:
- foo.abc123
content: foo.abc123
location:
row: 22
column: 0
@@ -83,8 +79,7 @@ expression: diagnostics
row: 23
column: 23
fix:
content:
- foo.abc123
content: foo.abc123
location:
row: 23
column: 0
@@ -101,8 +96,7 @@ expression: diagnostics
row: 24
column: 31
fix:
content:
- x.bar
content: x.bar
location:
row: 24
column: 14
@@ -119,8 +113,7 @@ expression: diagnostics
row: 25
column: 20
fix:
content:
- x.bar
content: x.bar
location:
row: 25
column: 3

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_bugbear/mod.rs
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
@@ -11,8 +11,7 @@ expression: diagnostics
row: 40
column: 25
fix:
content:
- foo.bar = None
content: foo.bar = None
location:
row: 40
column: 0
@@ -29,8 +28,7 @@ expression: diagnostics
row: 41
column: 29
fix:
content:
- foo._123abc = None
content: foo._123abc = None
location:
row: 41
column: 0
@@ -47,8 +45,7 @@ expression: diagnostics
row: 42
column: 32
fix:
content:
- foo.__123abc__ = None
content: foo.__123abc__ = None
location:
row: 42
column: 0
@@ -65,8 +62,7 @@ expression: diagnostics
row: 43
column: 28
fix:
content:
- foo.abc123 = None
content: foo.abc123 = None
location:
row: 43
column: 0
@@ -83,8 +79,7 @@ expression: diagnostics
row: 44
column: 29
fix:
content:
- foo.abc123 = None
content: foo.abc123 = None
location:
row: 44
column: 0
@@ -101,8 +96,7 @@ expression: diagnostics
row: 45
column: 30
fix:
content:
- foo.bar.baz = None
content: foo.bar.baz = None
location:
row: 45
column: 0

View File

@@ -11,8 +11,7 @@ expression: diagnostics
row: 8
column: 12
fix:
content:
- raise AssertionError()
content: raise AssertionError()
location:
row: 8
column: 0
@@ -29,8 +28,7 @@ expression: diagnostics
row: 10
column: 12
fix:
content:
- "raise AssertionError(\"message\")"
content: "raise AssertionError(\"message\")"
location:
row: 10
column: 0

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_bugbear/mod.rs
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
@@ -112,4 +112,15 @@ expression: diagnostics
column: 13
fix: ~
parent: ~
- kind:
JumpStatementInFinally:
name: break
location:
row: 118
column: 16
end_location:
row: 118
column: 21
fix: ~
parent: ~

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_bugbear/mod.rs
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
@@ -12,8 +12,7 @@ expression: diagnostics
row: 3
column: 20
fix:
content:
- ValueError
content: ValueError
location:
row: 3
column: 7

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_bugbear/mod.rs
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
@@ -13,8 +13,7 @@ expression: diagnostics
row: 17
column: 25
fix:
content:
- OSError
content: OSError
location:
row: 17
column: 7
@@ -33,8 +32,7 @@ expression: diagnostics
row: 28
column: 25
fix:
content:
- MyError
content: MyError
location:
row: 28
column: 7
@@ -53,8 +51,7 @@ expression: diagnostics
row: 49
column: 27
fix:
content:
- re.error
content: re.error
location:
row: 49
column: 7

View File

@@ -0,0 +1,25 @@
---
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
ExceptWithEmptyTuple: ~
location:
row: 8
column: 0
end_location:
row: 9
column: 8
fix: ~
parent: ~
- kind:
ExceptWithEmptyTuple: ~
location:
row: 13
column: 0
end_location:
row: 14
column: 8
fix: ~
parent: ~

View File

@@ -0,0 +1,85 @@
---
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
expression: diagnostics
---
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 9
column: 0
end_location:
row: 9
column: 11
fix: ~
parent: ~
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 10
column: 0
end_location:
row: 10
column: 8
fix: ~
parent: ~
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 12
column: 0
end_location:
row: 12
column: 16
fix: ~
parent: ~
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 13
column: 0
end_location:
row: 13
column: 13
fix: ~
parent: ~
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 16
column: 0
end_location:
row: 16
column: 14
fix: ~
parent: ~
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 17
column: 0
end_location:
row: 17
column: 22
fix: ~
parent: ~
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 18
column: 0
end_location:
row: 18
column: 11
fix: ~
parent: ~
- kind:
UnintentionalTypeAnnotation: ~
location:
row: 19
column: 0
end_location:
row: 19
column: 19
fix: ~
parent: ~

View File

@@ -52,4 +52,14 @@ expression: diagnostics
column: 35
fix: ~
parent: ~
- kind:
RaiseWithoutFromInsideExcept: ~
location:
row: 72
column: 12
end_location:
row: 72
column: 39
fix: ~
parent: ~

View File

@@ -8,9 +8,9 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::assert_yaml_snapshot;
use crate::registry::Rule;
use crate::settings::Settings;
use crate::test::test_path;

View File

@@ -6,11 +6,12 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::registry::Rule;
use crate::settings;
use crate::test::test_path;
use crate::{assert_yaml_snapshot, settings};
#[test_case(Path::new("COM81.py"); "COM81")]
fn rules(path: &Path) -> Result<()> {

View File

@@ -1,7 +1,7 @@
use itertools::Itertools;
use ruff_macros::{define_violation, derive_message_formats};
use rustpython_parser::lexer::{LexResult, Spanned};
use rustpython_parser::token::Tok;
use rustpython_parser::Tok;
use crate::ast::types::Range;
use crate::fix::Fix;

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_commas/mod.rs
source: crates/ruff/src/rules/flake8_commas/mod.rs
expression: diagnostics
---
- kind:
@@ -11,8 +11,7 @@ expression: diagnostics
row: 4
column: 17
fix:
content:
- ","
content: ","
location:
row: 4
column: 17
@@ -29,8 +28,7 @@ expression: diagnostics
row: 10
column: 5
fix:
content:
- ","
content: ","
location:
row: 10
column: 5
@@ -47,8 +45,7 @@ expression: diagnostics
row: 16
column: 5
fix:
content:
- ","
content: ","
location:
row: 16
column: 5
@@ -65,8 +62,7 @@ expression: diagnostics
row: 23
column: 5
fix:
content:
- ","
content: ","
location:
row: 23
column: 5
@@ -153,8 +149,7 @@ expression: diagnostics
row: 70
column: 7
fix:
content:
- ","
content: ","
location:
row: 70
column: 7
@@ -171,8 +166,7 @@ expression: diagnostics
row: 78
column: 7
fix:
content:
- ","
content: ","
location:
row: 78
column: 7
@@ -189,8 +183,7 @@ expression: diagnostics
row: 86
column: 7
fix:
content:
- ","
content: ","
location:
row: 86
column: 7
@@ -207,8 +200,7 @@ expression: diagnostics
row: 152
column: 5
fix:
content:
- ","
content: ","
location:
row: 152
column: 5
@@ -225,8 +217,7 @@ expression: diagnostics
row: 158
column: 10
fix:
content:
- ","
content: ","
location:
row: 158
column: 10
@@ -243,8 +234,7 @@ expression: diagnostics
row: 293
column: 14
fix:
content:
- ","
content: ","
location:
row: 293
column: 14
@@ -261,8 +251,7 @@ expression: diagnostics
row: 304
column: 13
fix:
content:
- ","
content: ","
location:
row: 304
column: 13
@@ -279,8 +268,7 @@ expression: diagnostics
row: 310
column: 13
fix:
content:
- ","
content: ","
location:
row: 310
column: 13
@@ -297,8 +285,7 @@ expression: diagnostics
row: 316
column: 9
fix:
content:
- ","
content: ","
location:
row: 316
column: 9
@@ -315,8 +302,7 @@ expression: diagnostics
row: 322
column: 14
fix:
content:
- ","
content: ","
location:
row: 322
column: 14
@@ -333,8 +319,7 @@ expression: diagnostics
row: 368
column: 14
fix:
content:
- ","
content: ","
location:
row: 368
column: 14
@@ -351,8 +336,7 @@ expression: diagnostics
row: 375
column: 14
fix:
content:
- ","
content: ","
location:
row: 375
column: 14
@@ -369,8 +353,7 @@ expression: diagnostics
row: 404
column: 14
fix:
content:
- ","
content: ","
location:
row: 404
column: 14
@@ -387,8 +370,7 @@ expression: diagnostics
row: 432
column: 14
fix:
content:
- ","
content: ","
location:
row: 432
column: 14
@@ -405,8 +387,7 @@ expression: diagnostics
row: 485
column: 21
fix:
content:
- ""
content: ""
location:
row: 485
column: 20
@@ -423,8 +404,7 @@ expression: diagnostics
row: 487
column: 13
fix:
content:
- ""
content: ""
location:
row: 487
column: 12
@@ -441,8 +421,7 @@ expression: diagnostics
row: 489
column: 18
fix:
content:
- ""
content: ""
location:
row: 489
column: 17
@@ -459,8 +438,7 @@ expression: diagnostics
row: 494
column: 6
fix:
content:
- ""
content: ""
location:
row: 494
column: 5
@@ -477,8 +455,7 @@ expression: diagnostics
row: 496
column: 21
fix:
content:
- ""
content: ""
location:
row: 496
column: 20
@@ -495,8 +472,7 @@ expression: diagnostics
row: 498
column: 13
fix:
content:
- ""
content: ""
location:
row: 498
column: 12
@@ -513,8 +489,7 @@ expression: diagnostics
row: 500
column: 18
fix:
content:
- ""
content: ""
location:
row: 500
column: 17
@@ -531,8 +506,7 @@ expression: diagnostics
row: 505
column: 6
fix:
content:
- ""
content: ""
location:
row: 505
column: 5
@@ -549,8 +523,7 @@ expression: diagnostics
row: 511
column: 10
fix:
content:
- ""
content: ""
location:
row: 511
column: 9
@@ -567,8 +540,7 @@ expression: diagnostics
row: 513
column: 9
fix:
content:
- ""
content: ""
location:
row: 513
column: 8
@@ -585,8 +557,7 @@ expression: diagnostics
row: 519
column: 12
fix:
content:
- ","
content: ","
location:
row: 519
column: 12
@@ -603,8 +574,7 @@ expression: diagnostics
row: 526
column: 9
fix:
content:
- ","
content: ","
location:
row: 526
column: 9
@@ -621,8 +591,7 @@ expression: diagnostics
row: 534
column: 15
fix:
content:
- ","
content: ","
location:
row: 534
column: 15
@@ -639,8 +608,7 @@ expression: diagnostics
row: 541
column: 12
fix:
content:
- ","
content: ","
location:
row: 541
column: 12
@@ -657,8 +625,7 @@ expression: diagnostics
row: 547
column: 23
fix:
content:
- ","
content: ","
location:
row: 547
column: 23
@@ -675,8 +642,7 @@ expression: diagnostics
row: 554
column: 14
fix:
content:
- ","
content: ","
location:
row: 554
column: 14
@@ -693,8 +659,7 @@ expression: diagnostics
row: 561
column: 12
fix:
content:
- ","
content: ","
location:
row: 561
column: 12
@@ -711,8 +676,7 @@ expression: diagnostics
row: 565
column: 12
fix:
content:
- ","
content: ","
location:
row: 565
column: 12
@@ -729,8 +693,7 @@ expression: diagnostics
row: 573
column: 9
fix:
content:
- ","
content: ","
location:
row: 573
column: 9
@@ -747,8 +710,7 @@ expression: diagnostics
row: 577
column: 9
fix:
content:
- ","
content: ","
location:
row: 577
column: 9
@@ -765,8 +727,7 @@ expression: diagnostics
row: 583
column: 9
fix:
content:
- ","
content: ","
location:
row: 583
column: 9
@@ -783,8 +744,7 @@ expression: diagnostics
row: 590
column: 12
fix:
content:
- ","
content: ","
location:
row: 590
column: 12
@@ -801,8 +761,7 @@ expression: diagnostics
row: 598
column: 14
fix:
content:
- ","
content: ","
location:
row: 598
column: 14
@@ -819,8 +778,7 @@ expression: diagnostics
row: 627
column: 19
fix:
content:
- ","
content: ","
location:
row: 627
column: 19

View File

@@ -522,11 +522,13 @@ pub fn fix_unnecessary_collection_call(
// Quote each argument.
for arg in &call.args {
let quoted = format!(
"\"{}\"",
"{}{}{}",
stylist.quote(),
arg.keyword
.as_ref()
.expect("Expected dictionary argument to be kwarg")
.value
.value,
stylist.quote(),
);
arena.push(quoted);
}
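The change above makes the C408 fix quote dictionary keys with the file's preferred quote character (via `stylist.quote()`) instead of a hard-coded `"`. A small Python sketch of the quoting step (the `rewrite_dict_call` helper is invented for illustration):

```python
def rewrite_dict_call(kwargs: dict, quote: str = '"') -> str:
    """Render `dict(a=1)`-style kwargs as a literal, using the given quote."""
    items = ", ".join(f"{quote}{k}{quote}: {v!r}" for k, v in kwargs.items())
    return "{" + items + "}"

# With the default double quote, as before the change:
assert rewrite_dict_call({"a": 1}) == '{"a": 1}'
# In a file that prefers single quotes, the fix now matches:
assert rewrite_dict_call({"a": 1}, quote="'") == "{'a': 1}"
```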

View File

@@ -8,9 +8,9 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use insta::assert_yaml_snapshot;
use test_case::test_case;
use crate::assert_yaml_snapshot;
use crate::registry::Rule;
use crate::settings::Settings;
use crate::test::test_path;
@@ -31,7 +31,6 @@ mod tests {
#[test_case(Rule::UnnecessarySubscriptReversal, Path::new("C415.py"); "C415")]
#[test_case(Rule::UnnecessaryComprehension, Path::new("C416.py"); "C416")]
#[test_case(Rule::UnnecessaryMap, Path::new("C417.py"); "C417")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -1,5 +1,5 @@
---
source: src/rules/flake8_comprehensions/mod.rs
source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
expression: diagnostics
---
- kind:
@@ -11,8 +11,7 @@ expression: diagnostics
row: 1
column: 29
fix:
content:
- "[x for x in range(3)]"
content: "[x for x in range(3)]"
location:
row: 1
column: 4
@@ -29,10 +28,7 @@ expression: diagnostics
row: 4
column: 1
fix:
content:
- "["
- " x for x in range(3)"
- "]"
content: "[\n x for x in range(3)\n]"
location:
row: 2
column: 4

View File

@@ -11,8 +11,7 @@ expression: diagnostics
row: 1
column: 28
fix:
content:
- "{x for x in range(3)}"
content: "{x for x in range(3)}"
location:
row: 1
column: 4
@@ -29,10 +28,7 @@ expression: diagnostics
row: 4
column: 1
fix:
content:
- "{"
- " x for x in range(3)"
- "}"
content: "{\n x for x in range(3)\n}"
location:
row: 2
column: 4
@@ -49,8 +45,7 @@ expression: diagnostics
row: 5
column: 48
fix:
content:
- " {a if a < 6 else 0 for a in range(3)} "
content: " {a if a < 6 else 0 for a in range(3)} "
location:
row: 5
column: 7
@@ -67,8 +62,7 @@ expression: diagnostics
row: 6
column: 57
fix:
content:
- "{a if a < 6 else 0 for a in range(3)}"
content: "{a if a < 6 else 0 for a in range(3)}"
location:
row: 6
column: 16
@@ -85,8 +79,7 @@ expression: diagnostics
row: 7
column: 39
fix:
content:
- " {a for a in range(3)} "
content: " {a for a in range(3)} "
location:
row: 7
column: 15

View File

@@ -11,8 +11,7 @@ expression: diagnostics
row: 1
column: 30
fix:
content:
- "{x: x for x in range(3)}"
content: "{x: x for x in range(3)}"
location:
row: 1
column: 0
@@ -29,10 +28,7 @@ expression: diagnostics
row: 4
column: 1
fix:
content:
- "{"
- " x: x for x in range(3)"
- "}"
content: "{\n x: x for x in range(3)\n}"
location:
row: 2
column: 0
@@ -49,8 +45,7 @@ expression: diagnostics
row: 6
column: 37
fix:
content:
- " {x: x for x in range(3)} "
content: " {x: x for x in range(3)} "
location:
row: 6
column: 7
@@ -67,8 +62,7 @@ expression: diagnostics
row: 7
column: 45
fix:
content:
- " {x: x for x in range(3)} "
content: " {x: x for x in range(3)} "
location:
row: 7
column: 15

View File

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -11,8 +11,7 @@ expression: diagnostics
     row: 1
     column: 30
   fix:
-    content:
-      - "{x for x in range(3)}"
+    content: "{x for x in range(3)}"
     location:
       row: 1
       column: 4
@@ -29,10 +28,7 @@ expression: diagnostics
     row: 4
     column: 1
   fix:
-    content:
-      - "{"
-      - " x for x in range(3)"
-      - "}"
+    content: "{\n x for x in range(3)\n}"
     location:
       row: 2
       column: 4

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -11,8 +11,7 @@ expression: diagnostics
     row: 1
     column: 32
   fix:
-    content:
-      - "{i: i for i in range(3)}"
+    content: "{i: i for i in range(3)}"
     location:
       row: 1
       column: 0

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -12,8 +12,7 @@ expression: diagnostics
     row: 1
     column: 11
   fix:
-    content:
-      - "{1, 2}"
+    content: "{1, 2}"
     location:
       row: 1
       column: 0
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 2
     column: 11
   fix:
-    content:
-      - "{1, 2}"
+    content: "{1, 2}"
     location:
       row: 2
       column: 0
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 3
     column: 7
   fix:
-    content:
-      - set()
+    content: set()
     location:
       row: 3
       column: 0
@@ -69,8 +66,7 @@ expression: diagnostics
     row: 4
     column: 7
   fix:
-    content:
-      - set()
+    content: set()
     location:
       row: 4
       column: 0
@@ -88,8 +84,7 @@ expression: diagnostics
     row: 6
     column: 9
   fix:
-    content:
-      - "{1}"
+    content: "{1}"
     location:
       row: 6
       column: 0
@@ -107,10 +102,7 @@ expression: diagnostics
     row: 9
     column: 2
   fix:
-    content:
-      - "{"
-      - " 1,"
-      - "}"
+    content: "{\n 1,\n}"
     location:
       row: 7
       column: 0
@@ -128,10 +120,7 @@ expression: diagnostics
     row: 12
     column: 2
   fix:
-    content:
-      - "{"
-      - " 1,"
-      - "}"
+    content: "{\n 1,\n}"
     location:
       row: 10
       column: 0
@@ -149,8 +138,7 @@ expression: diagnostics
     row: 15
     column: 1
  fix:
-    content:
-      - "{1}"
+    content: "{1}"
     location:
       row: 13
       column: 0
@@ -168,8 +156,7 @@ expression: diagnostics
     row: 18
     column: 1
   fix:
-    content:
-      - "{1,}"
+    content: "{1,}"
     location:
       row: 16
       column: 0

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -12,8 +12,7 @@ expression: diagnostics
     row: 1
     column: 19
   fix:
-    content:
-      - "{1: 2}"
+    content: "{1: 2}"
     location:
       row: 1
       column: 5
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 2
     column: 20
   fix:
-    content:
-      - "{1: 2,}"
+    content: "{1: 2,}"
     location:
       row: 2
       column: 5
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 3
     column: 13
   fix:
-    content:
-      - "{}"
+    content: "{}"
     location:
       row: 3
       column: 5
@@ -69,8 +66,7 @@ expression: diagnostics
     row: 4
     column: 13
   fix:
-    content:
-      - "{}"
+    content: "{}"
     location:
       row: 4
       column: 5

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -12,8 +12,7 @@ expression: diagnostics
     row: 1
     column: 11
   fix:
-    content:
-      - ()
+    content: ()
     location:
       row: 1
       column: 4
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 2
     column: 10
   fix:
-    content:
-      - "[]"
+    content: "[]"
     location:
       row: 2
       column: 4
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 3
     column: 11
   fix:
-    content:
-      - "{}"
+    content: "{}"
     location:
       row: 3
       column: 5
@@ -69,8 +66,7 @@ expression: diagnostics
     row: 4
     column: 14
   fix:
-    content:
-      - "{\"a\": 1}"
+    content: "{\"a\": 1}"
     location:
       row: 4
       column: 5

@@ -12,8 +12,7 @@ expression: diagnostics
     row: 1
     column: 11
   fix:
-    content:
-      - ()
+    content: ()
     location:
       row: 1
       column: 4
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 2
     column: 10
   fix:
-    content:
-      - "[]"
+    content: "[]"
     location:
       row: 2
       column: 4
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 3
     column: 11
   fix:
-    content:
-      - "{}"
+    content: "{}"
     location:
       row: 3
       column: 5

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -12,8 +12,7 @@ expression: diagnostics
     row: 1
     column: 14
   fix:
-    content:
-      - ()
+    content: ()
     location:
       row: 1
       column: 5
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 2
     column: 18
   fix:
-    content:
-      - "(1, 2)"
+    content: "(1, 2)"
     location:
       row: 2
       column: 5
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 3
     column: 18
   fix:
-    content:
-      - "(1, 2)"
+    content: "(1, 2)"
     location:
       row: 3
       column: 5
@@ -69,11 +66,7 @@ expression: diagnostics
     row: 7
     column: 2
   fix:
-    content:
-      - (
-      - " 1,"
-      - " 2"
-      - )
+    content: "(\n 1,\n 2\n)"
     location:
       row: 4
       column: 5
@@ -91,8 +84,7 @@ expression: diagnostics
     row: 10
     column: 1
   fix:
-    content:
-      - "(1, 2)"
+    content: "(1, 2)"
     location:
       row: 8
       column: 5

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -12,8 +12,7 @@ expression: diagnostics
     row: 1
     column: 17
   fix:
-    content:
-      - "[1, 2]"
+    content: "[1, 2]"
     location:
       row: 1
       column: 5
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 2
     column: 17
   fix:
-    content:
-      - "[1, 2]"
+    content: "[1, 2]"
     location:
       row: 2
       column: 5
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 3
     column: 13
   fix:
-    content:
-      - "[]"
+    content: "[]"
     location:
       row: 3
       column: 5
@@ -69,8 +66,7 @@ expression: diagnostics
     row: 4
     column: 13
   fix:
-    content:
-      - "[]"
+    content: "[]"
     location:
       row: 4
       column: 5

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -11,8 +11,7 @@ expression: diagnostics
     row: 2
     column: 20
   fix:
-    content:
-      - "[i for i in x]"
+    content: "[i for i in x]"
     location:
       row: 2
       column: 0

@@ -12,8 +12,7 @@ expression: diagnostics
     row: 3
     column: 15
   fix:
-    content:
-      - sorted(x)
+    content: sorted(x)
     location:
       row: 3
       column: 0
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 4
     column: 19
   fix:
-    content:
-      - "sorted(x, reverse=True)"
+    content: "sorted(x, reverse=True)"
     location:
       row: 4
       column: 0
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 5
     column: 36
   fix:
-    content:
-      - "sorted(x, key=lambda e: e, reverse=True)"
+    content: "sorted(x, key=lambda e: e, reverse=True)"
     location:
       row: 5
       column: 0
@@ -69,8 +66,7 @@ expression: diagnostics
     row: 6
     column: 33
   fix:
-    content:
-      - "sorted(x, reverse=False)"
+    content: "sorted(x, reverse=False)"
     location:
       row: 6
       column: 0
@@ -88,8 +84,7 @@ expression: diagnostics
     row: 7
     column: 50
   fix:
-    content:
-      - "sorted(x, key=lambda e: e, reverse=False)"
+    content: "sorted(x, key=lambda e: e, reverse=False)"
     location:
       row: 7
       column: 0
@@ -107,8 +102,7 @@ expression: diagnostics
     row: 8
     column: 50
   fix:
-    content:
-      - "sorted(x, reverse=False, key=lambda e: e)"
+    content: "sorted(x, reverse=False, key=lambda e: e)"
     location:
       row: 8
       column: 0
@@ -126,8 +120,7 @@ expression: diagnostics
     row: 9
     column: 34
   fix:
-    content:
-      - "sorted(x, reverse=True)"
+    content: "sorted(x, reverse=True)"
     location:
       row: 9
       column: 0

@@ -13,8 +13,7 @@ expression: diagnostics
     row: 2
     column: 13
   fix:
-    content:
-      - list(x)
+    content: list(x)
     location:
       row: 2
       column: 0
@@ -33,8 +32,7 @@ expression: diagnostics
     row: 3
     column: 14
   fix:
-    content:
-      - list(x)
+    content: list(x)
     location:
       row: 3
       column: 0
@@ -53,8 +51,7 @@ expression: diagnostics
     row: 4
     column: 14
   fix:
-    content:
-      - tuple(x)
+    content: tuple(x)
     location:
       row: 4
       column: 0
@@ -73,8 +70,7 @@ expression: diagnostics
     row: 5
     column: 15
   fix:
-    content:
-      - tuple(x)
+    content: tuple(x)
     location:
       row: 5
       column: 0
@@ -93,8 +89,7 @@ expression: diagnostics
     row: 6
     column: 11
   fix:
-    content:
-      - set(x)
+    content: set(x)
     location:
       row: 6
       column: 0
@@ -113,8 +108,7 @@ expression: diagnostics
     row: 7
     column: 12
   fix:
-    content:
-      - set(x)
+    content: set(x)
     location:
       row: 7
       column: 0
@@ -133,8 +127,7 @@ expression: diagnostics
     row: 8
     column: 13
   fix:
-    content:
-      - set(x)
+    content: set(x)
     location:
       row: 8
       column: 0
@@ -153,8 +146,7 @@ expression: diagnostics
     row: 9
     column: 14
   fix:
-    content:
-      - set(x)
+    content: set(x)
     location:
       row: 9
       column: 0
@@ -173,8 +165,7 @@ expression: diagnostics
     row: 10
     column: 16
   fix:
-    content:
-      - set(x)
+    content: set(x)
     location:
       row: 10
       column: 0
@@ -193,8 +184,7 @@ expression: diagnostics
     row: 11
     column: 15
   fix:
-    content:
-      - sorted(x)
+    content: sorted(x)
     location:
       row: 11
       column: 0
@@ -213,8 +203,7 @@ expression: diagnostics
     row: 12
     column: 16
   fix:
-    content:
-      - sorted(x)
+    content: sorted(x)
     location:
       row: 12
       column: 0
@@ -233,8 +222,7 @@ expression: diagnostics
     row: 13
     column: 17
   fix:
-    content:
-      - sorted(x)
+    content: sorted(x)
     location:
       row: 13
       column: 0
@@ -253,8 +241,7 @@ expression: diagnostics
     row: 14
     column: 19
   fix:
-    content:
-      - sorted(x)
+    content: sorted(x)
     location:
       row: 14
       column: 0
@@ -273,11 +260,7 @@ expression: diagnostics
     row: 20
     column: 1
   fix:
-    content:
-      - tuple(
-      - " [x, 3, \"hell\"\\"
-      - " \"o\"]"
-      - " )"
+    content: "tuple(\n [x, 3, \"hell\"\\\n \"o\"]\n )"
     location:
       row: 15
       column: 0

@@ -1,5 +1,5 @@
 ---
-source: src/rules/flake8_comprehensions/mod.rs
+source: crates/ruff/src/rules/flake8_comprehensions/mod.rs
 expression: diagnostics
 ---
 - kind:
@@ -12,8 +12,7 @@ expression: diagnostics
     row: 2
     column: 14
   fix:
-    content:
-      - list(x)
+    content: list(x)
     location:
       row: 2
       column: 0
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 3
     column: 14
   fix:
-    content:
-      - set(x)
+    content: set(x)
     location:
       row: 3
       column: 0

@@ -12,8 +12,7 @@ expression: diagnostics
     row: 3
     column: 26
   fix:
-    content:
-      - (x + 1 for x in nums)
+    content: (x + 1 for x in nums)
     location:
       row: 3
       column: 0
@@ -31,8 +30,7 @@ expression: diagnostics
     row: 4
     column: 27
   fix:
-    content:
-      - (str(x) for x in nums)
+    content: (str(x) for x in nums)
     location:
       row: 4
       column: 0
@@ -50,8 +48,7 @@ expression: diagnostics
     row: 5
     column: 32
   fix:
-    content:
-      - "[x * 2 for x in nums]"
+    content: "[x * 2 for x in nums]"
     location:
       row: 5
       column: 0
@@ -69,8 +66,7 @@ expression: diagnostics
     row: 6
     column: 36
   fix:
-    content:
-      - "{x % 2 == 0 for x in nums}"
+    content: "{x % 2 == 0 for x in nums}"
     location:
       row: 6
       column: 0
@@ -88,8 +84,7 @@ expression: diagnostics
     row: 7
     column: 36
   fix:
-    content:
-      - "{v: v**2 for v in nums}"
+    content: "{v: v**2 for v in nums}"
     location:
       row: 7
       column: 0
@@ -107,8 +102,7 @@ expression: diagnostics
     row: 8
     column: 26
   fix:
-    content:
-      - "(\"const\" for _ in nums)"
+    content: "(\"const\" for _ in nums)"
     location:
       row: 8
       column: 0
@@ -126,8 +120,7 @@ expression: diagnostics
     row: 9
     column: 24
   fix:
-    content:
-      - (3.0 for _ in nums)
+    content: (3.0 for _ in nums)
     location:
       row: 9
       column: 0
@@ -145,8 +138,7 @@ expression: diagnostics
     row: 10
     column: 63
   fix:
-    content:
-      - "(x in nums and \"1\" or \"0\" for x in range(123))"
+    content: "(x in nums and \"1\" or \"0\" for x in range(123))"
     location:
       row: 10
       column: 12
@@ -164,8 +156,7 @@ expression: diagnostics
     row: 11
     column: 44
   fix:
-    content:
-      - "(isinstance(v, dict) for v in nums)"
+    content: "(isinstance(v, dict) for v in nums)"
     location:
       row: 11
       column: 4
@@ -183,8 +174,7 @@ expression: diagnostics
     row: 12
     column: 35
   fix:
-    content:
-      - (v for v in nums)
+    content: (v for v in nums)
     location:
       row: 12
       column: 13
@@ -202,8 +192,7 @@ expression: diagnostics
     row: 15
     column: 43
   fix:
-    content:
-      - " {x % 2 == 0 for x in nums} "
+    content: " {x % 2 == 0 for x in nums} "
     location:
       row: 15
       column: 7
@@ -221,8 +210,7 @@ expression: diagnostics
     row: 16
     column: 43
   fix:
-    content:
-      - " {v: v**2 for v in nums} "
+    content: " {v: v**2 for v in nums} "
     location:
       row: 16
       column: 7
