Compare commits

...

59 Commits

Author SHA1 Message Date
Jonathan Plasse
cb588d1d6d Allow TID252 to fix all valid module paths (#3796) 2023-03-29 15:13:12 -04:00
Charlie Marsh
9d3b8eb67b Bump version to v0.0.260 (#3799) 2023-03-29 14:51:50 -04:00
Charlie Marsh
e1e5532ab1 Add flymake-ruff to docs (#3800) 2023-03-29 18:48:59 +00:00
Andy Freeland
7d962bf80c [flake8-bugbear] Allow pathlib.Path() in B008 (#3794) 2023-03-29 15:42:43 +00:00
Micha Reiser
595cd065f3 Reduce explcit clones (#3793) 2023-03-29 15:15:14 +02:00
Anže Starič
b6f1fed424 [isort]: support submodules in known_(first|third)_party config options (#3768) 2023-03-29 03:53:38 +00:00
Jonathan Plasse
5501fc9572 Exempt return with side effects for TRY300 (#3780) 2023-03-28 19:52:05 -04:00
Charlie Marsh
5977862a60 Enumerate all codes in default configuration example (#3790) 2023-03-28 23:36:22 +00:00
Leiser Fernández Gallo
224e85c6d7 Implement flake8-gettext (#3785) 2023-03-28 23:32:02 +00:00
Charlie Marsh
515e436cfa Clarify order of pre-commit hooks (#3789) 2023-03-28 23:15:36 +00:00
Charlie Marsh
f322bcd2bd Minor nits on reference names (#3786) 2023-03-28 22:18:19 +00:00
Charlie Marsh
22d5b0071d Rename end_of_statement to end_of_last_statement (#3775) 2023-03-28 12:31:06 -04:00
Charlie Marsh
990b378c4d Set parents even in same-line cases (#3773) 2023-03-28 12:09:30 -04:00
Charlie Marsh
e88fbae926 Use import alias locations for pep8-naming import rules (#3772) 2023-03-28 11:41:23 -04:00
Charlie Marsh
81de3a16bc Include with statements in complexity calculation (#3771) 2023-03-28 15:20:22 +00:00
Andy Freeland
bfecf684ce [flake8-bugbear] Add more immutable functions for B008 (#3764) 2023-03-28 10:50:05 -04:00
konstin
756e9956a2 Fix cargo test --doc (#3766) 2023-03-28 11:36:07 +00:00
Micha Reiser
f68c26a506 perf(pycodestyle): Initialize Stylist from tokens (#3757) 2023-03-28 11:53:35 +02:00
Micha Reiser
000394f428 perf(pycodestyle): Introduce TokenKind (#3745) 2023-03-28 11:22:39 +02:00
Micha Reiser
2fdf98ef4e perf(pycodestyle): Refactor checks to iterate over tokens insteadof text (#3736) 2023-03-28 10:37:13 +02:00
Micha Reiser
1d724b1495 perf(pycodestyle): Remove regex captures (#3735) 2023-03-28 09:50:34 +02:00
Micha Reiser
113a8b8fda perf(pycodestyle): Reduce allocations when computing logical lines (#3715) 2023-03-28 09:09:27 +02:00
Charlie Marsh
c3917eab38 Revert "Implement flake8-i18n (#3741)" (#3765) 2023-03-27 21:14:38 +00:00
JBLDSKY
0eb5a22dd1 [flake8-pyi] Implement PYI012 (#3743) 2023-03-27 18:27:24 +00:00
Charlie Marsh
450c6780ff Avoid useless-import alias (C0414) in .pyi files (#3761) 2023-03-27 18:27:03 +00:00
Leiser Fernández Gallo
5cb120327c Implement flake8-i18n (#3741) 2023-03-27 18:03:39 +00:00
trag1c
8dbffb576d Removed unnecessary pipe escape (#3760) 2023-03-27 13:49:47 -04:00
Charlie Marsh
31fff4b10e Disallow some restriction lints (#3754) 2023-03-26 23:20:20 +00:00
Jonathan Plasse
2326335f5c Improve performance of statistics (#3751) 2023-03-26 18:46:44 -04:00
Charlie Marsh
6ed6da3e82 Move fix::FixMode to flags::FixMode (#3753) 2023-03-26 21:40:06 +00:00
Jonathan Plasse
cd75b57036 Sort statistics by count (#3748) 2023-03-26 16:45:35 -04:00
Charlie Marsh
e603382cf0 Allow diagnostics to generate multi-edit fixes (#3709) 2023-03-26 16:45:19 -04:00
Charlie Marsh
32be63fd1e Avoid overlong-line errors for lines that end with URLs (#3663) 2023-03-26 18:17:35 +00:00
Jonathan Plasse
d594179275 Fix SIM222 and SIM223 false negatives (#3740) 2023-03-26 18:09:11 +00:00
Agriya Khetarpal
c0befb4670 Use wild::args() and add wild as a dependency (#3739) 2023-03-26 14:32:45 +00:00
Charlie Marsh
a66481ed28 Rename setter methods on Diagnostic (#3738) 2023-03-26 10:28:30 -04:00
Charlie Marsh
5c7898124f Traverse over nested string type annotations (#3724) 2023-03-25 21:56:09 -04:00
Jonathan Plasse
50a7916e84 [pydocstyle] Implement autofix for D403 (#3731) 2023-03-25 19:21:45 +00:00
Charlie Marsh
6a40a5c5a2 Add a note on src (#3733) 2023-03-25 16:18:34 +00:00
Jonathan Plasse
fec4fa39a7 Improve add_rule.py and add_plugin.py scripts (#3725) 2023-03-25 16:05:39 +00:00
Dhruv Manilawala
2659336ed1 Add support for .log(level, msg) calls in flake8-logging-format (#3726) 2023-03-25 15:55:53 +00:00
Jonathan Plasse
8ac7584756 [flake8-pyi] Implement PYI015 (#3728) 2023-03-25 15:48:11 +00:00
Jonathan Plasse
4a1740a4c4 [flake8-pyi] Add autofix for PYI014 (#3729) 2023-03-25 15:41:11 +00:00
Charlie Marsh
2083134a96 Rename Fix to Edit (#3702) 2023-03-24 19:29:14 -04:00
Charlie Marsh
c721eedc37 Remove 'b lifetime from Checker (#3723) 2023-03-24 21:42:18 +00:00
Dhruv Manilawala
c1d89d8c93 [flake8-bugbear]: Implement rule B031 (#3680) 2023-03-24 17:26:11 -04:00
Jonathan Plasse
b8ae1e0e05 Add pre-commit in CI (#3707) 2023-03-24 17:20:13 -04:00
Dhruv Manilawala
63adf9f5e8 Allow aliased logging module as a logger candidate (#3718) 2023-03-24 17:19:09 -04:00
Micha Reiser
7af83460ce Use unicode-width to determine line-length instead of character count (#3714) 2023-03-24 17:17:05 -04:00
Jonathan Plasse
dc4d7619ee Add Diagnostic.try_amend() to simplify error handling (#3701) 2023-03-24 17:10:11 -04:00
Jonathan Plasse
1bac206995 Revert "Replace logical_lines feature with debug_assertions (#3648)" (#3708) 2023-03-23 23:42:56 -04:00
Jonathan Plasse
efc6e8cb39 Exempt PLR1711 and RET501 if non-None annotation (#3705) 2023-03-24 03:11:58 +00:00
Jonathan Plasse
7f3b748401 Fix Ruff pre-commit hook errors (#3706) 2023-03-23 22:52:24 -04:00
Jonathan Plasse
7da06b9741 Allow simple container literals as default values (#3703) 2023-03-23 22:51:36 -04:00
Charlie Marsh
0f95056f13 Avoid panics for implicitly concatenated forward references (#3700) 2023-03-23 19:13:50 -04:00
Charlie Marsh
028329854b Avoid parsing f-strings in type annotations (#3699) 2023-03-23 18:51:44 -04:00
Charlie Marsh
ba43d6bd0b Avoid parsing ForwardRef contents as type references (#3698) 2023-03-23 18:44:02 -04:00
Charlie Marsh
e8d17d23cb Expand the scope of useless-expression (B018) (#3455) 2023-03-23 18:33:58 -04:00
Jonathan Plasse
aea925a898 Fix SIM118 auto-fix (#3695) 2023-03-23 17:14:56 -04:00
1046 changed files with 24760 additions and 16832 deletions
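One change in the list above, "Use unicode-width to determine line-length instead of character count" (#3714), is easy to illustrate. A rough Python sketch of the idea — the real implementation uses the Rust `unicode-width` crate; `display_width` here is a hypothetical helper approximating it via East Asian width classes:

```python
import unicodedata

def display_width(line: str) -> int:
    """Approximate terminal column width of a line.

    Wide and fullwidth characters (e.g. CJK) occupy two columns;
    everything else counts as one. This is a simplification of what
    the unicode-width crate computes.
    """
    return sum(
        2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
        for ch in line
    )

# A CJK line can hit a column limit with far fewer characters:
assert display_width("abcdef") == 6
assert display_width("你好世界") == 8
```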


@@ -26,4 +26,11 @@ rustflags = [
"-Wclippy::print_stdout",
"-Wclippy::print_stderr",
"-Wclippy::dbg_macro",
"-Wclippy::empty_drop",
"-Wclippy::empty_structs_with_brackets",
"-Wclippy::exit",
"-Wclippy::get_unwrap",
"-Wclippy::rc_buffer",
"-Wclippy::rc_mutex",
"-Wclippy::rest_pat_in_fully_bound_structs",
]


@@ -110,14 +110,16 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: "Install Rust toolchain"
run: rustup show
run: rustup component add rustfmt
- uses: Swatinem/rust-cache@v2
- run: ./scripts/add_rule.py --name DoTheThing --code PLC999 --linter pylint
- run: ./scripts/add_rule.py --name DoTheThing --prefix PL --code C0999 --linter pylint
- run: cargo check
- run: cargo fmt --all --check
- run: |
./scripts/add_plugin.py test --url https://pypi.org/project/-test/0.1.0/ --prefix TST
./scripts/add_rule.py --name FirstRule --code TST001 --linter test
./scripts/add_rule.py --name FirstRule --prefix TST --code 001 --linter test
- run: cargo check
- run: cargo fmt --all --check
typos:
name: "spell check"
@@ -198,3 +200,29 @@ jobs:
echo '```' >> $GITHUB_STEP_SUMMARY
exit 1
fi
pre-commit:
name: "pre-commit"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: "3.11"
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@v2
- name: "Install pre-commit"
run: pip install pre-commit
- name: "Cache pre-commit"
uses: actions/cache@v3
with:
path: ~/.cache/pre-commit
key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: "Run pre-commit"
run: |
echo '```console' > $GITHUB_STEP_SUMMARY
# Enable color output for pre-commit and remove it for the summary
SKIP=cargo-fmt,clippy,dev-generate-all pre-commit run --all-files --show-diff-on-failure --color=always | \
tee >(sed -E 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})*)?[mGK]//g' >> $GITHUB_STEP_SUMMARY) >&1
echo '```' >> $GITHUB_STEP_SUMMARY

Cargo.lock generated

@@ -132,12 +132,6 @@ dependencies = [
"serde",
]
[[package]]
name = "bisection"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "021e079a1bab0ecce6cf4b4b74c0c37afa4a697136eb3b127875c84a8f04a8c3"
[[package]]
name = "bit-set"
version = "0.5.3"
@@ -780,7 +774,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.259"
version = "0.0.260"
dependencies = [
"anyhow",
"clap 4.1.8",
@@ -1982,10 +1976,9 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.259"
version = "0.0.260"
dependencies = [
"anyhow",
"bisection",
"bitflags",
"chrono",
"clap 4.1.8",
@@ -2033,6 +2026,8 @@ dependencies = [
"textwrap",
"thiserror",
"toml",
"typed-arena",
"unicode-width",
]
[[package]]
@@ -2063,7 +2058,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.259"
version = "0.0.260"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2102,6 +2097,7 @@ dependencies = [
"tikv-jemallocator",
"ureq",
"walkdir",
"wild",
]
[[package]]
@@ -2131,6 +2127,8 @@ dependencies = [
name = "ruff_diagnostics"
version = "0.0.0"
dependencies = [
"anyhow",
"log",
"ruff_python_ast",
"rustpython-parser",
"serde",
@@ -2254,6 +2252,7 @@ dependencies = [
"js-sys",
"log",
"ruff",
"ruff_diagnostics",
"ruff_python_ast",
"ruff_rustpython",
"rustpython-parser",
@@ -2916,6 +2915,12 @@ dependencies = [
"static_assertions",
]
[[package]]
name = "typed-arena"
version = "2.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6af6ae20167a9ece4bcb41af5b80f8a1f1df981f6391189ce00fd257af04126a"
[[package]]
name = "typenum"
version = "1.16.0"
@@ -3262,6 +3267,15 @@ version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17882f045410753661207383517a6f62ec3dbeb6a4ed2acce01f0728238d1983"
[[package]]
name = "wild"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05b116685a6be0c52f5a103334cbff26db643826c7b3735fc0a3ba9871310a74"
dependencies = [
"glob",
]
[[package]]
name = "winapi"
version = "0.3.9"


@@ -49,8 +49,6 @@ toml = { version = "0.7.2" }
[profile.release]
lto = "fat"
codegen-units = 1
opt-level = 3
[profile.dev.package.insta]
opt-level = 3


@@ -195,6 +195,15 @@ are:
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
- flake8-gettext, licensed as follows:
"""
BSD Zero Clause License
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""
- flake8-implicit-str-concat, licensed as follows:
"""
The MIT License (MIT)


@@ -137,7 +137,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.259'
rev: 'v0.0.260'
hooks:
- id: ruff
```
@@ -160,7 +160,7 @@ select = ["E", "F"]
ignore = []
# Allow autofix for all enabled rules (when `--fix`) is provided.
fixable = ["A", "B", "C", "D", "E", "F", "..."]
fixable = ["A", "B", "C", "D", "E", "F", "G", "I", "N", "Q", "S", "T", "W", "ANN", "ARG", "BLE", "COM", "DJ", "DTZ", "EM", "ERA", "EXE", "FBT", "ICN", "INP", "ISC", "NPY", "PD", "PGH", "PIE", "PL", "PT", "PTH", "PYI", "RET", "RSE", "RUF", "SIM", "SLF", "TCH", "TID", "TRY", "UP", "YTT"]
unfixable = []
# Exclude a variety of commonly ignored directories.
@@ -248,6 +248,7 @@ quality tools, including:
- [flake8-eradicate](https://pypi.org/project/flake8-eradicate/)
- [flake8-errmsg](https://pypi.org/project/flake8-errmsg/)
- [flake8-executable](https://pypi.org/project/flake8-executable/)
- [flake8-gettext](https://pypi.org/project/flake8-gettext/)
- [flake8-implicit-str-concat](https://pypi.org/project/flake8-implicit-str-concat/)
- [flake8-import-conventions](https://github.com/joaopalmeiro/flake8-import-conventions)
- [flake8-logging-format](https://pypi.org/project/flake8-logging-format/)


@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.259"
version = "0.0.260"
edition = { workspace = true }
rust-version = { workspace = true }


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.0.259"
version = "0.0.260"
authors.workspace = true
edition.workspace = true
rust-version.workspace = true
@@ -22,7 +22,6 @@ ruff_python_stdlib = { path = "../ruff_python_stdlib" }
ruff_rustpython = { path = "../ruff_rustpython" }
anyhow = { workspace = true }
bisection = { version = "0.1.0" }
bitflags = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true, features = ["derive", "string"], optional = true }
@@ -58,6 +57,7 @@ rustpython-parser = { workspace = true }
schemars = { workspace = true }
semver = { version = "1.0.16" }
serde = { workspace = true }
serde_json = { workspace = true }
shellexpand = { workspace = true }
smallvec = { version = "1.10.0" }
strum = { workspace = true }
@@ -65,7 +65,8 @@ strum_macros = { workspace = true }
textwrap = { workspace = true }
thiserror = { version = "1.0.38" }
toml = { workspace = true }
serde_json = { workspace = true }
typed-arena = { version = "2.0.2" }
unicode-width = {version ="0.1.10"}
[dev-dependencies]
insta = { workspace = true, features = ["yaml", "redactions"] }
@@ -74,4 +75,5 @@ test-case = { workspace = true }
[features]
default = []
logical_lines = []
jupyter_notebook = []


@@ -1,7 +1,9 @@
import collections
import datetime as dt
from decimal import Decimal
import logging
import operator
from pathlib import Path
import random
import re
import time
@@ -165,6 +167,26 @@ def float_str_not_inf_or_nan_is_wrong(value=float("3.14")):
pass
# Allow decimals
def decimal_okay(value=Decimal("0.1")):
pass
# Allow dates
def date_okay(value=dt.date(2023, 3, 27)):
pass
# Allow datetimes
def datetime_okay(value=dt.datetime(2023, 3, 27, 13, 51, 59)):
pass
# Allow timedeltas
def timedelta_okay(value=dt.timedelta(hours=1)):
pass
# Allow paths
def path_okay(value=Path(".")):
pass
# B006 and B008
# We should handle arbitrary nesting of these B008.
def nested_combo(a=[float(3), dt.datetime.now()]):
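The B008 fixture above exists because Python evaluates default-argument expressions once, at function definition time; immutable, deterministic constructors like `Decimal`, `dt.date`, and `Path` are exempted precisely because re-evaluation would change nothing. A minimal demonstration of the underlying pitfall:

```python
import datetime as dt

def logged_at(when=dt.datetime.now()):  # B008-style pitfall: evaluated once
    return when

def logged_at_fixed(when=None):
    # Evaluate the default at call time instead.
    return when if when is not None else dt.datetime.now()

# The flawed version returns the very same object on every call.
assert logged_at() is logged_at()
# The fixed version produces a fresh value per call.
assert logged_at_fixed() is not logged_at_fixed()
```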


@@ -57,3 +57,9 @@ def foo3():
def foo4():
...
def foo5():
foo.bar # Attribute (raise)
object().__class__ # Attribute (raise)
"foo" + "bar" # BinOp (raise)


@@ -0,0 +1,112 @@
import itertools
from itertools import groupby
shoppers = ["Jane", "Joe", "Sarah"]
items = [
("lettuce", "greens"),
("tomatoes", "greens"),
("cucumber", "greens"),
("chicken breast", "meats & fish"),
("salmon", "meats & fish"),
("ice cream", "frozen items"),
]
carts = {shopper: [] for shopper in shoppers}
def collect_shop_items(shopper, items):
# Imagine this an expensive database query or calculation that is
# advantageous to batch.
carts[shopper] += items
# Invoking the `groupby` function directly
for _section, section_items in groupby(items, key=lambda p: p[1]):
for shopper in shoppers:
shopper = shopper.title()
collect_shop_items(shopper, section_items) # B031
# We're outside the nested loop and used the group again.
collect_shop_items(shopper, section_items) # B031
for _section, section_items in groupby(items, key=lambda p: p[1]):
collect_shop_items("Jane", section_items)
collect_shop_items("Joe", section_items) # B031
# Make sure to detect in other loop constructs as well - `while` loop
for _section, section_items in groupby(items, key=lambda p: p[1]):
countdown = 3
while countdown > 0:
collect_shop_items(shopper, section_items) # B031
countdown -= 1
# Make sure to detect in other loop constructs as well - `list` comprehension
collection = []
for _section, section_items in groupby(items, key=lambda p: p[1]):
collection.append([list(section_items) for _ in range(3)]) # B031
unique_items = set()
another_set = set()
for _section, section_items in groupby(items, key=lambda p: p[1]):
# For nested loops, it should not flag the usage of the name
for item in section_items:
unique_items.add(item)
# But it should be detected when used again
for item in section_items: # B031
another_set.add(item)
for _section, section_items in groupby(items, key=lambda p: p[1]):
# Variable has been overridden, skip checking
section_items = list(unique_items)
collect_shop_items("Jane", section_items)
collect_shop_items("Jane", section_items)
for _section, section_items in groupby(items, key=lambda p: p[1]):
# Variable has been overridden, skip checking
# Not a realistic situation, just for testing purpose
(section_items := list(unique_items))
collect_shop_items("Jane", section_items)
collect_shop_items("Jane", section_items)
for _section, section_items in groupby(items, key=lambda p: p[1]):
# This is ok
collect_shop_items("Jane", section_items)
# Invocation via the `itertools` module
for _section, section_items in itertools.groupby(items, key=lambda p: p[1]):
for shopper in shoppers:
collect_shop_items(shopper, section_items) # B031
for group in groupby(items, key=lambda p: p[1]):
# This is bad, but not detected currently
collect_shop_items("Jane", group[1])
collect_shop_items("Joe", group[1])
# Make sure we ignore - but don't fail on more complicated invocations
for _key, (_value1, _value2) in groupby(
[("a", (1, 2)), ("b", (3, 4)), ("a", (5, 6))], key=lambda p: p[1]
):
collect_shop_items("Jane", group[1])
collect_shop_items("Joe", group[1])
# Make sure we ignore - but don't fail on more complicated invocations
for (_key1, _key2), (_value1, _value2) in groupby(
[(("a", "a"), (1, 2)), (("b", "b"), (3, 4)), (("a", "a"), (5, 6))],
key=lambda p: p[1],
):
collect_shop_items("Jane", group[1])
collect_shop_items("Joe", group[1])
# Let's redefine the `groupby` function to make sure we pick up the correct one.
# NOTE: This should always be at the end of the file.
def groupby(data, key=None):
pass
for name, group in groupby(items):
collect_shop_items("Jane", items)
# This shouldn't be flagged as the `groupby` function is different
collect_shop_items("Joe", items)
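The B031 fixture revolves around a real `itertools.groupby` pitfall: each group is a view over the shared underlying iterator, so it can be consumed at most once. A standalone demonstration:

```python
from itertools import groupby

items = [("lettuce", "greens"), ("tomatoes", "greens"), ("salmon", "meats")]

counts = {}
for section, group in groupby(items, key=lambda p: p[1]):
    first = list(group)   # consumes the group
    second = list(group)  # already exhausted: always empty
    counts[section] = (len(first), len(second))

# Every second pass over a group yields nothing.
assert counts == {"greens": (2, 0), "meats": (1, 0)}
```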


@@ -0,0 +1 @@
_(f"{'value'}")


@@ -0,0 +1 @@
_("{}".format("line"))


@@ -0,0 +1 @@
_("%s" % "line")
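These three one-line flake8-gettext fixtures share the same defect: the string is formatted before it reaches `_()`, so the translation catalog is queried with the already-interpolated text rather than the stable template. A sketch of why that matters, using `gettext.NullTranslations` so it runs without a catalog:

```python
import gettext

_ = gettext.NullTranslations().gettext
name = "World"

# Flagged pattern: the catalog lookup key is the formatted result
# ("Hello World"), which will never match an entry for the template.
flagged = _(f"Hello {name}")

# Preferred: translate the template, then interpolate.
preferred = _("Hello {}").format(name)

assert flagged == "Hello World"
assert preferred == "Hello World"
```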


@@ -1,3 +1,9 @@
import logging
import logging as foo
logging.info("Hello {}".format("World!"))
logging.log(logging.INFO, "Hello {}".format("World!"))
foo.info("Hello {}".format("World!"))
logging.log(logging.INFO, msg="Hello {}".format("World!"))
logging.log(level=logging.INFO, msg="Hello {}".format("World!"))
logging.log(msg="Hello {}".format("World!"), level=logging.INFO)
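This fixture exercises the `.log(level, msg)` call forms, including keyword arguments, added to flake8-logging-format detection in #3726. The underlying advice is to let `logging` interpolate lazily; a small sketch with a hypothetical list-collecting handler:

```python
import logging

captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

logger = logging.getLogger("g-demo")
logger.addHandler(ListHandler())
logger.setLevel(logging.INFO)

# Flagged: the string is formatted eagerly, even when the level is disabled.
logger.log(logging.INFO, "Hello {}".format("World!"))
# Preferred: formatting is deferred until the record is actually emitted.
logger.log(logging.INFO, "Hello %s", "World!")

assert captured == ["Hello World!", "Hello World!"]
```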


@@ -1,3 +1,4 @@
import logging
logging.info("Hello %s" % "World!")
logging.log(logging.INFO, "Hello %s" % "World!")


@@ -1,3 +1,4 @@
import logging
logging.info("Hello" + " " + "World!")
logging.log(logging.INFO, "Hello" + " " + "World!")


@@ -2,3 +2,4 @@ import logging
name = "world"
logging.info(f"Hello {name}")
logging.log(logging.INFO, f"Hello {name}")


@@ -1,79 +1,161 @@
import math
import os
import sys
from math import inf
import numpy as np
def f12(
x,
y: str = os.pathsep, # OK
) -> None:
...
def f11(*, x: str = "x") -> None: # OK
...
) -> None: ...
def f11(*, x: str = "x") -> None: ... # OK
def f13(
x: list[str] = [
x: list[str] = [ # OK
"foo",
"bar",
"baz",
] # OK
) -> None:
...
]
) -> None: ...
def f14(
x: tuple[str, ...] = (
x: tuple[str, ...] = ( # OK
"foo",
"bar",
"baz",
) # OK
) -> None:
...
)
) -> None: ...
def f15(
x: set[str] = {
x: set[str] = { # OK
"foo",
"bar",
"baz",
} # OK
) -> None:
...
def f16(x: frozenset[bytes] = frozenset({b"foo", b"bar", b"baz"})) -> None: # OK
...
}
) -> None: ...
def f151(x: dict[int, int] = {1: 2}) -> None: ... # Ok
def f152(
x: dict[
int, int
] = { # OK
1: 2,
**{3: 4},
}
) -> None: ...
def f153(
x: list[
int
] = [ # OK
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
]
) -> None: ...
def f154(
x: tuple[
str, tuple[str, ...]
] = ( # OK
"foo",
("bar", "baz"),
)
) -> None: ...
def f141(
x: list[
int
] = [ # OK
*range(10)
],
) -> None: ...
def f142(
x: list[
int
] = list( # OK
range(10)
),
) -> None: ...
def f16(
x: frozenset[
bytes
] = frozenset( # OK
{b"foo", b"bar", b"baz"}
)
) -> None: ...
def f17(
x: str = "foo" + "bar", # OK
) -> None:
...
x: str = "foo" # OK
+ "bar",
) -> None: ...
def f18(
x: str = b"foo" + b"bar", # OK
) -> None:
...
x: str = b"foo" # OK
+ b"bar",
) -> None: ...
def f19(
x: object = "foo" + 4, # OK
) -> None:
...
x: object = "foo" # OK
+ 4,
) -> None: ...
def f20(
x: int = 5 + 5, # OK
) -> None:
...
x: int = 5
+ 5, # OK
) -> None: ...
def f21(
x: complex = 3j - 3j, # OK
) -> None:
...
x: complex = 3j
- 3j, # OK
) -> None: ...
def f22(
x: complex = -42.5j + 4.3j, # OK
) -> None:
...
x: complex = -42.5j # OK
+ 4.3j,
) -> None: ...
def f23(
x: bool = True, # OK
) -> None: ...
def f24(
x: float = 3.14, # OK
) -> None: ...
def f25(
x: float = -3.14, # OK
) -> None: ...
def f26(
x: complex = -3.14j, # OK
) -> None: ...
def f27(
x: complex = -3 - 3.14j, # OK
) -> None: ...
def f28(
x: float = math.tau, # OK
) -> None: ...
def f29(
x: float = math.inf, # OK
) -> None: ...
def f30(
x: float = -math.inf, # OK
) -> None: ...
def f31(
x: float = inf, # OK
) -> None: ...
def f32(
x: float = np.inf, # OK
) -> None: ...
def f33(
x: float = math.nan, # OK
) -> None: ...
def f34(
x: float = -math.nan, # OK
) -> None: ...
def f35(
x: complex = math.inf # OK
+ 1j,
) -> None: ...
def f36(
*,
x: str = sys.version, # OK
) -> None: ...
def f37(
*,
x: str = "" # OK
+ "",
) -> None: ...


@@ -11,32 +11,74 @@ def f12(
) -> None: ...
def f11(*, x: str = "x") -> None: ... # OK
def f13(
x: list[
str
] = [ # Error PYI011 Only simple default values allowed for typed arguments
x: list[str] = [ # OK
"foo",
"bar",
"baz",
]
) -> None: ...
def f14(
x: tuple[
str, ...
] = ( # Error PYI011 Only simple default values allowed for typed arguments
x: tuple[str, ...] = ( # OK
"foo",
"bar",
"baz",
)
) -> None: ...
def f15(
x: set[
str
] = { # Error PYI011 Only simple default values allowed for typed arguments
x: set[str] = { # OK
"foo",
"bar",
"baz",
}
) -> None: ...
def f151(x: dict[int, int] = {1: 2}) -> None: ... # Ok
def f152(
x: dict[
int, int
] = { # Error PYI011 Only simple default values allowed for typed arguments
1: 2,
**{3: 4},
}
) -> None: ...
def f153(
x: list[
int
] = [ # Error PYI011 Only simple default values allowed for typed arguments
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
]
) -> None: ...
def f154(
x: tuple[
str, tuple[str, ...]
] = ( # Error PYI011 Only simple default values allowed for typed arguments
"foo",
("bar", "baz"),
)
) -> None: ...
def f141(
x: list[
int
] = [ # Error PYI011 Only simple default values allowed for typed arguments
*range(10)
],
) -> None: ...
def f142(
x: list[
int
] = list( # Error PYI011 Only simple default values allowed for typed arguments
range(10)
),
) -> None: ...
def f16(
x: frozenset[
bytes
@@ -109,8 +151,11 @@ def f35(
+ 1j,
) -> None: ...
def f36(
*, x: str = sys.version, # OK
*,
x: str = sys.version, # OK
) -> None: ...
def f37(
*, x: str = "" + "", # Error PYI011 Only simple default values allowed for typed arguments
*,
x: str = "" # Error PYI011 Only simple default values allowed for typed arguments
+ "",
) -> None: ...


@@ -0,0 +1,75 @@
# Violations of PYI012
class OneAttributeClass:
value: int
pass # PYI012 Class body must not contain `pass`
class OneAttributeClassRev:
pass # PYI012 Class body must not contain `pass`
value: int
class DocstringClass:
"""
My body only contains pass.
"""
pass # PYI012 Class body must not contain `pass`
class NonEmptyChild(Exception):
value: int
pass # PYI012 Class body must not contain `pass`
class NonEmptyChild2(Exception):
pass # PYI012 Class body must not contain `pass`
value: int
class NonEmptyWithInit:
value: int
pass # PYI012 Class body must not contain `pass`
def __init__():
pass
# Not violations (of PYI012)
class EmptyClass:
pass # Y009 Empty body should contain `...`, not `pass`
class EmptyOneLine:
pass # Y009 Empty body should contain `...`, not `pass`
class Dog:
eyes: int = 2
class EmptyEllipsis:
...
class NonEmptyEllipsis:
value: int
... # Y013 Non-empty class body must not contain `...`
class WithInit:
value: int = 0
def __init__():
pass
def function():
pass
pass


@@ -0,0 +1,59 @@
# Violations of PYI012
class OneAttributeClass:
value: int
pass # PYI012 Class body must not contain `pass`
class OneAttributeClassRev:
pass # PYI012 Class body must not contain `pass`
value: int
class DocstringClass:
"""
My body only contains pass.
"""
pass # PYI012 Class body must not contain `pass`
class NonEmptyChild(Exception):
value: int
pass # PYI012 Class body must not contain `pass`
class NonEmptyChild2(Exception):
pass # PYI012 Class body must not contain `pass`
value: int
class NonEmptyWithInit:
value: int
pass # PYI012 Class body must not contain `pass`
def __init__():
pass
# Not violations (of PYI012)
class EmptyClass:
pass # Y009 Empty body should contain `...`, not `pass`
class EmptyOneLine:
pass # Y009 Empty body should contain `...`, not `pass`
class Dog:
eyes: int = 2
class EmptyEllipsis: ...
class NonEmptyEllipsis:
value: int
... # Y013 Non-empty class body must not contain `...`
class WithInit:
value: int = 0
def __init__():
pass
def function():
pass
pass


@@ -39,6 +39,58 @@ def f15(
...
def f151(x={1: 2}) -> None: # Ok
...
def f152(
x={ # OK
1: 2,
**{3: 4},
}
) -> None:
...
def f153(
x=[ # OK
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
]
) -> None:
...
def f154(
x=( # OK
"foo",
("bar", "baz"),
)
) -> None:
...
def f141(
x=[*range(10)], # OK
) -> None:
...
def f142(
x=list(range(10)), # OK
) -> None:
...
def f16(x=frozenset({b"foo", b"bar", b"baz"})) -> None:
... # OK


@@ -4,26 +4,60 @@ def f12(
) -> None: ...
def f11(*, x="x") -> None: ... # OK
def f13(
x=[ # Error PYI014
x=[ # OK
"foo",
"bar",
"baz",
]
) -> None: ...
def f14(
x=( # Error PYI014
x=( # OK
"foo",
"bar",
"baz",
)
) -> None: ...
def f15(
x={ # Error PYI014
x={ # OK
"foo",
"bar",
"baz",
}
) -> None: ...
def f151(x={1: 2}) -> None: ...
def f152(
x={ # Error PYI014
1: 2,
**{3: 4},
}
) -> None: ...
def f153(
x=[ # Error PYI014
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
]
) -> None: ...
def f154(
x=( # Error PYI014
"foo",
("bar", "baz"),
)
) -> None: ...
def f141(
x=[*range(10)], # Error PYI014
) -> None: ...
def f142(
x=list(range(10)), # Error PYI014
) -> None: ...
def f16(x=frozenset({b"foo", b"bar", b"baz"})) -> None: ... # Error PYI014
def f17(
x="foo" + "bar", # Error PYI014
@@ -44,5 +78,5 @@ def f22(
x=-42.5j + 4.3j, # Error PYI014
) -> None: ...
def f23(
x=True, # OK
x=True, # OK
) -> None: ...


@@ -0,0 +1,48 @@
import builtins
import typing
from typing import TypeAlias, Final
field1: int
field2: int = ...
field3 = ... # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
field4: int = 0
field41: int = 0xFFFFFFFF
field42: int = 1234567890
field43: int = -0xFFFFFFFF
field44: int = -1234567890
field5 = 0 # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int") # Y052 Need type annotation for "field5"
field6 = 0 # Y052 Need type annotation for "field6"
field7 = b"" # Y052 Need type annotation for "field7"
field71 = "foo" # Y052 Need type annotation for "field71"
field72: str = "foo"
field8 = False # Y052 Need type annotation for "field8"
field81 = -1 # Y052 Need type annotation for "field81"
field82: float = -98.43
field83 = -42j # Y052 Need type annotation for "field83"
field84 = 5 + 42j # Y052 Need type annotation for "field84"
field85 = -5 - 42j # Y052 Need type annotation for "field85"
field9 = None # Y026 Use typing_extensions.TypeAlias for type aliases, e.g. "field9: TypeAlias = None"
Field95: TypeAlias = None
Field96: TypeAlias = int | None
Field97: TypeAlias = None | typing.SupportsInt | builtins.str | float | bool
field19 = [1, 2, 3] # Y052 Need type annotation for "field19"
field191: list[int] = [1, 2, 3]
field20 = (1, 2, 3) # Y052 Need type annotation for "field20"
field201: tuple[int, ...] = (1, 2, 3)
field21 = {1, 2, 3} # Y052 Need type annotation for "field21"
field211: set[int] = {1, 2, 3}
field212 = {"foo": "bar"} # Y052 Need type annotation for "field212"
field213: dict[str, str] = {"foo": "bar"}
field22: Final = {"foo": 5}
field221: list[int] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] # Y015 Only simple default values are allowed for assignments
field223: list[int] = [*range(10)] # Y015 Only simple default values are allowed for assignments
field224: list[int] = list(range(10)) # Y015 Only simple default values are allowed for assignments
field225: list[object] = [{}, 1, 2] # Y015 Only simple default values are allowed for assignments
field226: tuple[str | tuple[str, ...], ...] = ("foo", ("foo", "bar")) # Y015 Only simple default values are allowed for assignments
field227: dict[str, object] = {"foo": {"foo": "bar"}} # Y015 Only simple default values are allowed for assignments
field228: dict[str, list[object]] = {"foo": []} # Y015 Only simple default values are allowed for assignments
# When parsed, this case results in `None` being placed in the `.keys` list for the `ast.Dict` node
field229: dict[int, int] = {1: 2, **{3: 4}} # Y015 Only simple default values are allowed for assignments
field23 = "foo" + "bar" # Y015 Only simple default values are allowed for assignments
field24 = b"foo" + b"bar" # Y015 Only simple default values are allowed for assignments
field25 = 5 * 5 # Y015 Only simple default values are allowed for assignments


@@ -0,0 +1,51 @@
import builtins
import typing
from typing import TypeAlias, Final
# We shouldn't emit Y015 for simple default values
field1: int
field2: int = ...
field3 = ... # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
field4: int = 0
field41: int = 0xFFFFFFFF
field42: int = 1234567890
field43: int = -0xFFFFFFFF
field44: int = -1234567890
field5 = 0 # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int") # Y052 Need type annotation for "field5"
field6 = 0 # Y052 Need type annotation for "field6"
field7 = b"" # Y052 Need type annotation for "field7"
field71 = "foo" # Y052 Need type annotation for "field71"
field72: str = "foo"
field8 = False # Y052 Need type annotation for "field8"
field81 = -1 # Y052 Need type annotation for "field81"
field82: float = -98.43
field83 = -42j # Y052 Need type annotation for "field83"
field84 = 5 + 42j # Y052 Need type annotation for "field84"
field85 = -5 - 42j # Y052 Need type annotation for "field85"
field9 = None # Y026 Use typing_extensions.TypeAlias for type aliases, e.g. "field9: TypeAlias = None"
Field95: TypeAlias = None
Field96: TypeAlias = int | None
Field97: TypeAlias = None | typing.SupportsInt | builtins.str | float | bool
field19 = [1, 2, 3] # Y052 Need type annotation for "field19"
field191: list[int] = [1, 2, 3]
field20 = (1, 2, 3) # Y052 Need type annotation for "field20"
field201: tuple[int, ...] = (1, 2, 3)
field21 = {1, 2, 3} # Y052 Need type annotation for "field21"
field211: set[int] = {1, 2, 3}
field212 = {"foo": "bar"} # Y052 Need type annotation for "field212"
field213: dict[str, str] = {"foo": "bar"}
field22: Final = {"foo": 5}
# We *should* emit Y015 for more complex default values
field221: list[int] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] # Y015 Only simple default values are allowed for assignments
field223: list[int] = [*range(10)] # Y015 Only simple default values are allowed for assignments
field224: list[int] = list(range(10)) # Y015 Only simple default values are allowed for assignments
field225: list[object] = [{}, 1, 2] # Y015 Only simple default values are allowed for assignments
field226: tuple[str | tuple[str, ...], ...] = ("foo", ("foo", "bar")) # Y015 Only simple default values are allowed for assignments
field227: dict[str, object] = {"foo": {"foo": "bar"}} # Y015 Only simple default values are allowed for assignments
field228: dict[str, list[object]] = {"foo": []} # Y015 Only simple default values are allowed for assignments
# When parsed, this case results in `None` being placed in the `.keys` list for the `ast.Dict` node
field229: dict[int, int] = {1: 2, **{3: 4}} # Y015 Only simple default values are allowed for assignments
field23 = "foo" + "bar" # Y015 Only simple default values are allowed for assignments
field24 = b"foo" + b"bar" # Y015 Only simple default values are allowed for assignments
field25 = 5 * 5 # Y015 Only simple default values are allowed for assignments
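The fixture above separates "simple" defaults (plain literals, signed numeric literals) from expressions that should trigger Y015. A rough sketch of that distinction using the `ast` module — an illustrative approximation, not flake8-pyi's actual implementation, and it deliberately ignores the short-container cases (`[1, 2, 3]` etc.) that the fixture also treats as simple:

```python
import ast

def is_simple_default(node: ast.expr) -> bool:
    # Plain literals (ints, strings, bytes, bools, None, complex) are "simple".
    if isinstance(node, ast.Constant):
        return True
    # A unary minus applied to a numeric literal, e.g. -0xFFFFFFFF or -42j.
    if (
        isinstance(node, ast.UnaryOp)
        and isinstance(node.op, ast.USub)
        and isinstance(node.operand, ast.Constant)
        and isinstance(node.operand.value, (int, float, complex))
    ):
        return True
    return False

expr = lambda s: ast.parse(s, mode="eval").body
assert is_simple_default(expr("-0xFFFFFFFF"))       # field43: no Y015
assert not is_simple_default(expr('"foo" + "bar"'))  # field23: Y015
```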


@@ -2,3 +2,13 @@ def x(y):
if not y:
return
return None # error
class BaseCache:
def get(self, key: str) -> str | None:
print(f"{key} not found")
return None
def get(self, key: str) -> None:
print(f"{key} not found")
return None


@@ -20,3 +20,5 @@ for key in list(obj.keys()):
{k: k for k in obj.keys()} # SIM118
(k for k in obj.keys()) # SIM118
key in (obj or {}).keys() # SIM118
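The SIM118 cases above flag redundant `.keys()` calls: membership tests and iteration on a dict already operate on its keys. A minimal demonstration that the rewrites are behavior-preserving:

```python
obj = {"a": 1, "b": 2}

# Membership: `in` on a dict already tests keys, so `.keys()` is redundant.
assert ("a" in obj) == ("a" in obj.keys())

# Iteration: iterating a dict yields its keys directly.
assert {k: k for k in obj} == {k: k for k in obj.keys()}

# The `(obj or {})` guard works identically without `.keys()`.
assert ("a" in (obj or {})) is True
```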


@@ -1,10 +1,10 @@
if a or True: # SIM223
if a or True: # SIM222
pass
if (a or b) or True: # SIM223
if (a or b) or True: # SIM222
pass
if a or (b or True): # SIM223
if a or (b or True): # SIM222
pass
if a and True: # OK
@@ -16,3 +16,16 @@ if True: # OK
def validate(self, value):
return json.loads(value) or True # OK
if a or f() or b or g() or True: # OK
pass
if a or f() or True or g() or b: # SIM222
pass
if True or f() or a or g() or b: # SIM222
pass
if a or True or f() or b or g(): # SIM222
pass
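The `# OK` vs `# SIM222` split in the hunk above hinges on short-circuiting: a trailing `True` never skips earlier side effects, while a `True` in the middle silences every call after it. A runnable illustration of that distinction:

```python
calls = []

def f():
    calls.append("f")
    return False

def g():
    calls.append("g")
    return False

a = b = False

# `True` last: every earlier operand still runs, so removing the chain
# would change which side effects fire -- marked OK in the fixture.
calls.clear()
result = a or f() or b or g() or True
assert result is True and calls == ["f", "g"]

# `True` in the middle: it short-circuits g() and b, so the tail is
# dead code -- the SIM222 case.
calls.clear()
result = a or f() or True or g() or b
assert result is True and calls == ["f"]
```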


@@ -12,3 +12,15 @@ if a or False:
if False:
pass
if a and f() and b and g() and False: # OK
pass
if a and f() and False and g() and b: # SIM223
pass
if False and f() and a and g() and b: # SIM223
pass
if a and False and f() and b and g(): # SIM223
pass


@@ -7,3 +7,4 @@ from ..protocol import commands, definitions, responses
from ..server import example
from .. import server
from . import logger, models
from ..protocol.UpperCaseModule import some_function


@@ -0,0 +1,2 @@
def some_function():
pass


@@ -0,0 +1,8 @@
import sys
import baz
from foo import bar, baz
from foo.bar import blah, blub
from foo.bar.baz import something
import foo
import foo.bar
import foo.bar.baz


@@ -56,7 +56,29 @@ sit amet consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labor
# OK
# https://loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong.url.com
# Not OK
# OK
_ = """
Source: https://github.com/PyCQA/pycodestyle/pull/258/files#diff-841c622497a8033d10152bfdfb15b20b92437ecdea21a260944ea86b77b51533
"""
# OK
_ = """
[this-is-ok](https://github.com/PyCQA/pycodestyle/pull/258/files#diff-841c622497a8033d10152bfdfb15b20b92437ecdea21a260944ea86b77b51533)
[this is ok](https://github.com/PyCQA/pycodestyle/pull/258/files#diff-841c622497a8033d10152bfdfb15b20b92437ecdea21a260944ea86b77b51533)
"""
# OK
class Foo:
"""
@see https://looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong.url.com
:param dynamodb_scan_kwargs: kwargs pass to <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Table.scan>
"""
# Error
class Bar:
"""
This is a long sentence that ends with a shortened URL and, therefore, could easily be broken across multiple lines ([source](https://ruff.rs))
"""


@@ -0,0 +1,15 @@
def bad_function():
"""this docstring is not capitalized"""
def good_function():
"""This docstring is capitalized."""
def other_function():
"""
This docstring is capitalized."""
def another_function():
""" This docstring is capitalized."""
def utf8_function():
"""éste docstring is capitalized."""


@@ -0,0 +1,7 @@
"""Test: parsing of nested string annotations."""
from typing import List
from pathlib import Path, PurePath
x: """List['Path']""" = []


@@ -8,3 +8,6 @@ def f() -> "A":
def g() -> "///":
pass
X: """List[int]"""'' = []


@@ -0,0 +1,8 @@
"""Test case: ForwardRef."""
from typing import ForwardRef, TypeVar
X = ForwardRef("List[int]")
Y: ForwardRef("List[int]")
Z = TypeVar("X", "List[int]", "int")
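The fixtures above exercise string ("forward") annotations and `ForwardRef`. At runtime such annotations are stored as plain strings; `typing.get_type_hints` is the standard way to resolve them, which is why a linter must parse the string contents to find the referenced names:

```python
from typing import List, get_type_hints

class C:
    # A quoted forward annotation, stored as a plain string at runtime.
    x: "List[int]"

# __annotations__ keeps the raw string...
assert C.__annotations__["x"] == "List[int]"
# ...and get_type_hints() evaluates it against the enclosing namespaces.
assert get_type_hints(C)["x"] == List[int]
```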


@@ -0,0 +1,7 @@
"""Test case: f-strings in type annotations."""
from typing import List
x = 1
x: List[f"i{x}nt"] = []


@@ -0,0 +1,9 @@
"""Test case: f-strings in future type annotations."""
from __future__ import annotations
from typing import List
x = 1
x: List[f"i{x}nt"] = []


@@ -48,3 +48,13 @@ def print_python_version():
"""This function returns None."""
print(sys.version)
return None # [useless-return]
class BaseCache:
def get(self, key: str) -> str | None:
print(f"{key} not found")
return None
def get(self, key: str) -> None:
print(f"{key} not found")
return None


@@ -44,3 +44,15 @@ def f(x: """List[str]""") -> None:
def f(x: "Li" "st[str]") -> None:
...
def f(x: "List['List[str]']") -> None:
...
def f(x: "List['Li' 'st[str]']") -> None:
...
def f(x: "Li" "st['List[str]']") -> None:
...


@@ -40,8 +40,9 @@ def noreturn():
logger.exception("process failed")
def still_good():
def good_return_with_side_effects():
try:
pass
return process()
except MyException:
logger.exception("process failed")


@@ -6,7 +6,7 @@ use libcst_native::{
use rustpython_parser::ast::{ExcepthandlerKind, Expr, Keyword, Location, Stmt, StmtKind};
use rustpython_parser::{lexer, Mode, Tok};
use ruff_diagnostics::Fix;
use ruff_diagnostics::Edit;
use ruff_python_ast::helpers;
use ruff_python_ast::helpers::to_absolute;
use ruff_python_ast::newlines::NewlineWithTrailingNewline;
@@ -178,7 +178,7 @@ pub fn delete_stmt(
locator: &Locator,
indexer: &Indexer,
stylist: &Stylist,
) -> Result<Fix> {
) -> Result<Edit> {
if parent
.map(|parent| is_lone_child(stmt, parent, deleted))
.map_or(Ok(None), |v| v.map(Some))?
@@ -186,7 +186,7 @@ pub fn delete_stmt(
{
// If removing this node would lead to an invalid syntax tree, replace
// it with a `pass`.
Ok(Fix::replacement(
Ok(Edit::replacement(
"pass".to_string(),
stmt.location,
stmt.end_location.unwrap(),
@@ -194,22 +194,22 @@ pub fn delete_stmt(
} else {
Ok(if let Some(semicolon) = trailing_semicolon(stmt, locator) {
let next = next_stmt_break(semicolon, locator);
Fix::deletion(stmt.location, next)
Edit::deletion(stmt.location, next)
} else if helpers::match_leading_content(stmt, locator) {
Fix::deletion(stmt.location, stmt.end_location.unwrap())
Edit::deletion(stmt.location, stmt.end_location.unwrap())
} else if helpers::preceded_by_continuation(stmt, indexer) {
if is_end_of_file(stmt, locator) && stmt.location.column() == 0 {
// Special-case: a file can't end in a continuation.
Fix::replacement(
Edit::replacement(
stylist.line_ending().to_string(),
stmt.location,
stmt.end_location.unwrap(),
)
} else {
Fix::deletion(stmt.location, stmt.end_location.unwrap())
Edit::deletion(stmt.location, stmt.end_location.unwrap())
}
} else {
Fix::deletion(
Edit::deletion(
Location::new(stmt.location.row(), 0),
Location::new(stmt.end_location.unwrap().row() + 1, 0),
)
@@ -226,7 +226,7 @@ pub fn remove_unused_imports<'a>(
locator: &Locator,
indexer: &Indexer,
stylist: &Stylist,
) -> Result<Fix> {
) -> Result<Edit> {
let module_text = locator.slice(stmt);
let mut tree = match_module(module_text)?;
@@ -327,13 +327,13 @@ pub fn remove_unused_imports<'a>(
delete_stmt(stmt, parent, deleted, locator, indexer, stylist)
} else {
let mut state = CodegenState {
default_newline: stylist.line_ending(),
default_newline: &stylist.line_ending(),
default_indent: stylist.indentation(),
..CodegenState::default()
};
tree.codegen(&mut state);
Ok(Fix::replacement(
Ok(Edit::replacement(
state.to_string(),
stmt.location,
stmt.end_location.unwrap(),
@@ -355,7 +355,7 @@ pub fn remove_argument(
args: &[Expr],
keywords: &[Keyword],
remove_parentheses: bool,
) -> Result<Fix> {
) -> Result<Edit> {
// TODO(sbrugman): Preserve trailing comments.
let contents = locator.skip(stmt_at);
@@ -437,7 +437,7 @@ pub fn remove_argument(
}
match (fix_start, fix_end) {
(Some(start), Some(end)) => Ok(Fix::deletion(start, end)),
(Some(start), Some(end)) => Ok(Edit::deletion(start, end)),
_ => {
bail!("No fix could be constructed")
}


@@ -4,7 +4,7 @@ use itertools::Itertools;
use rustc_hash::FxHashMap;
use rustpython_parser::ast::Location;
use ruff_diagnostics::{Diagnostic, Fix};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_python_ast::source_code::Locator;
use ruff_python_ast::types::Range;
@@ -15,7 +15,7 @@ pub mod helpers;
/// Auto-fix errors in a file, and write the fixed source code to disk.
pub fn fix_file(diagnostics: &[Diagnostic], locator: &Locator) -> Option<(String, FixTable)> {
if diagnostics.iter().all(|check| check.fix.is_none()) {
if diagnostics.iter().all(|check| check.fix.is_empty()) {
None
} else {
Some(apply_fixes(diagnostics.iter(), locator))
@@ -29,41 +29,48 @@ fn apply_fixes<'a>(
) -> (String, FixTable) {
let mut output = String::with_capacity(locator.len());
let mut last_pos: Option<Location> = None;
let mut applied: BTreeSet<&Fix> = BTreeSet::default();
let mut applied: BTreeSet<&Edit> = BTreeSet::default();
let mut fixed = FxHashMap::default();
for (rule, fix) in diagnostics
.filter_map(|diagnostic| {
diagnostic
.fix
.as_ref()
.map(|fix| (diagnostic.kind.rule(), fix))
if diagnostic.fix.is_empty() {
None
} else {
Some((diagnostic.kind.rule(), &diagnostic.fix))
}
})
.sorted_by(|(rule1, fix1), (rule2, fix2)| cmp_fix(*rule1, *rule2, fix1, fix2))
{
// If we already applied an identical fix as part of another correction, skip
// any re-application.
if applied.contains(&fix) {
if fix.edits().iter().all(|edit| applied.contains(edit)) {
*fixed.entry(rule).or_default() += 1;
continue;
}
// Best-effort approach: if this fix overlaps with a fix we've already applied,
// skip it.
if last_pos.map_or(false, |last_pos| last_pos >= fix.location) {
if last_pos.map_or(false, |last_pos| {
fix.location()
.map_or(false, |fix_location| last_pos >= fix_location)
}) {
continue;
}
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice(Range::new(last_pos.unwrap_or_default(), fix.location));
output.push_str(slice);
for edit in fix.edits() {
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice(Range::new(last_pos.unwrap_or_default(), edit.location));
output.push_str(slice);
// Add the patch itself.
output.push_str(&fix.content);
// Add the patch itself.
output.push_str(&edit.content);
// Track that the edit was applied.
last_pos = Some(edit.end_location);
applied.insert(edit);
}
// Track that the fix was applied.
last_pos = Some(fix.end_location);
applied.insert(fix);
*fixed.entry(rule).or_default() += 1;
}
@@ -75,7 +82,7 @@ fn apply_fixes<'a>(
}
/// Apply a single fix.
pub(crate) fn apply_fix(fix: &Fix, locator: &Locator) -> String {
pub(crate) fn apply_fix(fix: &Edit, locator: &Locator) -> String {
let mut output = String::with_capacity(locator.len());
// Add all contents from `last_pos` to `fix.location`.
@@ -94,8 +101,8 @@ pub(crate) fn apply_fix(fix: &Fix, locator: &Locator) -> String {
/// Compare two fixes.
fn cmp_fix(rule1: Rule, rule2: Rule, fix1: &Fix, fix2: &Fix) -> std::cmp::Ordering {
fix1.location
.cmp(&fix2.location)
fix1.location()
.cmp(&fix2.location())
.then_with(|| match (&rule1, &rule2) {
// Apply `EndsInPeriod` fixes before `NewLineAfterLastParagraph` fixes.
(Rule::EndsInPeriod, Rule::NewLineAfterLastParagraph) => std::cmp::Ordering::Less,
@@ -109,21 +116,20 @@ mod tests {
use rustpython_parser::ast::Location;
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Fix;
use ruff_diagnostics::Edit;
use ruff_python_ast::source_code::Locator;
use crate::autofix::{apply_fix, apply_fixes};
use crate::rules::pycodestyle::rules::MissingNewlineAtEndOfFile;
fn create_diagnostics(fixes: impl IntoIterator<Item = Fix>) -> Vec<Diagnostic> {
fixes
.into_iter()
.map(|fix| Diagnostic {
fn create_diagnostics(edit: impl IntoIterator<Item = Edit>) -> Vec<Diagnostic> {
edit.into_iter()
.map(|edit| Diagnostic {
// The choice of rule here is arbitrary.
kind: MissingNewlineAtEndOfFile.into(),
location: fix.location,
end_location: fix.end_location,
fix: Some(fix),
location: edit.location,
end_location: edit.end_location,
fix: edit.into(),
parent: None,
})
.collect()
@@ -147,7 +153,7 @@ class A(object):
"#
.trim(),
);
let diagnostics = create_diagnostics([Fix {
let diagnostics = create_diagnostics([Edit {
content: "Bar".to_string(),
location: Location::new(1, 8),
end_location: Location::new(1, 14),
@@ -173,7 +179,7 @@ class A(object):
"#
.trim(),
);
let diagnostics = create_diagnostics([Fix {
let diagnostics = create_diagnostics([Edit {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
@@ -200,12 +206,12 @@ class A(object, object, object):
.trim(),
);
let diagnostics = create_diagnostics([
Fix {
Edit {
content: String::new(),
location: Location::new(1, 8),
end_location: Location::new(1, 16),
},
Fix {
Edit {
content: String::new(),
location: Location::new(1, 22),
end_location: Location::new(1, 30),
@@ -234,12 +240,12 @@ class A(object):
.trim(),
);
let diagnostics = create_diagnostics([
Fix {
Edit {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
},
Fix {
Edit {
content: "ignored".to_string(),
location: Location::new(1, 9),
end_location: Location::new(1, 11),
@@ -267,7 +273,7 @@ class A(object):
.trim(),
);
let contents = apply_fix(
&Fix {
&Edit {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
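The rewritten `apply_fixes` loop above copies untouched source between `last_pos` and each edit's start, appends the patch, and skips edits that overlap an already-applied one. A minimal Python analogue of that scheme, using flat character offsets instead of ruff's `Location` rows/columns (a hypothetical helper, not ruff's API):

```python
def apply_edits(source: str, edits: list[tuple[int, int, str]]) -> str:
    """Apply (start, end, replacement) edits in order; skip any edit that
    overlaps an already-applied one, mirroring the best-effort skip above."""
    out = []
    last = 0
    for start, end, content in sorted(edits):
        if start < last:  # overlaps the previous edit: skip it
            continue
        out.append(source[last:start])  # untouched text before the edit
        out.append(content)             # the patch itself
        last = end                      # track where the edit ended
    out.append(source[last:])           # trailing untouched text
    return "".join(out)

# Drop `object` from a class header, as in the unit tests above.
assert apply_edits("class A(object):", [(8, 14, "")]) == "class A():"
```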


@@ -29,9 +29,7 @@ pub fn check_imports(
// Extract all imports from the AST.
let tracker = {
let mut tracker = ImportTracker::new(locator, directives, path);
for stmt in python_ast {
tracker.visit_stmt(stmt);
}
tracker.visit_body(python_ast);
tracker
};
let blocks: Vec<&Block> = tracker.iter().collect();


@@ -1,42 +1,32 @@
#![allow(dead_code, unused_imports, unused_variables)]
use bisection::bisect_left;
use itertools::Itertools;
use rustpython_parser::ast::Location;
use rustpython_parser::lexer::LexResult;
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::{Diagnostic, Fix};
use ruff_python_ast::source_code::{Locator, Stylist};
use ruff_python_ast::types::Range;
use crate::registry::{AsRule, Rule};
use crate::rules::pycodestyle::logical_lines::{iter_logical_lines, TokenFlags};
use crate::rules::pycodestyle::rules::{
use crate::rules::pycodestyle::rules::logical_lines::{
extraneous_whitespace, indentation, missing_whitespace, missing_whitespace_after_keyword,
missing_whitespace_around_operator, space_around_operator, whitespace_around_keywords,
whitespace_around_named_parameter_equals, whitespace_before_comment,
whitespace_before_parameters,
whitespace_before_parameters, LogicalLines, TokenFlags,
};
use crate::settings::{flags, Settings};
/// Return the amount of indentation, expanding tabs to the next multiple of 8.
fn expand_indent(mut line: &str) -> usize {
while line.ends_with("\n\r") {
line = &line[..line.len() - 2];
}
if !line.contains('\t') {
return line.len() - line.trim_start().len();
}
fn expand_indent(line: &str) -> usize {
let line = line.trim_end_matches(['\n', '\r']);
let mut indent = 0;
for c in line.chars() {
if c == '\t' {
indent = (indent / 8) * 8 + 8;
} else if c == ' ' {
indent += 1;
} else {
break;
for c in line.bytes() {
match c {
b'\t' => indent = (indent / 8) * 8 + 8,
b' ' => indent += 1,
_ => break,
}
}
indent
}
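The refactored `expand_indent` above trims trailing newline characters and expands each tab to the next multiple of 8. The same logic transcribed to Python, to make the tab-stop arithmetic concrete:

```python
def expand_indent(line: str) -> int:
    """Indentation width with tabs expanded to the next multiple of 8."""
    line = line.rstrip("\r\n")
    indent = 0
    for ch in line:
        if ch == "\t":
            indent = indent // 8 * 8 + 8  # jump to the next tab stop
        elif ch == " ":
            indent += 1
        else:
            break
    return indent

assert expand_indent("    x = 1") == 4
assert expand_indent("\tx = 1") == 8
assert expand_indent("  \tx = 1") == 8  # the tab absorbs the two spaces
assert expand_indent("\t x = 1") == 9
```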
@@ -49,153 +39,140 @@ pub fn check_logical_lines(
) -> Vec<Diagnostic> {
let mut diagnostics = vec![];
let indent_char = stylist.indentation().as_char();
#[cfg(feature = "logical_lines")]
let should_fix_missing_whitespace =
autofix.into() && settings.rules.should_fix(Rule::MissingWhitespace);
#[cfg(not(feature = "logical_lines"))]
let should_fix_missing_whitespace = false;
#[cfg(feature = "logical_lines")]
let should_fix_whitespace_before_parameters =
autofix.into() && settings.rules.should_fix(Rule::WhitespaceBeforeParameters);
#[cfg(not(feature = "logical_lines"))]
let should_fix_whitespace_before_parameters = false;
let mut prev_line = None;
let mut prev_indent_level = None;
for line in iter_logical_lines(tokens, locator) {
if line.mapping.is_empty() {
continue;
}
let indent_char = stylist.indentation().as_char();
// Extract the indentation level.
let start_loc = line.mapping[0].1;
let start_line = locator.slice(Range::new(Location::new(start_loc.row(), 0), start_loc));
let indent_level = expand_indent(start_line);
let indent_size = 4;
// Generate mapping from logical to physical offsets.
let mapping_offsets = line.mapping.iter().map(|(offset, _)| *offset).collect_vec();
if line.flags.contains(TokenFlags::OPERATOR) {
for (index, kind) in space_around_operator(&line.text) {
let (token_offset, pos) = line.mapping[bisect_left(&mapping_offsets, &index)];
let location = Location::new(pos.row(), pos.column() + index - token_offset);
for line in &LogicalLines::from_tokens(tokens, locator) {
if line.flags().contains(TokenFlags::OPERATOR) {
for (location, kind) in space_around_operator(&line) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
fix: Fix::empty(),
parent: None,
});
}
}
for (location, kind) in whitespace_around_named_parameter_equals(&line.tokens()) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: Fix::empty(),
parent: None,
});
}
}
for (location, kind) in missing_whitespace_around_operator(&line.tokens()) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: Fix::empty(),
parent: None,
});
}
}
for diagnostic in missing_whitespace(&line, should_fix_missing_whitespace) {
if settings.rules.enabled(diagnostic.kind.rule()) {
diagnostics.push(diagnostic);
}
}
}
if line
.flags
.flags()
.contains(TokenFlags::OPERATOR | TokenFlags::PUNCTUATION)
{
for (index, kind) in extraneous_whitespace(&line.text) {
let (token_offset, pos) = line.mapping[bisect_left(&mapping_offsets, &index)];
let location = Location::new(pos.row(), pos.column() + index - token_offset);
for (location, kind) in extraneous_whitespace(&line) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
fix: Fix::empty(),
parent: None,
});
}
}
}
if line.flags.contains(TokenFlags::KEYWORD) {
for (index, kind) in whitespace_around_keywords(&line.text) {
let (token_offset, pos) = line.mapping[bisect_left(&mapping_offsets, &index)];
let location = Location::new(pos.row(), pos.column() + index - token_offset);
if line.flags().contains(TokenFlags::KEYWORD) {
for (location, kind) in whitespace_around_keywords(&line) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
fix: Fix::empty(),
parent: None,
});
}
}
for (location, kind) in missing_whitespace_after_keyword(&line.tokens) {
for (location, kind) in missing_whitespace_after_keyword(&line.tokens()) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
fix: Fix::empty(),
parent: None,
});
}
}
}
if line.flags.contains(TokenFlags::COMMENT) {
for (range, kind) in whitespace_before_comment(&line.tokens, locator) {
if line.flags().contains(TokenFlags::COMMENT) {
for (range, kind) in whitespace_before_comment(&line.tokens(), locator) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location: range.location,
end_location: range.end_location,
fix: None,
fix: Fix::empty(),
parent: None,
});
}
}
}
if line.flags.contains(TokenFlags::OPERATOR) {
for (location, kind) in
whitespace_around_named_parameter_equals(&line.tokens, &line.text)
{
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
parent: None,
});
}
}
for (location, kind) in missing_whitespace_around_operator(&line.tokens) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
parent: None,
});
}
}
#[cfg(debug_assertions)]
let should_fix = autofix.into() && settings.rules.should_fix(Rule::MissingWhitespace);
#[cfg(not(debug_assertions))]
let should_fix = false;
for diagnostic in
missing_whitespace(&line.text, start_loc.row(), should_fix, indent_level)
{
if line.flags().contains(TokenFlags::BRACKET) {
for diagnostic in whitespace_before_parameters(
&line.tokens(),
should_fix_whitespace_before_parameters,
) {
if settings.rules.enabled(diagnostic.kind.rule()) {
diagnostics.push(diagnostic);
}
}
}
if line.flags.contains(TokenFlags::BRACKET) {
#[cfg(debug_assertions)]
let should_fix =
autofix.into() && settings.rules.should_fix(Rule::WhitespaceBeforeParameters);
// Extract the indentation level.
let Some(start_loc) = line.first_token_location() else { continue; };
let start_line = locator.slice(Range::new(Location::new(start_loc.row(), 0), start_loc));
let indent_level = expand_indent(start_line);
let indent_size = 4;
#[cfg(not(debug_assertions))]
let should_fix = false;
for diagnostic in whitespace_before_parameters(&line.tokens, should_fix) {
if settings.rules.enabled(diagnostic.kind.rule()) {
diagnostics.push(diagnostic);
}
}
}
for (index, kind) in indentation(
for (location, kind) in indentation(
&line,
prev_line.as_ref(),
indent_char,
@@ -203,20 +180,18 @@ pub fn check_logical_lines(
prev_indent_level,
indent_size,
) {
let (token_offset, pos) = line.mapping[bisect_left(&mapping_offsets, &index)];
let location = Location::new(pos.row(), pos.column() + index - token_offset);
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
fix: Fix::empty(),
parent: None,
});
}
}
if !line.is_comment() {
if !line.is_comment_only() {
prev_line = Some(line);
prev_indent_level = Some(indent_level);
}
@@ -229,10 +204,9 @@ mod tests {
use rustpython_parser::lexer::LexResult;
use rustpython_parser::{lexer, Mode};
use crate::rules::pycodestyle::rules::logical_lines::LogicalLines;
use ruff_python_ast::source_code::Locator;
use crate::checkers::logical_lines::iter_logical_lines;
#[test]
fn split_logical_lines() {
let contents = r#"
@@ -241,9 +215,9 @@ y = 2
z = x + 1"#;
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
let actual: Vec<String> = LogicalLines::from_tokens(&lxr, &locator)
.into_iter()
.map(|line| line.text)
.map(|line| line.text_trimmed().to_string())
.collect();
let expected = vec![
"x = 1".to_string(),
@@ -262,12 +236,12 @@ y = 2
z = x + 1"#;
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
let actual: Vec<String> = LogicalLines::from_tokens(&lxr, &locator)
.into_iter()
.map(|line| line.text)
.map(|line| line.text_trimmed().to_string())
.collect();
let expected = vec![
"x = [1, 2, 3, ]".to_string(),
"x = [\n 1,\n 2,\n 3,\n]".to_string(),
"y = 2".to_string(),
"z = x + 1".to_string(),
];
@@ -276,11 +250,11 @@ z = x + 1"#;
let contents = "x = 'abc'";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
let actual: Vec<String> = LogicalLines::from_tokens(&lxr, &locator)
.into_iter()
.map(|line| line.text)
.map(|line| line.text_trimmed().to_string())
.collect();
let expected = vec!["x = \"xxx\"".to_string()];
let expected = vec!["x = 'abc'".to_string()];
assert_eq!(actual, expected);
let contents = r#"
@@ -289,9 +263,9 @@ def f():
f()"#;
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
let actual: Vec<String> = LogicalLines::from_tokens(&lxr, &locator)
.into_iter()
.map(|line| line.text)
.map(|line| line.text_trimmed().to_string())
.collect();
let expected = vec!["def f():", "x = 1", "f()"];
assert_eq!(actual, expected);
@@ -304,11 +278,17 @@ def f():
f()"#;
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let actual: Vec<String> = iter_logical_lines(&lxr, &locator)
let actual: Vec<String> = LogicalLines::from_tokens(&lxr, &locator)
.into_iter()
.map(|line| line.text)
.map(|line| line.text_trimmed().to_string())
.collect();
let expected = vec!["def f():", "\"xxxxxxxxxxxxxxxxxxxx\"", "", "x = 1", "f()"];
let expected = vec![
"def f():",
"\"\"\"Docstring goes here.\"\"\"",
"",
"x = 1",
"f()",
];
assert_eq!(actual, expected);
}
}


@@ -1,7 +1,8 @@
pub mod ast;
pub mod filesystem;
pub mod imports;
pub mod logical_lines;
#[cfg(feature = "logical_lines")]
pub(crate) mod logical_lines;
pub mod noqa;
pub mod physical_lines;
pub mod tokens;


@@ -3,7 +3,7 @@
use nohash_hasher::IntMap;
use rustpython_parser::ast::Location;
use ruff_diagnostics::{Diagnostic, Fix};
use ruff_diagnostics::{Diagnostic, Edit};
use ruff_python_ast::newlines::StrExt;
use ruff_python_ast::types::Range;
@@ -68,33 +68,36 @@ pub fn check_noqa(
FileExemption::None => {}
}
let diagnostic_lineno = diagnostic.location.row();
// Is the violation ignored by a `noqa` directive on the parent line?
if let Some(parent_lineno) = diagnostic.parent.map(|location| location.row()) {
let noqa_lineno = noqa_line_for.get(&parent_lineno).unwrap_or(&parent_lineno);
if commented_lines.contains(noqa_lineno) {
let noqa = noqa_directives.entry(noqa_lineno - 1).or_insert_with(|| {
(noqa::extract_noqa_directive(lines[noqa_lineno - 1]), vec![])
});
match noqa {
(Directive::All(..), matches) => {
matches.push(diagnostic.kind.rule().noqa_code());
ignored_diagnostics.push(index);
continue;
}
(Directive::Codes(.., codes, _), matches) => {
if noqa::includes(diagnostic.kind.rule(), codes) {
if parent_lineno != diagnostic_lineno {
let noqa_lineno = noqa_line_for.get(&parent_lineno).unwrap_or(&parent_lineno);
if commented_lines.contains(noqa_lineno) {
let noqa = noqa_directives.entry(noqa_lineno - 1).or_insert_with(|| {
(noqa::extract_noqa_directive(lines[noqa_lineno - 1]), vec![])
});
match noqa {
(Directive::All(..), matches) => {
matches.push(diagnostic.kind.rule().noqa_code());
ignored_diagnostics.push(index);
continue;
}
(Directive::Codes(.., codes, _), matches) => {
if noqa::includes(diagnostic.kind.rule(), codes) {
matches.push(diagnostic.kind.rule().noqa_code());
ignored_diagnostics.push(index);
continue;
}
}
(Directive::None, ..) => {}
}
(Directive::None, ..) => {}
}
}
}
// Is the diagnostic ignored by a `noqa` directive on the same line?
let diagnostic_lineno = diagnostic.location.row();
let noqa_lineno = noqa_line_for
.get(&diagnostic_lineno)
.unwrap_or(&diagnostic_lineno);
@@ -138,7 +141,7 @@ pub fn check_noqa(
),
);
if autofix.into() && settings.rules.should_fix(diagnostic.kind.rule()) {
diagnostic.amend(delete_noqa(
diagnostic.set_fix(delete_noqa(
row,
lines[row],
leading_spaces,
@@ -214,7 +217,7 @@ pub fn check_noqa(
);
if autofix.into() && settings.rules.should_fix(diagnostic.kind.rule()) {
if valid_codes.is_empty() {
diagnostic.amend(delete_noqa(
diagnostic.set_fix(delete_noqa(
row,
lines[row],
leading_spaces,
@@ -223,7 +226,7 @@ pub fn check_noqa(
trailing_spaces,
));
} else {
diagnostic.amend(Fix::replacement(
diagnostic.set_fix(Edit::replacement(
format!("# noqa: {}", valid_codes.join(", ")),
Location::new(row + 1, start_char),
Location::new(row + 1, end_char),
@@ -242,7 +245,7 @@ pub fn check_noqa(
ignored_diagnostics
}
/// Generate a [`Fix`] to delete a `noqa` directive.
/// Generate a [`Edit`] to delete a `noqa` directive.
fn delete_noqa(
row: usize,
line: &str,
@@ -250,15 +253,15 @@ fn delete_noqa(
start_byte: usize,
end_byte: usize,
trailing_spaces: usize,
) -> Fix {
) -> Edit {
if start_byte - leading_spaces == 0 && end_byte == line.len() {
// Ex) `# noqa`
Fix::deletion(Location::new(row + 1, 0), Location::new(row + 2, 0))
Edit::deletion(Location::new(row + 1, 0), Location::new(row + 2, 0))
} else if end_byte == line.len() {
// Ex) `x = 1 # noqa`
let start_char = line[..start_byte].chars().count();
let end_char = start_char + line[start_byte..end_byte].chars().count();
Fix::deletion(
Edit::deletion(
Location::new(row + 1, start_char - leading_spaces),
Location::new(row + 1, end_char + trailing_spaces),
)
@@ -266,7 +269,7 @@ fn delete_noqa(
// Ex) `x = 1 # noqa # type: ignore`
let start_char = line[..start_byte].chars().count();
let end_char = start_char + line[start_byte..end_byte].chars().count();
Fix::deletion(
Edit::deletion(
Location::new(row + 1, start_char),
Location::new(row + 1, end_char + trailing_spaces),
)
@@ -274,7 +277,7 @@ fn delete_noqa(
// Ex) `x = 1 # noqa here`
let start_char = line[..start_byte].chars().count();
let end_char = start_char + line[start_byte..end_byte].chars().count();
Fix::deletion(
Edit::deletion(
Location::new(row + 1, start_char + 1 + 1),
Location::new(row + 1, end_char + trailing_spaces),
)


@@ -182,6 +182,8 @@ pub fn check_physical_lines(
#[cfg(test)]
mod tests {
use rustpython_parser::lexer::lex;
use rustpython_parser::Mode;
use std::path::Path;
use ruff_python_ast::source_code::{Locator, Stylist};
@@ -195,7 +197,8 @@ mod tests {
fn e501_non_ascii_char() {
let line = "'\u{4e9c}' * 2"; // 7 in UTF-32, 9 in UTF-8.
let locator = Locator::new(line);
let stylist = Stylist::from_contents(line, &locator);
let tokens: Vec<_> = lex(line, Mode::Module).collect();
let stylist = Stylist::from_tokens(&tokens, &locator);
let check_with_max_line_length = |line_length: usize| {
check_physical_lines(
@@ -211,7 +214,7 @@ mod tests {
flags::Autofix::Enabled,
)
};
assert!(!check_with_max_line_length(6).is_empty());
assert!(check_with_max_line_length(7).is_empty());
assert_eq!(check_with_max_line_length(8), vec![]);
}
}
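The updated `e501_non_ascii_char` test above measures line length in characters rather than bytes; the "7 in UTF-32, 9 in UTF-8" comment can be verified directly:

```python
line = "'亜' * 2"  # the same string as in the e501_non_ascii_char test

assert len(line) == 7                  # character count -- what E501 compares
assert len(line.encode("utf-8")) == 9  # byte count: 亜 occupies 3 UTF-8 bytes
```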


@@ -26,67 +26,67 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
Some(match (linter, code) {
// pycodestyle errors
(Pycodestyle, "E101") => Rule::MixedSpacesAndTabs,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E111") => Rule::IndentationWithInvalidMultiple,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E112") => Rule::NoIndentedBlock,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E113") => Rule::UnexpectedIndentation,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E114") => Rule::IndentationWithInvalidMultipleComment,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E115") => Rule::NoIndentedBlockComment,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E116") => Rule::UnexpectedIndentationComment,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E117") => Rule::OverIndented,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E201") => Rule::WhitespaceAfterOpenBracket,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E202") => Rule::WhitespaceBeforeCloseBracket,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E203") => Rule::WhitespaceBeforePunctuation,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E211") => Rule::WhitespaceBeforeParameters,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E221") => Rule::MultipleSpacesBeforeOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E222") => Rule::MultipleSpacesAfterOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E223") => Rule::TabBeforeOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E224") => Rule::TabAfterOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E225") => Rule::MissingWhitespaceAroundOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E226") => Rule::MissingWhitespaceAroundArithmeticOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E227") => Rule::MissingWhitespaceAroundBitwiseOrShiftOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E228") => Rule::MissingWhitespaceAroundModuloOperator,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E231") => Rule::MissingWhitespace,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E251") => Rule::UnexpectedSpacesAroundKeywordParameterEquals,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E252") => Rule::MissingWhitespaceAroundParameterEquals,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E261") => Rule::TooFewSpacesBeforeInlineComment,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E262") => Rule::NoSpaceAfterInlineComment,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E265") => Rule::NoSpaceAfterBlockComment,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E266") => Rule::MultipleLeadingHashesForBlockComment,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E271") => Rule::MultipleSpacesAfterKeyword,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E272") => Rule::MultipleSpacesBeforeKeyword,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E273") => Rule::TabAfterKeyword,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E274") => Rule::TabBeforeKeyword,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E275") => Rule::MissingWhitespaceAfterKeyword,
(Pycodestyle, "E401") => Rule::MultipleImportsOnOneLine,
(Pycodestyle, "E402") => Rule::ModuleImportNotAtTopOfFile,
@@ -238,6 +238,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Bugbear, "028") => Rule::NoExplicitStacklevel,
(Flake8Bugbear, "029") => Rule::ExceptWithEmptyTuple,
(Flake8Bugbear, "030") => Rule::ExceptWithNonExceptionClasses,
(Flake8Bugbear, "031") => Rule::ReuseOfGroupbyGenerator,
(Flake8Bugbear, "032") => Rule::UnintentionalTypeAnnotation,
(Flake8Bugbear, "904") => Rule::RaiseWithoutFromInsideExcept,
(Flake8Bugbear, "905") => Rule::ZipWithoutExplicitStrict,
@@ -283,6 +284,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Return, "507") => Rule::SuperfluousElseContinue,
(Flake8Return, "508") => Rule::SuperfluousElseBreak,
// flake8-gettext
(Flake8GetText, "001") => Rule::FStringInGetTextFuncCall,
(Flake8GetText, "002") => Rule::FormatInGetTextFuncCall,
(Flake8GetText, "003") => Rule::PrintfInGetTextFuncCall,
// flake8-implicit-str-concat
(Flake8ImplicitStrConcat, "001") => Rule::SingleLineImplicitStringConcatenation,
(Flake8ImplicitStrConcat, "002") => Rule::MultiLineImplicitStringConcatenation,
@@ -563,7 +569,9 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Pyi, "009") => Rule::PassStatementStubBody,
(Flake8Pyi, "010") => Rule::NonEmptyStubBody,
(Flake8Pyi, "011") => Rule::TypedArgumentDefaultInStub,
(Flake8Pyi, "012") => Rule::PassInClassBody,
(Flake8Pyi, "014") => Rule::ArgumentDefaultInStub,
(Flake8Pyi, "015") => Rule::AssignmentDefaultInStub,
(Flake8Pyi, "021") => Rule::DocstringInStub,
(Flake8Pyi, "033") => Rule::TypeCommentInStub,

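The hunk above swaps `#[cfg(debug_assertions)]` (rules compiled only into debug builds) for `#[cfg(feature = "logical_lines")]` (rules compiled only when the Cargo feature is enabled). A minimal sketch of that mechanism, using the always-true `all()` and always-false `any()` predicates in place of a real feature flag so the file compiles without a `Cargo.toml`:

```rust
// Conditional compilation: items behind a false `cfg` predicate are
// removed before type-checking, exactly like a disabled Cargo feature.

#[cfg(all())] // `all()` with no arguments is always true: item is compiled in
fn enabled_rule() -> &'static str {
    "E111"
}

#[cfg(any())] // `any()` with no arguments is always false: item is compiled out
fn disabled_rule() -> &'static str {
    "unreachable"
}

fn main() {
    // Only the compiled-in item can be referenced; naming `disabled_rule`
    // here would be a compile error, not a runtime one.
    assert_eq!(enabled_rule(), "E111");
    println!("ok");
}
```

In the real crate, `#[cfg(feature = "logical_lines")]` plays the role of `all()`/`any()`, toggled from `Cargo.toml`.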

@@ -29,12 +29,13 @@ pub struct Docstring<'a> {
pub indentation: &'a str,
}
#[derive(Copy, Clone)]
pub enum Documentable {
Class,
Function,
}
pub fn transition_scope(scope: &VisibleScope, stmt: &Stmt, kind: &Documentable) -> VisibleScope {
pub fn transition_scope(scope: VisibleScope, stmt: &Stmt, kind: Documentable) -> VisibleScope {
match kind {
Documentable::Function => VisibleScope {
modifier: Modifier::Function,

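The change above derives `Copy, Clone` for `Documentable` so that `transition_scope` can take it (and `VisibleScope`) by value instead of by reference. A small self-contained sketch of the pattern, with hypothetical bodies:

```rust
// Once a small enum derives `Copy`, functions can take it by value:
// the argument is copied (a single byte here), no borrow is needed,
// and call sites lose a `&` without losing access to the value.
#[derive(Copy, Clone, Debug, PartialEq)]
enum Documentable {
    Class,
    Function,
}

// By-value parameter: no lifetime, no indirection.
fn label(kind: Documentable) -> &'static str {
    match kind {
        Documentable::Class => "class",
        Documentable::Function => "function",
    }
}

fn main() {
    let kind = Documentable::Function;
    // `kind` is still usable after the call because it was copied, not moved.
    assert_eq!(label(kind), "function");
    assert_eq!(label(kind), "function");
}
```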

@@ -28,10 +28,10 @@ pub fn docstring_from(suite: &[Stmt]) -> Option<&Expr> {
/// Extract a `Definition` from the AST node defined by a `Stmt`.
pub fn extract<'a>(
scope: &VisibleScope,
scope: VisibleScope,
stmt: &'a Stmt,
body: &'a [Stmt],
kind: &Documentable,
kind: Documentable,
) -> Definition<'a> {
let expr = docstring_from(body);
match kind {


@@ -129,7 +129,7 @@ pub(crate) struct SectionContext<'a> {
pub(crate) original_index: usize,
}
fn suspected_as_section(line: &str, style: &SectionStyle) -> Option<SectionKind> {
fn suspected_as_section(line: &str, style: SectionStyle) -> Option<SectionKind> {
if let Some(kind) = SectionKind::from_str(whitespace::leading_words(line)) {
if style.sections().contains(&kind) {
return Some(kind);
@@ -168,7 +168,7 @@ fn is_docstring_section(context: &SectionContext) -> bool {
/// Extract all `SectionContext` values from a docstring.
pub(crate) fn section_contexts<'a>(
lines: &'a [&'a str],
style: &SectionStyle,
style: SectionStyle,
) -> Vec<SectionContext<'a>> {
let mut contexts = vec![];
for (kind, lineno) in lines


@@ -2,6 +2,7 @@ use crate::docstrings::google::GOOGLE_SECTIONS;
use crate::docstrings::numpy::NUMPY_SECTIONS;
use crate::docstrings::sections::SectionKind;
#[derive(Copy, Clone)]
pub(crate) enum SectionStyle {
Numpy,
Google,


@@ -1,17 +0,0 @@
#[derive(Debug, Copy, Clone, Hash)]
pub enum FixMode {
Generate,
Apply,
Diff,
None,
}
impl From<bool> for FixMode {
fn from(value: bool) -> Self {
if value {
Self::Apply
} else {
Self::None
}
}
}
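The deleted `fix.rs` module above used `impl From<bool> for FixMode` to turn a CLI boolean into a richer enum at the boundary. A standalone sketch of that conversion pattern (abbreviated to two variants):

```rust
#[derive(Debug, Copy, Clone, PartialEq)]
enum FixMode {
    Apply,
    None,
}

// `From<bool>` lets callers write `flag.into()` where a boolean flag
// becomes a richer enum, and gives `Into<FixMode>` for free.
impl From<bool> for FixMode {
    fn from(value: bool) -> Self {
        if value {
            FixMode::Apply
        } else {
            FixMode::None
        }
    }
}

fn main() {
    let mode: FixMode = true.into();
    assert_eq!(mode, FixMode::Apply);
    assert_eq!(FixMode::from(false), FixMode::None);
}
```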


@@ -416,7 +416,7 @@ pub fn convert(
if let Some(src_paths) = &isort.src_paths {
match options.src.as_mut() {
Some(src) => {
src.extend(src_paths.clone());
src.extend_from_slice(src_paths);
}
None => {
options.src = Some(src_paths.clone());


@@ -54,7 +54,7 @@ struct Token {
src: String,
}
#[derive(Debug)]
#[derive(Debug, Copy, Clone)]
enum TokenType {
Code,
File,


@@ -7,7 +7,7 @@ use anyhow::anyhow;
use crate::registry::Linter;
use crate::rule_selector::RuleSelector;
#[derive(Clone, Ord, PartialOrd, Eq, PartialEq)]
#[derive(Copy, Clone, Ord, PartialOrd, Eq, PartialEq)]
pub enum Plugin {
Flake82020,
Flake8Annotations,


@@ -226,7 +226,7 @@ pub enum CodemirrorMode {
}
/// String identifying the type of cell.
#[derive(Debug, Serialize, Deserialize, PartialEq)]
#[derive(Debug, Serialize, Deserialize, PartialEq, Copy, Clone)]
pub enum CellType {
#[serde(rename = "code")]
Code,
@@ -236,14 +236,14 @@ pub enum CellType {
Raw,
}
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, Copy, Clone)]
pub enum ScrolledEnum {
#[serde(rename = "auto")]
Auto,
}
/// Type of cell output.
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, Copy, Clone)]
pub enum OutputType {
#[serde(rename = "display_data")]
DisplayData,


@@ -6,7 +6,7 @@
use rustpython_parser::Tok;
#[derive(Default)]
#[derive(Default, Copy, Clone)]
enum State {
// Start of the module: first string gets marked as a docstring.
#[default]


@@ -17,7 +17,6 @@ mod cst;
pub mod directives;
mod doc_lines;
mod docstrings;
pub mod fix;
pub mod flake8_to_ruff;
pub mod fs;
pub mod jupyter;


@@ -16,7 +16,6 @@ use crate::autofix::fix_file;
use crate::checkers::ast::check_ast;
use crate::checkers::filesystem::check_file_path;
use crate::checkers::imports::check_imports;
use crate::checkers::logical_lines::check_logical_lines;
use crate::checkers::noqa::check_noqa;
use crate::checkers::physical_lines::check_physical_lines;
use crate::checkers::tokens::check_tokens;
@@ -105,7 +104,8 @@ pub fn check_path(
.iter_enabled()
.any(|rule_code| rule_code.lint_source().is_logical_lines())
{
diagnostics.extend(check_logical_lines(
#[cfg(feature = "logical_lines")]
diagnostics.extend(crate::checkers::logical_lines::check_logical_lines(
&tokens,
locator,
stylist,
@@ -257,7 +257,7 @@ pub fn add_noqa_to_path(path: &Path, package: Option<&Path>, settings: &Settings
let locator = Locator::new(&contents);
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(&contents, &locator);
let stylist = Stylist::from_tokens(&tokens, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
@@ -322,7 +322,7 @@ pub fn lint_only(
let locator = Locator::new(contents);
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(contents, &locator);
let stylist = Stylist::from_tokens(&tokens, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
@@ -394,7 +394,7 @@ pub fn lint_fix<'a>(
let locator = Locator::new(&transformed);
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(&transformed, &locator);
let stylist = Stylist::from_tokens(&tokens, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();

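The `Stylist::from_contents` → `Stylist::from_tokens` changes above reuse the token stream the linter has already produced instead of re-lexing the source for each consumer. A toy sketch of that pattern, with illustrative names and a trivial "tokenizer":

```rust
// Produce the token stream once, then derive several artifacts from it,
// instead of re-scanning the source text per consumer.
fn tokenize(source: &str) -> Vec<&str> {
    source.split_whitespace().collect()
}

fn count_keywords(tokens: &[&str]) -> usize {
    tokens.iter().filter(|t| **t == "def").count()
}

fn longest_token(tokens: &[&str]) -> usize {
    tokens.iter().map(|t| t.len()).max().unwrap_or(0)
}

fn main() {
    let source = "def f x def g yz";
    let tokens = tokenize(source); // single pass over the source
    // Both consumers share the same token buffer.
    assert_eq!(count_keywords(&tokens), 2);
    assert_eq!(longest_token(&tokens), 3);
}
```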

@@ -12,7 +12,7 @@ pub struct Message {
pub kind: DiagnosticKind,
pub location: Location,
pub end_location: Location,
pub fix: Option<Fix>,
pub fix: Fix,
pub filename: String,
pub source: Option<Source>,
pub noqa_row: usize,


@@ -188,7 +188,7 @@ pub fn add_noqa(
contents: &str,
commented_lines: &[usize],
noqa_line_for: &IntMap<usize, usize>,
line_ending: &LineEnding,
line_ending: LineEnding,
) -> Result<usize> {
let (count, output) = add_noqa_inner(
diagnostics,
@@ -206,7 +206,7 @@ fn add_noqa_inner(
contents: &str,
commented_lines: &[usize],
noqa_line_for: &IntMap<usize, usize>,
line_ending: &LineEnding,
line_ending: LineEnding,
) -> (usize, String) {
// Map of line number to set of (non-ignored) diagnostic codes that are triggered on that line.
let mut matches_by_line: FxHashMap<usize, RuleSet> = FxHashMap::default();
@@ -233,26 +233,29 @@ fn add_noqa_inner(
FileExemption::None => {}
}
let diagnostic_lineno = diagnostic.location.row();
// Is the violation ignored by a `noqa` directive on the parent line?
if let Some(parent_lineno) = diagnostic.parent.map(|location| location.row()) {
let noqa_lineno = noqa_line_for.get(&parent_lineno).unwrap_or(&parent_lineno);
if commented_lines.contains(noqa_lineno) {
match extract_noqa_directive(lines[noqa_lineno - 1]) {
Directive::All(..) => {
continue;
}
Directive::Codes(.., codes, _) => {
if includes(diagnostic.kind.rule(), &codes) {
if parent_lineno != diagnostic_lineno {
let noqa_lineno = noqa_line_for.get(&parent_lineno).unwrap_or(&parent_lineno);
if commented_lines.contains(noqa_lineno) {
match extract_noqa_directive(lines[noqa_lineno - 1]) {
Directive::All(..) => {
continue;
}
Directive::Codes(.., codes, _) => {
if includes(diagnostic.kind.rule(), &codes) {
continue;
}
}
Directive::None => {}
}
Directive::None => {}
}
}
}
// Is the diagnostic ignored by a `noqa` directive on the same line?
let diagnostic_lineno = diagnostic.location.row();
let noqa_lineno = noqa_line_for
.get(&diagnostic_lineno)
.unwrap_or(&diagnostic_lineno);
@@ -285,7 +288,7 @@ fn add_noqa_inner(
match matches_by_line.get(&lineno) {
None => {
output.push_str(line);
output.push_str(line_ending);
output.push_str(&line_ending);
}
Some(rules) => {
match extract_noqa_directive(line) {
@@ -298,13 +301,13 @@ fn add_noqa_inner(
// Add codes.
push_codes(&mut output, rules.iter().map(|rule| rule.noqa_code()));
output.push_str(line_ending);
output.push_str(&line_ending);
count += 1;
}
Directive::All(..) => {
// Leave the line as-is.
output.push_str(line);
output.push_str(line_ending);
output.push_str(&line_ending);
}
Directive::Codes(_, start_byte, _, existing, _) => {
// Reconstruct the line based on the preserved rule codes.
@@ -328,7 +331,7 @@ fn add_noqa_inner(
);
output.push_str(&formatted);
output.push_str(line_ending);
output.push_str(&line_ending);
// Only count if the new line is an actual edit.
if formatted != line {
@@ -392,7 +395,7 @@ mod tests {
contents,
&commented_lines,
&noqa_line_for,
&LineEnding::Lf,
LineEnding::Lf,
);
assert_eq!(count, 0);
assert_eq!(output, format!("{contents}\n"));
@@ -411,7 +414,7 @@ mod tests {
contents,
&commented_lines,
&noqa_line_for,
&LineEnding::Lf,
LineEnding::Lf,
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: F841\n");
@@ -436,7 +439,7 @@ mod tests {
contents,
&commented_lines,
&noqa_line_for,
&LineEnding::Lf,
LineEnding::Lf,
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: E741, F841\n");
@@ -461,7 +464,7 @@ mod tests {
contents,
&commented_lines,
&noqa_line_for,
&LineEnding::Lf,
LineEnding::Lf,
);
assert_eq!(count, 0);
assert_eq!(output, "x = 1 # noqa\n");

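The `add_noqa` changes above pass `LineEnding` by value (it is a small `Copy` type) rather than by `&LineEnding`. A hedged sketch of the shape, using an assumed `as_str` accessor in place of ruff's actual `Deref`-based API:

```rust
// A `Copy` line-ending enum passed by value, as in the changed signature.
#[derive(Copy, Clone)]
enum LineEnding {
    Lf,
    CrLf,
}

impl LineEnding {
    fn as_str(self) -> &'static str {
        match self {
            LineEnding::Lf => "\n",
            LineEnding::CrLf => "\r\n",
        }
    }
}

// Rebuild output line by line, appending the chosen ending each time.
fn join_lines(lines: &[&str], ending: LineEnding) -> String {
    let mut output = String::new();
    for line in lines {
        output.push_str(line);
        output.push_str(ending.as_str());
    }
    output
}

fn main() {
    assert_eq!(join_lines(&["a", "b"], LineEnding::Lf), "a\nb\n");
    assert_eq!(join_lines(&["a"], LineEnding::CrLf), "a\r\n");
}
```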

@@ -14,68 +14,68 @@ pub use rule_set::{RuleSet, RuleSetIterator};
ruff_macros::register_rules!(
// pycodestyle errors
rules::pycodestyle::rules::MixedSpacesAndTabs,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::IndentationWithInvalidMultiple,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::NoIndentedBlock,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::UnexpectedIndentation,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::IndentationWithInvalidMultipleComment,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::NoIndentedBlockComment,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::UnexpectedIndentationComment,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::OverIndented,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::WhitespaceAfterOpenBracket,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::WhitespaceBeforeCloseBracket,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::WhitespaceBeforePunctuation,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MultipleSpacesBeforeOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MultipleSpacesAfterOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::TabBeforeOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::TabAfterOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::TooFewSpacesBeforeInlineComment,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::NoSpaceAfterInlineComment,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::NoSpaceAfterBlockComment,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MultipleLeadingHashesForBlockComment,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MultipleSpacesAfterKeyword,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MissingWhitespace,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MissingWhitespaceAfterKeyword,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MultipleSpacesBeforeKeyword,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MissingWhitespaceAroundOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MissingWhitespaceAroundArithmeticOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MissingWhitespaceAroundBitwiseOrShiftOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MissingWhitespaceAroundModuloOperator,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::TabAfterKeyword,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::UnexpectedSpacesAroundKeywordParameterEquals,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::MissingWhitespaceAroundParameterEquals,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::WhitespaceBeforeParameters,
#[cfg(debug_assertions)]
rules::pycodestyle::rules::TabBeforeKeyword,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::IndentationWithInvalidMultiple,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::NoIndentedBlock,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::UnexpectedIndentation,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::IndentationWithInvalidMultipleComment,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::NoIndentedBlockComment,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::UnexpectedIndentationComment,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::OverIndented,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::WhitespaceAfterOpenBracket,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::WhitespaceBeforeCloseBracket,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::WhitespaceBeforePunctuation,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MultipleSpacesBeforeOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MultipleSpacesAfterOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::TabBeforeOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::TabAfterOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::TooFewSpacesBeforeInlineComment,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::NoSpaceAfterInlineComment,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::NoSpaceAfterBlockComment,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MultipleLeadingHashesForBlockComment,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MultipleSpacesAfterKeyword,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MissingWhitespace,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MissingWhitespaceAfterKeyword,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MultipleSpacesBeforeKeyword,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundArithmeticOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundBitwiseOrShiftOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundModuloOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::TabAfterKeyword,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::UnexpectedSpacesAroundKeywordParameterEquals,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundParameterEquals,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::WhitespaceBeforeParameters,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::logical_lines::TabBeforeKeyword,
rules::pycodestyle::rules::MultipleImportsOnOneLine,
rules::pycodestyle::rules::ModuleImportNotAtTopOfFile,
rules::pycodestyle::rules::LineTooLong,
@@ -223,6 +223,7 @@ ruff_macros::register_rules!(
rules::flake8_bugbear::rules::ZipWithoutExplicitStrict,
rules::flake8_bugbear::rules::ExceptWithEmptyTuple,
rules::flake8_bugbear::rules::ExceptWithNonExceptionClasses,
rules::flake8_bugbear::rules::ReuseOfGroupbyGenerator,
rules::flake8_bugbear::rules::UnintentionalTypeAnnotation,
// flake8-blind-except
rules::flake8_blind_except::rules::BlindExcept,
@@ -513,16 +514,18 @@ ruff_macros::register_rules!(
rules::flake8_errmsg::rules::FStringInException,
rules::flake8_errmsg::rules::DotFormatInException,
// flake8-pyi
rules::flake8_pyi::rules::UnprefixedTypeParam,
rules::flake8_pyi::rules::ArgumentDefaultInStub,
rules::flake8_pyi::rules::AssignmentDefaultInStub,
rules::flake8_pyi::rules::BadVersionInfoComparison,
rules::flake8_pyi::rules::DocstringInStub,
rules::flake8_pyi::rules::NonEmptyStubBody,
rules::flake8_pyi::rules::PassStatementStubBody,
rules::flake8_pyi::rules::TypeCommentInStub,
rules::flake8_pyi::rules::TypedArgumentDefaultInStub,
rules::flake8_pyi::rules::UnprefixedTypeParam,
rules::flake8_pyi::rules::UnrecognizedPlatformCheck,
rules::flake8_pyi::rules::UnrecognizedPlatformName,
rules::flake8_pyi::rules::PassStatementStubBody,
rules::flake8_pyi::rules::NonEmptyStubBody,
rules::flake8_pyi::rules::DocstringInStub,
rules::flake8_pyi::rules::TypedArgumentDefaultInStub,
rules::flake8_pyi::rules::ArgumentDefaultInStub,
rules::flake8_pyi::rules::TypeCommentInStub,
rules::flake8_pyi::rules::PassInClassBody,
// flake8-pytest-style
rules::flake8_pytest_style::rules::PytestFixtureIncorrectParenthesesStyle,
rules::flake8_pytest_style::rules::PytestFixturePositionalArgs,
@@ -625,6 +628,10 @@ ruff_macros::register_rules!(
rules::flake8_raise::rules::UnnecessaryParenOnRaiseException,
// flake8-self
rules::flake8_self::rules::PrivateMemberAccess,
// flake8-gettext
rules::flake8_gettext::rules::FStringInGetTextFuncCall,
rules::flake8_gettext::rules::FormatInGetTextFuncCall,
rules::flake8_gettext::rules::PrintfInGetTextFuncCall,
// numpy
rules::numpy::rules::NumpyDeprecatedTypeAlias,
rules::numpy::rules::NumpyLegacyRandom,
@@ -775,6 +782,9 @@ pub enum Linter {
/// [flake8-type-checking](https://pypi.org/project/flake8-type-checking/)
#[prefix = "TCH"]
Flake8TypeChecking,
/// [flake8-gettext](https://pypi.org/project/flake8-gettext/)
#[prefix = "INT"]
Flake8GetText,
/// [flake8-unused-arguments](https://pypi.org/project/flake8-unused-arguments/)
#[prefix = "ARG"]
Flake8UnusedArguments,
@@ -906,7 +916,7 @@ impl Rule {
Rule::IOError => LintSource::Io,
Rule::UnsortedImports | Rule::MissingRequiredImport => LintSource::Imports,
Rule::ImplicitNamespacePackage | Rule::InvalidModuleName => LintSource::Filesystem,
#[cfg(debug_assertions)]
#[cfg(feature = "logical_lines")]
Rule::IndentationWithInvalidMultiple
| Rule::IndentationWithInvalidMultipleComment
| Rule::MissingWhitespace


@@ -41,6 +41,7 @@ impl PyprojectDiscovery {
}
/// The strategy for resolving file paths in a `pyproject.toml`.
#[derive(Copy, Clone)]
pub enum Relativity {
/// Resolve file paths relative to the current working directory.
Cwd,


@@ -265,7 +265,7 @@ impl RuleSelector {
}
}
#[derive(EnumIter, PartialEq, Eq, PartialOrd, Ord)]
#[derive(EnumIter, PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
pub(crate) enum Specificity {
All,
LinterGroup,


@@ -1,6 +1,6 @@
use rustpython_parser::ast::Location;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Fix};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::source_code::Locator;
use ruff_python_ast::types::Range;
@@ -62,7 +62,7 @@ pub fn commented_out_code(
if is_standalone_comment(line) && comment_contains_code(line, &settings.task_tags[..]) {
let mut diagnostic = Diagnostic::new(CommentedOutCode, Range::new(start, end));
if autofix.into() && settings.rules.should_fix(Rule::CommentedOutCode) {
diagnostic.amend(Fix::deletion(location, end_location));
diagnostic.set_fix(Edit::deletion(location, end_location));
}
Some(diagnostic)
} else {


@@ -14,13 +14,14 @@ expression: diagnostics
row: 1
column: 10
fix:
content: ""
location:
row: 1
column: 0
end_location:
row: 2
column: 0
edits:
- content: ""
location:
row: 1
column: 0
end_location:
row: 2
column: 0
parent: ~
- kind:
name: CommentedOutCode
@@ -34,13 +35,14 @@ expression: diagnostics
row: 2
column: 22
fix:
content: ""
location:
row: 2
column: 0
end_location:
row: 3
column: 0
edits:
- content: ""
location:
row: 2
column: 0
end_location:
row: 3
column: 0
parent: ~
- kind:
name: CommentedOutCode
@@ -54,13 +56,14 @@ expression: diagnostics
row: 3
column: 6
fix:
content: ""
location:
row: 3
column: 0
end_location:
row: 4
column: 0
edits:
- content: ""
location:
row: 3
column: 0
end_location:
row: 4
column: 0
parent: ~
- kind:
name: CommentedOutCode
@@ -74,13 +77,14 @@ expression: diagnostics
row: 5
column: 13
fix:
content: ""
location:
row: 5
column: 0
end_location:
row: 6
column: 0
edits:
- content: ""
location:
row: 5
column: 0
end_location:
row: 6
column: 0
parent: ~
- kind:
name: CommentedOutCode
@@ -94,12 +98,13 @@ expression: diagnostics
row: 12
column: 16
fix:
content: ""
location:
row: 12
column: 0
end_location:
row: 13
column: 0
edits:
- content: ""
location:
row: 12
column: 0
end_location:
row: 13
column: 0
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 6
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionSlice3
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 7
column: 13
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionSlice3
@@ -39,6 +41,7 @@ expression: diagnostics
end_location:
row: 8
column: 7
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 22
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersion2
@@ -26,6 +27,7 @@ expression: diagnostics
end_location:
row: 5
column: 18
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr3
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 5
column: 11
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr3
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 6
column: 11
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr3
@@ -52,7 +55,8 @@ expression: diagnostics
end_location:
row: 7
column: 11
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr3
@@ -65,6 +69,7 @@ expression: diagnostics
end_location:
row: 8
column: 11
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 7
column: 25
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionInfo0Eq3
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 8
column: 21
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionInfo0Eq3
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 9
column: 25
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionInfo0Eq3
@@ -52,6 +55,7 @@ expression: diagnostics
end_location:
row: 10
column: 21
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 10
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SixPY3
@@ -26,6 +27,7 @@ expression: diagnostics
end_location:
row: 6
column: 6
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 19
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionInfo1CmpInt
@@ -26,6 +27,7 @@ expression: diagnostics
end_location:
row: 5
column: 15
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 22
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionInfoMinorCmpInt
@@ -26,6 +27,7 @@ expression: diagnostics
end_location:
row: 5
column: 18
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 22
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersion0
@@ -26,6 +27,7 @@ expression: diagnostics
end_location:
row: 5
column: 18
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr10
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 5
column: 11
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr10
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 6
column: 11
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr10
@@ -52,7 +55,8 @@ expression: diagnostics
end_location:
row: 7
column: 11
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionCmpStr10
@@ -65,6 +69,7 @@ expression: diagnostics
end_location:
row: 8
column: 11
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: SysVersionSlice1
@@ -26,6 +27,7 @@ expression: diagnostics
end_location:
row: 5
column: 13
fix: ~
fix:
edits: []
parent: ~


@@ -2,12 +2,12 @@ use anyhow::{bail, Result};
use rustpython_parser::ast::Stmt;
use rustpython_parser::{lexer, Mode, Tok};
use ruff_diagnostics::Fix;
use ruff_diagnostics::Edit;
use ruff_python_ast::source_code::Locator;
use ruff_python_ast::types::Range;
/// ANN204
pub fn add_return_annotation(locator: &Locator, stmt: &Stmt, annotation: &str) -> Result<Fix> {
pub fn add_return_annotation(locator: &Locator, stmt: &Stmt, annotation: &str) -> Result<Edit> {
let range = Range::from(stmt);
let contents = locator.slice(range);
@@ -18,7 +18,7 @@ pub fn add_return_annotation(locator: &Locator, stmt: &Stmt, annotation: &str) -
for (start, tok, ..) in lexer::lex_located(contents, Mode::Module, range.location).flatten() {
if seen_lpar && seen_rpar {
if matches!(tok, Tok::Colon) {
return Ok(Fix::insertion(format!(" -> {annotation}"), start));
return Ok(Edit::insertion(format!(" -> {annotation}"), start));
}
}


@@ -6,9 +6,7 @@ use ruff_python_ast::visibility;
use crate::checkers::ast::Checker;
use crate::docstrings::definition::{Definition, DefinitionKind};
pub(super) fn match_function_def(
stmt: &Stmt,
) -> (&str, &Arguments, &Option<Box<Expr>>, &Vec<Stmt>) {
pub(super) fn match_function_def(stmt: &Stmt) -> (&str, &Arguments, Option<&Expr>, &Vec<Stmt>) {
match &stmt.node {
StmtKind::FunctionDef {
name,
@@ -23,7 +21,7 @@ pub(super) fn match_function_def(
returns,
body,
..
} => (name, args, returns, body),
} => (name, args, returns.as_ref().map(|expr| &**expr), body),
_ => panic!("Found non-FunctionDef in match_name"),
}
}
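The `match_function_def` change above flattens the return type from `&Option<Box<Expr>>` to `Option<&Expr>` via `as_ref().map(|expr| &**expr)`. The standard-library shorthand for exactly that chain is `Option::as_deref`, sketched here on `Box<String>` rather than an AST node:

```rust
// `as_deref()` turns `&Option<Box<T>>` into `Option<&T::Target>`,
// equivalent to `as_ref().map(|boxed| &**boxed)` in the diff: callers
// get a flat `Option<&_>` instead of a reference to an Option of a Box.
fn returns_annotation(annotation: &Option<Box<String>>) -> Option<&String> {
    annotation.as_deref()
}

fn main() {
    let some: Option<Box<String>> = Some(Box::new("int".to_string()));
    let none: Option<Box<String>> = None;
    assert_eq!(returns_annotation(&some).map(String::as_str), Some("int"));
    assert_eq!(returns_annotation(&none), None);
}
```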


@@ -1,4 +1,3 @@
use log::error;
use rustpython_parser::ast::{Constant, Expr, ExprKind, Stmt};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Violation};
@@ -456,7 +455,7 @@ fn check_dynamically_typed<F>(
pub fn definition(
checker: &Checker,
definition: &Definition,
visibility: &Visibility,
visibility: Visibility,
) -> Vec<Diagnostic> {
// TODO(charlie): Consider using the AST directly here rather than `Definition`.
// We could adhere more closely to `flake8-annotations` by defining public
@@ -668,12 +667,9 @@ pub fn definition(
helpers::identifier_range(stmt, checker.locator),
);
if checker.patch(diagnostic.kind.rule()) {
match fixes::add_return_annotation(checker.locator, stmt, "None") {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
diagnostic.try_set_fix(|| {
fixes::add_return_annotation(checker.locator, stmt, "None")
});
}
diagnostics.push(diagnostic);
}
@@ -693,12 +689,9 @@ pub fn definition(
let return_type = SIMPLE_MAGIC_RETURN_TYPES.get(name);
if let Some(return_type) = return_type {
if checker.patch(diagnostic.kind.rule()) {
match fixes::add_return_annotation(checker.locator, stmt, return_type) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
diagnostic.try_set_fix(|| {
fixes::add_return_annotation(checker.locator, stmt, return_type)
});
}
}
diagnostics.push(diagnostic);


@@ -13,6 +13,7 @@ expression: diagnostics
end_location:
row: 29
column: 11
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 10
column: 14
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 15
column: 49
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 40
column: 31
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -52,6 +55,7 @@ expression: diagnostics
end_location:
row: 44
column: 69
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 4
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeFunctionArgument
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 4
column: 9
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeFunctionArgument
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 4
column: 12
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingReturnTypeUndocumentedPublicFunction
@@ -52,7 +55,8 @@ expression: diagnostics
end_location:
row: 9
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeFunctionArgument
@@ -65,7 +69,8 @@ expression: diagnostics
end_location:
row: 9
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeFunctionArgument
@@ -78,7 +83,8 @@ expression: diagnostics
end_location:
row: 14
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingReturnTypeUndocumentedPublicFunction
@@ -91,7 +97,8 @@ expression: diagnostics
end_location:
row: 19
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingReturnTypeUndocumentedPublicFunction
@@ -104,7 +111,8 @@ expression: diagnostics
end_location:
row: 24
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -117,7 +125,8 @@ expression: diagnostics
end_location:
row: 44
column: 14
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -130,7 +139,8 @@ expression: diagnostics
end_location:
row: 49
column: 49
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -143,7 +153,8 @@ expression: diagnostics
end_location:
row: 54
column: 26
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -156,7 +167,8 @@ expression: diagnostics
end_location:
row: 54
column: 41
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -169,7 +181,8 @@ expression: diagnostics
end_location:
row: 59
column: 26
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -182,7 +195,8 @@ expression: diagnostics
end_location:
row: 64
column: 41
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeSelf
@@ -195,7 +209,8 @@ expression: diagnostics
end_location:
row: 74
column: 16
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -208,7 +223,8 @@ expression: diagnostics
end_location:
row: 78
column: 31
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -221,7 +237,8 @@ expression: diagnostics
end_location:
row: 82
column: 69
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -234,7 +251,8 @@ expression: diagnostics
end_location:
row: 86
column: 45
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -247,7 +265,8 @@ expression: diagnostics
end_location:
row: 86
column: 61
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -260,7 +279,8 @@ expression: diagnostics
end_location:
row: 90
column: 45
fix: ~
fix:
edits: []
parent: ~
- kind:
name: AnyType
@@ -273,7 +293,8 @@ expression: diagnostics
end_location:
row: 94
column: 61
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeCls
@@ -286,7 +307,8 @@ expression: diagnostics
end_location:
row: 104
column: 15
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeSelf
@@ -299,6 +321,7 @@ expression: diagnostics
end_location:
row: 108
column: 16
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 24
column: 27
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeFunctionArgument
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 24
column: 37
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeFunctionArgument
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 28
column: 37
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingReturnTypeUndocumentedPublicFunction
@@ -52,7 +55,8 @@ expression: diagnostics
end_location:
row: 32
column: 27
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingReturnTypeUndocumentedPublicFunction
@@ -65,6 +69,7 @@ expression: diagnostics
end_location:
row: 43
column: 24
fix: ~
fix:
edits: []
parent: ~


@@ -14,13 +14,14 @@ expression: diagnostics
row: 5
column: 16
fix:
content: " -> None"
location:
row: 5
column: 22
end_location:
row: 5
column: 22
edits:
- content: " -> None"
location:
row: 5
column: 22
end_location:
row: 5
column: 22
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -34,13 +35,14 @@ expression: diagnostics
row: 11
column: 16
fix:
content: " -> None"
location:
row: 11
column: 27
end_location:
row: 11
column: 27
edits:
- content: " -> None"
location:
row: 11
column: 27
end_location:
row: 11
column: 27
parent: ~
- kind:
name: MissingReturnTypePrivateFunction
@@ -53,7 +55,8 @@ expression: diagnostics
end_location:
row: 40
column: 12
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -67,12 +70,13 @@ expression: diagnostics
row: 47
column: 16
fix:
content: " -> None"
location:
row: 47
column: 28
end_location:
row: 47
column: 28
edits:
- content: " -> None"
location:
row: 47
column: 28
end_location:
row: 47
column: 28
parent: ~


@@ -14,13 +14,14 @@ expression: diagnostics
row: 2
column: 15
fix:
content: " -> str"
location:
row: 2
column: 21
end_location:
row: 2
column: 21
edits:
- content: " -> str"
location:
row: 2
column: 21
end_location:
row: 2
column: 21
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -34,13 +35,14 @@ expression: diagnostics
row: 5
column: 16
fix:
content: " -> str"
location:
row: 5
column: 22
end_location:
row: 5
column: 22
edits:
- content: " -> str"
location:
row: 5
column: 22
end_location:
row: 5
column: 22
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -54,13 +56,14 @@ expression: diagnostics
row: 8
column: 15
fix:
content: " -> int"
location:
row: 8
column: 21
end_location:
row: 8
column: 21
edits:
- content: " -> int"
location:
row: 8
column: 21
end_location:
row: 8
column: 21
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -74,13 +77,14 @@ expression: diagnostics
row: 11
column: 23
fix:
content: " -> int"
location:
row: 11
column: 29
end_location:
row: 11
column: 29
edits:
- content: " -> int"
location:
row: 11
column: 29
end_location:
row: 11
column: 29
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -94,13 +98,14 @@ expression: diagnostics
row: 14
column: 16
fix:
content: " -> None"
location:
row: 14
column: 22
end_location:
row: 14
column: 22
edits:
- content: " -> None"
location:
row: 14
column: 22
end_location:
row: 14
column: 22
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -114,13 +119,14 @@ expression: diagnostics
row: 17
column: 15
fix:
content: " -> None"
location:
row: 17
column: 21
end_location:
row: 17
column: 21
edits:
- content: " -> None"
location:
row: 17
column: 21
end_location:
row: 17
column: 21
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -134,13 +140,14 @@ expression: diagnostics
row: 20
column: 16
fix:
content: " -> bool"
location:
row: 20
column: 22
end_location:
row: 20
column: 22
edits:
- content: " -> bool"
location:
row: 20
column: 22
end_location:
row: 20
column: 22
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -154,13 +161,14 @@ expression: diagnostics
row: 23
column: 17
fix:
content: " -> bytes"
location:
row: 23
column: 23
end_location:
row: 23
column: 23
edits:
- content: " -> bytes"
location:
row: 23
column: 23
end_location:
row: 23
column: 23
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -174,13 +182,14 @@ expression: diagnostics
row: 26
column: 18
fix:
content: " -> str"
location:
row: 26
column: 37
end_location:
row: 26
column: 37
edits:
- content: " -> str"
location:
row: 26
column: 37
end_location:
row: 26
column: 37
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -194,13 +203,14 @@ expression: diagnostics
row: 29
column: 20
fix:
content: " -> bool"
location:
row: 29
column: 32
end_location:
row: 29
column: 32
edits:
- content: " -> bool"
location:
row: 29
column: 32
end_location:
row: 29
column: 32
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -214,13 +224,14 @@ expression: diagnostics
row: 32
column: 19
fix:
content: " -> complex"
location:
row: 32
column: 25
end_location:
row: 32
column: 25
edits:
- content: " -> complex"
location:
row: 32
column: 25
end_location:
row: 32
column: 25
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -234,13 +245,14 @@ expression: diagnostics
row: 35
column: 15
fix:
content: " -> int"
location:
row: 35
column: 21
end_location:
row: 35
column: 21
edits:
- content: " -> int"
location:
row: 35
column: 21
end_location:
row: 35
column: 21
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -254,13 +266,14 @@ expression: diagnostics
row: 38
column: 17
fix:
content: " -> float"
location:
row: 38
column: 23
end_location:
row: 38
column: 23
edits:
- content: " -> float"
location:
row: 38
column: 23
end_location:
row: 38
column: 23
parent: ~
- kind:
name: MissingReturnTypeSpecialMethod
@@ -274,12 +287,13 @@ expression: diagnostics
row: 41
column: 17
fix:
content: " -> int"
location:
row: 41
column: 23
end_location:
row: 41
column: 23
edits:
- content: " -> int"
location:
row: 41
column: 23
end_location:
row: 41
column: 23
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 45
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingReturnTypeUndocumentedPublicFunction
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 50
column: 7
fix: ~
fix:
edits: []
parent: ~
- kind:
name: MissingTypeFunctionArgument
@@ -39,6 +41,7 @@ expression: diagnostics
end_location:
row: 59
column: 9
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 2
column: 6
fix: ~
fix:
edits: []
parent: ~
- kind:
name: Assert
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 8
column: 10
fix: ~
fix:
edits: []
parent: ~
- kind:
name: Assert
@@ -39,6 +41,7 @@ expression: diagnostics
end_location:
row: 11
column: 10
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 3
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: ExecBuiltin
@@ -26,6 +27,7 @@ expression: diagnostics
end_location:
row: 5
column: 13
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 6
column: 29
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 7
column: 27
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 9
column: 29
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -52,7 +55,8 @@ expression: diagnostics
end_location:
row: 10
column: 29
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -65,7 +69,8 @@ expression: diagnostics
end_location:
row: 11
column: 29
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -78,7 +83,8 @@ expression: diagnostics
end_location:
row: 13
column: 25
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -91,7 +97,8 @@ expression: diagnostics
end_location:
row: 14
column: 28
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -104,7 +111,8 @@ expression: diagnostics
end_location:
row: 15
column: 29
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -117,7 +125,8 @@ expression: diagnostics
end_location:
row: 17
column: 23
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -130,7 +139,8 @@ expression: diagnostics
end_location:
row: 18
column: 36
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -143,7 +153,8 @@ expression: diagnostics
end_location:
row: 19
column: 60
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -156,7 +167,8 @@ expression: diagnostics
end_location:
row: 20
column: 38
fix: ~
fix:
edits: []
parent: ~
- kind:
name: BadFilePermissions
@@ -169,6 +181,7 @@ expression: diagnostics
end_location:
row: 22
column: 36
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 9
column: 9
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedBindAllInterfaces
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 10
column: 9
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedBindAllInterfaces
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 14
column: 14
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedBindAllInterfaces
@@ -52,6 +55,7 @@ expression: diagnostics
end_location:
row: 18
column: 17
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 13
column: 19
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 14
column: 16
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 15
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -52,7 +55,8 @@ expression: diagnostics
end_location:
row: 16
column: 14
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -65,7 +69,8 @@ expression: diagnostics
end_location:
row: 17
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -78,7 +83,8 @@ expression: diagnostics
end_location:
row: 18
column: 16
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -91,7 +97,8 @@ expression: diagnostics
end_location:
row: 19
column: 18
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -104,7 +111,8 @@ expression: diagnostics
end_location:
row: 20
column: 26
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -117,7 +125,8 @@ expression: diagnostics
end_location:
row: 21
column: 26
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -130,7 +139,8 @@ expression: diagnostics
end_location:
row: 22
column: 19
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -143,7 +153,8 @@ expression: diagnostics
end_location:
row: 23
column: 19
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -156,7 +167,8 @@ expression: diagnostics
end_location:
row: 25
column: 24
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -169,7 +181,8 @@ expression: diagnostics
end_location:
row: 26
column: 20
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -182,7 +195,8 @@ expression: diagnostics
end_location:
row: 27
column: 22
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -195,7 +209,8 @@ expression: diagnostics
end_location:
row: 28
column: 19
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -208,7 +223,8 @@ expression: diagnostics
end_location:
row: 29
column: 22
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -221,7 +237,8 @@ expression: diagnostics
end_location:
row: 30
column: 21
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -234,7 +251,8 @@ expression: diagnostics
end_location:
row: 31
column: 23
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -247,7 +265,8 @@ expression: diagnostics
end_location:
row: 32
column: 31
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -260,7 +279,8 @@ expression: diagnostics
end_location:
row: 33
column: 31
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -273,7 +293,8 @@ expression: diagnostics
end_location:
row: 37
column: 23
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -286,7 +307,8 @@ expression: diagnostics
end_location:
row: 41
column: 27
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -299,7 +321,8 @@ expression: diagnostics
end_location:
row: 42
column: 24
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -312,7 +335,8 @@ expression: diagnostics
end_location:
row: 43
column: 25
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -325,7 +349,8 @@ expression: diagnostics
end_location:
row: 44
column: 22
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -338,7 +363,8 @@ expression: diagnostics
end_location:
row: 45
column: 25
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -351,7 +377,8 @@ expression: diagnostics
end_location:
row: 46
column: 24
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -364,7 +391,8 @@ expression: diagnostics
end_location:
row: 47
column: 26
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -377,7 +405,8 @@ expression: diagnostics
end_location:
row: 49
column: 20
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -390,7 +419,8 @@ expression: diagnostics
end_location:
row: 50
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -403,7 +433,8 @@ expression: diagnostics
end_location:
row: 51
column: 18
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -416,7 +447,8 @@ expression: diagnostics
end_location:
row: 52
column: 15
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -429,7 +461,8 @@ expression: diagnostics
end_location:
row: 53
column: 18
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -442,7 +475,8 @@ expression: diagnostics
end_location:
row: 54
column: 17
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -455,7 +489,8 @@ expression: diagnostics
end_location:
row: 55
column: 19
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -468,7 +503,8 @@ expression: diagnostics
end_location:
row: 56
column: 28
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -481,7 +517,8 @@ expression: diagnostics
end_location:
row: 58
column: 18
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -494,7 +531,8 @@ expression: diagnostics
end_location:
row: 61
column: 18
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordString
@@ -507,6 +545,7 @@ expression: diagnostics
end_location:
row: 64
column: 18
fix: ~
fix:
edits: []
parent: ~


@@ -13,6 +13,7 @@ expression: diagnostics
end_location:
row: 14
column: 25
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 5
column: 37
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordDefault
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 13
column: 53
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordDefault
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 21
column: 46
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordDefault
@@ -52,7 +55,8 @@ expression: diagnostics
end_location:
row: 29
column: 47
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedPasswordDefault
@@ -65,6 +69,7 @@ expression: diagnostics
end_location:
row: 29
column: 69
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 5
column: 20
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedTempFile
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 8
column: 24
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedTempFile
@@ -39,6 +41,7 @@ expression: diagnostics
end_location:
row: 11
column: 30
fix: ~
fix:
edits: []
parent: ~


@@ -13,7 +13,8 @@ expression: diagnostics
end_location:
row: 5
column: 20
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedTempFile
@@ -26,7 +27,8 @@ expression: diagnostics
end_location:
row: 8
column: 24
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedTempFile
@@ -39,7 +41,8 @@ expression: diagnostics
end_location:
row: 11
column: 30
fix: ~
fix:
edits: []
parent: ~
- kind:
name: HardcodedTempFile
@@ -52,6 +55,7 @@ expression: diagnostics
end_location:
row: 15
column: 20
fix: ~
fix:
edits: []
parent: ~
