Compare commits


19 Commits

Author SHA1 Message Date
Charlie Marsh
3a3a5fcd81 Remove -dev suffix from flake8_to_ruff 2023-01-15 22:45:14 -05:00
Charlie Marsh
e8577d5e26 Bump version to 0.0.223 2023-01-15 22:44:01 -05:00
Charlie Marsh
bcb1e6ba20 Add flake8-commas to the README 2023-01-15 22:43:29 -05:00
Charlie Marsh
15403522c1 Avoid triggering SIM117 for async with statements (#1903)
Actually, it looks like _none_ of the existing rules should be triggered on async `with` statements.

Closes #1902.
2023-01-15 21:42:36 -05:00
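The check can be sketched with a toy AST (a hedged simplification: `Stmt` here is a stand-in, not RustPython's AST or ruff's actual types): the rule should only fire when both the outer and the nested `with` are synchronous.

```rust
// Toy model of the SIM117-style check: flag a sync `with` whose body is
// exactly one nested sync `with`, and nothing else. Async `with`
// statements never trigger it.
#[derive(Debug)]
enum Stmt {
    With { is_async: bool, body: Vec<Stmt> },
    Other,
}

fn multiple_with_statements(stmt: &Stmt) -> bool {
    if let Stmt::With { is_async: false, body } = stmt {
        // Only a body consisting solely of another sync `with` is collapsible.
        if let [Stmt::With { is_async: false, .. }] = body.as_slice() {
            return true;
        }
    }
    false
}

fn main() {
    let nested_sync = Stmt::With {
        is_async: false,
        body: vec![Stmt::With { is_async: false, body: vec![Stmt::Other] }],
    };
    let nested_async = Stmt::With {
        is_async: true,
        body: vec![Stmt::With { is_async: false, body: vec![Stmt::Other] }],
    };
    assert!(multiple_with_statements(&nested_sync));
    assert!(!multiple_with_statements(&nested_async));
}
```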
messense
cb4f305ced Lock stdout once when printing diagnostics (#1901)
https://doc.rust-lang.org/stable/std/io/struct.Stdout.html

> Each handle shares a global buffer of data to be written to the standard output stream.
> Access is also synchronized via a lock and
> explicit control over locking is available via the [`lock`](https://doc.rust-lang.org/stable/std/io/struct.Stdout.html#method.lock) method.
2023-01-15 21:04:00 -05:00
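A minimal sketch of the pattern (not ruff's actual printer code): acquire the stdout lock once, wrap it in a `BufWriter`, and flush at the end, instead of re-locking on every `println!`.

```rust
use std::io::{self, BufWriter, Write};

// Write all diagnostics through one locked, buffered handle; `Stdout::lock`
// takes the global lock a single time, and `BufWriter` batches the writes.
fn write_diagnostics<W: Write>(out: &mut W, messages: &[&str]) -> io::Result<()> {
    for msg in messages {
        writeln!(out, "{msg}")?;
    }
    out.flush()
}

fn main() -> io::Result<()> {
    let stdout = io::stdout();
    let mut out = BufWriter::new(stdout.lock());
    write_diagnostics(&mut out, &["E501 line too long", "F401 unused import"])
}
```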
Charlie Marsh
d71a615b18 Buffer diagnostic writes to stdout (#1900) 2023-01-15 19:34:15 -05:00
Charlie Marsh
dfc2a34878 Remove rogue println 2023-01-15 18:59:59 -05:00
Charlie Marsh
7608087776 Don't require docstrings for setters and deleters (#1899) 2023-01-15 18:57:38 -05:00
Charlie Marsh
228f033e15 Skip noqa checker if no diagnostics are found (#1898) 2023-01-15 18:53:00 -05:00
Martin Fischer
d75d6d7c7c refactor: Split CliSettings from Settings
We want to automatically derive Hash for the library settings, which
requires us to split off all the settings unused by the library
(since these shouldn't affect the hash used by ruff_cli::cache).
2023-01-15 15:19:42 -05:00
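The split can be sketched as follows (field names are illustrative, not ruff's actual settings): only the library settings derive `Hash`, so CLI-only options such as the cache directory can never perturb the cache key.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Library settings: these feed the cache key, so they derive Hash.
#[derive(Hash)]
struct Settings {
    line_length: usize,
}

// CLI-only settings: deliberately excluded from the hash.
struct CliSettings {
    cache_dir: String,
}

struct AllSettings {
    lib: Settings,
    cli: CliSettings,
}

fn cache_key(settings: &Settings) -> u64 {
    let mut hasher = DefaultHasher::new();
    settings.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = AllSettings {
        lib: Settings { line_length: 88 },
        cli: CliSettings { cache_dir: "/tmp/a".into() },
    };
    let b = AllSettings {
        lib: Settings { line_length: 88 },
        cli: CliSettings { cache_dir: "/tmp/b".into() },
    };
    // Different cache directories, identical cache keys.
    assert_eq!(cache_key(&a.lib), cache_key(&b.lib));
}
```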
Martin Fischer
ef80ab205c Mark Settings::for_rule(s) as test-only 2023-01-15 15:19:42 -05:00
Ran Benita
d3041587ad Implement flake8-commas (#1872)
Implements [flake8-commas](https://github.com/PyCQA/flake8-commas). Fixes #1058.

The plugin is mostly redundant with Black (and also deprecated upstream), but it is very useful for projects that can't or won't use an auto-formatter.

This linter works on tokens. Before porting to Rust, I cleaned up the Python code ([link](https://gist.github.com/bluetech/7c5dcbdec4a73dd5a74d4bc09c72b8b9)) and made sure the tests pass. In the Rust version I tried to add explanatory comments, to the best of my understanding of the original logic.

Some changes I did make:

- Got rid of rule C814 - "missing trailing comma in Python 2". Ruff doesn't support Python 2.
- Merged rules C815 - "missing trailing comma in Python 3.5+" and C816 - "missing trailing comma in Python 3.6+" into C812 - "missing trailing comma". These Python versions are outdated, so I didn't think the distinction was worth the complication.
- Added autofixes for C812 and C819.

Autofix is missing for C818 - "trailing comma on bare tuple prohibited". It needs to turn e.g. `x = 1,` into `x = (1,)`, which is a bit difficult to do with tokens only, so I skipped it for now.

I ran the rules on cpython/Lib and on a big internal code base and it works as intended (though I only sampled the diffs).
2023-01-15 14:03:32 -05:00
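As a rough illustration of the token-based approach (a toy `Tok` enum over one flat bracketed sequence; the real rule additionally handles nesting, comments, slices, and unpacking), a COM812-style check only needs each token's line number:

```rust
// Simplified sketch of "trailing comma missing": a trailing comma is
// required only when the closing bracket sits on a later line than the
// last element, and that element is not already a comma.
#[derive(Debug, PartialEq)]
enum Tok {
    Open,
    Close,
    Comma,
    Item,
}

fn missing_trailing_comma(tokens: &[(Tok, usize)]) -> bool {
    let Some(close) = tokens.iter().rposition(|(tok, _)| *tok == Tok::Close) else {
        return false;
    };
    let Some((last, last_line)) = tokens[..close].last() else {
        return false;
    };
    tokens[close].1 > *last_line && *last != Tok::Comma
}

fn main() {
    // `[\n  1\n]` -> missing trailing comma
    let bad = [(Tok::Open, 1), (Tok::Item, 2), (Tok::Close, 3)];
    // `[\n  1,\n]` -> fine
    let good = [(Tok::Open, 1), (Tok::Item, 2), (Tok::Comma, 2), (Tok::Close, 3)];
    assert!(missing_trailing_comma(&bad));
    assert!(!missing_trailing_comma(&good));
}
```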
Harutaka Kawamura
8d912404b7 Use more precise error ranges for RET505~508 (#1895) 2023-01-15 13:54:24 -05:00
Tom Fryers
85bdb45eca Improve magic value message wording (#1892)
The message previously specified 'number', but the error applies to more types.
2023-01-15 12:53:02 -05:00
messense
c7d0d26981 Update add plugin/rule scripts (#1889)
Adjusted some file locations and changed to use [`pathlib`](https://docs.python.org/3/library/pathlib.html) instead of `os.path`.
2023-01-15 12:49:42 -05:00
Charlie Marsh
5c6753e69e Remove some Clippy allows (#1888) 2023-01-15 02:32:36 -05:00
Charlie Marsh
3791ca721a Add a dedicated token indexer for continuations and comments (#1886)
The primary motivation is that we can now robustly detect `\` continuations due to the addition of `Tok::NonLogicalNewline`. This PR generalizes the approach we took to comments (track all lines that contain any comments), and applies it to continuations too.
2023-01-15 01:57:31 -05:00
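The indexing idea can be sketched in simplified form (a stand-in `Tok` enum rather than the RustPython tokens the PR actually consumes): walk the token stream once and record which lines contain comments or explicit `\` continuations.

```rust
use std::collections::BTreeSet;

// Toy token indexer: one pass over (token, line) pairs, producing the
// sets of lines that hold comments and continuations.
#[derive(Debug)]
enum Tok {
    Comment,
    Continuation,
    Other,
}

fn index_lines(tokens: &[(Tok, usize)]) -> (BTreeSet<usize>, BTreeSet<usize>) {
    let mut comment_lines = BTreeSet::new();
    let mut continuation_lines = BTreeSet::new();
    for (tok, line) in tokens {
        match tok {
            Tok::Comment => {
                comment_lines.insert(*line);
            }
            Tok::Continuation => {
                continuation_lines.insert(*line);
            }
            Tok::Other => {}
        }
    }
    (comment_lines, continuation_lines)
}

fn main() {
    let tokens = [(Tok::Comment, 1), (Tok::Other, 2), (Tok::Continuation, 2)];
    let (comments, continuations) = index_lines(&tokens);
    assert!(comments.contains(&1));
    assert!(continuations.contains(&2));
}
```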
Charlie Marsh
2c644619e0 Convert confusable violations to named fields (#1887)
See: #1871.
2023-01-15 01:56:18 -05:00
Martin Fischer
81996f1bcc Convert define_rule_mapping! to a procedural macro
define_rule_mapping! was previously implemented as a declarative macro,
which was however partially relying on an origin_by_code! proc macro
because declarative macros cannot match on substrings of identifiers.

Currently all define_rule_mapping! lines look like the following:

    TID251 => violations::BannedApi,
    TID252 => violations::BannedRelativeImport,

We want to break up violations.rs, moving the violation definitions to
the respective rule modules. To do this we want to change the previous
lines to:

    TID251 => rules::flake8_tidy_imports::banned_api::BannedApi,
    TID252 => rules::flake8_tidy_imports::relative_imports::RelativeImport,

This, however, doesn't work because the define_rule_mapping! macro is
currently defined as:

    ($($code:ident => $mod:ident::$name:ident,)+) => { ... }

That is, it only supported $module::$name, not longer paths spanning
multiple modules. While we could switch to `=> $path:path`[1], we could
then no longer access the last path segment, which we need because we
use it for the DiagnosticKind variant names. And
`$path:path::$last:ident` doesn't work either because it would be
ambiguous (Rust wouldn't know where the path ends ... so path fragments
have to be followed by some punctuation/keyword that may not be part of
paths). And we also cannot just introduce a procedural macro like
path_basename!(...) because the following is not valid Rust code:

    enum Foo { foo!(...), }

(macros cannot be called in the place where you define variants.)

So we have to convert define_rule_mapping! into a proc macro in order to
support paths of arbitrary length and this commit implements that.

[1]: https://doc.rust-lang.org/reference/macros-by-example.html#metavariables
2023-01-15 01:54:57 -05:00
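The limitation described above can be reproduced with a self-contained toy version of the old declarative macro (the metavariable is renamed to `$module`, and the body is reduced to a `stringify!` so the example compiles on its own):

```rust
// Toy version of the old define_rule_mapping! matcher. It accepts exactly
// one module segment before the type name; an input such as
// `TID251 => rules::flake8_tidy_imports::banned_api::BannedApi,` would
// fail to match this pattern, which is why the macro had to become
// procedural to support arbitrary-length paths.
macro_rules! define_rule_mapping {
    ($($code:ident => $module:ident::$name:ident,)+) => {
        $(const $code: &str = stringify!($name);)+
    };
}

define_rule_mapping! {
    TID251 => violations::BannedApi,
    TID252 => violations::BannedRelativeImport,
}

fn main() {
    // The last path segment stays accessible, which is what the real
    // macro needed for the DiagnosticKind variant names.
    assert_eq!(TID251, "BannedApi");
    assert_eq!(TID252, "BannedRelativeImport");
}
```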
93 changed files with 2811 additions and 558 deletions

View File

@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.222
rev: v0.0.223
hooks:
- id: ruff

Cargo.lock generated
View File

@@ -735,7 +735,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.222-dev.0"
version = "0.0.223"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1906,7 +1906,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.222"
version = "0.0.223"
dependencies = [
"anyhow",
"bitflags",
@@ -1958,7 +1958,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.222"
version = "0.0.223"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1995,7 +1995,7 @@ dependencies = [
[[package]]
name = "ruff_dev"
version = "0.0.222"
version = "0.0.223"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -2016,7 +2016,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
version = "0.0.222"
version = "0.0.223"
dependencies = [
"once_cell",
"proc-macro2",

View File

@@ -8,7 +8,7 @@ default-members = [".", "ruff_cli"]
[package]
name = "ruff"
version = "0.0.222"
version = "0.0.223"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -46,7 +46,7 @@ once_cell = { version = "1.16.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix_paths_on_wasm"] }
regex = { version = "1.6.0" }
ropey = { version = "1.5.0", features = ["cr_lines", "simd"], default-features = false }
ruff_macros = { version = "0.0.222", path = "ruff_macros" }
ruff_macros = { version = "0.0.223", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }

View File

@@ -123,6 +123,7 @@ of [Conda](https://docs.conda.io/en/latest/):
1. [pygrep-hooks (PGH)](#pygrep-hooks-pgh)
1. [Pylint (PLC, PLE, PLR, PLW)](#pylint-plc-ple-plr-plw)
1. [flake8-pie (PIE)](#flake8-pie-pie)
1. [flake8-commas (COM)](#flake8-commas-com)
1. [Ruff-specific rules (RUF)](#ruff-specific-rules-ruf)<!-- End auto-generated table of contents. -->
1. [Editor Integrations](#editor-integrations)
1. [FAQ](#faq)
@@ -184,7 +185,7 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.222'
rev: 'v0.0.223'
hooks:
- id: ruff
# Respect `exclude` and `extend-exclude` settings.
@@ -1111,7 +1112,7 @@ For more, see [Pylint](https://pypi.org/project/pylint/2.15.7/) on PyPI.
| PLR0402 | ConsiderUsingFromImport | Use `from ... import ...` in lieu of alias | |
| PLR1701 | ConsiderMergingIsinstance | Merge these isinstance calls: `isinstance(..., (...))` | |
| PLR1722 | UseSysExit | Use `sys.exit()` instead of `exit` | 🛠 |
| PLR2004 | MagicValueComparison | Magic number used in comparison, consider replacing magic with a constant variable | |
| PLR2004 | MagicValueComparison | Magic value used in comparison, consider replacing magic with a constant variable | |
#### Warning (PLW)
| Code | Name | Message | Fix |
@@ -1129,6 +1130,16 @@ For more, see [flake8-pie](https://pypi.org/project/flake8-pie/0.16.0/) on PyPI.
| PIE794 | DupeClassFieldDefinitions | Class field `...` is defined multiple times | 🛠 |
| PIE807 | PreferListBuiltin | Prefer `list()` over useless lambda | 🛠 |
### flake8-commas (COM)
For more, see [flake8-commas](https://pypi.org/project/flake8-commas/2.1.0/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| COM812 | TrailingCommaMissing | Trailing comma missing | 🛠 |
| COM818 | TrailingCommaOnBareTupleProhibited | Trailing comma on bare tuple prohibited | |
| COM819 | TrailingCommaProhibited | Trailing comma prohibited | 🛠 |
### Ruff-specific rules (RUF)
| Code | Name | Message | Fix |
@@ -1413,6 +1424,7 @@ natively, including:
- [`flake8-boolean-trap`](https://pypi.org/project/flake8-boolean-trap/)
- [`flake8-bugbear`](https://pypi.org/project/flake8-bugbear/)
- [`flake8-builtins`](https://pypi.org/project/flake8-builtins/)
- [`flake8-commas`](https://pypi.org/project/flake8-commas/)
- [`flake8-comprehensions`](https://pypi.org/project/flake8-comprehensions/)
- [`flake8-datetimez`](https://pypi.org/project/flake8-datetimez/)
- [`flake8-debugger`](https://pypi.org/project/flake8-debugger/)
@@ -1478,6 +1490,7 @@ Today, Ruff can be used to replace Flake8 when used with any of the following pl
- [`flake8-boolean-trap`](https://pypi.org/project/flake8-boolean-trap/)
- [`flake8-bugbear`](https://pypi.org/project/flake8-bugbear/)
- [`flake8-builtins`](https://pypi.org/project/flake8-builtins/)
- [`flake8-commas`](https://pypi.org/project/flake8-commas/)
- [`flake8-comprehensions`](https://pypi.org/project/flake8-comprehensions/)
- [`flake8-datetimez`](https://pypi.org/project/flake8-datetimez/)
- [`flake8-debugger`](https://pypi.org/project/flake8-debugger/)

View File

@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
version = "0.0.222"
version = "0.0.223"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.222"
version = "0.0.223"
dependencies = [
"anyhow",
"bincode",

View File

@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.222-dev.0"
version = "0.0.223"
edition = "2021"
[dependencies]

View File

@@ -0,0 +1,45 @@
The MIT License (MIT)
Copyright (c) 2017 Thomas Grainger.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Portions of this flake8-commas Software may utilize the following
copyrighted material, the use of which is hereby acknowledged.
Original flake8-commas: https://github.com/trevorcreech/flake8-commas/commit/e8563b71b1d5442e102c8734c11cb5202284293d
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -7,7 +7,7 @@ build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.222"
version = "0.0.223"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },

View File

@@ -0,0 +1,628 @@
# ==> bad_function_call.py <==
bad_function_call(
param1='test',
param2='test'
)
# ==> bad_list.py <==
bad_list = [
1,
2,
3
]
bad_list_with_comment = [
1,
2,
3
# still needs a comma!
]
bad_list_with_extra_empty = [
1,
2,
3
]
# ==> bare.py <==
bar = 1, 2
foo = 1
foo = (1,)
foo = 1,
bar = 1; foo = bar,
foo = (
3,
4,
)
foo = 3,
class A(object):
foo = 3
bar = 10,
foo_bar = 2
a = ('a',)
from foo import bar, baz
group_by = function_call('arg'),
group_by = ('foobar' * 3),
def foo():
return False,
# ==> callable_before_parenth_form.py <==
def foo(
bar,
):
pass
{'foo': foo}['foo'](
bar
)
{'foo': foo}['foo'](
bar,
)
(foo)(
bar
)
(foo)[0](
bar,
)
[foo][0](
bar
)
[foo][0](
bar,
)
# ==> comment_good_dict.py <==
multiline_good_dict = {
"good": 123, # this is a good number
}
# ==> dict_comprehension.py <==
not_a_dict = {
x: y
for x, y in ((1, 2), (3, 4))
}
# ==> good_empty_comma_context.py <==
def func2(
):
pass
func2(
)
func2(
)
[
]
[
]
(
)
(
)
{
}
# ==> good_list.py <==
stuff = [
'a',
'b',
# more stuff will go here
]
more_stuff = [
'a',
'b',
]
# ==> keyword_before_parenth_form/base_bad.py <==
from x import (
y
)
assert(
SyntaxWarning,
ThrownHere,
Anyway
)
# async await is fine outside an async def
# ruff: RustPython tokenizer treats async/await as keywords, not applicable.
# def await(
# foo
# ):
# async(
# foo
# )
# def async(
# foo
# ):
# await(
# foo
# )
# ==> keyword_before_parenth_form/base.py <==
from x import (
y,
)
assert(
SyntaxWarning,
ThrownHere,
Anyway,
)
assert (
foo
)
assert (
foo and
bar
)
if(
foo and
bar
):
pass
elif(
foo and
bar
):
pass
for x in(
[1,2,3]
):
print(x)
(x for x in (
[1, 2, 3]
))
(
'foo'
) is (
'foo'
)
if (
foo and
bar
) or not (
foo
) or (
spam
):
pass
def xyz():
raise(
Exception()
)
def abc():
return(
3
)
while(
False
):
pass
with(
loop
):
pass
def foo():
yield (
"foo"
)
# async await is fine outside an async def
# ruff: RustPython tokenizer treats async/await as keywords, not applicable.
# def await(
# foo,
# ):
# async(
# foo,
# )
# def async(
# foo,
# ):
# await(
# foo,
# )
# ==> keyword_before_parenth_form/py3.py <==
# Syntax error in Py2
def foo():
yield from (
foo
)
# ==> list_comprehension.py <==
not_a_list = [
s.strip()
for s in 'foo, bar, baz'.split(',')
]
# ==> multiline_bad_dict.py <==
multiline_bad_dict = {
"bad": 123
}
# ==> multiline_bad_function_def.py <==
def func_good(
a = 3,
b = 2):
pass
def func_bad(
a = 3,
b = 2
):
pass
# ==> multiline_bad_function_one_param.py <==
def func(
a = 3
):
pass
func(
a = 3
)
# ==> multiline_bad_or_dict.py <==
multiline_bad_or_dict = {
"good": True or False,
"bad": 123
}
# ==> multiline_good_dict.py <==
multiline_good_dict = {
"good": 123,
}
# ==> multiline_good_single_keyed_for_dict.py <==
good_dict = {
"good": x for x in y
}
# ==> multiline_if.py <==
if (
foo
and bar
):
print("Baz")
# ==> multiline_index_access.py <==
multiline_index_access[
"good"
]
multiline_index_access_after_function()[
"good"
]
multiline_index_access_after_inline_index_access['first'][
"good"
]
multiline_index_access[
"probably fine",
]
[0, 1, 2][
"good"
]
[0, 1, 2][
"probably fine",
]
multiline_index_access[
"probably fine",
"not good"
]
multiline_index_access[
"fine",
"fine",
:
"not good"
]
# ==> multiline_string.py <==
s = (
'this' +
'is a string'
)
s2 = (
'this'
'is a also a string'
)
t = (
'this' +
'is a tuple',
)
t2 = (
'this'
'is also a tuple',
)
# ==> multiline_subscript_slice.py <==
multiline_index_access[
"fine",
"fine"
:
"not fine"
]
multiline_index_access[
"fine"
"fine"
:
"fine"
:
"fine"
]
multiline_index_access[
"fine"
"fine",
:
"fine",
:
"fine",
]
multiline_index_access[
"fine"
"fine",
:
"fine"
:
"fine",
"not fine"
]
multiline_index_access[
"fine"
"fine",
:
"fine",
"fine"
:
"fine",
]
multiline_index_access[
lambda fine,
fine,
fine: (0,)
:
lambda fine,
fine,
fine: (0,),
"fine"
:
"fine",
]
# ==> one_line_dict.py <==
one_line_dict = {"good": 123}
# ==> parenth_form.py <==
parenth_form = (
a +
b +
c
)
parenth_form_with_lambda = (
lambda x, y: 0
)
parenth_form_with_default_lambda = (
lambda x=(
lambda
x,
y,
:
0
),
y = {a: b},
:
0
)
# ==> prohibited.py <==
foo = ['a', 'b', 'c',]
bar = { a: b,}
def bah(ham, spam,):
pass
(0,)
(0, 1,)
foo = ['a', 'b', 'c', ]
bar = { a: b, }
def bah(ham, spam, ):
pass
(0, )
(0, 1, )
image[:, :, 0]
image[:,]
image[:,:,]
lambda x, :
# ==> unpack.py <==
def function(
foo,
bar,
**kwargs
):
pass
def function(
foo,
bar,
*args
):
pass
def function(
foo,
bar,
*args,
extra_kwarg
):
pass
result = function(
foo,
bar,
**kwargs
)
result = function(
foo,
bar,
**not_called_kwargs
)
def foo(
ham,
spam,
*args,
kwarg_only
):
pass
# In python 3.5 if it's not a function def, commas are mandatory.
foo(
**kwargs
)
{
**kwargs
}
(
*args
)
{
*args
}
[
*args
]
def foo(
ham,
spam,
*args
):
pass
def foo(
ham,
spam,
**kwargs
):
pass
def foo(
ham,
spam,
*args,
kwarg_only
):
pass
# In python 3.5 if it's not a function def, commas are mandatory.
foo(
**kwargs,
)
{
**kwargs,
}
(
*args,
)
{
*args,
}
[
*args,
]
result = function(
foo,
bar,
**{'ham': spam}
)

View File

@@ -16,3 +16,15 @@ with A() as a:
with B() as b:
print("hello")
a()
async with A() as a:
with B() as b:
print("hello")
with A() as a:
async with B() as b:
print("hello")
async with A() as a:
async with B() as b:
print("hello")

View File

@@ -0,0 +1,17 @@
class PropertyWithSetter:
@property
def foo(self) -> str:
"""Docstring for foo."""
return "foo"
@foo.setter
def foo(self, value: str) -> None:
pass
@foo.deleter
def foo(self):
pass
@foo
def foo(self, value: str) -> None:
pass

View File

@@ -1156,6 +1156,12 @@
"C9",
"C90",
"C901",
"COM",
"COM8",
"COM81",
"COM812",
"COM818",
"COM819",
"D",
"D1",
"D10",

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.222"
version = "0.0.223"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"

View File

@@ -9,7 +9,7 @@ use filetime::FileTime;
use log::error;
use path_absolutize::Absolutize;
use ruff::message::Message;
use ruff::settings::{flags, Settings};
use ruff::settings::{flags, AllSettings, Settings};
use serde::{Deserialize, Serialize};
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
@@ -80,10 +80,14 @@ fn read_sync(cache_dir: &Path, key: u64) -> Result<Vec<u8>, std::io::Error> {
pub fn get<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
settings: &AllSettings,
autofix: flags::Autofix,
) -> Option<Vec<Message>> {
let encoded = read_sync(&settings.cache_dir, cache_key(path, settings, autofix)).ok()?;
let encoded = read_sync(
&settings.cli.cache_dir,
cache_key(path, &settings.lib, autofix),
)
.ok()?;
let (mtime, messages) = match bincode::deserialize::<CheckResult>(&encoded[..]) {
Ok(CheckResult {
metadata: CacheMetadata { mtime },
@@ -104,7 +108,7 @@ pub fn get<P: AsRef<Path>>(
pub fn set<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
settings: &AllSettings,
autofix: flags::Autofix,
messages: &[Message],
) {
@@ -115,8 +119,8 @@ pub fn set<P: AsRef<Path>>(
messages,
};
if let Err(e) = write_sync(
&settings.cache_dir,
cache_key(path, settings, autofix),
&settings.cli.cache_dir,
cache_key(path, &settings.lib, autofix),
&bincode::serialize(&check_result).unwrap(),
) {
error!("Failed to write to cache: {e:?}");

View File

@@ -56,19 +56,19 @@ pub fn run(
if matches!(cache, flags::Cache::Enabled) {
match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => {
if let Err(e) = cache::init(&settings.cache_dir) {
if let Err(e) = cache::init(&settings.cli.cache_dir) {
error!(
"Failed to initialize cache at {}: {e:?}",
settings.cache_dir.to_string_lossy()
settings.cli.cache_dir.to_string_lossy()
);
}
}
PyprojectDiscovery::Hierarchical(default) => {
for settings in std::iter::once(default).chain(resolver.iter()) {
if let Err(e) = cache::init(&settings.cache_dir) {
if let Err(e) = cache::init(&settings.cli.cache_dir) {
error!(
"Failed to initialize cache at {}: {e:?}",
settings.cache_dir.to_string_lossy()
settings.cli.cache_dir.to_string_lossy()
);
}
}
@@ -97,7 +97,7 @@ pub fn run(
.parent()
.and_then(|parent| package_roots.get(parent))
.and_then(|package| *package);
let settings = resolver.resolve(path, pyproject_strategy);
let settings = resolver.resolve_all(path, pyproject_strategy);
lint_path(path, package, settings, cache, autofix)
.map_err(|e| (Some(path.to_owned()), e.to_string()))
}
@@ -171,9 +171,9 @@ pub fn run_stdin(
};
let package_root = filename
.and_then(Path::parent)
.and_then(|path| packaging::detect_package_root(path, &settings.namespace_packages));
.and_then(|path| packaging::detect_package_root(path, &settings.lib.namespace_packages));
let stdin = read_from_stdin()?;
let mut diagnostics = lint_stdin(filename, package_root, &stdin, settings, autofix)?;
let mut diagnostics = lint_stdin(filename, package_root, &stdin, &settings.lib, autofix)?;
diagnostics.messages.sort_unstable();
Ok(diagnostics)
}

View File

@@ -9,7 +9,7 @@ use anyhow::Result;
use log::debug;
use ruff::linter::{lint_fix, lint_only};
use ruff::message::Message;
use ruff::settings::{flags, Settings};
use ruff::settings::{flags, AllSettings, Settings};
use ruff::{fix, fs};
use similar::TextDiff;
@@ -38,12 +38,12 @@ impl AddAssign for Diagnostics {
pub fn lint_path(
path: &Path,
package: Option<&Path>,
settings: &Settings,
settings: &AllSettings,
cache: flags::Cache,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Validate the `Settings` and return any errors.
settings.validate()?;
settings.lib.validate()?;
// Check the cache.
// TODO(charlie): `fixer::Mode::Apply` and `fixer::Mode::Diff` both have
@@ -69,7 +69,7 @@ pub fn lint_path(
// Lint the file.
let (messages, fixed) = if matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff) {
let (transformed, fixed, messages) = lint_fix(&contents, path, package, settings)?;
let (transformed, fixed, messages) = lint_fix(&contents, path, package, &settings.lib)?;
if fixed > 0 {
if matches!(autofix, fix::FixMode::Apply) {
write(path, transformed)?;
@@ -85,7 +85,7 @@ pub fn lint_path(
}
(messages, fixed)
} else {
let messages = lint_only(&contents, path, package, settings, autofix.into())?;
let messages = lint_only(&contents, path, package, &settings.lib, autofix.into())?;
let fixed = 0;
(messages, fixed)
};

View File

@@ -16,8 +16,8 @@ use ::ruff::resolver::{
resolve_settings_with_processor, ConfigProcessor, FileDiscovery, PyprojectDiscovery, Relativity,
};
use ::ruff::settings::configuration::Configuration;
use ::ruff::settings::pyproject;
use ::ruff::settings::types::SerializationFormat;
use ::ruff::settings::{pyproject, Settings};
use ::ruff::{fix, fs, warn_user_once};
use anyhow::Result;
use clap::{CommandFactory, Parser};
@@ -26,6 +26,7 @@ use colored::Colorize;
use notify::{recommended_watcher, RecursiveMode, Watcher};
use path_absolutize::path_dedot;
use printer::{Printer, Violations};
use ruff::settings::{AllSettings, CliSettings};
mod cache;
mod cli;
@@ -48,7 +49,7 @@ fn resolve(
// First priority: if we're running in isolated mode, use the default settings.
let mut config = Configuration::default();
overrides.process_config(&mut config);
let settings = Settings::from_configuration(config, &path_dedot::CWD)?;
let settings = AllSettings::from_configuration(config, &path_dedot::CWD)?;
Ok(PyprojectDiscovery::Fixed(settings))
} else if let Some(pyproject) = config {
// Second priority: the user specified a `pyproject.toml` file. Use that
@@ -82,7 +83,7 @@ fn resolve(
// as the "default" settings.)
let mut config = Configuration::default();
overrides.process_config(&mut config);
let settings = Settings::from_configuration(config, &path_dedot::CWD)?;
let settings = AllSettings::from_configuration(config, &path_dedot::CWD)?;
Ok(PyprojectDiscovery::Hierarchical(settings))
}
}
@@ -113,35 +114,31 @@ pub fn main() -> Result<ExitCode> {
// Validate the `Settings` and return any errors.
match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.validate()?,
PyprojectDiscovery::Hierarchical(settings) => settings.validate()?,
PyprojectDiscovery::Fixed(settings) => settings.lib.validate()?,
PyprojectDiscovery::Hierarchical(settings) => settings.lib.validate()?,
};
// Extract options that are included in `Settings`, but only apply at the top
// level.
let file_strategy = FileDiscovery {
force_exclude: match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.force_exclude,
PyprojectDiscovery::Hierarchical(settings) => settings.force_exclude,
PyprojectDiscovery::Fixed(settings) => settings.lib.force_exclude,
PyprojectDiscovery::Hierarchical(settings) => settings.lib.force_exclude,
},
respect_gitignore: match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.respect_gitignore,
PyprojectDiscovery::Hierarchical(settings) => settings.respect_gitignore,
PyprojectDiscovery::Fixed(settings) => settings.lib.respect_gitignore,
PyprojectDiscovery::Hierarchical(settings) => settings.lib.respect_gitignore,
},
};
let (fix, fix_only, format, update_check) = match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => (
settings.fix,
settings.fix_only,
settings.format,
settings.update_check,
),
PyprojectDiscovery::Hierarchical(settings) => (
settings.fix,
settings.fix_only,
settings.format,
settings.update_check,
),
let CliSettings {
fix,
fix_only,
format,
update_check,
..
} = match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.cli.clone(),
PyprojectDiscovery::Hierarchical(settings) => settings.cli.clone(),
};
if let Some(code) = cli.explain {
@@ -200,7 +197,7 @@ pub fn main() -> Result<ExitCode> {
}
// Perform an initial run instantly.
printer.clear_screen()?;
Printer::clear_screen()?;
printer.write_to_user("Starting linter in watch mode...\n");
let messages = commands::run(
@@ -211,7 +208,7 @@ pub fn main() -> Result<ExitCode> {
cache.into(),
fix::FixMode::None,
)?;
printer.write_continuously(&messages);
printer.write_continuously(&messages)?;
// Configure the file watcher.
let (tx, rx) = channel();
@@ -230,7 +227,7 @@ pub fn main() -> Result<ExitCode> {
.unwrap_or_default()
});
if py_changed {
printer.clear_screen()?;
Printer::clear_screen()?;
printer.write_to_user("File change detected...\n");
let messages = commands::run(
@@ -241,7 +238,7 @@ pub fn main() -> Result<ExitCode> {
cache.into(),
fix::FixMode::None,
)?;
printer.write_continuously(&messages);
printer.write_continuously(&messages)?;
}
}
Err(err) => return Err(err.into()),

View File

@@ -1,4 +1,6 @@
use std::collections::BTreeMap;
use std::io;
use std::io::{BufWriter, Write};
use std::path::Path;
use annotate_snippets::display_list::{DisplayList, FormatOptions};
@@ -69,7 +71,7 @@ impl<'a> Printer<'a> {
}
}
fn post_text(&self, diagnostics: &Diagnostics) {
fn post_text<T: Write>(&self, stdout: &mut T, diagnostics: &Diagnostics) -> Result<()> {
if self.log_level >= &LogLevel::Default {
match self.violations {
Violations::Show => {
@@ -77,9 +79,12 @@ impl<'a> Printer<'a> {
let remaining = diagnostics.messages.len();
let total = fixed + remaining;
if fixed > 0 {
println!("Found {total} error(s) ({fixed} fixed, {remaining} remaining).");
writeln!(
stdout,
"Found {total} error(s) ({fixed} fixed, {remaining} remaining)."
)?;
} else if remaining > 0 {
println!("Found {remaining} error(s).");
writeln!(stdout, "Found {remaining} error(s).")?;
}
if !matches!(self.autofix, fix::FixMode::Apply) {
@@ -89,7 +94,10 @@ impl<'a> Printer<'a> {
.filter(|message| message.kind.fixable())
.count();
if num_fixable > 0 {
println!("{num_fixable} potentially fixable with the --fix option.");
writeln!(
stdout,
"{num_fixable} potentially fixable with the --fix option."
)?;
}
}
}
@@ -97,14 +105,15 @@ impl<'a> Printer<'a> {
let fixed = diagnostics.fixed;
if fixed > 0 {
if matches!(self.autofix, fix::FixMode::Apply) {
println!("Fixed {fixed} error(s).");
writeln!(stdout, "Fixed {fixed} error(s).")?;
} else if matches!(self.autofix, fix::FixMode::Diff) {
println!("Would fix {fixed} error(s).");
writeln!(stdout, "Would fix {fixed} error(s).")?;
}
}
}
}
}
Ok(())
}
pub fn write_once(&self, diagnostics: &Diagnostics) -> Result<()> {
@@ -113,18 +122,21 @@ impl<'a> Printer<'a> {
}
if matches!(self.violations, Violations::Hide) {
let mut stdout = BufWriter::new(io::stdout().lock());
if matches!(
self.format,
SerializationFormat::Text | SerializationFormat::Grouped
) {
self.post_text(diagnostics);
self.post_text(&mut stdout, diagnostics)?;
}
return Ok(());
}
let mut stdout = BufWriter::new(io::stdout().lock());
match self.format {
SerializationFormat::Json => {
println!(
writeln!(
stdout,
"{}",
serde_json::to_string_pretty(
&diagnostics
@@ -145,7 +157,7 @@ impl<'a> Printer<'a> {
})
.collect::<Vec<_>>()
)?
);
)?;
}
SerializationFormat::Junit => {
use quick_junit::{NonSuccessKind, Report, TestCase, TestCaseStatus, TestSuite};
@@ -180,14 +192,14 @@ impl<'a> Printer<'a> {
}
report.add_test_suite(test_suite);
}
println!("{}", report.to_string().unwrap());
writeln!(stdout, "{}", report.to_string().unwrap())?;
}
SerializationFormat::Text => {
for message in &diagnostics.messages {
print_message(message);
print_message(&mut stdout, message)?;
}
self.post_text(diagnostics);
self.post_text(&mut stdout, diagnostics)?;
}
SerializationFormat::Grouped => {
for (filename, messages) in group_messages_by_filename(&diagnostics.messages) {
@@ -209,21 +221,25 @@ impl<'a> Printer<'a> {
);
// Print the filename.
println!("{}:", relativize_path(Path::new(&filename)).underline());
writeln!(
stdout,
"{}:",
relativize_path(Path::new(&filename)).underline()
)?;
// Print each message.
for message in messages {
print_grouped_message(message, row_length, column_length);
print_grouped_message(&mut stdout, message, row_length, column_length)?;
}
println!();
writeln!(stdout)?;
}
self.post_text(diagnostics);
self.post_text(&mut stdout, diagnostics)?;
}
SerializationFormat::Github => {
// Generate error workflow command in GitHub Actions format.
// See: https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-error-message
diagnostics.messages.iter().for_each(|message| {
for message in &diagnostics.messages {
let label = format!(
"{}{}{}{}{}{} {} {}",
relativize_path(Path::new(&message.filename)),
@@ -235,7 +251,8 @@ impl<'a> Printer<'a> {
message.kind.code().as_ref(),
message.kind.body(),
);
println!(
writeln!(
stdout,
"::error title=Ruff \
({}),file={},line={},col={},endLine={},endColumn={}::{}",
message.kind.code(),
@@ -245,13 +262,13 @@ impl<'a> Printer<'a> {
message.end_location.row(),
message.end_location.column(),
label,
);
});
)?;
}
}
SerializationFormat::Gitlab => {
// Generate JSON with errors in GitLab CI format
// https://docs.gitlab.com/ee/ci/testing/code_quality.html#implementing-a-custom-tool
println!(
writeln!(stdout,
"{}",
serde_json::to_string_pretty(
&diagnostics
@@ -274,16 +291,18 @@ impl<'a> Printer<'a> {
)
.collect::<Vec<_>>()
)?
);
)?;
}
}
stdout.flush()?;
Ok(())
}
pub fn write_continuously(&self, diagnostics: &Diagnostics) {
pub fn write_continuously(&self, diagnostics: &Diagnostics) -> Result<()> {
if matches!(self.log_level, LogLevel::Silent) {
return;
return Ok(());
}
if self.log_level >= &LogLevel::Default {
@@ -293,18 +312,21 @@ impl<'a> Printer<'a> {
);
}
let mut stdout = BufWriter::new(io::stdout().lock());
if !diagnostics.messages.is_empty() {
if self.log_level >= &LogLevel::Default {
println!();
writeln!(stdout)?;
}
for message in &diagnostics.messages {
print_message(message);
print_message(&mut stdout, message)?;
}
}
stdout.flush()?;
Ok(())
}
#[allow(clippy::unused_self)]
pub fn clear_screen(&self) -> Result<()> {
pub fn clear_screen() -> Result<()> {
#[cfg(not(target_family = "wasm"))]
clearscreen::clear()?;
Ok(())
@@ -330,7 +352,7 @@ fn num_digits(n: usize) -> usize {
}
/// Print a single `Message` with full details.
fn print_message(message: &Message) {
fn print_message<T: Write>(stdout: &mut T, message: &Message) -> Result<()> {
let label = format!(
"{}{}{}{}{}{} {} {}",
relativize_path(Path::new(&message.filename)).bold(),
@@ -342,7 +364,7 @@ fn print_message(message: &Message) {
message.kind.code().as_ref().red().bold(),
message.kind.body(),
);
println!("{label}");
writeln!(stdout, "{label}")?;
if let Some(source) = &message.source {
let commit = message.kind.commit();
let footer = if commit.is_some() {
@@ -354,7 +376,6 @@ fn print_message(message: &Message) {
} else {
vec![]
};
let snippet = Snippet {
title: Some(Annotation {
label: None,
@@ -384,13 +405,19 @@ fn print_message(message: &Message) {
// Skip the first line, since we format the `label` ourselves.
let message = DisplayList::from(snippet).to_string();
let (_, message) = message.split_once('\n').unwrap();
println!("{message}\n");
writeln!(stdout, "{message}\n")?;
}
Ok(())
}
/// Print a grouped `Message`, assumed to be printed in a group with others from
/// the same file.
fn print_grouped_message(message: &Message, row_length: usize, column_length: usize) {
fn print_grouped_message<T: Write>(
stdout: &mut T,
message: &Message,
row_length: usize,
column_length: usize,
) -> Result<()> {
let label = format!(
" {}{}{}{}{} {} {}",
" ".repeat(row_length - num_digits(message.location.row())),
@@ -401,7 +428,7 @@ fn print_grouped_message(message: &Message, row_length: usize, column_length: us
message.kind.code().as_ref().red().bold(),
message.kind.body(),
);
println!("{label}");
writeln!(stdout, "{label}")?;
if let Some(source) = &message.source {
let commit = message.kind.commit();
let footer = if commit.is_some() {
@@ -413,7 +440,6 @@ fn print_grouped_message(message: &Message, row_length: usize, column_length: us
} else {
vec![]
};
let snippet = Snippet {
title: Some(Annotation {
label: None,
@@ -444,6 +470,7 @@ fn print_grouped_message(message: &Message, row_length: usize, column_length: us
let message = DisplayList::from(snippet).to_string();
let (_, message) = message.split_once('\n').unwrap();
let message = textwrap::indent(message, " ");
println!("{message}");
writeln!(stdout, "{message}")?;
}
Ok(())
}
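The printer changes above follow one pattern throughout: take the stdout lock once, wrap it in a `BufWriter`, and thread a generic `Write` handle through the printing helpers instead of calling `println!` (which re-locks stdout per line). A minimal sketch of that pattern, with `render_messages` as a hypothetical stand-in for `print_message`:

```rust
use std::io::{self, BufWriter, Write};

// Generic over Write, mirroring the diff's `print_message<T: Write>`:
// the same helper can target locked stdout in production or a Vec<u8>
// in tests.
fn render_messages<W: Write>(out: &mut W, messages: &[&str]) -> io::Result<()> {
    for message in messages {
        writeln!(out, "{message}")?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // As in the diff: lock stdout once and buffer all writes.
    let mut out = BufWriter::new(io::stdout().lock());
    render_messages(&mut out, &["E501 line too long", "F401 unused import"])?;
    out.flush() // flush explicitly so write errors surface here, not on drop
}
```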


@@ -1,6 +1,6 @@
[package]
name = "ruff_dev"
version = "0.0.222"
version = "0.0.223"
edition = "2021"
[dependencies]


@@ -1,6 +1,6 @@
[package]
name = "ruff_macros"
version = "0.0.222"
version = "0.0.223"
edition = "2021"
[lib]


@@ -0,0 +1,133 @@
use proc_macro2::Span;
use quote::quote;
use syn::parse::Parse;
use syn::{Ident, Path, Token};
pub fn define_rule_mapping(mapping: Mapping) -> proc_macro2::TokenStream {
let mut rulecode_variants = quote!();
let mut diagkind_variants = quote!();
let mut rulecode_kind_match_arms = quote!();
let mut rulecode_origin_match_arms = quote!();
let mut diagkind_code_match_arms = quote!();
let mut diagkind_body_match_arms = quote!();
let mut diagkind_fixable_match_arms = quote!();
let mut diagkind_commit_match_arms = quote!();
let mut from_impls_for_diagkind = quote!();
for (code, path, name) in mapping.entries {
rulecode_variants.extend(quote! {#code,});
diagkind_variants.extend(quote! {#name(#path),});
rulecode_kind_match_arms.extend(
quote! {RuleCode::#code => DiagnosticKind::#name(<#path as Violation>::placeholder()),},
);
let origin = get_origin(&code);
rulecode_origin_match_arms.extend(quote! {RuleCode::#code => RuleOrigin::#origin,});
diagkind_code_match_arms.extend(quote! {DiagnosticKind::#name(..) => &RuleCode::#code, });
diagkind_body_match_arms
.extend(quote! {DiagnosticKind::#name(x) => Violation::message(x), });
diagkind_fixable_match_arms
.extend(quote! {DiagnosticKind::#name(x) => x.autofix_title_formatter().is_some(),});
diagkind_commit_match_arms.extend(
quote! {DiagnosticKind::#name(x) => x.autofix_title_formatter().map(|f| f(x)), },
);
from_impls_for_diagkind.extend(quote! {
impl From<#path> for DiagnosticKind {
fn from(x: #path) -> Self {
DiagnosticKind::#name(x)
}
}
});
}
quote! {
#[derive(
AsRefStr,
RuleCodePrefix,
EnumIter,
EnumString,
Debug,
Display,
PartialEq,
Eq,
Clone,
Serialize,
Deserialize,
Hash,
PartialOrd,
Ord,
)]
pub enum RuleCode { #rulecode_variants }
#[derive(AsRefStr, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub enum DiagnosticKind { #diagkind_variants }
impl RuleCode {
/// A placeholder representation of the `DiagnosticKind` for the diagnostic.
pub fn kind(&self) -> DiagnosticKind {
match self { #rulecode_kind_match_arms }
}
pub fn origin(&self) -> RuleOrigin {
match self { #rulecode_origin_match_arms }
}
}
impl DiagnosticKind {
/// A four-letter shorthand code for the diagnostic.
pub fn code(&self) -> &'static RuleCode {
match self { #diagkind_code_match_arms }
}
/// The body text for the diagnostic.
pub fn body(&self) -> String {
match self { #diagkind_body_match_arms }
}
/// Whether the diagnostic is (potentially) fixable.
pub fn fixable(&self) -> bool {
match self { #diagkind_fixable_match_arms }
}
/// The message used to describe the fix action for a given `DiagnosticKind`.
pub fn commit(&self) -> Option<String> {
match self { #diagkind_commit_match_arms }
}
}
#from_impls_for_diagkind
}
}
fn get_origin(ident: &Ident) -> Ident {
let ident = ident.to_string();
let mut iter = crate::prefixes::PREFIX_TO_ORIGIN.iter();
let origin = loop {
let (prefix, origin) = iter
.next()
.unwrap_or_else(|| panic!("code doesn't start with any recognized prefix: {ident}"));
if ident.starts_with(prefix) {
break origin;
}
};
Ident::new(origin, Span::call_site())
}
pub struct Mapping {
entries: Vec<(Ident, Path, Ident)>,
}
impl Parse for Mapping {
fn parse(input: syn::parse::ParseStream) -> syn::Result<Self> {
let mut entries = Vec::new();
while !input.is_empty() {
let code: Ident = input.parse()?;
let _: Token![=>] = input.parse()?;
let path: Path = input.parse()?;
let name = path.segments.last().unwrap().ident.clone();
let _: Token![,] = input.parse()?;
entries.push((code, path, name));
}
Ok(Mapping { entries })
}
}
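For intuition, here is a hand-expanded miniature of the kind of code `define_rule_mapping!` emits: one `RuleCode` variant per mapping entry, plus match-arm lookups such as `origin`. The two example codes are illustrative; the real macro derives each origin from the `PREFIX_TO_ORIGIN` table (e.g. `"COM"` maps to `Flake8Commas`).

```rust
// Hypothetical two-entry expansion; the generated enum in ruff has one
// variant per registered rule code.
#[derive(Debug, PartialEq, Eq)]
enum RuleCode {
    E501,
    COM812,
}

impl RuleCode {
    // Mirrors the generated `origin()` match over prefix-derived idents.
    fn origin(&self) -> &'static str {
        match self {
            RuleCode::E501 => "Pycodestyle",
            RuleCode::COM812 => "Flake8Commas",
        }
    }
}

fn main() {
    assert_eq!(RuleCode::COM812.origin(), "Flake8Commas");
}
```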


@@ -13,11 +13,10 @@
)]
#![forbid(unsafe_code)]
use proc_macro2::Span;
use quote::quote;
use syn::{parse_macro_input, DeriveInput, Ident};
use syn::{parse_macro_input, DeriveInput};
mod config;
mod define_rule_mapping;
mod prefixes;
mod rule_code_prefix;
@@ -40,21 +39,7 @@ pub fn derive_rule_code_prefix(input: proc_macro::TokenStream) -> proc_macro::To
}
#[proc_macro]
pub fn origin_by_code(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let ident = parse_macro_input!(item as Ident).to_string();
let mut iter = prefixes::PREFIX_TO_ORIGIN.iter();
let origin = loop {
let (prefix, origin) = iter
.next()
.unwrap_or_else(|| panic!("code doesn't start with any recognized prefix: {ident}"));
if ident.starts_with(prefix) {
break origin;
}
};
let prefix = Ident::new(origin, Span::call_site());
quote! {
RuleOrigin::#prefix
}
.into()
pub fn define_rule_mapping(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let mapping = parse_macro_input!(item as define_rule_mapping::Mapping);
define_rule_mapping::define_rule_mapping(mapping).into()
}


@@ -9,6 +9,7 @@ pub const PREFIX_TO_ORIGIN: &[(&str, &str)] = &[
("B", "Flake8Bugbear"),
("C4", "Flake8Comprehensions"),
("C9", "McCabe"),
("COM", "Flake8Commas"),
("DTZ", "Flake8Datetimez"),
("D", "Pydocstyle"),
("ERA", "Eradicate"),


@@ -10,8 +10,9 @@ Example usage:
import argparse
import os
from pathlib import Path
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
ROOT_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
def dir_name(plugin: str) -> str:
@@ -25,15 +26,16 @@ def pascal_case(plugin: str) -> str:
def main(*, plugin: str, url: str) -> None:
# Create the test fixture folder.
os.makedirs(
os.path.join(ROOT_DIR, f"resources/test/fixtures/{dir_name(plugin)}"),
ROOT_DIR / "resources/test/fixtures" / dir_name(plugin),
exist_ok=True,
)
# Create the Rust module.
os.makedirs(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}"), exist_ok=True)
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/rules.rs"), "w+") as fp:
rust_module = ROOT_DIR / "src/rules" / dir_name(plugin)
os.makedirs(rust_module, exist_ok=True)
with open(rust_module / "rules.rs", "w+") as fp:
fp.write("use crate::checkers::ast::Checker;\n")
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/mod.rs"), "w+") as fp:
with open(rust_module / "mod.rs", "w+") as fp:
fp.write("pub(crate) mod rules;\n")
fp.write("\n")
fp.write(
@@ -65,15 +67,14 @@ mod tests {
% dir_name(plugin)
)
# Add the plugin to `lib.rs`.
with open(os.path.join(ROOT_DIR, "src/lib.rs"), "a") as fp:
fp.write(f"mod {dir_name(plugin)};")
# Add the plugin to `rules/mod.rs`.
with open(ROOT_DIR / "src/rules/mod.rs", "a") as fp:
fp.write(f"pub mod {dir_name(plugin)};")
# Add the relevant sections to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/registry.rs").read_text()
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
with open(ROOT_DIR / "src/registry.rs", "w") as fp:
for line in content.splitlines():
if line.strip() == "// Ruff":
indent = line.split("// Ruff")[0]
@@ -108,10 +109,9 @@ mod tests {
fp.write("\n")
# Add the relevant section to `src/violations.rs`.
with open(os.path.join(ROOT_DIR, "src/violations.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/violations.rs").read_text()
with open(os.path.join(ROOT_DIR, "src/violations.rs"), "w") as fp:
with open(ROOT_DIR / "src/violations.rs", "w") as fp:
for line in content.splitlines():
if line.strip() == "// Ruff":
indent = line.split("// Ruff")[0]


@@ -11,8 +11,9 @@ Example usage:
import argparse
import os
from pathlib import Path
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
ROOT_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
def dir_name(origin: str) -> str:
@@ -32,16 +33,16 @@ def snake_case(name: str) -> str:
def main(*, name: str, code: str, origin: str) -> None:
# Create a test fixture.
with open(
os.path.join(ROOT_DIR, f"resources/test/fixtures/{dir_name(origin)}/{code}.py"),
ROOT_DIR / "resources/test/fixtures" / dir_name(origin) / f"{code}.py",
"a",
):
pass
# Add the relevant `#testcase` macro.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/mod.rs")) as fp:
content = fp.read()
mod_rs = ROOT_DIR / "src/rules" / dir_name(origin) / "mod.rs"
content = mod_rs.read_text()
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/mod.rs"), "w") as fp:
with open(mod_rs, "w") as fp:
for line in content.splitlines():
if line.strip() == "fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {":
indent = line.split("fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {")[0]
@@ -52,7 +53,7 @@ def main(*, name: str, code: str, origin: str) -> None:
fp.write("\n")
# Add the relevant rule function.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/rules.rs"), "a") as fp:
with open(ROOT_DIR / "src/rules" / dir_name(origin) / "rules.rs", "a") as fp:
fp.write(
f"""
/// {code}
@@ -62,10 +63,9 @@ pub fn {snake_case(name)}(checker: &mut Checker) {{}}
fp.write("\n")
# Add the relevant struct to `src/violations.rs`.
with open(os.path.join(ROOT_DIR, "src/violations.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/violations.rs").read_text()
with open(os.path.join(ROOT_DIR, "src/violations.rs"), "w") as fp:
with open(ROOT_DIR / "src/violations.rs", "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")
@@ -90,12 +90,11 @@ impl Violation for %s {
fp.write("\n")
# Add the relevant code-to-violation pair to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/registry.rs").read_text()
seen_macro = False
has_written = False
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
with open(ROOT_DIR / "src/registry.rs", "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")


@@ -13,7 +13,7 @@ use rustpython_parser::token::StringKind;
use crate::ast::types::{Binding, BindingKind, Range};
use crate::checkers::ast::Checker;
use crate::source_code::{Generator, Locator, Stylist};
use crate::source_code::{Generator, Indexer, Locator, Stylist};
/// Create an `Expr` with default location from an `ExprKind`.
pub fn create_expr(node: ExprKind) -> Expr {
@@ -426,7 +426,7 @@ pub fn match_trailing_content(stmt: &Stmt, locator: &Locator) -> bool {
/// Return the number of trailing empty lines following a statement.
pub fn count_trailing_lines(stmt: &Stmt, locator: &Locator) -> usize {
let suffix =
locator.slice_source_code_at(&Location::new(stmt.end_location.unwrap().row() + 1, 0));
locator.slice_source_code_at(Location::new(stmt.end_location.unwrap().row() + 1, 0));
suffix
.lines()
.take_while(|line| line.trim().is_empty())
@@ -601,27 +601,6 @@ pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
}
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_continuation(stmt: &Stmt, locator: &Locator) -> bool {
// Does the previous line end in a continuation? This will have a specific
// false-positive, which is that if the previous line ends in a comment, it
// will be treated as a continuation. So we should only use this information to
// make conservative choices.
// TODO(charlie): Come up with a more robust strategy.
if stmt.location.row() > 1 {
let range = Range::new(
Location::new(stmt.location.row() - 1, 0),
Location::new(stmt.location.row(), 0),
);
let line = locator.slice_source_code_range(&range);
if line.trim_end().ends_with('\\') {
return true;
}
}
false
}
/// Return the `Range` of the first `Tok::Colon` token in a `Range`.
pub fn first_colon_range(range: Range, locator: &Locator) -> Option<Range> {
let contents = locator.slice_source_code_range(&range);
@@ -635,10 +614,49 @@ pub fn first_colon_range(range: Range, locator: &Locator) -> Option<Range> {
range
}
/// Return the `Range` of the first `Elif` or `Else` token in an `If` statement.
pub fn elif_else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
let StmtKind::If { body, orelse, .. } = &stmt.node else {
return None;
};
let start = body
.last()
.expect("Expected body to be non-empty")
.end_location
.unwrap();
let end = match &orelse[..] {
[Stmt {
node: StmtKind::If { test, .. },
..
}] => test.location,
[stmt, ..] => stmt.location,
_ => return None,
};
let contents = locator.slice_source_code_range(&Range::new(start, end));
let range = lexer::make_tokenizer_located(&contents, start)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Elif | Tok::Else))
.map(|(location, _, end_location)| Range {
location,
end_location,
});
range
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &Locator) -> bool {
match_leading_content(stmt, locator) || preceded_by_continuation(stmt, locator)
pub fn preceded_by_continuation(stmt: &Stmt, indexer: &Indexer) -> bool {
stmt.location.row() > 1
&& indexer
.continuation_lines()
.contains(&(stmt.location.row() - 1))
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &Locator, indexer: &Indexer) -> bool {
match_leading_content(stmt, locator) || preceded_by_continuation(stmt, indexer)
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
@@ -709,7 +727,6 @@ impl<'a> SimpleCallArgs<'a> {
}
/// Get the number of positional and keyword arguments used.
#[allow(clippy::len_without_is_empty)]
pub fn len(&self) -> usize {
self.args.len() + self.kwargs.len()
}
@@ -722,7 +739,7 @@ mod tests {
use rustpython_parser::parser;
use crate::ast::helpers::{
else_range, first_colon_range, identifier_range, match_trailing_content,
elif_else_range, else_range, first_colon_range, identifier_range, match_trailing_content,
};
use crate::ast::types::Range;
use crate::source_code::Locator;
@@ -869,4 +886,39 @@ else:
assert_eq!(range.end_location.row(), 1);
assert_eq!(range.end_location.column(), 7);
}
#[test]
fn test_elif_else_range() -> Result<()> {
let contents = "
if a:
...
elif b:
...
"
.trim_start();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = Locator::new(contents);
let range = elif_else_range(stmt, &locator).unwrap();
assert_eq!(range.location.row(), 3);
assert_eq!(range.location.column(), 0);
assert_eq!(range.end_location.row(), 3);
assert_eq!(range.end_location.column(), 4);
let contents = "
if a:
...
else:
...
"
.trim_start();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = Locator::new(contents);
let range = elif_else_range(stmt, &locator).unwrap();
assert_eq!(range.location.row(), 3);
assert_eq!(range.location.column(), 0);
assert_eq!(range.end_location.row(), 3);
assert_eq!(range.end_location.column(), 4);
Ok(())
}
}
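The helper refactor above swaps a per-call source-slicing heuristic for a precomputed `Indexer`. A simplified sketch of what the Indexer stores: the set of 1-based rows ending in a backslash continuation, so callers like `preceded_by_continuation` do a cheap lookup. (The real Indexer is built from the token stream; scanning raw lines, as here, keeps the comment false-positive the removed heuristic warned about.)

```rust
// Collect 1-based rows whose physical line ends with a backslash
// continuation. Rows are 1-based to match Location::row().
fn continuation_lines(source: &str) -> Vec<usize> {
    source
        .lines()
        .enumerate()
        .filter(|(_, line)| line.trim_end().ends_with('\\'))
        .map(|(i, _)| i + 1)
        .collect()
}

fn main() {
    let src = "x = 1 + \\\n    2\ny = 3";
    // Only the first row ends in a continuation.
    assert_eq!(continuation_lines(src), vec![1]);
}
```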


@@ -12,7 +12,7 @@ use crate::ast::whitespace::LinesWithTrailingNewline;
use crate::cst::helpers::compose_module_path;
use crate::cst::matchers::match_module;
use crate::fix::Fix;
use crate::source_code::Locator;
use crate::source_code::{Indexer, Locator};
/// Determine if a body contains only a single statement, taking into account
/// deleted.
@@ -79,7 +79,7 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
/// Return the location of a trailing semicolon following a `Stmt`, if it's part
/// of a multi-statement line.
fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
let contents = locator.slice_source_code_at(stmt.end_location.unwrap());
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
let trimmed = line.trim();
if trimmed.starts_with(';') {
@@ -102,7 +102,7 @@ fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
/// Find the next valid break for a `Stmt` after a semicolon.
fn next_stmt_break(semicolon: Location, locator: &Locator) -> Location {
let start_location = Location::new(semicolon.row(), semicolon.column() + 1);
let contents = locator.slice_source_code_at(&start_location);
let contents = locator.slice_source_code_at(start_location);
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
let trimmed = line.trim();
// Skip past any continuations.
@@ -134,7 +134,7 @@ fn next_stmt_break(semicolon: Location, locator: &Locator) -> Location {
/// Return `true` if a `Stmt` occurs at the end of a file.
fn is_end_of_file(stmt: &Stmt, locator: &Locator) -> bool {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
let contents = locator.slice_source_code_at(stmt.end_location.unwrap());
contents.is_empty()
}
@@ -156,6 +156,7 @@ pub fn delete_stmt(
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &Locator,
indexer: &Indexer,
) -> Result<Fix> {
if parent
.map(|parent| is_lone_child(stmt, parent, deleted))
@@ -175,7 +176,7 @@ pub fn delete_stmt(
Fix::deletion(stmt.location, next)
} else if helpers::match_leading_content(stmt, locator) {
Fix::deletion(stmt.location, stmt.end_location.unwrap())
} else if helpers::preceded_by_continuation(stmt, locator) {
} else if helpers::preceded_by_continuation(stmt, indexer) {
if is_end_of_file(stmt, locator) && stmt.location.column() == 0 {
// Special-case: a file can't end in a continuation.
Fix::replacement("\n".to_string(), stmt.location, stmt.end_location.unwrap())
@@ -198,6 +199,7 @@ pub fn remove_unused_imports<'a>(
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &Locator,
indexer: &Indexer,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(stmt));
let mut tree = match_module(&module_text)?;
@@ -235,7 +237,7 @@ pub fn remove_unused_imports<'a>(
if !found_star {
bail!("Expected \'*\' for unused import");
}
return delete_stmt(stmt, parent, deleted, locator);
return delete_stmt(stmt, parent, deleted, locator, indexer);
} else {
bail!("Expected: ImportNames::Aliases | ImportNames::Star");
}
@@ -296,7 +298,7 @@ pub fn remove_unused_imports<'a>(
}
if aliases.is_empty() {
delete_stmt(stmt, parent, deleted, locator)
delete_stmt(stmt, parent, deleted, locator, indexer)
} else {
let mut state = CodegenState::default();
tree.codegen(&mut state);


@@ -66,7 +66,7 @@ fn apply_fixes<'a>(
}
// Add the remaining content.
let slice = locator.slice_source_code_at(&last_pos);
let slice = locator.slice_source_code_at(last_pos);
output.append(&slice);
(Cow::from(output.finish()), num_fixed)


@@ -39,7 +39,7 @@ use crate::rules::{
};
use crate::settings::types::PythonVersion;
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::violations::DeferralKeyword;
use crate::visibility::{module_visibility, transition_scope, Modifier, Visibility, VisibleScope};
use crate::{autofix, docstrings, noqa, violations, visibility};
@@ -57,7 +57,8 @@ pub struct Checker<'a> {
pub(crate) settings: &'a Settings,
pub(crate) noqa_line_for: &'a IntMap<usize, usize>,
pub(crate) locator: &'a Locator<'a>,
pub(crate) style: &'a Stylist<'a>,
pub(crate) stylist: &'a Stylist<'a>,
pub(crate) indexer: &'a Indexer,
// Computed diagnostics.
pub(crate) diagnostics: Vec<Diagnostic>,
// Function and class definition tracking (e.g., for docstring enforcement).
@@ -98,6 +99,7 @@ pub struct Checker<'a> {
}
impl<'a> Checker<'a> {
#[allow(clippy::too_many_arguments)]
pub fn new(
settings: &'a Settings,
noqa_line_for: &'a IntMap<usize, usize>,
@@ -106,6 +108,7 @@ impl<'a> Checker<'a> {
path: &'a Path,
locator: &'a Locator,
style: &'a Stylist,
indexer: &'a Indexer,
) -> Checker<'a> {
Checker {
settings,
@@ -114,7 +117,8 @@ impl<'a> Checker<'a> {
noqa,
path,
locator,
style,
stylist: style,
indexer,
diagnostics: vec![],
definitions: vec![],
deletions: FxHashSet::default(),
@@ -1275,7 +1279,7 @@ where
}
}
}
StmtKind::With { items, body, .. } | StmtKind::AsyncWith { items, body, .. } => {
StmtKind::With { items, body, .. } => {
if self.settings.enabled.contains(&RuleCode::B017) {
flake8_bugbear::rules::assert_raises_exception(self, stmt, items);
}
@@ -1287,7 +1291,7 @@ where
self,
stmt,
body,
self.current_stmt_parent().map(|parent| parent.0),
self.current_stmt_parent().map(Into::into),
);
}
}
@@ -4001,6 +4005,7 @@ impl<'a> Checker<'a> {
parent,
&deleted,
self.locator,
self.indexer,
) {
Ok(fix) => {
if fix.content.is_empty() || fix.content == "pass" {
@@ -4296,6 +4301,7 @@ pub fn check_ast(
python_ast: &Suite,
locator: &Locator,
stylist: &Stylist,
indexer: &Indexer,
noqa_line_for: &IntMap<usize, usize>,
settings: &Settings,
autofix: flags::Autofix,
@@ -4310,6 +4316,7 @@ pub fn check_ast(
path,
locator,
stylist,
indexer,
);
checker.push_scope(Scope::new(ScopeKind::Module));
checker.bind_builtins();


@@ -10,12 +10,13 @@ use crate::registry::{Diagnostic, RuleCode};
use crate::rules::isort;
use crate::rules::isort::track::{Block, ImportTracker};
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
#[allow(clippy::too_many_arguments)]
pub fn check_imports(
python_ast: &Suite,
locator: &Locator,
indexer: &Indexer,
directives: &IsortDirectives,
settings: &Settings,
stylist: &Stylist,
@@ -39,7 +40,7 @@ pub fn check_imports(
for block in &blocks {
if !block.imports.is_empty() {
if let Some(diagnostic) = isort::rules::organize_imports(
block, locator, settings, stylist, autofix, package,
block, locator, indexer, settings, stylist, autofix, package,
) {
diagnostics.push(diagnostic);
}


@@ -5,7 +5,9 @@ use rustpython_parser::lexer::{LexResult, Tok};
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{Diagnostic, RuleCode};
use crate::rules::ruff::rules::Context;
use crate::rules::{eradicate, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff};
use crate::rules::{
eradicate, flake8_commas, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff,
};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
@@ -28,6 +30,9 @@ pub fn check_tokens(
let enforce_invalid_escape_sequence = settings.enabled.contains(&RuleCode::W605);
let enforce_implicit_string_concatenation = settings.enabled.contains(&RuleCode::ISC001)
|| settings.enabled.contains(&RuleCode::ISC002);
let enforce_trailing_comma = settings.enabled.contains(&RuleCode::COM812)
|| settings.enabled.contains(&RuleCode::COM818)
|| settings.enabled.contains(&RuleCode::COM819);
let mut state_machine = StateMachine::default();
for &(start, ref tok, end) in tokens.iter().flatten() {
@@ -111,5 +116,14 @@ pub fn check_tokens(
);
}
// COM812, COM818, COM819
if enforce_trailing_comma {
diagnostics.extend(
flake8_commas::rules::trailing_commas(tokens, locator)
.into_iter()
.filter(|diagnostic| settings.enabled.contains(diagnostic.kind.code())),
);
}
diagnostics
}
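The COM wiring above uses a two-step gate: run the trailing-comma pass only if any COM code is enabled, then filter the produced diagnostics to the codes that are actually on. A toy model of that gate, with codes as plain `&str` for brevity:

```rust
use std::collections::HashSet;

// Hypothetical miniature of the enforce_trailing_comma gate: skip the
// pass entirely unless some COM code is enabled, then keep only the
// diagnostics whose code is enabled.
fn gate<'a>(produced: Vec<&'a str>, enabled: &HashSet<&str>) -> Vec<&'a str> {
    let run = ["COM812", "COM818", "COM819"]
        .iter()
        .any(|code| enabled.contains(code));
    if !run {
        return Vec::new();
    }
    produced
        .into_iter()
        .filter(|code| enabled.contains(code))
        .collect()
}

fn main() {
    let enabled: HashSet<&str> = ["COM812"].into_iter().collect();
    assert_eq!(gate(vec!["COM812", "COM818"], &enabled), vec!["COM812"]);
}
```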


@@ -37,14 +37,12 @@ pub struct IsortDirectives {
}
pub struct Directives {
pub commented_lines: Vec<usize>,
pub noqa_line_for: IntMap<usize, usize>,
pub isort: IsortDirectives,
}
pub fn extract_directives(lxr: &[LexResult], flags: Flags) -> Directives {
Directives {
commented_lines: extract_commented_lines(lxr),
noqa_line_for: if flags.contains(Flags::NOQA) {
extract_noqa_line_for(lxr)
} else {
@@ -58,16 +56,6 @@ pub fn extract_directives(lxr: &[LexResult], flags: Flags) -> Directives {
}
}
pub fn extract_commented_lines(lxr: &[LexResult]) -> Vec<usize> {
let mut commented_lines = Vec::new();
for (start, tok, ..) in lxr.iter().flatten() {
if matches!(tok, Tok::Comment(_)) {
commented_lines.push(start.row());
}
}
commented_lines
}
/// Extract a mapping from logical line to noqa line.
pub fn extract_noqa_line_for(lxr: &[LexResult]) -> IntMap<usize, usize> {
let mut noqa_line_for: IntMap<usize, usize> = IntMap::default();


@@ -21,7 +21,6 @@ use crate::settings::options::Options;
use crate::settings::pyproject::Pyproject;
use crate::warn_user;
#[allow(clippy::unnecessary_wraps)]
pub fn convert(
config: &HashMap<String, HashMap<String, Option<String>>>,
black: Option<&Black>,
@@ -272,7 +271,7 @@ pub fn convert(
match value.trim() {
"csv" => {
flake8_pytest_style.parametrize_names_type =
Some(ParametrizeNameType::CSV);
Some(ParametrizeNameType::Csv);
}
"tuple" => {
flake8_pytest_style.parametrize_names_type =


@@ -10,17 +10,17 @@ use crate::resolver::Relativity;
use crate::rustpython_helpers::tokenize;
use crate::settings::configuration::Configuration;
use crate::settings::{flags, pyproject, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::{directives, packaging, resolver};
/// Load the relevant `Settings` for a given `Path`.
fn resolve(path: &Path) -> Result<Settings> {
if let Some(pyproject) = pyproject::find_settings_toml(path)? {
// First priority: `pyproject.toml` in the current `Path`.
resolver::resolve_settings(&pyproject, &Relativity::Parent)
Ok(resolver::resolve_settings(&pyproject, &Relativity::Parent)?.lib)
} else if let Some(pyproject) = pyproject::find_user_settings_toml() {
// Second priority: user-specific `pyproject.toml`.
resolver::resolve_settings(&pyproject, &Relativity::Cwd)
Ok(resolver::resolve_settings(&pyproject, &Relativity::Cwd)?.lib)
} else {
// Fallback: default settings.
Settings::from_configuration(Configuration::default(), &path_dedot::CWD)
@@ -44,6 +44,9 @@ pub fn check(path: &Path, contents: &str, autofix: bool) -> Result<Vec<Diagnosti
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(&settings));
@@ -56,6 +59,7 @@ pub fn check(path: &Path, contents: &str, autofix: bool) -> Result<Vec<Diagnosti
tokens,
&locator,
&stylist,
&indexer,
&directives,
&settings,
autofix.into(),


@@ -18,7 +18,7 @@ use crate::settings::configuration::Configuration;
use crate::settings::options::Options;
use crate::settings::types::PythonVersion;
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
const VERSION: &str = env!("CARGO_PKG_VERSION");
@@ -157,6 +157,9 @@ pub fn check(contents: &str, options: JsValue) -> Result<JsValue, JsValue> {
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives = directives::extract_directives(&tokens, directives::Flags::empty());
@@ -168,6 +171,7 @@ pub fn check(contents: &str, options: JsValue) -> Result<JsValue, JsValue> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
&settings,
flags::Autofix::Enabled,


@@ -17,7 +17,7 @@ use crate::message::{Message, Source};
use crate::noqa::add_noqa;
use crate::registry::{Diagnostic, LintSource, RuleCode};
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::{directives, fs, rustpython_helpers, violations};
const CARGO_PKG_NAME: &str = env!("CARGO_PKG_NAME");
@@ -33,6 +33,7 @@ pub fn check_path(
tokens: Vec<LexResult>,
locator: &Locator,
stylist: &Stylist,
indexer: &Indexer,
directives: &Directives,
settings: &Settings,
autofix: flags::Autofix,
@@ -65,7 +66,7 @@ pub fn check_path(
let use_ast = settings
.enabled
.iter()
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::AST));
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::Ast));
let use_imports = !directives.isort.skip_file
&& settings
.enabled
@@ -79,6 +80,7 @@ pub fn check_path(
&python_ast,
locator,
stylist,
indexer,
&directives.noqa_line_for,
settings,
autofix,
@@ -90,6 +92,7 @@ pub fn check_path(
diagnostics.extend(check_imports(
&python_ast,
locator,
indexer,
&directives.isort,
settings,
stylist,
@@ -127,7 +130,7 @@ pub fn check_path(
{
diagnostics.extend(check_lines(
contents,
&directives.commented_lines,
indexer.commented_lines(),
&doc_lines,
settings,
autofix,
@@ -135,16 +138,16 @@ pub fn check_path(
}
// Enforce `noqa` directives.
if matches!(noqa, flags::Noqa::Enabled)
if (matches!(noqa, flags::Noqa::Enabled) && !diagnostics.is_empty())
|| settings
.enabled
.iter()
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::NoQA))
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::NoQa))
{
check_noqa(
&mut diagnostics,
contents,
&directives.commented_lines,
indexer.commented_lines(),
&directives.noqa_line_for,
settings,
autofix,
@@ -184,6 +187,9 @@ pub fn add_noqa_to_path(path: &Path, settings: &Settings) -> Result<usize> {
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(&contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
@@ -196,6 +202,7 @@ pub fn add_noqa_to_path(path: &Path, settings: &Settings) -> Result<usize> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Disabled,
@@ -230,6 +237,9 @@ pub fn lint_only(
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
@@ -242,6 +252,7 @@ pub fn lint_only(
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
autofix,
@@ -290,6 +301,9 @@ pub fn lint_fix(
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(&contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
@@ -302,6 +316,7 @@ pub fn lint_fix(
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Enabled,
@@ -366,6 +381,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let indexer: Indexer = tokens.as_slice().into();
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
let mut diagnostics = check_path(
@@ -375,6 +391,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Enabled,
@@ -395,6 +412,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let indexer: Indexer = tokens.as_slice().into();
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
let diagnostics = check_path(
@@ -404,6 +422,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Enabled,


@@ -15,106 +15,7 @@ use crate::fix::Fix;
use crate::violation::Violation;
use crate::violations;
macro_rules! define_rule_mapping {
($($code:ident => $mod:ident::$name:ident,)+) => {
#[derive(
AsRefStr,
RuleCodePrefix,
EnumIter,
EnumString,
Debug,
Display,
PartialEq,
Eq,
Clone,
Serialize,
Deserialize,
Hash,
PartialOrd,
Ord,
)]
pub enum RuleCode {
$(
$code,
)+
}
#[derive(AsRefStr, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub enum DiagnosticKind {
$(
$name($mod::$name),
)+
}
impl RuleCode {
/// A placeholder representation of the `DiagnosticKind` for the diagnostic.
pub fn kind(&self) -> DiagnosticKind {
match self {
$(
RuleCode::$code => DiagnosticKind::$name(<$mod::$name as Violation>::placeholder()),
)+
}
}
pub fn origin(&self) -> RuleOrigin {
match self {
$(
RuleCode::$code => ruff_macros::origin_by_code!($code),
)+
}
}
}
impl DiagnosticKind {
/// A four-letter shorthand code for the diagnostic.
pub fn code(&self) -> &'static RuleCode {
match self {
$(
DiagnosticKind::$name(..) => &RuleCode::$code,
)+
}
}
/// The body text for the diagnostic.
pub fn body(&self) -> String {
match self {
$(
DiagnosticKind::$name(x) => Violation::message(x),
)+
}
}
/// Whether the diagnostic is (potentially) fixable.
pub fn fixable(&self) -> bool {
match self {
$(
DiagnosticKind::$name(x) => x.autofix_title_formatter().is_some(),
)+
}
}
/// The message used to describe the fix action for a given `DiagnosticKind`.
pub fn commit(&self) -> Option<String> {
match self {
$(
DiagnosticKind::$name(x) => x.autofix_title_formatter().map(|f| f(x)),
)+
}
}
}
$(
impl From<$mod::$name> for DiagnosticKind {
fn from(x: $mod::$name) -> Self {
DiagnosticKind::$name(x)
}
}
)+
};
}
define_rule_mapping!(
ruff_macros::define_rule_mapping!(
// pycodestyle errors
E401 => violations::MultipleImportsOnOneLine,
E402 => violations::ModuleImportNotAtTopOfFile,
@@ -510,6 +411,10 @@ define_rule_mapping!(
PIE790 => violations::NoUnnecessaryPass,
PIE794 => violations::DupeClassFieldDefinitions,
PIE807 => violations::PreferListBuiltin,
// flake8-commas
COM812 => violations::TrailingCommaMissing,
COM818 => violations::TrailingCommaOnBareTupleProhibited,
COM819 => violations::TrailingCommaProhibited,
// Ruff
RUF001 => violations::AmbiguousUnicodeCharacterString,
RUF002 => violations::AmbiguousUnicodeCharacterDocstring,
@@ -552,6 +457,7 @@ pub enum RuleOrigin {
PygrepHooks,
Pylint,
Flake8Pie,
Flake8Commas,
Ruff,
}
@@ -621,6 +527,7 @@ impl RuleOrigin {
RuleOrigin::Pylint => "Pylint",
RuleOrigin::Pyupgrade => "pyupgrade",
RuleOrigin::Flake8Pie => "flake8-pie",
RuleOrigin::Flake8Commas => "flake8-commas",
RuleOrigin::Ruff => "Ruff-specific rules",
}
}
@@ -667,6 +574,7 @@ impl RuleOrigin {
]),
RuleOrigin::Pyupgrade => Prefixes::Single(RuleCodePrefix::UP),
RuleOrigin::Flake8Pie => Prefixes::Single(RuleCodePrefix::PIE),
RuleOrigin::Flake8Commas => Prefixes::Single(RuleCodePrefix::COM),
RuleOrigin::Ruff => Prefixes::Single(RuleCodePrefix::RUF),
}
}
@@ -788,19 +696,22 @@ impl RuleOrigin {
"https://pypi.org/project/flake8-pie/0.16.0/",
&Platform::PyPI,
)),
RuleOrigin::Flake8Commas => Some((
"https://pypi.org/project/flake8-commas/2.1.0/",
&Platform::PyPI,
)),
RuleOrigin::Ruff => None,
}
}
}
#[allow(clippy::upper_case_acronyms)]
pub enum LintSource {
AST,
FileSystem,
Ast,
Io,
Lines,
Tokens,
Imports,
NoQA,
NoQa,
}
impl RuleCode {
@@ -808,7 +719,7 @@ impl RuleCode {
/// physical lines).
pub fn lint_source(&self) -> &'static LintSource {
match self {
RuleCode::RUF100 => &LintSource::NoQA,
RuleCode::RUF100 => &LintSource::NoQa,
RuleCode::E501
| RuleCode::W292
| RuleCode::W505
@@ -823,12 +734,15 @@ impl RuleCode {
| RuleCode::Q002
| RuleCode::Q003
| RuleCode::W605
| RuleCode::COM812
| RuleCode::COM818
| RuleCode::COM819
| RuleCode::RUF001
| RuleCode::RUF002
| RuleCode::RUF003 => &LintSource::Tokens,
RuleCode::E902 => &LintSource::FileSystem,
RuleCode::E902 => &LintSource::Io,
RuleCode::I001 | RuleCode::I002 => &LintSource::Imports,
_ => &LintSource::AST,
_ => &LintSource::Ast,
}
}
}


@@ -14,7 +14,7 @@ use rustc_hash::FxHashSet;
use crate::fs;
use crate::settings::configuration::Configuration;
use crate::settings::pyproject::settings_toml;
use crate::settings::{pyproject, Settings};
use crate::settings::{pyproject, AllSettings, Settings};
/// The strategy used to discover Python files in the filesystem.
#[derive(Debug)]
@@ -29,10 +29,10 @@ pub struct FileDiscovery {
pub enum PyprojectDiscovery {
/// Use a fixed `pyproject.toml` file for all Python files (i.e., one
/// provided on the command-line).
Fixed(Settings),
Fixed(AllSettings),
/// Use the closest `pyproject.toml` file in the filesystem hierarchy, or
/// the default settings.
Hierarchical(Settings),
Hierarchical(AllSettings),
}
/// The strategy for resolving file paths in a `pyproject.toml`.
@@ -58,17 +58,21 @@ impl Relativity {
#[derive(Default)]
pub struct Resolver {
settings: BTreeMap<PathBuf, Settings>,
settings: BTreeMap<PathBuf, AllSettings>,
}
impl Resolver {
/// Add a resolved `Settings` under a given `PathBuf` scope.
pub fn add(&mut self, path: PathBuf, settings: Settings) {
pub fn add(&mut self, path: PathBuf, settings: AllSettings) {
self.settings.insert(path, settings);
}
/// Return the appropriate `Settings` for a given `Path`.
pub fn resolve<'a>(&'a self, path: &Path, strategy: &'a PyprojectDiscovery) -> &'a Settings {
/// Return the appropriate `AllSettings` for a given `Path`.
pub fn resolve_all<'a>(
&'a self,
path: &Path,
strategy: &'a PyprojectDiscovery,
) -> &'a AllSettings {
match strategy {
PyprojectDiscovery::Fixed(settings) => settings,
PyprojectDiscovery::Hierarchical(default) => self
@@ -86,8 +90,12 @@ impl Resolver {
}
}
pub fn resolve<'a>(&'a self, path: &Path, strategy: &'a PyprojectDiscovery) -> &'a Settings {
&self.resolve_all(path, strategy).lib
}
/// Return an iterator over the resolved `Settings` in this `Resolver`.
pub fn iter(&self) -> impl Iterator<Item = &Settings> {
pub fn iter(&self) -> impl Iterator<Item = &AllSettings> {
self.settings.values()
}
@@ -100,11 +108,11 @@ impl Resolver {
// `Settings` for each path, but that's more expensive.
match &strategy {
PyprojectDiscovery::Fixed(settings) => {
settings.validate()?;
settings.lib.validate()?;
}
PyprojectDiscovery::Hierarchical(default) => {
for settings in std::iter::once(default).chain(self.iter()) {
settings.validate()?;
settings.lib.validate()?;
}
}
}
@@ -176,15 +184,15 @@ pub fn resolve_scoped_settings(
pyproject: &Path,
relativity: &Relativity,
processor: impl ConfigProcessor,
) -> Result<(PathBuf, Settings)> {
) -> Result<(PathBuf, AllSettings)> {
let project_root = relativity.resolve(pyproject);
let configuration = resolve_configuration(pyproject, relativity, processor)?;
let settings = Settings::from_configuration(configuration, &project_root)?;
let settings = AllSettings::from_configuration(configuration, &project_root)?;
Ok((project_root, settings))
}
/// Extract the `Settings` from a given `pyproject.toml`.
pub fn resolve_settings(pyproject: &Path, relativity: &Relativity) -> Result<Settings> {
pub fn resolve_settings(pyproject: &Path, relativity: &Relativity) -> Result<AllSettings> {
let (_project_root, settings) = resolve_scoped_settings(pyproject, relativity, &NoOpProcessor)?;
Ok(settings)
}
@@ -195,7 +203,7 @@ pub fn resolve_settings_with_processor(
pyproject: &Path,
relativity: &Relativity,
processor: impl ConfigProcessor,
) -> Result<Settings> {
) -> Result<AllSettings> {
let (_project_root, settings) = resolve_scoped_settings(pyproject, relativity, processor)?;
Ok(settings)
}


@@ -48,7 +48,7 @@ pub fn assert_false(checker: &mut Checker, stmt: &Stmt, test: &Expr, msg: Option
let mut diagnostic = Diagnostic::new(violations::DoNotAssertFalse, Range::from_located(test));
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_stmt(&assertion_error(msg));
diagnostic.amend(Fix::replacement(
generator.generate(),


@@ -55,7 +55,7 @@ fn duplicate_handler_exceptions<'a>(
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
if unique_elts.len() == 1 {
generator.unparse_expr(unique_elts[0], 0);
} else {


@@ -48,7 +48,7 @@ pub fn getattr_with_constant(checker: &mut Checker, expr: &Expr, func: &Expr, ar
let mut diagnostic =
Diagnostic::new(violations::GetAttrWithConstant, Range::from_located(expr));
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(&attribute(obj, value), 0);
diagnostic.amend(Fix::replacement(
generator.generate(),


@@ -24,7 +24,7 @@ pub fn redundant_tuple_in_exception_handler(checker: &mut Checker, handlers: &[E
Range::from_located(type_),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(elt, 0);
diagnostic.amend(Fix::replacement(
generator.generate(),


@@ -64,7 +64,7 @@ pub fn setattr_with_constant(checker: &mut Checker, expr: &Expr, func: &Expr, ar
Diagnostic::new(violations::SetAttrWithConstant, Range::from_located(expr));
if checker.patch(diagnostic.kind.code()) {
diagnostic.amend(Fix::replacement(
assignment(obj, name, value, checker.style),
assignment(obj, name, value, checker.stylist),
expr.location,
expr.end_location.unwrap(),
));


@@ -0,0 +1,30 @@
pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::settings;
#[test_case(Path::new("COM81.py"); "COM81")]
fn rules(path: &Path) -> Result<()> {
let snapshot = path.to_string_lossy().into_owned();
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_commas")
.join(path)
.as_path(),
&settings::Settings::for_rules(vec![
RuleCode::COM812,
RuleCode::COM818,
RuleCode::COM819,
]),
)?;
insta::assert_yaml_snapshot!(snapshot, diagnostics);
Ok(())
}
}
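For reference, here are hypothetical Python snippets (not drawn from the `COM81.py` fixture) illustrating the kind of code each of the three rules exercised by this test targets:

```python
# COM812 (trailing comma missing): flake8-commas wants a trailing comma
# after the last element of a multi-line literal, so this list as written
# would be flagged:
missing = [
    1,
    2  # COM812 would ask for a trailing comma here
]

# COM818 (trailing comma on bare tuple): `bare = 1,` silently builds a
# tuple and would be flagged; the parenthesized form is the accepted spelling.
bare = (1,)

# COM819 (trailing comma prohibited): on a single line the trailing comma
# adds nothing, so `prohibited = [1, 2,]` would be flagged.
prohibited = [1, 2]
```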


@@ -0,0 +1,284 @@
use itertools::Itertools;
use rustpython_parser::lexer::{LexResult, Spanned};
use rustpython_parser::token::Tok;
use crate::ast::types::Range;
use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::source_code::Locator;
use crate::violations;
/// Simplified token type.
#[derive(Copy, Clone, PartialEq, Eq)]
enum TokenType {
Irrelevant,
NonLogicalNewline,
Newline,
Comma,
OpeningBracket,
OpeningSquareBracket,
OpeningCurlyBracket,
ClosingBracket,
For,
Named,
Def,
Lambda,
Colon,
}
/// Simplified token specialized for the task.
#[derive(Copy, Clone)]
struct Token<'tok> {
type_: TokenType,
// Underlying token.
spanned: Option<&'tok Spanned>,
}
impl<'tok> Token<'tok> {
fn irrelevant() -> Token<'static> {
Token {
type_: TokenType::Irrelevant,
spanned: None,
}
}
fn from_spanned(spanned: &'tok Spanned) -> Token<'tok> {
let type_ = match &spanned.1 {
Tok::NonLogicalNewline => TokenType::NonLogicalNewline,
Tok::Newline => TokenType::Newline,
Tok::For => TokenType::For,
Tok::Def => TokenType::Def,
Tok::Lambda => TokenType::Lambda,
// `import` is treated like a name (e.g. a function being called).
Tok::Import => TokenType::Named,
Tok::Name { .. } => TokenType::Named,
Tok::Comma => TokenType::Comma,
Tok::Lpar => TokenType::OpeningBracket,
Tok::Lsqb => TokenType::OpeningSquareBracket,
Tok::Lbrace => TokenType::OpeningCurlyBracket,
Tok::Rpar | Tok::Rsqb | Tok::Rbrace => TokenType::ClosingBracket,
Tok::Colon => TokenType::Colon,
_ => TokenType::Irrelevant,
};
Self {
spanned: Some(spanned),
type_,
}
}
}
/// Comma context type - types of comma-delimited Python constructs.
#[derive(Copy, Clone, PartialEq, Eq)]
enum ContextType {
No,
/// Function definition parameter list, e.g. `def foo(a,b,c)`.
FunctionParameters,
/// Call argument-like item list, e.g. `f(1,2,3)`, `foo()(1,2,3)`.
CallArguments,
/// Tuple-like item list, e.g. `(1,2,3)`.
Tuple,
/// Subscript item list, e.g. `x[1,2,3]`, `foo()[1,2,3]`.
Subscript,
/// List-like item list, e.g. `[1,2,3]`.
List,
/// Dict-/set-like item list, e.g. `{1,2,3}`.
Dict,
/// Lambda parameter list, e.g. `lambda a, b`.
LambdaParameters,
}
/// Comma context - describes a comma-delimited "situation".
#[derive(Copy, Clone)]
struct Context {
type_: ContextType,
num_commas: u32,
}
impl Context {
fn new(type_: ContextType) -> Self {
Context {
type_,
num_commas: 0,
}
}
fn inc(&mut self) {
self.num_commas += 1;
}
}
/// COM812, COM818, COM819
#[allow(clippy::if_same_then_else, clippy::needless_bool)]
pub fn trailing_commas(tokens: &[LexResult], _locator: &Locator) -> Vec<Diagnostic> {
let mut diagnostics = vec![];
let tokens = tokens
.iter()
.flatten()
// Completely ignore comments -- they just interfere with the logic.
.filter(|&r| !matches!(r, (_, Tok::Comment(_), _)))
.map(Token::from_spanned);
let tokens = [Token::irrelevant(), Token::irrelevant()]
.into_iter()
.chain(tokens);
// Collapse consecutive newlines to the first one -- trailing commas are
// added before the first newline.
let tokens = tokens.coalesce(|previous, current| {
if previous.type_ == TokenType::NonLogicalNewline
&& current.type_ == TokenType::NonLogicalNewline
{
Ok(previous)
} else {
Err((previous, current))
}
});
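The `coalesce` step above keeps only the first token of any run of consecutive non-logical newlines. A rough Python equivalent (a sketch over plain token-type strings, not the actual spanned tokens):

```python
from itertools import groupby

def collapse_newlines(token_types):
    """Collapse runs of consecutive non-logical newlines to their first one."""
    out = []
    for type_, run in groupby(token_types):
        if type_ == "NonLogicalNewline":
            out.append(type_)   # keep only the first newline of the run
        else:
            out.extend(run)     # all other tokens pass through unchanged
    return out
```

Trailing commas are inserted before the first newline of such a run, which is why the later tokens in the run are irrelevant.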
// The current nesting of the comma contexts.
let mut stack = vec![Context::new(ContextType::No)];
for (prev_prev, prev, token) in tokens.tuple_windows() {
// Update the comma context stack.
match token.type_ {
TokenType::OpeningBracket => match (prev.type_, prev_prev.type_) {
(TokenType::Named, TokenType::Def) => {
stack.push(Context::new(ContextType::FunctionParameters));
}
(TokenType::Named | TokenType::ClosingBracket, _) => {
stack.push(Context::new(ContextType::CallArguments));
}
_ => {
stack.push(Context::new(ContextType::Tuple));
}
},
TokenType::OpeningSquareBracket => match prev.type_ {
TokenType::ClosingBracket | TokenType::Named => {
stack.push(Context::new(ContextType::Subscript));
}
_ => {
stack.push(Context::new(ContextType::List));
}
},
TokenType::OpeningCurlyBracket => {
stack.push(Context::new(ContextType::Dict));
}
TokenType::Lambda => {
stack.push(Context::new(ContextType::LambdaParameters));
}
TokenType::For => {
let len = stack.len();
stack[len - 1] = Context::new(ContextType::No);
}
TokenType::Comma => {
let len = stack.len();
stack[len - 1].inc();
}
_ => {}
}
let context = &stack[stack.len() - 1];
// Is it allowed to have a trailing comma before this token?
let comma_allowed = token.type_ == TokenType::ClosingBracket
&& match context.type_ {
ContextType::No => false,
ContextType::FunctionParameters => true,
ContextType::CallArguments => true,
// `(1)` is not equivalent to `(1,)`.
ContextType::Tuple => context.num_commas != 0,
// `x[1]` is not equivalent to `x[1,]`.
ContextType::Subscript => context.num_commas != 0,
ContextType::List => true,
ContextType::Dict => true,
// Lambdas are required to be a single line, so a trailing comma never makes sense.
ContextType::LambdaParameters => false,
};
// Is prev a prohibited trailing comma?
let comma_prohibited = prev.type_ == TokenType::Comma && {
// Is `(1,)` or `x[1,]`?
let is_singleton_tuplish =
matches!(context.type_, ContextType::Subscript | ContextType::Tuple)
&& context.num_commas <= 1;
// There was no non-logical newline, so prohibit (except in `(1,)` or `x[1,]`).
if comma_allowed && !is_singleton_tuplish {
true
// Lambdas are not handled by `comma_allowed`, so handle them specially.
} else if context.type_ == ContextType::LambdaParameters
&& token.type_ == TokenType::Colon
{
true
} else {
false
}
};
if comma_prohibited {
let comma = prev.spanned.unwrap();
let mut diagnostic = Diagnostic::new(
violations::TrailingCommaProhibited,
Range {
location: comma.0,
end_location: comma.2,
},
);
diagnostic.amend(Fix::deletion(comma.0, comma.2));
diagnostics.push(diagnostic);
}
// Is prev a prohibited trailing comma on a bare tuple?
// Approximation: any comma followed by a statement-ending newline.
let bare_comma_prohibited =
prev.type_ == TokenType::Comma && token.type_ == TokenType::Newline;
if bare_comma_prohibited {
let comma = prev.spanned.unwrap();
let diagnostic = Diagnostic::new(
violations::TrailingCommaOnBareTupleProhibited,
Range {
location: comma.0,
end_location: comma.2,
},
);
diagnostics.push(diagnostic);
}
// Comma is required if:
// - It is allowed,
// - Followed by a newline,
// - Not already present,
// - Not on an empty (), {}, [].
let comma_required = comma_allowed
&& prev.type_ == TokenType::NonLogicalNewline
&& !matches!(
prev_prev.type_,
TokenType::Comma
| TokenType::OpeningBracket
| TokenType::OpeningSquareBracket
| TokenType::OpeningCurlyBracket
);
if comma_required {
let missing_comma = prev_prev.spanned.unwrap();
let mut diagnostic = Diagnostic::new(
violations::TrailingCommaMissing,
Range {
location: missing_comma.2,
end_location: missing_comma.2,
},
);
diagnostic.amend(Fix::insertion(",".to_owned(), missing_comma.2));
diagnostics.push(diagnostic);
}
// Pop the current context if the current token ended it.
// The top context is never popped (even if there are unbalanced closing brackets).
let pop_context = match context.type_ {
// Lambda terminated by `:`.
ContextType::LambdaParameters => token.type_ == TokenType::Colon,
// All others terminated by a closing bracket.
// flake8-commas doesn't verify that it matches the opening...
_ => token.type_ == TokenType::ClosingBracket,
};
if pop_context && stack.len() > 1 {
stack.pop();
}
}
diagnostics
}
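The singleton special-casing in `comma_allowed` reflects Python semantics rather than style: for one-element tuples and subscripts, the trailing comma is not cosmetic. A quick illustration:

```python
# A trailing comma changes the meaning of single-element parenthesized
# expressions, which is why the checker never *requires* one there and
# leaves `(1,)` / `x[1,]` alone.
plain = (1)        # just the int 1
singleton = (1,)   # a one-element tuple
assert plain == 1 and type(plain) is int
assert singleton == (1,) and type(singleton) is tuple

# Likewise for subscripts: `d[1]` indexes with 1, `d[1,]` with the tuple (1,).
d = {1: "int key", (1,): "tuple key"}
assert d[1] == "int key"
assert d[1,] == "tuple key"
```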


@@ -0,0 +1,789 @@
---
source: src/rules/flake8_commas/mod.rs
expression: diagnostics
---
- kind:
TrailingCommaMissing: ~
location:
row: 4
column: 17
end_location:
row: 4
column: 17
fix:
content: ","
location:
row: 4
column: 17
end_location:
row: 4
column: 17
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 10
column: 5
end_location:
row: 10
column: 5
fix:
content: ","
location:
row: 10
column: 5
end_location:
row: 10
column: 5
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 16
column: 5
end_location:
row: 16
column: 5
fix:
content: ","
location:
row: 16
column: 5
end_location:
row: 16
column: 5
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 23
column: 5
end_location:
row: 23
column: 5
fix:
content: ","
location:
row: 23
column: 5
end_location:
row: 23
column: 5
parent: ~
- kind:
TrailingCommaOnBareTupleProhibited: ~
location:
row: 36
column: 7
end_location:
row: 36
column: 8
fix: ~
parent: ~
- kind:
TrailingCommaOnBareTupleProhibited: ~
location:
row: 38
column: 18
end_location:
row: 38
column: 19
fix: ~
parent: ~
- kind:
TrailingCommaOnBareTupleProhibited: ~
location:
row: 45
column: 7
end_location:
row: 45
column: 8
fix: ~
parent: ~
- kind:
TrailingCommaOnBareTupleProhibited: ~
location:
row: 49
column: 9
end_location:
row: 49
column: 10
fix: ~
parent: ~
- kind:
TrailingCommaOnBareTupleProhibited: ~
location:
row: 56
column: 31
end_location:
row: 56
column: 32
fix: ~
parent: ~
- kind:
TrailingCommaOnBareTupleProhibited: ~
location:
row: 58
column: 25
end_location:
row: 58
column: 26
fix: ~
parent: ~
- kind:
TrailingCommaOnBareTupleProhibited: ~
location:
row: 61
column: 16
end_location:
row: 61
column: 17
fix: ~
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 70
column: 7
end_location:
row: 70
column: 7
fix:
content: ","
location:
row: 70
column: 7
end_location:
row: 70
column: 7
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 78
column: 7
end_location:
row: 78
column: 7
fix:
content: ","
location:
row: 78
column: 7
end_location:
row: 78
column: 7
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 86
column: 7
end_location:
row: 86
column: 7
fix:
content: ","
location:
row: 86
column: 7
end_location:
row: 86
column: 7
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 152
column: 5
end_location:
row: 152
column: 5
fix:
content: ","
location:
row: 152
column: 5
end_location:
row: 152
column: 5
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 158
column: 10
end_location:
row: 158
column: 10
fix:
content: ","
location:
row: 158
column: 10
end_location:
row: 158
column: 10
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 293
column: 14
end_location:
row: 293
column: 14
fix:
content: ","
location:
row: 293
column: 14
end_location:
row: 293
column: 14
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 304
column: 13
end_location:
row: 304
column: 13
fix:
content: ","
location:
row: 304
column: 13
end_location:
row: 304
column: 13
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 310
column: 13
end_location:
row: 310
column: 13
fix:
content: ","
location:
row: 310
column: 13
end_location:
row: 310
column: 13
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 316
column: 9
end_location:
row: 316
column: 9
fix:
content: ","
location:
row: 316
column: 9
end_location:
row: 316
column: 9
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 322
column: 14
end_location:
row: 322
column: 14
fix:
content: ","
location:
row: 322
column: 14
end_location:
row: 322
column: 14
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 368
column: 14
end_location:
row: 368
column: 14
fix:
content: ","
location:
row: 368
column: 14
end_location:
row: 368
column: 14
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 375
column: 14
end_location:
row: 375
column: 14
fix:
content: ","
location:
row: 375
column: 14
end_location:
row: 375
column: 14
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 404
column: 14
end_location:
row: 404
column: 14
fix:
content: ","
location:
row: 404
column: 14
end_location:
row: 404
column: 14
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 432
column: 14
end_location:
row: 432
column: 14
fix:
content: ","
location:
row: 432
column: 14
end_location:
row: 432
column: 14
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 485
column: 20
end_location:
row: 485
column: 21
fix:
content: ""
location:
row: 485
column: 20
end_location:
row: 485
column: 21
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 487
column: 12
end_location:
row: 487
column: 13
fix:
content: ""
location:
row: 487
column: 12
end_location:
row: 487
column: 13
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 489
column: 17
end_location:
row: 489
column: 18
fix:
content: ""
location:
row: 489
column: 17
end_location:
row: 489
column: 18
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 494
column: 5
end_location:
row: 494
column: 6
fix:
content: ""
location:
row: 494
column: 5
end_location:
row: 494
column: 6
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 496
column: 20
end_location:
row: 496
column: 21
fix:
content: ""
location:
row: 496
column: 20
end_location:
row: 496
column: 21
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 498
column: 12
end_location:
row: 498
column: 13
fix:
content: ""
location:
row: 498
column: 12
end_location:
row: 498
column: 13
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 500
column: 17
end_location:
row: 500
column: 18
fix:
content: ""
location:
row: 500
column: 17
end_location:
row: 500
column: 18
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 505
column: 5
end_location:
row: 505
column: 6
fix:
content: ""
location:
row: 505
column: 5
end_location:
row: 505
column: 6
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 511
column: 9
end_location:
row: 511
column: 10
fix:
content: ""
location:
row: 511
column: 9
end_location:
row: 511
column: 10
parent: ~
- kind:
TrailingCommaProhibited: ~
location:
row: 513
column: 8
end_location:
row: 513
column: 9
fix:
content: ""
location:
row: 513
column: 8
end_location:
row: 513
column: 9
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 519
column: 12
end_location:
row: 519
column: 12
fix:
content: ","
location:
row: 519
column: 12
end_location:
row: 519
column: 12
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 526
column: 9
end_location:
row: 526
column: 9
fix:
content: ","
location:
row: 526
column: 9
end_location:
row: 526
column: 9
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 534
column: 15
end_location:
row: 534
column: 15
fix:
content: ","
location:
row: 534
column: 15
end_location:
row: 534
column: 15
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 541
column: 12
end_location:
row: 541
column: 12
fix:
content: ","
location:
row: 541
column: 12
end_location:
row: 541
column: 12
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 547
column: 23
end_location:
row: 547
column: 23
fix:
content: ","
location:
row: 547
column: 23
end_location:
row: 547
column: 23
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 554
column: 14
end_location:
row: 554
column: 14
fix:
content: ","
location:
row: 554
column: 14
end_location:
row: 554
column: 14
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 561
column: 12
end_location:
row: 561
column: 12
fix:
content: ","
location:
row: 561
column: 12
end_location:
row: 561
column: 12
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 565
column: 12
end_location:
row: 565
column: 12
fix:
content: ","
location:
row: 565
column: 12
end_location:
row: 565
column: 12
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 573
column: 9
end_location:
row: 573
column: 9
fix:
content: ","
location:
row: 573
column: 9
end_location:
row: 573
column: 9
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 577
column: 9
end_location:
row: 577
column: 9
fix:
content: ","
location:
row: 577
column: 9
end_location:
row: 577
column: 9
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 583
column: 9
end_location:
row: 583
column: 9
fix:
content: ","
location:
row: 583
column: 9
end_location:
row: 583
column: 9
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 590
column: 12
end_location:
row: 590
column: 12
fix:
content: ","
location:
row: 590
column: 12
end_location:
row: 590
column: 12
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 598
column: 14
end_location:
row: 598
column: 14
fix:
content: ","
location:
row: 598
column: 14
end_location:
row: 598
column: 14
parent: ~
- kind:
TrailingCommaMissing: ~
location:
row: 627
column: 19
end_location:
row: 627
column: 19
fix:
content: ","
location:
row: 627
column: 19
end_location:
row: 627
column: 19
parent: ~


@@ -32,7 +32,7 @@ pub fn no_unnecessary_pass(checker: &mut Checker, body: &[Stmt]) {
Range::from_located(pass_stmt),
);
if checker.patch(&RuleCode::PIE790) {
match delete_stmt(pass_stmt, None, &[], checker.locator) {
match delete_stmt(pass_stmt, None, &[], checker.locator, checker.indexer) {
Ok(fix) => {
diagnostic.amend(fix);
}
@@ -91,7 +91,7 @@ pub fn dupe_class_field_definitions<'a, 'b>(
.map(std::convert::Into::into)
.collect();
let locator = checker.locator;
match delete_stmt(stmt, Some(parent), &deleted, locator) {
match delete_stmt(stmt, Some(parent), &deleted, locator, checker.indexer) {
Ok(fix) => {
checker.deletions.insert(RefEquality(stmt));
diagnostic.amend(fix);


@@ -62,6 +62,7 @@ pub fn print_call(checker: &mut Checker, func: &Expr, keywords: &[Keyword]) {
defined_in.map(std::convert::Into::into),
&deleted,
checker.locator,
checker.indexer,
) {
Ok(fix) => {
if fix.content.is_empty() || fix.content == "pass" {


@@ -35,7 +35,7 @@ mod tests {
RuleCode::PT006,
Path::new("PT006.py"),
Settings {
parametrize_names_type: types::ParametrizeNameType::CSV,
parametrize_names_type: types::ParametrizeNameType::Csv,
..Settings::default()
},
"PT006_csv";


@@ -106,7 +106,7 @@ pub fn unittest_assertion(
if checker.patch(diagnostic.kind.code()) {
if let Ok(stmt) = unittest_assert.generate_assert(args, keywords) {
diagnostic.amend(Fix::replacement(
unparse_stmt(&stmt, checker.style),
unparse_stmt(&stmt, checker.stylist),
call.location,
call.end_location.unwrap(),
));


@@ -31,7 +31,7 @@ fn elts_to_csv(elts: &[Expr], checker: &Checker) -> Option<String> {
return None;
}
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(
&create_expr(ExprKind::Constant {
value: Constant::Str(elts.iter().fold(String::new(), |mut acc, elt| {
@@ -85,7 +85,7 @@ fn check_names(checker: &mut Checker, expr: &Expr) {
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(
&create_expr(ExprKind::Tuple {
elts: names
@@ -115,7 +115,7 @@ fn check_names(checker: &mut Checker, expr: &Expr) {
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(
&create_expr(ExprKind::List {
elts: names
@@ -139,7 +139,7 @@ fn check_names(checker: &mut Checker, expr: &Expr) {
}
checker.diagnostics.push(diagnostic);
}
types::ParametrizeNameType::CSV => {}
types::ParametrizeNameType::Csv => {}
}
}
}
@@ -157,7 +157,7 @@ fn check_names(checker: &mut Checker, expr: &Expr) {
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(
&create_expr(ExprKind::List {
elts: elts.clone(),
@@ -173,7 +173,7 @@ fn check_names(checker: &mut Checker, expr: &Expr) {
}
checker.diagnostics.push(diagnostic);
}
types::ParametrizeNameType::CSV => {
types::ParametrizeNameType::Csv => {
let mut diagnostic = Diagnostic::new(
violations::ParametrizeNamesWrongType(names_type),
Range::from_located(expr),
@@ -206,7 +206,7 @@ fn check_names(checker: &mut Checker, expr: &Expr) {
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(
&create_expr(ExprKind::Tuple {
elts: elts.clone(),
@@ -222,7 +222,7 @@ fn check_names(checker: &mut Checker, expr: &Expr) {
}
checker.diagnostics.push(diagnostic);
}
types::ParametrizeNameType::CSV => {
types::ParametrizeNameType::Csv => {
let mut diagnostic = Diagnostic::new(
violations::ParametrizeNamesWrongType(names_type),
Range::from_located(expr),
@@ -279,12 +279,12 @@ fn check_values(checker: &mut Checker, expr: &Expr) {
fn handle_single_name(checker: &mut Checker, expr: &Expr, value: &Expr) {
let mut diagnostic = Diagnostic::new(
violations::ParametrizeNamesWrongType(types::ParametrizeNameType::CSV),
violations::ParametrizeNamesWrongType(types::ParametrizeNameType::Csv),
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(&create_expr(value.node.clone()), 0);
diagnostic.amend(Fix::replacement(
generator.generate(),

View File

@@ -4,10 +4,9 @@ use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
#[allow(clippy::upper_case_acronyms)]
pub enum ParametrizeNameType {
#[serde(rename = "csv")]
CSV,
Csv,
#[serde(rename = "tuple")]
Tuple,
#[serde(rename = "list")]
@@ -23,7 +22,7 @@ impl Default for ParametrizeNameType {
impl Display for ParametrizeNameType {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Self::CSV => write!(f, "csv"),
Self::Csv => write!(f, "csv"),
Self::Tuple => write!(f, "tuple"),
Self::List => write!(f, "list"),
}

View File

@@ -3,6 +3,7 @@ use rustpython_ast::{Constant, Expr, ExprKind, Location, Stmt, StmtKind};
use super::helpers::result_exists;
use super::visitor::{ReturnVisitor, Stack};
use crate::ast::helpers::elif_else_range;
use crate::ast::types::Range;
use crate::ast::visitor::Visitor;
use crate::ast::whitespace::indentation;
@@ -228,7 +229,8 @@ fn superfluous_else_node(checker: &mut Checker, stmt: &Stmt, branch: Branch) ->
if checker.settings.enabled.contains(&RuleCode::RET505) {
checker.diagnostics.push(Diagnostic::new(
violations::SuperfluousElseReturn(branch),
Range::from_located(stmt),
elif_else_range(stmt, checker.locator)
.unwrap_or_else(|| Range::from_located(stmt)),
));
}
return true;
@@ -237,7 +239,8 @@ fn superfluous_else_node(checker: &mut Checker, stmt: &Stmt, branch: Branch) ->
if checker.settings.enabled.contains(&RuleCode::RET508) {
checker.diagnostics.push(Diagnostic::new(
violations::SuperfluousElseBreak(branch),
Range::from_located(stmt),
elif_else_range(stmt, checker.locator)
.unwrap_or_else(|| Range::from_located(stmt)),
));
}
return true;
@@ -246,7 +249,8 @@ fn superfluous_else_node(checker: &mut Checker, stmt: &Stmt, branch: Branch) ->
if checker.settings.enabled.contains(&RuleCode::RET506) {
checker.diagnostics.push(Diagnostic::new(
violations::SuperfluousElseRaise(branch),
Range::from_located(stmt),
elif_else_range(stmt, checker.locator)
.unwrap_or_else(|| Range::from_located(stmt)),
));
}
return true;
@@ -255,7 +259,8 @@ fn superfluous_else_node(checker: &mut Checker, stmt: &Stmt, branch: Branch) ->
if checker.settings.enabled.contains(&RuleCode::RET507) {
checker.diagnostics.push(Diagnostic::new(
violations::SuperfluousElseContinue(branch),
Range::from_located(stmt),
elif_else_range(stmt, checker.locator)
.unwrap_or_else(|| Range::from_located(stmt)),
));
}
return true;
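
The four hunks above narrow the RET505–RET508 diagnostic range from the whole `else` branch to the `elif`/`else` keyword itself (via `elif_else_range`, falling back to the old full-statement range). The Python pattern these rules flag, and its fix, for reference:

```python
def with_superfluous_else(x: int) -> int:
    if x > 0:
        return x
    else:  # RET505: `else` is unnecessary after `return`
        return -x


def without_else(x: int) -> int:
    if x > 0:
        return x
    return -x  # equivalent control flow, no superfluous branch


assert with_superfluous_else(3) == without_else(3) == 3
assert with_superfluous_else(-3) == without_else(-3) == 3
```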

View File

@@ -5,81 +5,81 @@ expression: diagnostics
- kind:
SuperfluousElseReturn: Elif
location:
row: 5
row: 8
column: 4
end_location:
row: 13
column: 16
row: 8
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseReturn: Elif
location:
row: 17
row: 23
column: 4
end_location:
row: 26
column: 13
row: 23
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseReturn: Elif
location:
row: 38
row: 41
column: 4
end_location:
row: 46
column: 16
row: 41
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseReturn: Else
location:
row: 50
row: 53
column: 4
end_location:
row: 55
column: 16
row: 53
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseReturn: Else
location:
row: 61
row: 64
column: 8
end_location:
row: 66
column: 20
row: 64
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseReturn: Else
location:
row: 73
row: 79
column: 4
end_location:
row: 80
column: 13
row: 79
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseReturn: Else
location:
row: 86
row: 89
column: 8
end_location:
row: 90
column: 17
row: 89
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseReturn: Else
location:
row: 97
row: 99
column: 4
end_location:
row: 103
column: 23
row: 99
column: 8
fix: ~
parent: ~

View File

@@ -5,71 +5,71 @@ expression: diagnostics
- kind:
SuperfluousElseRaise: Elif
location:
row: 5
row: 8
column: 4
end_location:
row: 13
column: 26
row: 8
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseRaise: Elif
location:
row: 17
row: 23
column: 4
end_location:
row: 26
column: 13
row: 23
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseRaise: Else
location:
row: 31
row: 34
column: 4
end_location:
row: 36
column: 26
row: 34
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseRaise: Else
location:
row: 42
row: 45
column: 8
end_location:
row: 47
column: 30
row: 45
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseRaise: Else
location:
row: 54
row: 60
column: 4
end_location:
row: 61
column: 13
row: 60
column: 8
fix: ~
parent: ~
- kind:
SuperfluousElseRaise: Else
location:
row: 67
row: 70
column: 8
end_location:
row: 71
column: 17
row: 70
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseRaise: Else
location:
row: 78
row: 80
column: 4
end_location:
row: 84
column: 33
row: 80
column: 8
fix: ~
parent: ~

View File

@@ -5,71 +5,71 @@ expression: diagnostics
- kind:
SuperfluousElseContinue: Elif
location:
row: 6
row: 8
column: 8
end_location:
row: 11
column: 17
row: 8
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseContinue: Elif
location:
row: 16
row: 22
column: 8
end_location:
row: 25
column: 17
row: 22
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseContinue: Else
location:
row: 34
row: 36
column: 8
end_location:
row: 37
column: 17
row: 36
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseContinue: Else
location:
row: 44
row: 47
column: 12
end_location:
row: 49
column: 24
row: 47
column: 16
fix: ~
parent: ~
- kind:
SuperfluousElseContinue: Else
location:
row: 57
row: 63
column: 8
end_location:
row: 64
column: 17
row: 63
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseContinue: Else
location:
row: 71
row: 74
column: 12
end_location:
row: 75
column: 21
row: 74
column: 16
fix: ~
parent: ~
- kind:
SuperfluousElseContinue: Else
location:
row: 83
row: 85
column: 8
end_location:
row: 89
column: 24
row: 85
column: 12
fix: ~
parent: ~

View File

@@ -5,71 +5,71 @@ expression: diagnostics
- kind:
SuperfluousElseBreak: Elif
location:
row: 6
row: 8
column: 8
end_location:
row: 11
column: 17
row: 8
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseBreak: Elif
location:
row: 16
row: 22
column: 8
end_location:
row: 25
column: 17
row: 22
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseBreak: Else
location:
row: 31
row: 33
column: 8
end_location:
row: 34
column: 17
row: 33
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseBreak: Else
location:
row: 41
row: 44
column: 12
end_location:
row: 46
column: 21
row: 44
column: 16
fix: ~
parent: ~
- kind:
SuperfluousElseBreak: Else
location:
row: 54
row: 60
column: 8
end_location:
row: 61
column: 17
row: 60
column: 12
fix: ~
parent: ~
- kind:
SuperfluousElseBreak: Else
location:
row: 68
row: 71
column: 12
end_location:
row: 72
column: 21
row: 71
column: 16
fix: ~
parent: ~
- kind:
SuperfluousElseBreak: Else
location:
row: 80
row: 82
column: 8
end_location:
row: 86
column: 21
row: 82
column: 12
fix: ~
parent: ~

View File

@@ -126,7 +126,7 @@ pub fn duplicate_isinstance_call(checker: &mut Checker, expr: &Expr) {
// Populate the `Fix`. Replace the _entire_ `BoolOp`. Note that if we have
// multiple duplicates, the fixes will conflict.
diagnostic.amend(Fix::replacement(
unparse_expr(&bool_op, checker.style),
unparse_expr(&bool_op, checker.stylist),
expr.location,
expr.end_location.unwrap(),
));
@@ -169,13 +169,13 @@ pub fn compare_with_tuple(checker: &mut Checker, expr: &Expr) {
}
let str_values = values
.iter()
.map(|value| unparse_expr(value, checker.style))
.map(|value| unparse_expr(value, checker.stylist))
.collect();
let mut diagnostic = Diagnostic::new(
violations::CompareWithTuple(
value.to_string(),
str_values,
unparse_expr(expr, checker.style),
unparse_expr(expr, checker.stylist),
),
Range::from_located(expr),
);
@@ -193,7 +193,7 @@ pub fn compare_with_tuple(checker: &mut Checker, expr: &Expr) {
})],
});
diagnostic.amend(Fix::replacement(
unparse_expr(&in_expr, checker.style),
unparse_expr(&in_expr, checker.stylist),
expr.location,
expr.end_location.unwrap(),
));

View File

@@ -46,7 +46,7 @@ pub fn use_capital_environment_variables(checker: &mut Checker, expr: &Expr) {
kind: kind.clone(),
});
diagnostic.amend(Fix::replacement(
unparse_expr(&new_env_var, checker.style),
unparse_expr(&new_env_var, checker.stylist),
arg.location,
arg.end_location.unwrap(),
));
@@ -85,7 +85,7 @@ fn check_os_environ_subscript(checker: &mut Checker, expr: &Expr) {
kind: kind.clone(),
});
diagnostic.amend(Fix::replacement(
unparse_expr(&new_env_var, checker.style),
unparse_expr(&new_env_var, checker.stylist),
slice.location,
slice.end_location.unwrap(),
));

View File

@@ -184,7 +184,7 @@ pub fn convert_for_loop_to_any_all(checker: &mut Checker, stmt: &Stmt, sibling:
loop_info.test,
loop_info.target,
loop_info.iter,
checker.style,
checker.stylist,
);
// Don't flag if the resulting expression would exceed the maximum line length.
@@ -232,7 +232,7 @@ pub fn convert_for_loop_to_any_all(checker: &mut Checker, stmt: &Stmt, sibling:
&test,
loop_info.target,
loop_info.iter,
checker.style,
checker.stylist,
);
// Don't flag if the resulting expression would exceed the maximum line length.

View File

@@ -91,7 +91,7 @@ pub fn return_bool_condition_directly(checker: &mut Checker, stmt: &Stmt) {
if !(is_one_line_return_bool(body) && is_one_line_return_bool(orelse)) {
return;
}
let condition = unparse_expr(test, checker.style);
let condition = unparse_expr(test, checker.stylist);
let mut diagnostic = Diagnostic::new(
violations::ReturnBoolConditionDirectly(condition),
Range::from_located(stmt),
@@ -101,7 +101,7 @@ pub fn return_bool_condition_directly(checker: &mut Checker, stmt: &Stmt) {
value: Some(test.clone()),
});
diagnostic.amend(Fix::replacement(
unparse_stmt(&return_stmt, checker.style),
unparse_stmt(&return_stmt, checker.stylist),
stmt.location,
stmt.end_location.unwrap(),
));
@@ -191,7 +191,7 @@ pub fn use_ternary_operator(checker: &mut Checker, stmt: &Stmt, parent: Option<&
let target_var = &body_targets[0];
let ternary = ternary(target_var, body_value, test, orelse_value);
let contents = unparse_stmt(&ternary, checker.style);
let contents = unparse_stmt(&ternary, checker.stylist);
// Don't flag if the resulting expression would exceed the maximum line length.
if stmt.location.column() + contents.len() > checker.settings.line_length {
@@ -305,7 +305,7 @@ pub fn use_dict_get_with_default(
})),
type_comment: None,
}),
checker.style,
checker.stylist,
);
// Don't flag if the resulting expression would exceed the maximum line length.

View File

@@ -29,7 +29,7 @@ pub fn explicit_true_false_in_ifexpr(
}
let mut diagnostic = Diagnostic::new(
violations::IfExprWithTrueFalse(unparse_expr(test, checker.style)),
violations::IfExprWithTrueFalse(unparse_expr(test, checker.stylist)),
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
@@ -43,7 +43,7 @@ pub fn explicit_true_false_in_ifexpr(
args: vec![create_expr(test.node.clone())],
keywords: vec![],
}),
checker.style,
checker.stylist,
),
expr.location,
expr.end_location.unwrap(),
@@ -74,7 +74,7 @@ pub fn explicit_false_true_in_ifexpr(
}
let mut diagnostic = Diagnostic::new(
violations::IfExprWithFalseTrue(unparse_expr(test, checker.style)),
violations::IfExprWithFalseTrue(unparse_expr(test, checker.stylist)),
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
@@ -84,7 +84,7 @@ pub fn explicit_false_true_in_ifexpr(
op: Unaryop::Not,
operand: Box::new(create_expr(test.node.clone())),
}),
checker.style,
checker.stylist,
),
expr.location,
expr.end_location.unwrap(),
@@ -121,8 +121,8 @@ pub fn twisted_arms_in_ifexpr(
let mut diagnostic = Diagnostic::new(
violations::IfExprWithTwistedArms(
unparse_expr(body, checker.style),
unparse_expr(orelse, checker.style),
unparse_expr(body, checker.stylist),
unparse_expr(orelse, checker.stylist),
),
Range::from_located(expr),
);
@@ -134,7 +134,7 @@ pub fn twisted_arms_in_ifexpr(
body: Box::new(create_expr(orelse.node.clone())),
orelse: Box::new(create_expr(body.node.clone())),
}),
checker.style,
checker.stylist,
),
expr.location,
expr.end_location.unwrap(),

View File

@@ -37,8 +37,8 @@ pub fn negation_with_equal_op(checker: &mut Checker, expr: &Expr, op: &Unaryop,
let mut diagnostic = Diagnostic::new(
violations::NegateEqualOp(
unparse_expr(left, checker.style),
unparse_expr(&comparators[0], checker.style),
unparse_expr(left, checker.stylist),
unparse_expr(&comparators[0], checker.stylist),
),
Range::from_located(expr),
);
@@ -50,7 +50,7 @@ pub fn negation_with_equal_op(checker: &mut Checker, expr: &Expr, op: &Unaryop,
ops: vec![Cmpop::NotEq],
comparators: comparators.clone(),
}),
checker.style,
checker.stylist,
),
expr.location,
expr.end_location.unwrap(),
@@ -81,8 +81,8 @@ pub fn negation_with_not_equal_op(
let mut diagnostic = Diagnostic::new(
violations::NegateNotEqualOp(
unparse_expr(left, checker.style),
unparse_expr(&comparators[0], checker.style),
unparse_expr(left, checker.stylist),
unparse_expr(&comparators[0], checker.stylist),
),
Range::from_located(expr),
);
@@ -94,7 +94,7 @@ pub fn negation_with_not_equal_op(
ops: vec![Cmpop::Eq],
comparators: comparators.clone(),
}),
checker.style,
checker.stylist,
),
expr.location,
expr.end_location.unwrap(),
@@ -121,7 +121,7 @@ pub fn double_negation(checker: &mut Checker, expr: &Expr, op: &Unaryop, operand
);
if checker.patch(diagnostic.kind.code()) {
diagnostic.amend(Fix::replacement(
unparse_expr(operand, checker.style),
unparse_expr(operand, checker.stylist),
expr.location,
expr.end_location.unwrap(),
));

View File

@@ -63,11 +63,7 @@ pub fn has_comment_break(stmt: &Stmt, locator: &Locator) -> bool {
// # Direct comment.
// def f(): pass
let mut seen_blank = false;
for line in locator
.slice_source_code_until(&stmt.location)
.lines()
.rev()
{
for line in locator.slice_source_code_until(stmt.location).lines().rev() {
let line = line.trim();
if seen_blank {
if line.starts_with('#') {
@@ -113,7 +109,7 @@ pub fn find_splice_location(body: &[Stmt], locator: &Locator) -> Location {
let mut splice = match_docstring_end(body).unwrap_or_default();
// Find the first token that isn't a comment or whitespace.
let contents = locator.slice_source_code_at(&splice);
let contents = locator.slice_source_code_at(splice);
for (.., tok, end) in lexer::make_tokenizer(&contents).flatten() {
if matches!(tok, Tok::Comment(..) | Tok::Newline) {
splice = end;

View File

@@ -13,7 +13,7 @@ use crate::ast::whitespace::leading_space;
use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::violations;
fn extract_range(body: &[&Stmt]) -> Range {
@@ -31,6 +31,7 @@ fn extract_indentation_range(body: &[&Stmt]) -> Range {
pub fn organize_imports(
block: &Block,
locator: &Locator,
indexer: &Indexer,
settings: &Settings,
stylist: &Stylist,
autofix: flags::Autofix,
@@ -43,7 +44,7 @@ pub fn organize_imports(
// Special-cases: there's leading or trailing content in the import block. These
// are too hard to get right, and relatively rare, so flag but don't fix.
if preceded_by_multi_statement_line(block.imports.first().unwrap(), locator)
if preceded_by_multi_statement_line(block.imports.first().unwrap(), locator, indexer)
|| followed_by_multi_statement_line(block.imports.last().unwrap(), locator)
{
return Some(Diagnostic::new(violations::UnsortedImports, range));

View File

@@ -6,6 +6,7 @@ pub mod flake8_blind_except;
pub mod flake8_boolean_trap;
pub mod flake8_bugbear;
pub mod flake8_builtins;
pub mod flake8_commas;
pub mod flake8_comprehensions;
pub mod flake8_datetimez;
pub mod flake8_debugger;

View File

@@ -13,7 +13,7 @@ mod tests {
use crate::linter::check_path;
use crate::registry::{RuleCode, RuleCodePrefix};
use crate::settings::flags;
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::{directives, rustpython_helpers, settings};
fn rule_code(contents: &str, expected: &[RuleCode]) -> Result<()> {
@@ -22,6 +22,7 @@ mod tests {
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let indexer: Indexer = tokens.as_slice().into();
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(&settings));
let diagnostics = check_path(
@@ -31,6 +32,7 @@ mod tests {
tokens,
&locator,
&stylist,
&indexer,
&directives,
&settings,
flags::Autofix::Enabled,

View File

@@ -284,7 +284,7 @@ pub fn literal_comparisons(
.map(|(idx, op)| bad_ops.get(&idx).unwrap_or(op))
.cloned()
.collect::<Vec<_>>();
let content = compare(left, &ops, comparators, checker.style);
let content = compare(left, &ops, comparators, checker.stylist);
for diagnostic in &mut diagnostics {
diagnostic.amend(Fix::replacement(
content.to_string(),
@@ -325,7 +325,7 @@ pub fn not_tests(
);
if checker.patch(diagnostic.kind.code()) && should_fix {
diagnostic.amend(Fix::replacement(
compare(left, &[Cmpop::NotIn], comparators, checker.style),
compare(left, &[Cmpop::NotIn], comparators, checker.stylist),
expr.location,
expr.end_location.unwrap(),
));
@@ -341,7 +341,7 @@ pub fn not_tests(
);
if checker.patch(diagnostic.kind.code()) && should_fix {
diagnostic.amend(Fix::replacement(
compare(left, &[Cmpop::IsNot], comparators, checker.style),
compare(left, &[Cmpop::IsNot], comparators, checker.stylist),
expr.location,
expr.end_location.unwrap(),
));
@@ -465,7 +465,10 @@ pub fn do_not_assign_lambda(checker: &mut Checker, target: &Expr, value: &Expr,
));
let indentation = &leading_space(&first_line);
let mut indented = String::new();
for (idx, line) in function(id, args, body, checker.style).lines().enumerate() {
for (idx, line) in function(id, args, body, checker.stylist)
.lines()
.enumerate()
{
if idx == 0 {
indented.push_str(line);
} else {

View File

@@ -17,7 +17,8 @@ mod tests {
#[test_case(RuleCode::D100, Path::new("D.py"); "D100")]
#[test_case(RuleCode::D101, Path::new("D.py"); "D101")]
#[test_case(RuleCode::D102, Path::new("D.py"); "D102")]
#[test_case(RuleCode::D102, Path::new("D.py"); "D102_0")]
#[test_case(RuleCode::D102, Path::new("setter.py"); "D102_1")]
#[test_case(RuleCode::D103, Path::new("D.py"); "D103")]
#[test_case(RuleCode::D104, Path::new("D.py"); "D104")]
#[test_case(RuleCode::D105, Path::new("D.py"); "D105")]

View File

@@ -18,8 +18,7 @@ pub enum Convention {
}
impl Convention {
#[allow(clippy::trivially_copy_pass_by_ref)]
pub fn codes(&self) -> &'static [RuleCodePrefix] {
pub fn codes(self) -> &'static [RuleCodePrefix] {
match self {
Convention::Google => &[
// All errors except D203, D204, D213, D215, D400, D401, D404, D406, D407, D408,

View File

@@ -0,0 +1,15 @@
---
source: src/rules/pydocstyle/mod.rs
expression: diagnostics
---
- kind:
PublicMethod: ~
location:
row: 16
column: 8
end_location:
row: 16
column: 11
fix: ~
parent: ~

View File

@@ -17,7 +17,7 @@ mod tests {
use crate::linter::{check_path, test_path};
use crate::registry::{RuleCode, RuleCodePrefix};
use crate::settings::flags;
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::{directives, rustpython_helpers, settings};
#[test_case(RuleCode::F401, Path::new("F401_0.py"); "F401_0")]
@@ -213,6 +213,7 @@ mod tests {
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let indexer: Indexer = tokens.as_slice().into();
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(&settings));
let mut diagnostics = check_path(
@@ -222,6 +223,7 @@ mod tests {
tokens,
&locator,
&stylist,
&indexer,
&directives,
&settings,
flags::Autofix::Enabled,

View File

@@ -42,7 +42,7 @@ pub fn repeated_keys(checker: &mut Checker, keys: &[Expr], values: &[Expr]) {
let is_duplicate_value = seen_values.contains(&comparable_value);
let mut diagnostic = Diagnostic::new(
violations::MultiValueRepeatedKeyLiteral(
unparse_expr(&keys[i], checker.style),
unparse_expr(&keys[i], checker.stylist),
is_duplicate_value,
),
Range::from_located(&keys[i]),

View File

@@ -70,7 +70,8 @@ fn remove_unused_variable(
.map(std::convert::Into::into)
.collect();
let locator = checker.locator;
match delete_stmt(stmt, parent, &deleted, locator) {
let indexer = checker.indexer;
match delete_stmt(stmt, parent, &deleted, locator, indexer) {
Ok(fix) => Some((DeletionKind::Whole, fix)),
Err(err) => {
error!("Failed to delete unused variable: {}", err);
@@ -108,7 +109,8 @@ fn remove_unused_variable(
.map(std::convert::Into::into)
.collect();
let locator = checker.locator;
match delete_stmt(stmt, parent, &deleted, locator) {
let indexer = checker.indexer;
match delete_stmt(stmt, parent, &deleted, locator, indexer) {
Ok(fix) => Some((DeletionKind::Whole, fix)),
Err(err) => {
error!("Failed to delete unused variable: {}", err);

View File

@@ -17,7 +17,7 @@ pub fn remove_class_def_base(
bases: &[Expr],
keywords: &[Keyword],
) -> Option<Fix> {
let contents = locator.slice_source_code_at(&stmt_at);
let contents = locator.slice_source_code_at(stmt_at);
// Case 1: `object` is the only base.
if bases.len() == 1 && keywords.is_empty() {

View File

@@ -168,7 +168,7 @@ pub fn convert_named_tuple_functional_to_class(
typename,
properties,
base_class,
checker.style,
checker.stylist,
));
}
Err(err) => debug!("Skipping ineligible `NamedTuple` \"{typename}\": {err}"),

View File

@@ -210,7 +210,7 @@ pub fn convert_typed_dict_functional_to_class(
body,
total_keyword,
base_class,
checker.style,
checker.stylist,
));
}
Err(err) => debug!("Skipping ineligible `TypedDict` \"{class_name}\": {err}"),

View File

@@ -35,13 +35,13 @@ pub fn native_literals(
if id == "bytes" {
let mut content = String::with_capacity(3);
content.push('b');
content.push(checker.style.quote().into());
content.push(checker.style.quote().into());
content.push(checker.stylist.quote().into());
content.push(checker.stylist.quote().into());
content
} else {
let mut content = String::with_capacity(2);
content.push(checker.style.quote().into());
content.push(checker.style.quote().into());
content.push(checker.stylist.quote().into());
content.push(checker.stylist.quote().into());
content
},
expr.location,

View File

@@ -398,7 +398,7 @@ fn handle_next_on_six_dict(expr: &Expr, patch: bool, checker: &Checker) -> Optio
},
arg,
patch,
checker.style,
checker.stylist,
))
}
@@ -427,7 +427,7 @@ pub fn remove_six_compat(checker: &mut Checker, expr: &Expr) {
keywords,
expr,
patch,
checker.style,
checker.stylist,
checker.locator,
),
ExprKind::Attribute { attr, .. } => map_name(attr.as_str(), expr, patch),

View File

@@ -228,7 +228,7 @@ pub fn rewrite_mock_import(checker: &mut Checker, stmt: &Stmt) {
// Generate the fix, if needed, which is shared between all `mock` imports.
let content = if checker.patch(&RuleCode::UP026) {
let indent = indentation(checker, stmt);
match format_import(stmt, &indent, checker.locator, checker.style) {
match format_import(stmt, &indent, checker.locator, checker.stylist) {
Ok(content) => Some(content),
Err(e) => {
error!("Failed to rewrite `mock` import: {e}");
@@ -274,7 +274,7 @@ pub fn rewrite_mock_import(checker: &mut Checker, stmt: &Stmt) {
);
if checker.patch(&RuleCode::UP026) {
let indent = indentation(checker, stmt);
match format_import_from(stmt, &indent, checker.locator, checker.style) {
match format_import_from(stmt, &indent, checker.locator, checker.stylist) {
Ok(content) => {
diagnostic.amend(Fix::replacement(
content,

View File

@@ -97,6 +97,7 @@ pub fn unnecessary_builtin_import(
defined_in.map(std::convert::Into::into),
&deleted,
checker.locator,
checker.indexer,
) {
Ok(fix) => {
if fix.content.is_empty() || fix.content == "pass" {

View File

@@ -82,6 +82,7 @@ pub fn unnecessary_future_import(checker: &mut Checker, stmt: &Stmt, names: &[Lo
defined_in.map(std::convert::Into::into),
&deleted,
checker.locator,
checker.indexer,
) {
Ok(fix) => {
if fix.content.is_empty() || fix.content == "pass" {

View File

@@ -68,7 +68,7 @@ pub fn use_pep604_annotation(checker: &mut Checker, expr: &Expr, value: &Expr, s
let mut diagnostic =
Diagnostic::new(violations::UsePEP604Annotation, Range::from_located(expr));
if checker.patch(diagnostic.kind.code()) {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(&optional(slice), 0);
diagnostic.amend(Fix::replacement(
generator.generate(),
@@ -88,7 +88,7 @@ pub fn use_pep604_annotation(checker: &mut Checker, expr: &Expr, value: &Expr, s
// Invalid type annotation.
}
ExprKind::Tuple { elts, .. } => {
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(&union(elts), 0);
diagnostic.amend(Fix::replacement(
generator.generate(),
@@ -98,7 +98,7 @@ pub fn use_pep604_annotation(checker: &mut Checker, expr: &Expr, value: &Expr, s
}
_ => {
// Single argument.
let mut generator: Generator = checker.style.into();
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(slice, 0);
diagnostic.amend(Fix::replacement(
generator.generate(),

View File

@@ -45,6 +45,7 @@ pub fn useless_metaclass_type(checker: &mut Checker, stmt: &Stmt, value: &Expr,
defined_in.map(std::convert::Into::into),
&deleted,
checker.locator,
checker.indexer,
) {
Ok(fix) => {
if fix.content.is_empty() || fix.content == "pass" {

View File

@@ -1632,20 +1632,20 @@ pub fn ambiguous_unicode_character(
let end_location = Location::new(location.row(), location.column() + 1);
let mut diagnostic = Diagnostic::new::<DiagnosticKind>(
match context {
Context::String => violations::AmbiguousUnicodeCharacterString(
current_char,
Context::String => violations::AmbiguousUnicodeCharacterString {
confusable: current_char,
representant,
)
}
.into(),
Context::Docstring => violations::AmbiguousUnicodeCharacterDocstring(
current_char,
Context::Docstring => violations::AmbiguousUnicodeCharacterDocstring {
confusable: current_char,
representant,
)
}
.into(),
Context::Comment => violations::AmbiguousUnicodeCharacterComment(
current_char,
Context::Comment => violations::AmbiguousUnicodeCharacterComment {
confusable: current_char,
representant,
)
}
.into(),
},
Range::new(location, end_location),

View File

@@ -4,8 +4,8 @@ expression: diagnostics
---
- kind:
AmbiguousUnicodeCharacterString:
- 𝐁
- B
confusable: 𝐁
representant: B
location:
row: 1
column: 5
@@ -23,8 +23,8 @@ expression: diagnostics
parent: ~
- kind:
AmbiguousUnicodeCharacterDocstring:
-
- )
confusable:
representant: )
location:
row: 6
column: 55
@@ -42,8 +42,8 @@ expression: diagnostics
parent: ~
- kind:
AmbiguousUnicodeCharacterComment:
-
- /
confusable:
representant: /
location:
row: 7
column: 61
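
The snapshot above reflects the switch from positional tuple fields to named `confusable`/`representant` fields on the ambiguous-unicode violations. The underlying check maps a visually ambiguous character to its ASCII look-alike; a minimal sketch with a tiny hand-picked table (ruff's actual confusables table is far larger):

```python
# Tiny excerpt of a confusable -> representant mapping (illustrative).
CONFUSABLES = {"\U0001d401": "B"}  # MATHEMATICAL BOLD CAPITAL B -> B


def find_ambiguous(text: str):
    """Yield (column, confusable, representant) for each ambiguous char."""
    for col, ch in enumerate(text):
        if ch in CONFUSABLES:
            yield col, ch, CONFUSABLES[ch]


hits = list(find_ambiguous("x = '\U0001d401'"))
assert hits == [(5, "\U0001d401", "B")]
```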

View File

@@ -12,6 +12,7 @@ use globset::{Glob, GlobMatcher, GlobSet};
use itertools::Either::{Left, Right};
use itertools::Itertools;
use once_cell::sync::Lazy;
#[cfg(test)]
use path_absolutize::path_dedot;
use regex::Regex;
use rustc_hash::FxHashSet;
@@ -38,22 +39,53 @@ pub mod types;
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
#[derive(Debug)]
pub struct AllSettings {
pub cli: CliSettings,
pub lib: Settings,
}
impl AllSettings {
pub fn from_configuration(config: Configuration, project_root: &Path) -> Result<Self> {
Ok(Self {
cli: CliSettings {
cache_dir: config
.cache_dir
.clone()
.unwrap_or_else(|| cache_dir(project_root)),
fix: config.fix.unwrap_or(false),
fix_only: config.fix_only.unwrap_or(false),
format: config.format.unwrap_or_default(),
update_check: config.update_check.unwrap_or_default(),
},
lib: Settings::from_configuration(config, project_root)?,
})
}
}
#[derive(Debug, Default, Clone)]
/// Settings that are not used by this library and
/// only here so that `ruff_cli` can use them.
pub struct CliSettings {
pub cache_dir: PathBuf,
pub fix: bool,
pub fix_only: bool,
pub format: SerializationFormat,
pub update_check: bool,
}
#[derive(Debug)]
#[allow(clippy::struct_excessive_bools)]
pub struct Settings {
pub allowed_confusables: FxHashSet<char>,
pub builtins: Vec<String>,
pub cache_dir: PathBuf,
pub dummy_variable_rgx: Regex,
pub enabled: FxHashSet<RuleCode>,
pub exclude: GlobSet,
pub extend_exclude: GlobSet,
pub external: FxHashSet<String>,
pub fix: bool,
pub fix_only: bool,
pub fixable: FxHashSet<RuleCode>,
pub force_exclude: bool,
pub format: SerializationFormat,
pub ignore_init_module_imports: bool,
pub line_length: usize,
pub namespace_packages: Vec<PathBuf>,
@@ -65,7 +97,6 @@ pub struct Settings {
pub target_version: PythonVersion,
pub task_tags: Vec<String>,
pub typing_modules: Vec<String>,
pub update_check: bool,
// Plugins
pub flake8_annotations: flake8_annotations::settings::Settings,
pub flake8_bandit: flake8_bandit::settings::Settings,
@@ -119,7 +150,6 @@ impl Settings {
.map(FxHashSet::from_iter)
.unwrap_or_default(),
builtins: config.builtins.unwrap_or_default(),
cache_dir: config.cache_dir.unwrap_or_else(|| cache_dir(project_root)),
dummy_variable_rgx: config
.dummy_variable_rgx
.unwrap_or_else(|| DEFAULT_DUMMY_VARIABLE_RGX.clone()),
@@ -158,8 +188,6 @@ impl Settings {
exclude: resolve_globset(config.exclude.unwrap_or_else(|| DEFAULT_EXCLUDE.clone()))?,
extend_exclude: resolve_globset(config.extend_exclude)?,
external: FxHashSet::from_iter(config.external.unwrap_or_default()),
fix: config.fix.unwrap_or(false),
fix_only: config.fix_only.unwrap_or(false),
fixable: resolve_codes(
[RuleCodeSpec {
select: &config.fixable.unwrap_or_else(|| CATEGORIES.to_vec()),
@@ -167,7 +195,6 @@ impl Settings {
}]
.into_iter(),
),
format: config.format.unwrap_or_default(),
force_exclude: config.force_exclude.unwrap_or(false),
ignore_init_module_imports: config.ignore_init_module_imports.unwrap_or_default(),
line_length: config.line_length.unwrap_or(88),
@@ -186,7 +213,6 @@ impl Settings {
vec!["TODO".to_string(), "FIXME".to_string(), "XXX".to_string()]
}),
typing_modules: config.typing_modules.unwrap_or_default(),
update_check: config.update_check.unwrap_or_default(),
// Plugins
flake8_annotations: config
.flake8_annotations
@@ -221,21 +247,18 @@ impl Settings {
})
}
#[cfg(test)]
pub fn for_rule(rule_code: RuleCode) -> Self {
Self {
allowed_confusables: FxHashSet::from_iter([]),
builtins: vec![],
cache_dir: cache_dir(path_dedot::CWD.as_path()),
dummy_variable_rgx: Regex::new("^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$").unwrap(),
enabled: FxHashSet::from_iter([rule_code.clone()]),
exclude: GlobSet::empty(),
extend_exclude: GlobSet::empty(),
external: FxHashSet::default(),
fix: false,
fix_only: false,
fixable: FxHashSet::from_iter([rule_code]),
force_exclude: false,
format: SerializationFormat::Text,
ignore_init_module_imports: false,
line_length: 88,
namespace_packages: vec![],
@@ -247,7 +270,6 @@ impl Settings {
target_version: PythonVersion::Py310,
task_tags: vec!["TODO".to_string(), "FIXME".to_string(), "XXX".to_string()],
typing_modules: vec![],
update_check: false,
flake8_annotations: flake8_annotations::settings::Settings::default(),
flake8_bandit: flake8_bandit::settings::Settings::default(),
flake8_bugbear: flake8_bugbear::settings::Settings::default(),
@@ -266,21 +288,18 @@ impl Settings {
}
}
#[cfg(test)]
pub fn for_rules(rule_codes: Vec<RuleCode>) -> Self {
Self {
allowed_confusables: FxHashSet::from_iter([]),
builtins: vec![],
cache_dir: cache_dir(path_dedot::CWD.as_path()),
dummy_variable_rgx: Regex::new("^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$").unwrap(),
enabled: FxHashSet::from_iter(rule_codes.clone()),
exclude: GlobSet::empty(),
extend_exclude: GlobSet::empty(),
external: FxHashSet::default(),
fix: false,
fix_only: false,
fixable: FxHashSet::from_iter(rule_codes),
force_exclude: false,
format: SerializationFormat::Text,
ignore_init_module_imports: false,
line_length: 88,
namespace_packages: vec![],
@@ -292,7 +311,6 @@ impl Settings {
target_version: PythonVersion::Py310,
task_tags: vec!["TODO".to_string(), "FIXME".to_string(), "XXX".to_string()],
typing_modules: vec![],
update_check: false,
flake8_annotations: flake8_annotations::settings::Settings::default(),
flake8_bandit: flake8_bandit::settings::Settings::default(),
flake8_bugbear: flake8_bugbear::settings::Settings::default(),


@@ -550,7 +550,7 @@ other-attribute = 1
flake8_pytest_style: Some(flake8_pytest_style::settings::Options {
fixture_parentheses: Some(false),
parametrize_names_type: Some(
- flake8_pytest_style::types::ParametrizeNameType::CSV
+ flake8_pytest_style::types::ParametrizeNameType::Csv
),
parametrize_values_type: Some(
flake8_pytest_style::types::ParametrizeValuesType::Tuple,

src/source_code/indexer.rs

@@ -0,0 +1,116 @@
//! Struct used to index source code, to enable efficient lookup of tokens that
//! are omitted from the AST (e.g., commented lines).
use rustpython_ast::Location;
use rustpython_parser::lexer::{LexResult, Tok};
pub struct Indexer {
commented_lines: Vec<usize>,
continuation_lines: Vec<usize>,
}
impl Indexer {
pub fn commented_lines(&self) -> &[usize] {
&self.commented_lines
}
pub fn continuation_lines(&self) -> &[usize] {
&self.continuation_lines
}
}
impl From<&[LexResult]> for Indexer {
fn from(lxr: &[LexResult]) -> Self {
let mut commented_lines = Vec::new();
let mut continuation_lines = Vec::new();
let mut prev: Option<(&Location, &Tok, &Location)> = None;
for (start, tok, end) in lxr.iter().flatten() {
if matches!(tok, Tok::Comment(_)) {
commented_lines.push(start.row());
}
if let Some((.., prev_tok, prev_end)) = prev {
if !matches!(
prev_tok,
Tok::Newline | Tok::NonLogicalNewline | Tok::Comment(..)
) {
for line in prev_end.row()..start.row() {
continuation_lines.push(line);
}
}
}
prev = Some((start, tok, end));
}
Self {
commented_lines,
continuation_lines,
}
}
}
#[cfg(test)]
mod tests {
use rustpython_parser::lexer;
use rustpython_parser::lexer::LexResult;
use crate::source_code::Indexer;
#[test]
fn continuation() {
let contents = r#"x = 1"#;
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let indexer: Indexer = lxr.as_slice().into();
assert_eq!(indexer.continuation_lines(), Vec::<usize>::new().as_slice());
let contents = r#"
# Hello, world!
x = 1
y = 2
"#
.trim();
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let indexer: Indexer = lxr.as_slice().into();
assert_eq!(indexer.continuation_lines(), Vec::<usize>::new().as_slice());
let contents = r#"
x = \
1
if True:
z = \
\
2
(
"abc" # Foo
"def" \
"ghi"
)
"#
.trim();
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let indexer: Indexer = lxr.as_slice().into();
assert_eq!(indexer.continuation_lines(), [1, 5, 6, 11]);
let contents = r#"
x = 1; import sys
import os
if True:
x = 1; import sys
import os
if True:
x = 1; \
import os
x = 1; \
import os
"#
.trim();
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
let indexer: Indexer = lxr.as_slice().into();
assert_eq!(indexer.continuation_lines(), [9, 12]);
}
}
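The tests above exercise continuation tracking; the commented-line half of the `Indexer` has a close analogue in Python's standard library. As a rough sketch (using stdlib `tokenize`, not ruff's RustPython lexer), collecting the 1-based rows that hold comment tokens:

```python
import io
import tokenize

def commented_lines(source: str) -> list[int]:
    # 1-based rows that contain a comment token, mirroring
    # Indexer::commented_lines above.
    return [
        tok.start[0]
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type == tokenize.COMMENT
    ]

print(commented_lines("# Hello, world!\nx = 1  # inline\ny = 2\n"))  # [1, 2]
```

Unlike the Rust `Indexer`, `tokenize` raises on some malformed input, so this is illustrative only.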


@@ -25,15 +25,13 @@ impl<'a> Locator<'a> {
self.rope.get_or_init(|| Rope::from_str(self.contents))
}
- #[allow(clippy::trivially_copy_pass_by_ref)]
- pub fn slice_source_code_at(&self, location: &Location) -> Cow<'_, str> {
+ pub fn slice_source_code_at(&self, location: Location) -> Cow<'_, str> {
let rope = self.get_or_init_rope();
let offset = rope.line_to_char(location.row() - 1) + location.column();
Cow::from(rope.slice(offset..))
}
- #[allow(clippy::trivially_copy_pass_by_ref)]
- pub fn slice_source_code_until(&self, location: &Location) -> Cow<'_, str> {
+ pub fn slice_source_code_until(&self, location: Location) -> Cow<'_, str> {
let rope = self.get_or_init_rope();
let offset = rope.line_to_char(location.row() - 1) + location.column();
Cow::from(rope.slice(..offset))
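`slice_source_code_at` turns a `Location` (1-based row plus a column offset within that row) into a flat character offset via the rope, then slices from there. A hedged Python sketch of the same offset arithmetic (illustrative only; ruff uses a `Rope` rather than splitting lines, and the exact column base is an assumption here):

```python
def slice_at(source: str, row: int, col: int) -> str:
    # Rough analogue of Locator::slice_source_code_at:
    # flat offset = chars in the preceding rows + column offset.
    lines = source.splitlines(keepends=True)
    offset = sum(len(line) for line in lines[: row - 1]) + col
    return source[offset:]

print(slice_at("ab\ncd\n", 2, 1))  # "d\n"
```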


@@ -1,8 +1,10 @@
mod generator;
mod indexer;
mod locator;
mod stylist;
pub(crate) use generator::Generator;
pub(crate) use indexer::Indexer;
pub(crate) use locator::Locator;
use rustpython_parser::error::ParseError;
use rustpython_parser::parser;


@@ -1277,7 +1277,7 @@ impl Violation for MagicValueComparison {
fn message(&self) -> String {
let MagicValueComparison(value) = self;
format!(
- "Magic number used in comparison, consider replacing {value} with a constant variable"
+ "Magic value used in comparison, consider replacing {value} with a constant variable"
)
}
@@ -6027,14 +6027,69 @@ impl AlwaysAutofixableViolation for PreferListBuiltin {
}
}
// flake8-commas
define_violation!(
pub struct TrailingCommaMissing;
);
impl AlwaysAutofixableViolation for TrailingCommaMissing {
fn message(&self) -> String {
"Trailing comma missing".to_string()
}
fn autofix_title(&self) -> String {
"Add trailing comma".to_string()
}
fn placeholder() -> Self {
TrailingCommaMissing
}
}
define_violation!(
pub struct TrailingCommaOnBareTupleProhibited;
);
impl Violation for TrailingCommaOnBareTupleProhibited {
fn message(&self) -> String {
"Trailing comma on bare tuple prohibited".to_string()
}
fn placeholder() -> Self {
TrailingCommaOnBareTupleProhibited
}
}
define_violation!(
pub struct TrailingCommaProhibited;
);
impl AlwaysAutofixableViolation for TrailingCommaProhibited {
fn message(&self) -> String {
"Trailing comma prohibited".to_string()
}
fn autofix_title(&self) -> String {
"Remove trailing comma".to_string()
}
fn placeholder() -> Self {
TrailingCommaProhibited
}
}
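The three violation messages above correspond to what flake8-commas flags in Python source. A few illustrative cases (hand-written examples, not ruff's test fixtures):

```python
# "Trailing comma missing": a multi-line call or literal should end
# its last element with a comma (here, after `[1, 2, 3]`).
totals = sum(
    [1, 2, 3]
)

# "Trailing comma on bare tuple prohibited": the lone trailing comma
# silently turns `coords` into a one-element tuple.
coords = 1,
assert coords == (1,)

# "Trailing comma prohibited": a comma in a position the plugin
# disallows (e.g. after an unpacking such as f(*args,)) is flagged
# for removal by the autofix.

print(totals)  # 6
```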
// Ruff
define_violation!(
- pub struct AmbiguousUnicodeCharacterString(pub char, pub char);
+ pub struct AmbiguousUnicodeCharacterString {
+     pub confusable: char,
+     pub representant: char,
+ }
);
impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterString {
fn message(&self) -> String {
- let AmbiguousUnicodeCharacterString(confusable, representant) = self;
+ let AmbiguousUnicodeCharacterString {
+     confusable,
+     representant,
+ } = self;
format!(
"String contains ambiguous unicode character '{confusable}' (did you mean \
'{representant}'?)"
@@ -6042,21 +6097,33 @@ impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterString {
}
fn autofix_title(&self) -> String {
- let AmbiguousUnicodeCharacterString(confusable, representant) = self;
+ let AmbiguousUnicodeCharacterString {
+     confusable,
+     representant,
+ } = self;
format!("Replace '{confusable}' with '{representant}'")
}
fn placeholder() -> Self {
- AmbiguousUnicodeCharacterString('𝐁', 'B')
+ AmbiguousUnicodeCharacterString {
+     confusable: '𝐁',
+     representant: 'B',
+ }
}
}
define_violation!(
- pub struct AmbiguousUnicodeCharacterDocstring(pub char, pub char);
+ pub struct AmbiguousUnicodeCharacterDocstring {
+     pub confusable: char,
+     pub representant: char,
+ }
);
impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterDocstring {
fn message(&self) -> String {
- let AmbiguousUnicodeCharacterDocstring(confusable, representant) = self;
+ let AmbiguousUnicodeCharacterDocstring {
+     confusable,
+     representant,
+ } = self;
format!(
"Docstring contains ambiguous unicode character '{confusable}' (did you mean \
'{representant}'?)"
@@ -6064,21 +6131,33 @@ impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterDocstring {
}
fn autofix_title(&self) -> String {
- let AmbiguousUnicodeCharacterDocstring(confusable, representant) = self;
+ let AmbiguousUnicodeCharacterDocstring {
+     confusable,
+     representant,
+ } = self;
format!("Replace '{confusable}' with '{representant}'")
}
fn placeholder() -> Self {
- AmbiguousUnicodeCharacterDocstring('𝐁', 'B')
+ AmbiguousUnicodeCharacterDocstring {
+     confusable: '𝐁',
+     representant: 'B',
+ }
}
}
define_violation!(
- pub struct AmbiguousUnicodeCharacterComment(pub char, pub char);
+ pub struct AmbiguousUnicodeCharacterComment {
+     pub confusable: char,
+     pub representant: char,
+ }
);
impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterComment {
fn message(&self) -> String {
- let AmbiguousUnicodeCharacterComment(confusable, representant) = self;
+ let AmbiguousUnicodeCharacterComment {
+     confusable,
+     representant,
+ } = self;
format!(
"Comment contains ambiguous unicode character '{confusable}' (did you mean \
'{representant}'?)"
@@ -6086,12 +6165,18 @@ impl AlwaysAutofixableViolation for AmbiguousUnicodeCharacterComment {
}
fn autofix_title(&self) -> String {
- let AmbiguousUnicodeCharacterComment(confusable, representant) = self;
+ let AmbiguousUnicodeCharacterComment {
+     confusable,
+     representant,
+ } = self;
format!("Replace '{confusable}' with '{representant}'")
}
fn placeholder() -> Self {
- AmbiguousUnicodeCharacterComment('𝐁', 'B')
+ AmbiguousUnicodeCharacterComment {
+     confusable: '𝐁',
+     representant: 'B',
+ }
}
}


@@ -5,6 +5,7 @@ use std::path::Path;
use rustpython_ast::{Expr, Stmt, StmtKind};
use crate::ast::helpers::collect_call_path;
use crate::checkers::ast::Checker;
use crate::docstrings::definition::Documentable;
@@ -148,7 +149,28 @@ fn function_visibility(stmt: &Stmt) -> Visibility {
fn method_visibility(stmt: &Stmt) -> Visibility {
match &stmt.node {
- StmtKind::FunctionDef { name, .. } | StmtKind::AsyncFunctionDef { name, .. } => {
+ StmtKind::FunctionDef {
+     name,
+     decorator_list,
+     ..
+ }
+ | StmtKind::AsyncFunctionDef {
+     name,
+     decorator_list,
+     ..
+ } => {
// Is this a setter or deleter?
if decorator_list.iter().any(|expr| {
let call_path = collect_call_path(expr);
if call_path.len() > 1 {
call_path[0] == name
} else {
false
}
}) {
return Visibility::Private;
}
// Is the method non-private?
if !name.starts_with('_') {
return Visibility::Public;
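The decorator check above targets Python properties: for `@x.setter` or `@x.deleter`, the decorator's call path begins with the method's own name, so the method is classified as private and exempted from docstring rules. A minimal example of the pattern being detected (hand-written illustration):

```python
class Point:
    def __init__(self) -> None:
        self._x = 0

    @property
    def x(self) -> int:
        """The x coordinate."""  # the getter can still carry the docstring
        return self._x

    @x.setter
    def x(self, value: int) -> None:  # no docstring required here...
        self._x = value

    @x.deleter
    def x(self) -> None:  # ...nor here
        del self._x

p = Point()
p.x = 3
print(p.x)  # 3
```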