Compare commits

...

22 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Charlie Marsh | 11e1380df4 | Bump version to 0.0.265 (#4248) | 2023-05-05 13:16:05 -04:00 |
| Micha Reiser | e93f378635 | Refactor whitespace around operator (#4223) | 2023-05-05 09:37:56 +02:00 |
| Micha Reiser | 2124feb0e7 | Fail lint tests if the fix creates a syntax error (#4202) | 2023-05-05 07:59:33 +02:00 |
| Charlie Marsh | c0e7269b07 | Update doc defaults for section-order (#4232) | 2023-05-04 21:35:27 +00:00 |
| Chris Chan | c2921e957b | [pylint] Implement import-self (W0406) (#4154) | 2023-05-04 16:05:15 -04:00 |
| Charlie Marsh | 93cfce674a | Ignore __debuggerskip__ in unused variable checks (#4229) | 2023-05-04 15:45:49 -04:00 |
| Charlie Marsh | b71cc3789f | Change --fix-only exit semantics to mirror --fix (#4146) | 2023-05-04 19:03:15 +00:00 |
| Zanie Adkins | 717128112d | Fix panic in pydocstyle D214 when docstring indentation is empty (#4216) | 2023-05-04 14:42:34 -04:00 |
| Arya Kumar | e9e194ab32 | [flake8-pyi] Implement PYI042 and PYI043 (#4214) | 2023-05-04 14:35:26 -04:00 |
| Calum Young | 890e630c41 | Allow linking to individual rules (#4158) | 2023-05-04 13:43:53 -04:00 |
| Aaron Cunningham | d78287540d | Update B027 to support autofixing (#4178) | 2023-05-04 16:36:32 +00:00 |
| Charlie Marsh | 494e807315 | Add space when joining rule codes for debug messages (#4225) | 2023-05-04 15:34:34 +00:00 |
| Tom Kuson | 6db1a32eb9 | Add docs for PLC rules (#4224) | 2023-05-04 10:56:00 -04:00 |
| Dhruv Manilawala | bb2cbf1f25 | End of statement insertion should occur after newline (#4215) | 2023-05-04 16:17:41 +02:00 |
| konstin | badfdab61a | Show rule codes on autofix failure (#4220) | 2023-05-04 15:25:07 +02:00 |
| Dhruv Manilawala | 59d40f9f81 | Show settings path in --show-settings output (#4199) | 2023-05-04 08:22:31 +02:00 |
| Arya Kumar | 37aae666c7 | [flake8-pyi] PYI020 (#4211) | 2023-05-03 22:37:32 -04:00 |
| Leiser Fernández Gallo | 460023a959 | Fix era panic caused by out of bound edition (#4206) | 2023-05-03 15:48:43 +02:00 |
| Aarni Koskela | d0e3ca29d9 | Print out autofix-broken or non-converging code when debugging (#4201) | 2023-05-03 13:50:03 +02:00 |
| Christian Clauss | ccfc78e2d5 | faq: Clarify how Ruff and Black treat line-length. (#4180) | 2023-05-02 23:19:38 +00:00 |
| Micha Reiser | b14358fbfe | Render tabs as 4 spaces in diagnostics (#4132) | 2023-05-02 13:14:02 +00:00 |
| wookie184 | ac600bb3da | Warn on PEP 604 syntax not in an annotation, but don't autofix (#4170) | 2023-05-01 23:49:20 -07:00 |
92 changed files with 2089 additions and 584 deletions

View File

@@ -1,5 +1,16 @@
# Breaking Changes
## 0.0.265
### `--fix-only` now exits with a zero exit code, unless `--exit-non-zero-on-fix` is specified ([#4146](https://github.com/charliermarsh/ruff/pull/4146))
Previously, `--fix-only` would exit with a non-zero exit code if any fixes were applied. This
behavior was inconsistent with `--fix` and, further, meant that `--exit-non-zero-on-fix` was
effectively ignored when `--fix-only` was specified.
Now, `--fix-only` will exit with a zero exit code, unless `--exit-non-zero-on-fix` is specified,
in which case it will exit with a non-zero exit code if any fixes were applied.
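As a rough Python model of the new semantics (the helper name and parameters are hypothetical, not Ruff's actual implementation):

```python
def fix_only_exit_code(applied_fixes: int, exit_non_zero_on_fix: bool) -> int:
    """Hypothetical model of `ruff --fix-only` exit semantics after #4146.

    Previously: non-zero whenever applied_fixes > 0, ignoring the flag.
    Now: zero unless --exit-non-zero-on-fix is set and fixes were applied.
    """
    return 1 if exit_non_zero_on_fix and applied_fixes > 0 else 0
```

This mirrors `--fix`, where the flag alone decides whether applied fixes affect the exit code.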
## 0.0.260
### Fixes are now represented as a list of edits ([#3709](https://github.com/charliermarsh/ruff/pull/3709))

Cargo.lock generated
View File

@@ -841,7 +841,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
-version = "0.0.264"
+version = "0.0.265"
dependencies = [
"anyhow",
"clap 4.2.4",
@@ -2004,7 +2004,7 @@ dependencies = [
[[package]]
name = "ruff"
-version = "0.0.264"
+version = "0.0.265"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2093,7 +2093,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
-version = "0.0.264"
+version = "0.0.265"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",

View File

@@ -137,7 +137,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
-rev: 'v0.0.264'
+rev: 'v0.0.265'
hooks:
- id: ruff
```

View File

@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
-version = "0.0.264"
+version = "0.0.265"
edition = { workspace = true }
rust-version = { workspace = true }

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
-version = "0.0.264"
+version = "0.0.265"
authors.workspace = true
edition.workspace = true
rust-version.workspace = true

View File

@@ -14,3 +14,8 @@ def foo(x, y, z):
return False
#import os # noqa: ERA001
class A():
pass
# b = c

View File

@@ -0,0 +1,39 @@
"""
Should emit:
B027 - on lines 13, 16, 19, 23
"""
from abc import ABC
class AbstractClass(ABC):
def empty_1(self): # error
...
def empty_2(self): # error
pass
def body_1(self):
print("foo")
...
def body_2(self):
self.body_1()
def foo():
class InnerAbstractClass(ABC):
def empty_1(self): # error
...
def empty_2(self): # error
pass
def body_1(self):
print("foo")
...
def body_2(self):
self.body_1()
return InnerAbstractClass

View File

@@ -0,0 +1,28 @@
import sys
import typing
from typing import Annotated, Literal, TypeAlias, TypeVar
import typing_extensions
def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
_T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
def j() -> "int": ... # Y020 Quoted annotations should never be used in stubs
Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
class Child(list["int"]): # Y020 Quoted annotations should never be used in stubs
"""Documented and guaranteed useful.""" # Y021 Docstrings should not be included in stubs
if sys.platform == "linux":
f: "int" # Y020 Quoted annotations should never be used in stubs
elif sys.platform == "win32":
f: "str" # Y020 Quoted annotations should never be used in stubs
else:
f: "bytes" # Y020 Quoted annotations should never be used in stubs
# These two shouldn't trigger Y020 -- empty strings can't be "quoted annotations"
k = "" # Y052 Need type annotation for "k"
el = r"" # Y052 Need type annotation for "el"

View File

@@ -0,0 +1,28 @@
import sys
import typing
from typing import Annotated, Literal, TypeAlias, TypeVar
import typing_extensions
def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
_T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
def j() -> "int": ... # Y020 Quoted annotations should never be used in stubs
Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
class Child(list["int"]): # Y020 Quoted annotations should never be used in stubs
"""Documented and guaranteed useful.""" # Y021 Docstrings should not be included in stubs
if sys.platform == "linux":
f: "int" # Y020 Quoted annotations should never be used in stubs
elif sys.platform == "win32":
f: "str" # Y020 Quoted annotations should never be used in stubs
else:
f: "bytes" # Y020 Quoted annotations should never be used in stubs
# These two shouldn't trigger Y020 -- empty strings can't be "quoted annotations"
k = "" # Y052 Need type annotation for "k"
el = r"" # Y052 Need type annotation for "el"

View File

@@ -0,0 +1,24 @@
import typing
from collections.abc import Mapping
from typing import (
Annotated,
TypeAlias,
Union,
Literal,
)
just_literals_pipe_union: TypeAlias = (
Literal[True] | Literal["idk"]
) # not PYI042 (not a stubfile)
PublicAliasT: TypeAlias = str | int
PublicAliasT2: TypeAlias = Union[str, bytes]
_ABCDEFGHIJKLMNOPQRST: TypeAlias = typing.Any
_PrivateAliasS: TypeAlias = Literal["I", "guess", "this", "is", "okay"]
_PrivateAliasS2: TypeAlias = Annotated[str, "also okay"]
snake_case_alias1: TypeAlias = str | int # not PYI042 (not a stubfile)
_snake_case_alias2: TypeAlias = Literal["whatever"] # not PYI042 (not a stubfile)
Snake_case_alias: TypeAlias = int | float # not PYI042 (not a stubfile)
# check that this edge case doesn't crash
_: TypeAlias = str | int

View File

@@ -0,0 +1,24 @@
import typing
from collections.abc import Mapping
from typing import (
Annotated,
TypeAlias,
Union,
Literal,
)
just_literals_pipe_union: TypeAlias = (
Literal[True] | Literal["idk"]
) # PYI042, since not camel case
PublicAliasT: TypeAlias = str | int
PublicAliasT2: TypeAlias = Union[str, bytes]
_ABCDEFGHIJKLMNOPQRST: TypeAlias = typing.Any
_PrivateAliasS: TypeAlias = Literal["I", "guess", "this", "is", "okay"]
_PrivateAliasS2: TypeAlias = Annotated[str, "also okay"]
snake_case_alias1: TypeAlias = str | int # PYI042, since not camel case
_snake_case_alias2: TypeAlias = Literal["whatever"] # PYI042, since not camel case
Snake_case_alias: TypeAlias = int | float # PYI042, since not camel case
# check that this edge case doesn't crash
_: TypeAlias = str | int
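The naming convention this PYI042 fixture exercises can be sketched as a small predicate. This is a hypothetical re-implementation inferred from the fixture's comments, not Ruff's actual check:

```python
def is_snake_case_alias(name: str) -> bool:
    """Flag a TypeAlias name that is not CamelCase (PYI042-style sketch)."""
    stripped = name.lstrip("_")
    if not stripped:  # a bare "_" alias is allowed (the crash edge case above)
        return False
    # CamelCase names pass; an inner underscore or a lowercase first letter
    # marks the alias as snake_case.
    return "_" in stripped or stripped[0].islower()
```

Against the fixture: `snake_case_alias1`, `_snake_case_alias2`, `Snake_case_alias`, and `just_literals_pipe_union` are flagged, while `PublicAliasT` and `_PrivateAliasS` pass.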

View File

@@ -0,0 +1,23 @@
import typing
from collections.abc import Mapping
from typing import (
Annotated,
TypeAlias,
Union,
Literal,
)
_PrivateAliasT: TypeAlias = str | int # not PYI043 (not a stubfile)
_PrivateAliasT2: TypeAlias = typing.Any # not PYI043 (not a stubfile)
_PrivateAliasT3: TypeAlias = Literal[
"not", "a", "chance"
] # not PYI043 (not a stubfile)
just_literals_pipe_union: TypeAlias = Literal[True] | Literal["idk"]
PublicAliasT: TypeAlias = str | int
PublicAliasT2: TypeAlias = Union[str, bytes]
_ABCDEFGHIJKLMNOPQRST: TypeAlias = typing.Any
_PrivateAliasS: TypeAlias = Literal["I", "guess", "this", "is", "okay"]
_PrivateAliasS2: TypeAlias = Annotated[str, "also okay"]
# check that this edge case doesn't crash
_: TypeAlias = str | int

View File

@@ -0,0 +1,23 @@
import typing
from collections.abc import Mapping
from typing import (
Annotated,
TypeAlias,
Union,
Literal,
)
_PrivateAliasT: TypeAlias = str | int # PYI043, since this ends in a T
_PrivateAliasT2: TypeAlias = typing.Any # PYI043, since this ends in a T
_PrivateAliasT3: TypeAlias = Literal[
"not", "a", "chance"
] # PYI043, since this ends in a T
just_literals_pipe_union: TypeAlias = Literal[True] | Literal["idk"]
PublicAliasT: TypeAlias = str | int
PublicAliasT2: TypeAlias = Union[str, bytes]
_ABCDEFGHIJKLMNOPQRST: TypeAlias = typing.Any
_PrivateAliasS: TypeAlias = Literal["I", "guess", "this", "is", "okay"]
_PrivateAliasS2: TypeAlias = Annotated[str, "also okay"]
# check that this edge case doesn't crash
_: TypeAlias = str | int
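The T-suffix convention this PYI043 fixture exercises can likewise be sketched as a predicate. This is inferred from the fixture's comments (a trailing capital T, optionally followed by digits, suggests a TypeVar name); it is not Ruff's actual predicate:

```python
def is_t_suffixed_private_alias(name: str) -> bool:
    """Flag a private TypeAlias name ending in a TypeVar-style T (PYI043 sketch)."""
    if not name.startswith("_"):
        return False  # only private aliases are in scope
    base = name.rstrip("0123456789")  # _PrivateAliasT2 -> _PrivateAliasT
    # Require a lowercase letter before the T, so all-caps names like
    # _ABCDEFGHIJKLMNOPQRST (which merely happen to end in T) still pass.
    return len(base) > 2 and base.endswith("T") and base[-2].islower()
```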

View File

@@ -0,0 +1,7 @@
"""Case: There's a random import, so it should add `contextlib` after it."""
import math
try:
math.sqrt(-1)
except ValueError: # SIM105
pass

View File

@@ -0,0 +1,12 @@
"""Case: `contextlib` already imported."""
import contextlib
def foo():
pass
try:
foo()
except ValueError:
pass
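The fix SIM105 suggests for `try`/`except`/`pass` patterns like the fixtures above uses `contextlib.suppress`. A runnable sketch of the fixed form of the first fixture:

```python
import contextlib
import math

# The try/except ValueError: pass block collapses to contextlib.suppress;
# per the first fixture, the `import contextlib` is inserted after the
# existing imports when it is not already present.
with contextlib.suppress(ValueError):
    math.sqrt(-1)  # raises ValueError, which is silently suppressed
```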

View File

@@ -0,0 +1,21 @@
"""A module docstring with D214 violations
Returns
-----
valid returns
Args
-----
valid args
"""
import os
from .expected import Expectation
expectation = Expectation()
expect = expectation.expect
expect(os.path.normcase(__file__ if __file__[-1] != 'c' else __file__[:-1]),
"D214: Section is over-indented ('Returns')")
expect(os.path.normcase(__file__ if __file__[-1] != 'c' else __file__[:-1]),
"D214: Section is over-indented ('Args')")

View File

@@ -0,0 +1,3 @@
import import_self.module
from import_self import module
from . import module
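The core comparison behind W0406 (import-self) can be sketched as follows. This is a hypothetical helper; the real rule also resolves relative-import levels (the `from . import module` case) against the module path before comparing:

```python
from typing import List, Optional

def is_import_self(imported: str, module_path: Optional[List[str]]) -> bool:
    """Does the dotted name being imported refer back to the module
    currently being linted? (Sketch of the W0406 comparison.)"""
    if module_path is None:
        return False  # module path unknown, e.g. file outside a package
    return imported.split(".") == list(module_path)
```

For a file linted as `import_self/module.py`, both `import import_self.module` and the resolved form of `from import_self import module` match its own path.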

View File

@@ -1000,6 +1000,13 @@ where
if self.settings.rules.enabled(Rule::ManualFromImport) {
pylint::rules::manual_from_import(self, stmt, alias, names);
}
if self.settings.rules.enabled(Rule::ImportSelf) {
if let Some(diagnostic) =
pylint::rules::import_self(alias, self.module_path.as_deref())
{
self.diagnostics.push(diagnostic);
}
}
if let Some(asname) = &alias.node.asname {
let name = alias.node.name.split('.').last().unwrap();
@@ -1477,6 +1484,17 @@ where
}
}
if self.settings.rules.enabled(Rule::ImportSelf) {
if let Some(diagnostic) = pylint::rules::import_from_self(
*level,
module.as_deref(),
names,
self.module_path.as_deref(),
) {
self.diagnostics.push(diagnostic);
}
}
if self.settings.rules.enabled(Rule::BannedImportFrom) {
if let Some(diagnostic) = flake8_import_conventions::rules::banned_import_from(
stmt,
@@ -1914,6 +1932,14 @@ where
}
}
}
if self.ctx.match_typing_expr(annotation, "TypeAlias") {
if self.settings.rules.enabled(Rule::SnakeCaseTypeAlias) {
flake8_pyi::rules::snake_case_type_alias(self, target);
}
if self.settings.rules.enabled(Rule::TSuffixedTypeAlias) {
flake8_pyi::rules::t_suffixed_type_alias(self, target);
}
}
}
}
StmtKind::Delete { targets } => {
@@ -2281,8 +2307,7 @@ where
match &expr.node {
ExprKind::Subscript { value, slice, .. } => {
// Ex) Optional[...], Union[...]
if self.ctx.in_type_definition
&& !self.settings.pyupgrade.keep_runtime_typing
if !self.settings.pyupgrade.keep_runtime_typing
&& self.settings.rules.enabled(Rule::NonPEP604Annotation)
&& (self.settings.target_version >= PythonVersion::Py310
|| (self.settings.target_version >= PythonVersion::Py37
@@ -4791,6 +4816,11 @@ impl<'a> Checker<'a> {
pyupgrade::rules::quoted_annotation(self, value, range);
}
}
if self.is_stub {
if self.settings.rules.enabled(Rule::QuotedAnnotationInStub) {
flake8_pyi::rules::quoted_annotation_in_stub(self, value, range);
}
}
let expr = allocator.alloc(expr);

View File

@@ -204,6 +204,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Pylint, "R5501") => Rule::CollapsibleElseIf,
(Pylint, "W0120") => Rule::UselessElseOnLoop,
(Pylint, "W0129") => Rule::AssertOnStringLiteral,
(Pylint, "W0406") => Rule::ImportSelf,
(Pylint, "W0602") => Rule::GlobalVariableNotAssigned,
(Pylint, "W0603") => Rule::GlobalStatement,
(Pylint, "W0711") => Rule::BinaryOpException,
@@ -592,8 +593,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Pyi, "014") => Rule::ArgumentDefaultInStub,
(Flake8Pyi, "015") => Rule::AssignmentDefaultInStub,
(Flake8Pyi, "016") => Rule::DuplicateUnionMember,
(Flake8Pyi, "020") => Rule::QuotedAnnotationInStub,
(Flake8Pyi, "021") => Rule::DocstringInStub,
(Flake8Pyi, "033") => Rule::TypeCommentInStub,
(Flake8Pyi, "042") => Rule::SnakeCaseTypeAlias,
(Flake8Pyi, "043") => Rule::TSuffixedTypeAlias,
// flake8-pytest-style
(Flake8PytestStyle, "001") => Rule::PytestFixtureIncorrectParenthesesStyle,

View File

@@ -156,7 +156,7 @@ fn match_docstring_end(body: &[Stmt]) -> Option<TextSize> {
Some(stmt.end())
}
/// Find the location at which a "top-of-file" import should be inserted,
/// Find the location at which an "end-of-statement" import should be inserted,
/// along with a prefix and suffix to use for the insertion.
///
/// For example, given the following code:
@@ -165,9 +165,15 @@ fn match_docstring_end(body: &[Stmt]) -> Option<TextSize> {
/// """Hello, world!"""
///
/// import os
/// import math
///
///
/// def foo():
/// pass
/// ```
///
/// The location returned will be the start of the `import os` statement,
/// The location returned will be the start of the new line after the last
/// import statement, which in this case is the line after `import math`,
/// along with a trailing newline suffix.
fn end_of_statement_insertion(stmt: &Stmt, locator: &Locator, stylist: &Stylist) -> Insertion {
let location = stmt.end();
@@ -180,7 +186,7 @@ fn end_of_statement_insertion(stmt: &Stmt, locator: &Locator, stylist: &Stylist)
// Otherwise, insert on the next line.
Insertion::new(
"",
locator.line_end(location),
locator.full_line_end(location),
stylist.line_ending().as_str(),
)
}
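The behavioral change above (from `line_end` to `full_line_end`, i.e. insert past the trailing newline rather than before it) can be modeled in Python. `insertion_offset` is a hypothetical illustration, not the actual implementation:

```python
def insertion_offset(source: str) -> int:
    """Return the offset at which a new import should be inserted: the full
    line end of the last top-level import, just past its trailing newline
    (a sketch of the #4215 fix)."""
    offset = 0
    insert_at = 0
    for line in source.splitlines(keepends=True):
        if line.startswith(("import ", "from ")):
            insert_at = offset + len(line)  # full_line_end: past the newline
        offset += len(line)
    return insert_at
```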

View File

@@ -4,6 +4,7 @@ use std::path::Path;
use anyhow::{anyhow, Result};
use colored::Colorize;
use itertools::Itertools;
use log::error;
use rustc_hash::FxHashMap;
use rustpython_parser::lexer::LexResult;
@@ -453,24 +454,12 @@ pub fn lint_fix<'a>(
// longer parseable on a subsequent pass, then we've introduced a
// syntax error. Return the original code.
if parseable && result.error.is_some() {
#[allow(clippy::print_stderr)]
{
eprintln!(
r#"
{}: Autofix introduced a syntax error. Reverting all changes.
This indicates a bug in `{}`. If you could open an issue at:
{}/issues/new?title=%5BAutofix%20error%5D
...quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
"#,
"error".red().bold(),
CARGO_PKG_NAME,
CARGO_PKG_REPOSITORY,
fs::relativize_path(path),
);
}
report_autofix_syntax_error(
path,
&transformed,
&result.error.unwrap(),
fixed.keys().copied(),
);
return Err(anyhow!("Autofix introduced a syntax error"));
}
}
@@ -493,25 +482,7 @@ This indicates a bug in `{}`. If you could open an issue at:
continue;
}
#[allow(clippy::print_stderr)]
{
eprintln!(
r#"
{}: Failed to converge after {} iterations.
This indicates a bug in `{}`. If you could open an issue at:
{}/issues/new?title=%5BInfinite%20loop%5D
...quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
"#,
"error".red().bold(),
MAX_ITERATIONS,
CARGO_PKG_NAME,
CARGO_PKG_REPOSITORY,
fs::relativize_path(path),
);
}
report_failed_to_converge_error(path, &transformed, &result.data.0);
}
return Ok(FixerResult {
@@ -526,3 +497,80 @@ This indicates a bug in `{}`. If you could open an issue at:
});
}
}
fn collect_rule_codes(rules: impl IntoIterator<Item = Rule>) -> String {
rules
.into_iter()
.map(|rule| rule.noqa_code().to_string())
.sorted_unstable()
.dedup()
.join(", ")
}
#[allow(clippy::print_stderr)]
fn report_failed_to_converge_error(path: &Path, transformed: &str, diagnostics: &[Diagnostic]) {
if cfg!(debug_assertions) {
let codes = collect_rule_codes(diagnostics.iter().map(|diagnostic| diagnostic.kind.rule()));
eprintln!(
"{}: Failed to converge after {} iterations in `{}` with rule codes {}:---\n{}\n---",
"debug error".red().bold(),
MAX_ITERATIONS,
fs::relativize_path(path),
codes,
transformed,
);
} else {
eprintln!(
r#"
{}: Failed to converge after {} iterations.
This indicates a bug in `{}`. If you could open an issue at:
{}/issues/new?title=%5BInfinite%20loop%5D
...quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
"#,
"error".red().bold(),
MAX_ITERATIONS,
CARGO_PKG_NAME,
CARGO_PKG_REPOSITORY,
fs::relativize_path(path),
);
}
}
#[allow(clippy::print_stderr)]
fn report_autofix_syntax_error(
path: &Path,
transformed: &str,
error: &ParseError,
rules: impl IntoIterator<Item = Rule>,
) {
if cfg!(debug_assertions) {
let codes = collect_rule_codes(rules);
eprintln!(
"{}: Autofix introduced a syntax error in `{}` with rule codes {}: {}\n---\n{}\n---",
"error".red().bold(),
fs::relativize_path(path),
codes,
error,
transformed,
);
} else {
eprintln!(
r#"
{}: Autofix introduced a syntax error. Reverting all changes.
This indicates a bug in `{}`. If you could open an issue at:
{}/issues/new?title=%5BAutofix%20error%5D
...quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
"#,
"error".red().bold(),
CARGO_PKG_NAME,
CARGO_PKG_REPOSITORY,
fs::relativize_path(path),
);
}
}

View File

@@ -8,7 +8,8 @@ use bitflags::bitflags;
use colored::Colorize;
use ruff_diagnostics::DiagnosticKind;
use ruff_python_ast::source_code::{OneIndexed, SourceLocation};
use ruff_text_size::TextRange;
use ruff_text_size::{TextRange, TextSize};
use std::borrow::Cow;
use std::fmt::{Display, Formatter};
use std::io::Write;
@@ -172,6 +173,7 @@ impl Display for MessageCodeFrame<'_> {
};
let source_code = file.to_source_code();
let content_start_index = source_code.line_index(range.start());
let mut start_index = content_start_index.saturating_sub(2);
@@ -200,26 +202,23 @@ impl Display for MessageCodeFrame<'_> {
let start_offset = source_code.line_start(start_index);
let end_offset = source_code.line_end(end_index);
let source_text = source_code.slice(TextRange::new(start_offset, end_offset));
let source = replace_whitespace(
source_code.slice(TextRange::new(start_offset, end_offset)),
range - start_offset,
);
let annotation_start_offset = range.start() - start_offset;
let annotation_end_offset = range.end() - start_offset;
let start_char = source_text[TextRange::up_to(annotation_start_offset)]
let start_char = source.text[TextRange::up_to(source.annotation_range.start())]
.chars()
.count();
let char_length = source_text
[TextRange::new(annotation_start_offset, annotation_end_offset)]
.chars()
.count();
let char_length = source.text[source.annotation_range].chars().count();
let label = kind.rule().noqa_code().to_string();
let snippet = Snippet {
title: None,
slices: vec![Slice {
source: source_text,
source: &source.text,
line_start: content_start_index.get(),
annotations: vec![SourceAnnotation {
label: &label,
@@ -245,6 +244,60 @@ impl Display for MessageCodeFrame<'_> {
}
}
fn replace_whitespace(source: &str, annotation_range: TextRange) -> SourceCode {
static TAB_SIZE: TextSize = TextSize::new(4);
let mut result = String::new();
let mut last_end = 0;
let mut range = annotation_range;
let mut column = 0;
for (index, m) in source.match_indices(['\t', '\n', '\r']) {
match m {
"\t" => {
let tab_width = TAB_SIZE - TextSize::new(column % 4);
if index < usize::from(annotation_range.start()) {
range += tab_width - TextSize::new(1);
} else if index < usize::from(annotation_range.end()) {
range = range.add_end(tab_width - TextSize::new(1));
}
result.push_str(&source[last_end..index]);
for _ in 0..u32::from(tab_width) {
result.push(' ');
}
last_end = index + 1;
}
"\n" | "\r" => {
column = 0;
}
_ => unreachable!(),
}
}
// No tabs
if result.is_empty() {
SourceCode {
annotation_range,
text: Cow::Borrowed(source),
}
} else {
result.push_str(&source[last_end..]);
SourceCode {
annotation_range: range,
text: Cow::Owned(result),
}
}
}
struct SourceCode<'a> {
text: Cow<'a, str>,
annotation_range: TextRange,
}
#[cfg(test)]
mod tests {
use crate::message::tests::{capture_emitter_output, create_messages};
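The column-aware tab expansion performed by `replace_whitespace` can be illustrated in Python. This sketches only the width calculation (each tab advances to the next multiple of the tab size); the real function also shifts the diagnostic's annotation range to match:

```python
def expand_tabs(line: str, tab_size: int = 4) -> str:
    """Render tabs as spaces, tracking the current column so each tab pads
    to the next tab stop rather than a fixed width."""
    out = []
    column = 0
    for ch in line:
        if ch == "\t":
            width = tab_size - column % tab_size
            out.append(" " * width)
            column += width
        else:
            out.append(ch)
            column = 0 if ch in "\r\n" else column + 1
    return "".join(out)
```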

View File

@@ -4,7 +4,7 @@ use std::path::{Path, PathBuf};
use rustc_hash::FxHashMap;
use crate::resolver::{PyprojectDiscovery, Resolver};
use crate::resolver::{PyprojectConfig, Resolver};
// If we have a Python package layout like:
// - root/
@@ -82,7 +82,7 @@ fn detect_package_root_with_cache<'a>(
pub fn detect_package_roots<'a>(
files: &[&'a Path],
resolver: &'a Resolver,
pyproject_strategy: &'a PyprojectDiscovery,
pyproject_config: &'a PyprojectConfig,
) -> FxHashMap<&'a Path, Option<&'a Path>> {
// Pre-populate the module cache, since the list of files could (but isn't
// required to) contain some `__init__.py` files.
@@ -98,9 +98,7 @@ pub fn detect_package_roots<'a>(
// Search for the package root for each file.
let mut package_roots: FxHashMap<&Path, Option<&Path>> = FxHashMap::default();
for file in files {
let namespace_packages = &resolver
.resolve(file, pyproject_strategy)
.namespace_packages;
let namespace_packages = &resolver.resolve(file, pyproject_config).namespace_packages;
if let Some(package) = file.parent() {
if package_roots.contains_key(package) {
continue;

View File

@@ -156,6 +156,7 @@ ruff_macros::register_rules!(
rules::pylint::rules::BadStringFormatType,
rules::pylint::rules::BidirectionalUnicode,
rules::pylint::rules::BinaryOpException,
rules::pylint::rules::ImportSelf,
rules::pylint::rules::InvalidCharacterBackspace,
rules::pylint::rules::InvalidCharacterSub,
rules::pylint::rules::InvalidCharacterEsc,
@@ -539,6 +540,9 @@ ruff_macros::register_rules!(
rules::flake8_pyi::rules::UnrecognizedPlatformName,
rules::flake8_pyi::rules::PassInClassBody,
rules::flake8_pyi::rules::DuplicateUnionMember,
rules::flake8_pyi::rules::QuotedAnnotationInStub,
rules::flake8_pyi::rules::SnakeCaseTypeAlias,
rules::flake8_pyi::rules::TSuffixedTypeAlias,
// flake8-pytest-style
rules::flake8_pytest_style::rules::PytestFixtureIncorrectParenthesesStyle,
rules::flake8_pytest_style::rules::PytestFixturePositionalArgs,

View File

@@ -7,10 +7,10 @@ use std::iter::FusedIterator;
///
/// Uses a bitset where a bit of one signals that the Rule with that [u16] is in this set.
#[derive(Clone, Default, CacheKey, PartialEq, Eq)]
pub struct RuleSet([u64; 9]);
pub struct RuleSet([u64; 10]);
impl RuleSet {
const EMPTY: [u64; 9] = [0; 9];
const EMPTY: [u64; 10] = [0; 10];
// 64 fits into a u16 without truncation
#[allow(clippy::cast_possible_truncation)]

View File

@@ -17,25 +17,42 @@ use crate::settings::configuration::Configuration;
use crate::settings::pyproject::settings_toml;
use crate::settings::{pyproject, AllSettings, Settings};
/// The configuration information from a `pyproject.toml` file.
pub struct PyprojectConfig {
/// The strategy used to discover the relevant `pyproject.toml` file for
/// each Python file.
pub strategy: PyprojectDiscoveryStrategy,
/// All settings from the `pyproject.toml` file.
pub settings: AllSettings,
/// Absolute path to the `pyproject.toml` file. This would be `None` when
/// either using the default settings or the `--isolated` flag is set.
pub path: Option<PathBuf>,
}
impl PyprojectConfig {
pub fn new(
strategy: PyprojectDiscoveryStrategy,
settings: AllSettings,
path: Option<PathBuf>,
) -> Self {
Self {
strategy,
settings,
path: path.map(fs::normalize_path),
}
}
}
/// The strategy used to discover the relevant `pyproject.toml` file for each
/// Python file.
#[derive(Debug, is_macro::Is)]
pub enum PyprojectDiscovery {
pub enum PyprojectDiscoveryStrategy {
/// Use a fixed `pyproject.toml` file for all Python files (i.e., one
/// provided on the command-line).
Fixed(AllSettings),
Fixed,
/// Use the closest `pyproject.toml` file in the filesystem hierarchy, or
/// the default settings.
Hierarchical(AllSettings),
}
impl PyprojectDiscovery {
pub fn top_level_settings(&self) -> &AllSettings {
match self {
PyprojectDiscovery::Fixed(settings) => settings,
PyprojectDiscovery::Hierarchical(settings) => settings,
}
}
Hierarchical,
}
/// The strategy for resolving file paths in a `pyproject.toml`.
@@ -75,21 +92,25 @@ impl Resolver {
pub fn resolve_all<'a>(
&'a self,
path: &Path,
strategy: &'a PyprojectDiscovery,
pyproject_config: &'a PyprojectConfig,
) -> &'a AllSettings {
match strategy {
PyprojectDiscovery::Fixed(settings) => settings,
PyprojectDiscovery::Hierarchical(default) => self
match pyproject_config.strategy {
PyprojectDiscoveryStrategy::Fixed => &pyproject_config.settings,
PyprojectDiscoveryStrategy::Hierarchical => self
.settings
.iter()
.rev()
.find_map(|(root, settings)| path.starts_with(root).then_some(settings))
.unwrap_or(default),
.unwrap_or(&pyproject_config.settings),
}
}
pub fn resolve<'a>(&'a self, path: &Path, strategy: &'a PyprojectDiscovery) -> &'a Settings {
&self.resolve_all(path, strategy).lib
pub fn resolve<'a>(
&'a self,
path: &Path,
pyproject_config: &'a PyprojectConfig,
) -> &'a Settings {
&self.resolve_all(path, pyproject_config).lib
}
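The Fixed-vs-Hierarchical lookup in `resolve_all` can be modeled roughly in Python. Names here are hypothetical, and this simplification picks the most deeply nested matching root, whereas the real resolver scans discovered roots in reverse insertion order:

```python
from pathlib import Path
from typing import Dict

def resolve_settings(path: Path, strategy: str,
                     roots: Dict[Path, object], default: object) -> object:
    """Sketch of Resolver::resolve_all: Fixed always uses the top-level
    settings; Hierarchical prefers the closest enclosing pyproject root."""
    if strategy == "fixed":
        return default  # one pyproject.toml governs every file
    best = None
    for root, settings in roots.items():
        if path.is_relative_to(root):  # root encloses this file
            if best is None or len(root.parts) > len(best[0].parts):
                best = (root, settings)
    return best[1] if best else default
```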
/// Return an iterator over the resolved [`Settings`] in this [`Resolver`].
@@ -200,7 +221,7 @@ fn match_exclusion<P: AsRef<Path>, R: AsRef<Path>>(
/// Find all Python (`.py`, `.pyi` and `.ipynb` files) in a set of paths.
pub fn python_files_in_path(
paths: &[PathBuf],
pyproject_strategy: &PyprojectDiscovery,
pyproject_config: &PyprojectConfig,
processor: impl ConfigProcessor,
) -> Result<(Vec<Result<DirEntry, ignore::Error>>, Resolver)> {
// Normalize every path (e.g., convert from relative to absolute).
@@ -209,7 +230,7 @@ pub fn python_files_in_path(
// Search for `pyproject.toml` files in all parent directories.
let mut resolver = Resolver::default();
let mut seen = FxHashSet::default();
if pyproject_strategy.is_hierarchical() {
if pyproject_config.strategy.is_hierarchical() {
for path in &paths {
for ancestor in path.ancestors() {
if seen.insert(ancestor) {
@@ -224,8 +245,8 @@ pub fn python_files_in_path(
}
// Check if the paths themselves are excluded.
if pyproject_strategy.top_level_settings().lib.force_exclude {
paths.retain(|path| !is_file_excluded(path, &resolver, pyproject_strategy));
if pyproject_config.settings.lib.force_exclude {
paths.retain(|path| !is_file_excluded(path, &resolver, pyproject_config));
if paths.is_empty() {
return Ok((vec![], resolver));
}
@@ -240,12 +261,7 @@ pub fn python_files_in_path(
for path in &paths[1..] {
builder.add(path);
}
builder.standard_filters(
pyproject_strategy
.top_level_settings()
.lib
.respect_gitignore,
);
builder.standard_filters(pyproject_config.settings.lib.respect_gitignore);
builder.hidden(false);
let walker = builder.build_parallel();
@@ -261,7 +277,7 @@ pub fn python_files_in_path(
if entry.depth() > 0 {
let path = entry.path();
let resolver = resolver.read().unwrap();
let settings = resolver.resolve(path, pyproject_strategy);
let settings = resolver.resolve(path, pyproject_config);
if let Some(file_name) = path.file_name() {
if !settings.exclude.is_empty()
&& match_exclusion(path, file_name, &settings.exclude)
@@ -283,7 +299,7 @@ pub fn python_files_in_path(
// Search for the `pyproject.toml` file in this directory, before we visit any
// of its contents.
if pyproject_strategy.is_hierarchical() {
if pyproject_config.strategy.is_hierarchical() {
if let Ok(entry) = &result {
if entry
.file_type()
@@ -321,7 +337,7 @@ pub fn python_files_in_path(
// Otherwise, check if the file is included.
let path = entry.path();
let resolver = resolver.read().unwrap();
let settings = resolver.resolve(path, pyproject_strategy);
let settings = resolver.resolve(path, pyproject_config);
if settings.include.is_match(path) {
debug!("Included path via `include`: {:?}", path);
true
@@ -348,10 +364,10 @@ pub fn python_files_in_path(
/// Return `true` if the Python file at [`Path`] is _not_ excluded.
pub fn python_file_at_path(
path: &Path,
pyproject_strategy: &PyprojectDiscovery,
pyproject_config: &PyprojectConfig,
processor: impl ConfigProcessor,
) -> Result<bool> {
if !pyproject_strategy.top_level_settings().lib.force_exclude {
if !pyproject_config.settings.lib.force_exclude {
return Ok(true);
}
@@ -360,7 +376,7 @@ pub fn python_file_at_path(
// Search for `pyproject.toml` files in all parent directories.
let mut resolver = Resolver::default();
if pyproject_strategy.is_hierarchical() {
if pyproject_config.strategy.is_hierarchical() {
for ancestor in path.ancestors() {
if let Some(pyproject) = settings_toml(ancestor)? {
let (root, settings) =
@@ -371,14 +387,14 @@ pub fn python_file_at_path(
}
// Check exclusions.
Ok(!is_file_excluded(&path, &resolver, pyproject_strategy))
Ok(!is_file_excluded(&path, &resolver, pyproject_config))
}
/// Return `true` if the given top-level [`Path`] should be excluded.
fn is_file_excluded(
path: &Path,
resolver: &Resolver,
pyproject_strategy: &PyprojectDiscovery,
pyproject_strategy: &PyprojectConfig,
) -> bool {
// TODO(charlie): Respect gitignore.
for path in path.ancestors() {
@@ -419,7 +435,7 @@ mod tests {
use crate::resolver::{
is_file_excluded, match_exclusion, resolve_settings_with_processor, NoOpProcessor,
PyprojectDiscovery, Relativity, Resolver,
PyprojectConfig, PyprojectDiscoveryStrategy, Relativity, Resolver,
};
use crate::settings::pyproject::find_settings_toml;
use crate::settings::types::FilePattern;
@@ -560,25 +576,29 @@ mod tests {
fn rooted_exclusion() -> Result<()> {
let package_root = test_resource_path("package");
let resolver = Resolver::default();
let ppd = PyprojectDiscovery::Hierarchical(resolve_settings_with_processor(
&find_settings_toml(&package_root)?.unwrap(),
&Relativity::Parent,
&NoOpProcessor,
)?);
let pyproject_config = PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
resolve_settings_with_processor(
&find_settings_toml(&package_root)?.unwrap(),
&Relativity::Parent,
&NoOpProcessor,
)?,
None,
);
// src/app.py should not be excluded even if it lives in a hierarchy that should
// be excluded by virtue of the pyproject.toml having `resources/*` in
// it.
assert!(!is_file_excluded(
&package_root.join("src/app.py"),
&resolver,
&ppd,
&pyproject_config,
));
// However, resources/ignored.py should be ignored, since that `resources` is
// beneath the package root.
assert!(is_file_excluded(
&package_root.join("resources/ignored.py"),
&resolver,
&ppd,
&pyproject_config,
));
Ok(())
}
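The `rooted_exclusion` test above exercises the ancestor walk in `is_file_excluded`: exclusion patterns apply to paths beneath the package root, but a matching directory name higher up the tree must not exclude the file. A simplified, pure-Python model of that walk (the signature and pattern semantics here are illustrative, not the crate's API):

```python
from pathlib import Path

def is_file_excluded(path: Path, exclude: list[str], package_root: Path) -> bool:
    """Walk from `path` up through its ancestors, checking each against the
    exclusion patterns -- a rough sketch of Ruff's `is_file_excluded`."""
    for ancestor in (path, *path.parents):
        if any(ancestor.match(pattern) for pattern in exclude):
            return True
        if ancestor == package_root:
            # Stop at the package root: exclusions above it don't apply.
            break
    return False
```

With `["resources/*"]` as the pattern list, `package/resources/ignored.py` is excluded while `package/src/app.py` is not, mirroring the assertions in the test.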

View File

@@ -1,4 +1,4 @@
-use ruff_text_size::{TextLen, TextRange};
+use ruff_text_size::TextRange;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit};
use ruff_macros::{derive_message_formats, violation};
@@ -58,10 +58,7 @@ pub fn commented_out_code(
if is_standalone_comment(line) && comment_contains_code(line, &settings.task_tags[..]) {
let mut diagnostic = Diagnostic::new(CommentedOutCode, range);
if autofix.into() && settings.rules.should_fix(Rule::CommentedOutCode) {
-diagnostic.set_fix(Edit::range_deletion(TextRange::at(
-range.start(),
-line.text_len(),
-)));
+diagnostic.set_fix(Edit::range_deletion(locator.full_lines_range(range)));
}
Some(diagnostic)
} else {

View File

@@ -91,4 +91,19 @@ ERA001.py:13:5: ERA001 [*] Found commented-out code
15 14 |
16 15 | #import os # noqa: ERA001
ERA001.py:21:5: ERA001 [*] Found commented-out code
|
21 | class A():
22 | pass
23 | # b = c
| ^^^^^^^ ERA001
|
= help: Remove commented-out code
Suggested fix
18 18 |
19 19 | class A():
20 20 | pass
21 |- # b = c
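The ERA001 fix now deletes via `locator.full_lines_range(range)` instead of a range built from the diagnostic's start offset and the line's length. A small Python model of what "expand a range to the full enclosing lines, including the trailing newline" means (the function name is illustrative, not Ruff's API):

```python
def full_lines_range(source: str, start: int, end: int) -> tuple[int, int]:
    """Expand [start, end) to cover the enclosing lines, newline included --
    a rough model of `Locator::full_lines_range`."""
    line_start = source.rfind("\n", 0, start) + 1  # 0 if no preceding newline
    line_end = source.find("\n", end)
    line_end = len(source) if line_end == -1 else line_end + 1
    return line_start, line_end

source = "x = 1\n# y = 2\nz = 3\n"
start = source.index("# y")
lo, hi = full_lines_range(source, start, start + len("# y = 2"))
# deleting source[lo:hi] removes the whole commented-out line, newline and all
```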

View File

@@ -42,6 +42,7 @@ mod tests {
#[test_case(Rule::StarArgUnpackingAfterKeywordArg, Path::new("B026.py"); "B026")]
#[test_case(Rule::EmptyMethodWithoutAbstractDecorator, Path::new("B027.py"); "B027")]
#[test_case(Rule::EmptyMethodWithoutAbstractDecorator, Path::new("B027.pyi"); "B027_pyi")]
+#[test_case(Rule::EmptyMethodWithoutAbstractDecorator, Path::new("B027_extended.py"); "B027_extended")]
#[test_case(Rule::NoExplicitStacklevel, Path::new("B028.py"); "B028")]
#[test_case(Rule::ExceptWithEmptyTuple, Path::new("B029.py"); "B029")]
#[test_case(Rule::ExceptWithNonExceptionClasses, Path::new("B030.py"); "B030")]

View File

@@ -1,10 +1,16 @@
use anyhow::{anyhow, Result};
use rustpython_parser::ast::{Constant, Expr, ExprKind, Keyword, Stmt, StmtKind};
-use ruff_diagnostics::{Diagnostic, Violation};
+use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::source_code::{Locator, Stylist};
use ruff_python_ast::whitespace::indentation;
use ruff_python_semantic::analyze::visibility::{is_abstract, is_overload};
use ruff_python_semantic::context::Context;
use crate::autofix::actions::get_or_import_symbol;
use crate::checkers::ast::Checker;
use crate::importer::Importer;
use crate::registry::Rule;
#[violation]
@@ -19,12 +25,13 @@ impl Violation for AbstractBaseClassWithoutAbstractMethod {
format!("`{name}` is an abstract base class, but it has no abstract methods")
}
}
#[violation]
pub struct EmptyMethodWithoutAbstractDecorator {
pub name: String,
}
-impl Violation for EmptyMethodWithoutAbstractDecorator {
+impl AlwaysAutofixableViolation for EmptyMethodWithoutAbstractDecorator {
#[derive_message_formats]
fn message(&self) -> String {
let EmptyMethodWithoutAbstractDecorator { name } = self;
@@ -32,24 +39,26 @@ impl Violation for EmptyMethodWithoutAbstractDecorator {
"`{name}` is an empty method in an abstract base class, but has no abstract decorator"
)
}
+fn autofix_title(&self) -> String {
+"Add the `@abstractmethod` decorator".to_string()
+}
}
-fn is_abc_class(checker: &Checker, bases: &[Expr], keywords: &[Keyword]) -> bool {
+fn is_abc_class(context: &Context, bases: &[Expr], keywords: &[Keyword]) -> bool {
keywords.iter().any(|keyword| {
keyword
.node
.arg
.as_ref()
.map_or(false, |arg| arg == "metaclass")
-&& checker
-.ctx
+&& context
.resolve_call_path(&keyword.node.value)
.map_or(false, |call_path| {
call_path.as_slice() == ["abc", "ABCMeta"]
})
}) || bases.iter().any(|base| {
-checker
-.ctx
+context
.resolve_call_path(base)
.map_or(false, |call_path| call_path.as_slice() == ["abc", "ABC"])
})
@@ -68,6 +77,28 @@ fn is_empty_body(body: &[Stmt]) -> bool {
})
}
+fn fix_abstractmethod_missing(
+context: &Context,
+importer: &Importer,
+locator: &Locator,
+stylist: &Stylist,
+stmt: &Stmt,
+) -> Result<Fix> {
+let indent = indentation(locator, stmt).ok_or(anyhow!("Unable to detect indentation"))?;
+let (import_edit, binding) =
+get_or_import_symbol("abc", "abstractmethod", context, importer, locator)?;
+let reference_edit = Edit::insertion(
+format!(
+"@{binding}{line_ending}{indent}",
+line_ending = stylist.line_ending().as_str(),
+),
+stmt.range().start(),
+);
+Ok(Fix::from_iter([import_edit, reference_edit]))
+}
/// B024
/// B027
pub fn abstract_base_class(
checker: &mut Checker,
stmt: &Stmt,
@@ -79,7 +110,7 @@ pub fn abstract_base_class(
if bases.len() + keywords.len() != 1 {
return;
}
-if !is_abc_class(checker, bases, keywords) {
+if !is_abc_class(&checker.ctx, bases, keywords) {
return;
}
@@ -123,12 +154,24 @@ pub fn abstract_base_class(
&& is_empty_body(body)
&& !is_overload(&checker.ctx, decorator_list)
{
-checker.diagnostics.push(Diagnostic::new(
+let mut diagnostic = Diagnostic::new(
EmptyMethodWithoutAbstractDecorator {
name: format!("{name}.{method_name}"),
},
stmt.range(),
-));
+);
+if checker.patch(Rule::EmptyMethodWithoutAbstractDecorator) {
+diagnostic.try_set_fix(|| {
+fix_abstractmethod_missing(
+&checker.ctx,
+&checker.importer,
+checker.locator,
+checker.stylist,
+stmt,
+)
+});
+}
+checker.diagnostics.push(diagnostic);
}
}
if checker

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
---
B027.py:13:5: B027 `AbstractClass.empty_1` is an empty method in an abstract base class, but has no abstract decorator
B027.py:13:5: B027 [*] `AbstractClass.empty_1` is an empty method in an abstract base class, but has no abstract decorator
|
13 | class AbstractClass(ABC):
14 | def empty_1(self): # error
@@ -11,8 +11,18 @@ B027.py:13:5: B027 `AbstractClass.empty_1` is an empty method in an abstract bas
16 |
17 | def empty_2(self): # error
|
= help: Add the `@abstractmethod` decorator
B027.py:16:5: B027 `AbstractClass.empty_2` is an empty method in an abstract base class, but has no abstract decorator
Suggested fix
10 10 |
11 11 |
12 12 | class AbstractClass(ABC):
13 |+ @notabstract
13 14 | def empty_1(self): # error
14 15 | ...
15 16 |
B027.py:16:5: B027 [*] `AbstractClass.empty_2` is an empty method in an abstract base class, but has no abstract decorator
|
16 | ...
17 |
@@ -23,8 +33,18 @@ B027.py:16:5: B027 `AbstractClass.empty_2` is an empty method in an abstract bas
20 |
21 | def empty_3(self): # error
|
= help: Add the `@abstractmethod` decorator
B027.py:19:5: B027 `AbstractClass.empty_3` is an empty method in an abstract base class, but has no abstract decorator
Suggested fix
13 13 | def empty_1(self): # error
14 14 | ...
15 15 |
16 |+ @notabstract
16 17 | def empty_2(self): # error
17 18 | pass
18 19 |
B027.py:19:5: B027 [*] `AbstractClass.empty_3` is an empty method in an abstract base class, but has no abstract decorator
|
19 | pass
20 |
@@ -36,8 +56,18 @@ B027.py:19:5: B027 `AbstractClass.empty_3` is an empty method in an abstract bas
24 |
25 | def empty_4(self): # error
|
= help: Add the `@abstractmethod` decorator
B027.py:23:5: B027 `AbstractClass.empty_4` is an empty method in an abstract base class, but has no abstract decorator
Suggested fix
16 16 | def empty_2(self): # error
17 17 | pass
18 18 |
19 |+ @notabstract
19 20 | def empty_3(self): # error
20 21 | """docstring"""
21 22 | ...
B027.py:23:5: B027 [*] `AbstractClass.empty_4` is an empty method in an abstract base class, but has no abstract decorator
|
23 | ...
24 |
@@ -52,5 +82,15 @@ B027.py:23:5: B027 `AbstractClass.empty_4` is an empty method in an abstract bas
31 |
32 | @notabstract
|
= help: Add the `@abstractmethod` decorator
Suggested fix
20 20 | """docstring"""
21 21 | ...
22 22 |
23 |+ @notabstract
23 24 | def empty_4(self): # error
24 25 | """multiple ellipsis/pass"""
25 26 | ...
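The suggested fixes in the snapshots above insert a decorator line directly above the `def`, reusing its indentation — that is what `fix_abstractmethod_missing` builds with `Edit::insertion` of `"@{binding}\n{indent}"` at the statement start. A minimal Python sketch of that edit (ignoring the import side of the fix; the helper name is illustrative):

```python
def add_decorator(source: str, def_offset: int, binding: str = "abstractmethod") -> str:
    """Insert `@<binding>` on its own line above the `def` that starts at
    `def_offset`, matching that line's indentation."""
    line_start = source.rfind("\n", 0, def_offset) + 1
    indent = source[line_start:def_offset]  # whitespace before `def`
    return source[:def_offset] + f"@{binding}\n{indent}" + source[def_offset:]
```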

View File

@@ -0,0 +1,122 @@
---
source: crates/ruff/src/rules/flake8_bugbear/mod.rs
---
B027_extended.py:9:5: B027 [*] `AbstractClass.empty_1` is an empty method in an abstract base class, but has no abstract decorator
|
9 | class AbstractClass(ABC):
10 | def empty_1(self): # error
| _____^
11 | | ...
| |___________^ B027
12 |
13 | def empty_2(self): # error
|
= help: Add the `@abstractmethod` decorator
Suggested fix
2 2 | Should emit:
3 3 | B027 - on lines 13, 16, 19, 23
4 4 | """
5 |-from abc import ABC
5 |+from abc import ABC, abstractmethod
6 6 |
7 7 |
8 8 | class AbstractClass(ABC):
9 |+ @abstractmethod
9 10 | def empty_1(self): # error
10 11 | ...
11 12 |
B027_extended.py:12:5: B027 [*] `AbstractClass.empty_2` is an empty method in an abstract base class, but has no abstract decorator
|
12 | ...
13 |
14 | def empty_2(self): # error
| _____^
15 | | pass
| |____________^ B027
16 |
17 | def body_1(self):
|
= help: Add the `@abstractmethod` decorator
Suggested fix
2 2 | Should emit:
3 3 | B027 - on lines 13, 16, 19, 23
4 4 | """
5 |-from abc import ABC
5 |+from abc import ABC, abstractmethod
6 6 |
7 7 |
8 8 | class AbstractClass(ABC):
9 9 | def empty_1(self): # error
10 10 | ...
11 11 |
12 |+ @abstractmethod
12 13 | def empty_2(self): # error
13 14 | pass
14 15 |
B027_extended.py:25:9: B027 [*] `InnerAbstractClass.empty_1` is an empty method in an abstract base class, but has no abstract decorator
|
25 | def foo():
26 | class InnerAbstractClass(ABC):
27 | def empty_1(self): # error
| _________^
28 | | ...
| |_______________^ B027
29 |
30 | def empty_2(self): # error
|
= help: Add the `@abstractmethod` decorator
Suggested fix
2 2 | Should emit:
3 3 | B027 - on lines 13, 16, 19, 23
4 4 | """
5 |-from abc import ABC
5 |+from abc import ABC, abstractmethod
6 6 |
7 7 |
8 8 | class AbstractClass(ABC):
--------------------------------------------------------------------------------
22 22 |
23 23 | def foo():
24 24 | class InnerAbstractClass(ABC):
25 |+ @abstractmethod
25 26 | def empty_1(self): # error
26 27 | ...
27 28 |
B027_extended.py:28:9: B027 [*] `InnerAbstractClass.empty_2` is an empty method in an abstract base class, but has no abstract decorator
|
28 | ...
29 |
30 | def empty_2(self): # error
| _________^
31 | | pass
| |________________^ B027
32 |
33 | def body_1(self):
|
= help: Add the `@abstractmethod` decorator
Suggested fix
2 2 | Should emit:
3 3 | B027 - on lines 13, 16, 19, 23
4 4 | """
5 |-from abc import ABC
5 |+from abc import ABC, abstractmethod
6 6 |
7 7 |
8 8 | class AbstractClass(ABC):
--------------------------------------------------------------------------------
25 25 | def empty_1(self): # error
26 26 | ...
27 27 |
28 |+ @abstractmethod
28 29 | def empty_2(self): # error
29 30 | pass
30 31 |

View File

@@ -35,10 +35,16 @@ mod tests {
#[test_case(Rule::AssignmentDefaultInStub, Path::new("PYI015.pyi"))]
#[test_case(Rule::DuplicateUnionMember, Path::new("PYI016.py"))]
#[test_case(Rule::DuplicateUnionMember, Path::new("PYI016.pyi"))]
+#[test_case(Rule::QuotedAnnotationInStub, Path::new("PYI020.py"))]
+#[test_case(Rule::QuotedAnnotationInStub, Path::new("PYI020.pyi"))]
#[test_case(Rule::DocstringInStub, Path::new("PYI021.py"))]
#[test_case(Rule::DocstringInStub, Path::new("PYI021.pyi"))]
#[test_case(Rule::TypeCommentInStub, Path::new("PYI033.py"))]
#[test_case(Rule::TypeCommentInStub, Path::new("PYI033.pyi"))]
+#[test_case(Rule::SnakeCaseTypeAlias, Path::new("PYI042.py"))]
+#[test_case(Rule::SnakeCaseTypeAlias, Path::new("PYI042.pyi"))]
+#[test_case(Rule::TSuffixedTypeAlias, Path::new("PYI043.py"))]
+#[test_case(Rule::TSuffixedTypeAlias, Path::new("PYI043.pyi"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -5,11 +5,15 @@ pub use non_empty_stub_body::{non_empty_stub_body, NonEmptyStubBody};
pub use pass_in_class_body::{pass_in_class_body, PassInClassBody};
pub use pass_statement_stub_body::{pass_statement_stub_body, PassStatementStubBody};
pub use prefix_type_params::{prefix_type_params, UnprefixedTypeParam};
+pub use quoted_annotation_in_stub::{quoted_annotation_in_stub, QuotedAnnotationInStub};
pub use simple_defaults::{
annotated_assignment_default_in_stub, argument_simple_defaults, assignment_default_in_stub,
typed_argument_simple_defaults, ArgumentDefaultInStub, AssignmentDefaultInStub,
TypedArgumentDefaultInStub,
};
+pub use type_alias_naming::{
+snake_case_type_alias, t_suffixed_type_alias, SnakeCaseTypeAlias, TSuffixedTypeAlias,
+};
pub use type_comment_in_stub::{type_comment_in_stub, TypeCommentInStub};
pub use unrecognized_platform::{
unrecognized_platform, UnrecognizedPlatformCheck, UnrecognizedPlatformName,
@@ -22,6 +26,8 @@ mod non_empty_stub_body;
mod pass_in_class_body;
mod pass_statement_stub_body;
mod prefix_type_params;
+mod quoted_annotation_in_stub;
mod simple_defaults;
+mod type_alias_naming;
mod type_comment_in_stub;
mod unrecognized_platform;

View File

@@ -0,0 +1,29 @@
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit};
use ruff_macros::{derive_message_formats, violation};
use ruff_text_size::TextRange;
use crate::checkers::ast::Checker;
use crate::registry::Rule;
#[violation]
pub struct QuotedAnnotationInStub;
impl AlwaysAutofixableViolation for QuotedAnnotationInStub {
#[derive_message_formats]
fn message(&self) -> String {
format!("Quoted annotations should not be included in stubs")
}
fn autofix_title(&self) -> String {
"Remove quotes".to_string()
}
}
/// PYI020
pub fn quoted_annotation_in_stub(checker: &mut Checker, annotation: &str, range: TextRange) {
let mut diagnostic = Diagnostic::new(QuotedAnnotationInStub, range);
if checker.patch(Rule::QuotedAnnotationInStub) {
diagnostic.set_fix(Edit::range_replacement(annotation.to_string(), range));
}
checker.diagnostics.push(diagnostic);
}
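The PYI020 fix is a straight `Edit::range_replacement`: the quoted span (quotes included) is replaced by the bare annotation text. In Python terms (the helper is illustrative, not Ruff's API):

```python
def remove_quotes(source: str, start: int, end: int, annotation: str) -> str:
    """Replace the quoted-annotation span [start, end), quotes included,
    with the unquoted annotation -- a model of the PYI020 fix."""
    return source[:start] + annotation + source[end:]

stub = 'def f(x: "int"): ...\n'
start = stub.index('"int"')
fixed = remove_quotes(stub, start, start + len('"int"'), "int")
# fixed == "def f(x: int): ...\n"
```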

View File

@@ -0,0 +1,91 @@
use rustpython_parser::ast::{Expr, ExprKind};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
#[violation]
pub struct SnakeCaseTypeAlias {
pub name: String,
}
impl Violation for SnakeCaseTypeAlias {
#[derive_message_formats]
fn message(&self) -> String {
let Self { name } = self;
format!("Type alias `{name}` should be CamelCase")
}
}
#[violation]
pub struct TSuffixedTypeAlias {
pub name: String,
}
impl Violation for TSuffixedTypeAlias {
#[derive_message_formats]
fn message(&self) -> String {
let Self { name } = self;
format!("Private type alias `{name}` should not be suffixed with `T` (the `T` suffix implies that an object is a `TypeVar`)")
}
}
/// Return `true` if the given name is a `snake_case` type alias. In this context, we match against
/// any name that begins with an optional underscore, followed by at least one lowercase letter.
fn is_snake_case_type_alias(name: &str) -> bool {
let mut chars = name.chars();
matches!(
(chars.next(), chars.next()),
(Some('_'), Some('0'..='9' | 'a'..='z')) | (Some('0'..='9' | 'a'..='z'), ..)
)
}
/// Return `true` if the given name is a T-suffixed type alias. In this context, we match against
/// any name that begins with an underscore, and ends in a lowercase letter, followed by `T`,
/// followed by an optional digit.
fn is_t_suffixed_type_alias(name: &str) -> bool {
// A T-suffixed, private type alias must begin with an underscore.
if !name.starts_with('_') {
return false;
}
// It must end in a lowercase letter, followed by `T`, and (optionally) a digit.
let mut chars = name.chars().rev();
matches!(
(chars.next(), chars.next(), chars.next()),
(Some('0'..='9'), Some('T'), Some('a'..='z')) | (Some('T'), Some('a'..='z'), _)
)
}
/// PYI042
pub fn snake_case_type_alias(checker: &mut Checker, target: &Expr) {
if let ExprKind::Name { id, .. } = target.node() {
if !is_snake_case_type_alias(id) {
return;
}
checker.diagnostics.push(Diagnostic::new(
SnakeCaseTypeAlias {
name: id.to_string(),
},
target.range(),
));
}
}
/// PYI043
pub fn t_suffixed_type_alias(checker: &mut Checker, target: &Expr) {
if let ExprKind::Name { id, .. } = target.node() {
if !is_t_suffixed_type_alias(id) {
return;
}
checker.diagnostics.push(Diagnostic::new(
TSuffixedTypeAlias {
name: id.to_string(),
},
target.range(),
));
}
}
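The two character-wise predicates above can be restated as regular expressions. A hedged Python equivalent — the regexes are my reading of the `matches!` patterns, not taken from the source:

```python
import re

def is_snake_case_type_alias(name: str) -> bool:
    """Optional leading underscore, then a lowercase letter or digit."""
    return re.fullmatch(r"_?[0-9a-z].*", name) is not None

def is_t_suffixed_type_alias(name: str) -> bool:
    """Private name (leading underscore) ending in a lowercase letter,
    then `T`, then at most one digit."""
    return re.fullmatch(r"_.*[a-z]T[0-9]?", name) is not None
```

Note that a plain `_T` is not flagged: it needs a lowercase letter immediately before the `T`, so conventional `TypeVar` names stay legal.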

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---

View File

@@ -0,0 +1,187 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---
PYI020.pyi:7:10: PYI020 [*] Quoted annotations should not be included in stubs
|
7 | import typing_extensions
8 |
9 | def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
10 | def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
11 | _T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
|
= help: Remove quotes
Suggested fix
4 4 |
5 5 | import typing_extensions
6 6 |
7 |-def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
7 |+def f(x: int): ... # Y020 Quoted annotations should never be used in stubs
8 8 | def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
9 9 | _T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
10 10 |
PYI020.pyi:8:15: PYI020 [*] Quoted annotations should not be included in stubs
|
8 | def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
9 | def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
10 | _T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
|
= help: Remove quotes
Suggested fix
5 5 | import typing_extensions
6 6 |
7 7 | def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
8 |-def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
8 |+def g(x: list[int]): ... # Y020 Quoted annotations should never be used in stubs
9 9 | _T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
10 10 |
11 11 | def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
PYI020.pyi:9:26: PYI020 [*] Quoted annotations should not be included in stubs
|
9 | def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
10 | def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
11 | _T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
12 |
13 | def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
|
= help: Remove quotes
Suggested fix
6 6 |
7 7 | def f(x: "int"): ... # Y020 Quoted annotations should never be used in stubs
8 8 | def g(x: list["int"]): ... # Y020 Quoted annotations should never be used in stubs
9 |-_T = TypeVar("_T", bound="int") # Y020 Quoted annotations should never be used in stubs
9 |+_T = TypeVar("_T", bound=int) # Y020 Quoted annotations should never be used in stubs
10 10 |
11 11 | def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
12 12 |
PYI020.pyi:13:12: PYI020 [*] Quoted annotations should not be included in stubs
|
13 | def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
14 |
15 | def j() -> "int": ... # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
16 | Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
|
= help: Remove quotes
Suggested fix
10 10 |
11 11 | def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
12 12 |
13 |-def j() -> "int": ... # Y020 Quoted annotations should never be used in stubs
13 |+def j() -> int: ... # Y020 Quoted annotations should never be used in stubs
14 14 | Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
15 15 |
16 16 | class Child(list["int"]): # Y020 Quoted annotations should never be used in stubs
PYI020.pyi:14:25: PYI020 [*] Quoted annotations should not be included in stubs
|
14 | def j() -> "int": ... # Y020 Quoted annotations should never be used in stubs
15 | Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
16 |
17 | class Child(list["int"]): # Y020 Quoted annotations should never be used in stubs
|
= help: Remove quotes
Suggested fix
11 11 | def h(w: Literal["a", "b"], x: typing.Literal["c"], y: typing_extensions.Literal["d"], z: _T) -> _T: ...
12 12 |
13 13 | def j() -> "int": ... # Y020 Quoted annotations should never be used in stubs
14 |-Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
14 |+Alias: TypeAlias = list[int] # Y020 Quoted annotations should never be used in stubs
15 15 |
16 16 | class Child(list["int"]): # Y020 Quoted annotations should never be used in stubs
17 17 | """Documented and guaranteed useful.""" # Y021 Docstrings should not be included in stubs
PYI020.pyi:16:18: PYI020 [*] Quoted annotations should not be included in stubs
|
16 | Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
17 |
18 | class Child(list["int"]): # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
19 | """Documented and guaranteed useful.""" # Y021 Docstrings should not be included in stubs
|
= help: Remove quotes
Suggested fix
13 13 | def j() -> "int": ... # Y020 Quoted annotations should never be used in stubs
14 14 | Alias: TypeAlias = list["int"] # Y020 Quoted annotations should never be used in stubs
15 15 |
16 |-class Child(list["int"]): # Y020 Quoted annotations should never be used in stubs
16 |+class Child(list[int]): # Y020 Quoted annotations should never be used in stubs
17 17 | """Documented and guaranteed useful.""" # Y021 Docstrings should not be included in stubs
18 18 |
19 19 | if sys.platform == "linux":
PYI020.pyi:20:8: PYI020 [*] Quoted annotations should not be included in stubs
|
20 | if sys.platform == "linux":
21 | f: "int" # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
22 | elif sys.platform == "win32":
23 | f: "str" # Y020 Quoted annotations should never be used in stubs
|
= help: Remove quotes
Suggested fix
17 17 | """Documented and guaranteed useful.""" # Y021 Docstrings should not be included in stubs
18 18 |
19 19 | if sys.platform == "linux":
20 |- f: "int" # Y020 Quoted annotations should never be used in stubs
20 |+ f: int # Y020 Quoted annotations should never be used in stubs
21 21 | elif sys.platform == "win32":
22 22 | f: "str" # Y020 Quoted annotations should never be used in stubs
23 23 | else:
PYI020.pyi:22:8: PYI020 [*] Quoted annotations should not be included in stubs
|
22 | f: "int" # Y020 Quoted annotations should never be used in stubs
23 | elif sys.platform == "win32":
24 | f: "str" # Y020 Quoted annotations should never be used in stubs
| ^^^^^ PYI020
25 | else:
26 | f: "bytes" # Y020 Quoted annotations should never be used in stubs
|
= help: Remove quotes
Suggested fix
19 19 | if sys.platform == "linux":
20 20 | f: "int" # Y020 Quoted annotations should never be used in stubs
21 21 | elif sys.platform == "win32":
22 |- f: "str" # Y020 Quoted annotations should never be used in stubs
22 |+ f: str # Y020 Quoted annotations should never be used in stubs
23 23 | else:
24 24 | f: "bytes" # Y020 Quoted annotations should never be used in stubs
25 25 |
PYI020.pyi:24:8: PYI020 [*] Quoted annotations should not be included in stubs
|
24 | f: "str" # Y020 Quoted annotations should never be used in stubs
25 | else:
26 | f: "bytes" # Y020 Quoted annotations should never be used in stubs
| ^^^^^^^ PYI020
27 |
28 | # These two shouldn't trigger Y020 -- empty strings can't be "quoted annotations"
|
= help: Remove quotes
Suggested fix
21 21 | elif sys.platform == "win32":
22 22 | f: "str" # Y020 Quoted annotations should never be used in stubs
23 23 | else:
24 |- f: "bytes" # Y020 Quoted annotations should never be used in stubs
24 |+ f: bytes # Y020 Quoted annotations should never be used in stubs
25 25 |
26 26 | # These two shouldn't trigger Y020 -- empty strings can't be "quoted annotations"
27 27 | k = "" # Y052 Need type annotation for "k"

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---

View File

@@ -0,0 +1,32 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---
PYI042.pyi:10:1: PYI042 Type alias `just_literals_pipe_union` should be CamelCase
|
10 | )
11 |
12 | just_literals_pipe_union: TypeAlias = (
| ^^^^^^^^^^^^^^^^^^^^^^^^ PYI042
13 | Literal[True] | Literal["idk"]
14 | ) # PYI042, since not camel case
|
PYI042.pyi:19:1: PYI042 Type alias `snake_case_alias1` should be CamelCase
|
19 | _PrivateAliasS2: TypeAlias = Annotated[str, "also okay"]
20 |
21 | snake_case_alias1: TypeAlias = str | int # PYI042, since not camel case
| ^^^^^^^^^^^^^^^^^ PYI042
22 | _snake_case_alias2: TypeAlias = Literal["whatever"] # PYI042, since not camel case
23 | Snake_case_alias: TypeAlias = int | float # PYI042, since not camel case
|
PYI042.pyi:20:1: PYI042 Type alias `_snake_case_alias2` should be CamelCase
|
20 | snake_case_alias1: TypeAlias = str | int # PYI042, since not camel case
21 | _snake_case_alias2: TypeAlias = Literal["whatever"] # PYI042, since not camel case
| ^^^^^^^^^^^^^^^^^^ PYI042
22 | Snake_case_alias: TypeAlias = int | float # PYI042, since not camel case
|

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---

View File

@@ -0,0 +1,33 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
---
PYI043.pyi:10:1: PYI043 Private type alias `_PrivateAliasT` should not be suffixed with `T` (the `T` suffix implies that an object is a `TypeVar`)
|
10 | )
11 |
12 | _PrivateAliasT: TypeAlias = str | int # PYI043, since this ends in a T
| ^^^^^^^^^^^^^^ PYI043
13 | _PrivateAliasT2: TypeAlias = typing.Any # PYI043, since this ends in a T
14 | _PrivateAliasT3: TypeAlias = Literal[
|
PYI043.pyi:11:1: PYI043 Private type alias `_PrivateAliasT2` should not be suffixed with `T` (the `T` suffix implies that an object is a `TypeVar`)
|
11 | _PrivateAliasT: TypeAlias = str | int # PYI043, since this ends in a T
12 | _PrivateAliasT2: TypeAlias = typing.Any # PYI043, since this ends in a T
| ^^^^^^^^^^^^^^^ PYI043
13 | _PrivateAliasT3: TypeAlias = Literal[
14 | "not", "a", "chance"
|
PYI043.pyi:12:1: PYI043 Private type alias `_PrivateAliasT3` should not be suffixed with `T` (the `T` suffix implies that an object is a `TypeVar`)
|
12 | _PrivateAliasT: TypeAlias = str | int # PYI043, since this ends in a T
13 | _PrivateAliasT2: TypeAlias = typing.Any # PYI043, since this ends in a T
14 | _PrivateAliasT3: TypeAlias = Literal[
| ^^^^^^^^^^^^^^^ PYI043
15 | "not", "a", "chance"
16 | ] # PYI043, since this ends in a T
|

View File

@@ -17,6 +17,8 @@ mod tests {
#[test_case(Rule::CollapsibleIf, Path::new("SIM102.py"); "SIM102")]
#[test_case(Rule::NeedlessBool, Path::new("SIM103.py"); "SIM103")]
#[test_case(Rule::SuppressibleException, Path::new("SIM105.py"); "SIM105")]
+#[test_case(Rule::SuppressibleException, Path::new("SIM105_1.py"); "SIM105_1")]
+#[test_case(Rule::SuppressibleException, Path::new("SIM105_2.py"); "SIM105_2")]
#[test_case(Rule::ReturnInTryExceptFinally, Path::new("SIM107.py"); "SIM107")]
#[test_case(Rule::IfElseBlockInsteadOfIfExp, Path::new("SIM108.py"); "SIM108")]
#[test_case(Rule::CompareWithTuple, Path::new("SIM109.py"); "SIM109")]

View File

@@ -0,0 +1,28 @@
---
source: crates/ruff/src/rules/flake8_simplify/mod.rs
---
SIM105_1.py:4:1: SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass`
|
4 | import math
5 |
6 | / try:
7 | | math.sqrt(-1)
8 | | except ValueError: # SIM105
9 | | pass
| |________^ SIM105
|
= help: Replace with `contextlib.suppress(ValueError)`
Suggested fix
1 1 | """Case: There's a random import, so it should add `contextlib` after it."""
2 2 | import math
3 |+import contextlib
3 4 |
4 |-try:
5 |+with contextlib.suppress(ValueError):
5 6 | math.sqrt(-1)
6 |-except ValueError: # SIM105
7 |- pass
7 |+
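The SIM105 snapshot above rewrites a `try`/`except`/`pass` block into `contextlib.suppress`. A runnable before/after illustration, using `math.sqrt(-1)` as in the fixture:

```python
import contextlib
import math

# Before: flagged by SIM105 (try/except/pass exists only to swallow the error).
try:
    math.sqrt(-1)
except ValueError:
    pass

# After the suggested fix: same behavior, expressed as a context manager.
with contextlib.suppress(ValueError):
    math.sqrt(-1)
```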

View File

@@ -0,0 +1,25 @@
---
source: crates/ruff/src/rules/flake8_simplify/mod.rs
---
SIM105_2.py:9:1: SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass`
|
9 | / try:
10 | | foo()
11 | | except ValueError:
12 | | pass
| |________^ SIM105
|
= help: Replace with `contextlib.suppress(ValueError)`
Suggested fix
6 6 | pass
7 7 |
8 8 |
9 |-try:
9 |+with contextlib.suppress(ValueError):
10 10 | foo()
11 |-except ValueError:
12 |- pass
11 |+

View File

@@ -272,7 +272,7 @@ pub struct Options {
/// in the order specified.
pub forced_separate: Option<Vec<String>>,
#[option(
-default = r#"[]"#,
+default = r#"["future", "standard-library", "third-party", "first-party", "local-folder"]"#,
value_type = r#"list["future" | "standard-library" | "third-party" | "first-party" | "local-folder" | str]"#,
example = r#"
section-order = ["future", "standard-library", "first-party", "local-folder", "third-party"]

View File

@@ -1,11 +1,9 @@
-use itertools::Itertools;
-use ruff_text_size::TextRange;
use crate::checkers::logical_lines::LogicalLinesContext;
use crate::rules::pycodestyle::rules::logical_lines::LogicalLine;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::token_kind::TokenKind;
+use ruff_text_size::TextRange;
#[violation]
pub struct MissingWhitespaceAfterKeyword;
@@ -22,7 +20,10 @@ pub(crate) fn missing_whitespace_after_keyword(
line: &LogicalLine,
context: &mut LogicalLinesContext,
) {
-for (tok0, tok1) in line.tokens().iter().tuple_windows() {
+for window in line.tokens().windows(2) {
+let tok0 = &window[0];
+let tok1 = &window[1];
let tok0_kind = tok0.kind();
let tok1_kind = tok1.kind();
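The change above swaps `itertools::tuple_windows` for the standard library's `slice::windows(2)`, which also drops the `itertools` dependency. The same adjacent-pairs iteration can be sketched in Python (on 3.10+ this is exactly `itertools.pairwise`):

```python
from itertools import tee

def windows2(iterable):
    """Yield adjacent pairs (t0, t1), (t1, t2), ... -- the Python analogue
    of iterating `slice::windows(2)` over a token slice."""
    a, b = tee(iterable)
    next(b, None)  # advance the second iterator by one
    return zip(a, b)
```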

View File

@@ -1,10 +1,9 @@
use crate::checkers::logical_lines::LogicalLinesContext;
-use ruff_diagnostics::Violation;
+use crate::rules::pycodestyle::rules::logical_lines::{LogicalLine, LogicalLineToken};
+use ruff_diagnostics::{DiagnosticKind, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::token_kind::TokenKind;
-use ruff_text_size::{TextRange, TextSize};
-use crate::rules::pycodestyle::rules::logical_lines::LogicalLine;
+use ruff_text_size::TextRange;
// E225
#[violation]
@@ -56,131 +55,179 @@ pub(crate) fn missing_whitespace_around_operator(
line: &LogicalLine,
context: &mut LogicalLinesContext,
) {
#[derive(Copy, Clone, Eq, PartialEq)]
enum NeedsSpace {
Yes,
No,
Unset,
}
-let mut needs_space_main = NeedsSpace::No;
-let mut needs_space_aux = NeedsSpace::Unset;
-let mut prev_end_aux = TextSize::default();
let mut parens = 0u32;
-let mut prev_type: TokenKind = TokenKind::EndOfFile;
-let mut prev_end = TextSize::default();
+let mut prev_token: Option<&LogicalLineToken> = None;
+let mut tokens = line.tokens().iter().peekable();
-for token in line.tokens() {
+while let Some(token) = tokens.next() {
let kind = token.kind();
-if kind.is_skip_comment() {
+if kind.is_trivia() {
continue;
}
match kind {
TokenKind::Lpar | TokenKind::Lambda => parens += 1,
-TokenKind::Rpar => parens -= 1,
+TokenKind::Rpar => parens = parens.saturating_sub(1),
_ => {}
};
let needs_space = needs_space_main == NeedsSpace::Yes
|| needs_space_aux != NeedsSpace::Unset
|| prev_end_aux != TextSize::new(0);
if needs_space {
if token.start() > prev_end {
if needs_space_main != NeedsSpace::Yes && needs_space_aux != NeedsSpace::Yes {
let needs_space = if kind == TokenKind::Equal && parens > 0 {
// Allow keyword args or defaults: foo(bar=None).
NeedsSpace::No
} else if kind == TokenKind::Slash {
// Tolerate the "/" operator in function definition
// For more info see PEP570
// `def f(a, /, b):` or `def f(a, b, /):` or `f = lambda a, /:`
// ^ ^ ^
let slash_in_func = matches!(
tokens.peek().map(|t| t.kind()),
Some(TokenKind::Comma | TokenKind::Rpar | TokenKind::Colon)
);
NeedsSpace::from(!slash_in_func)
} else if kind.is_unary() || kind == TokenKind::DoubleStar {
let is_binary = prev_token.map_or(false, |prev_token| {
let prev_kind = prev_token.kind();
// Check if the operator is used as a binary operator.
// Allow unary operators: -123, -x, +1.
// Allow argument unpacking: foo(*args, **kwargs)
matches!(
prev_kind,
TokenKind::Rpar | TokenKind::Rsqb | TokenKind::Rbrace
) || !(prev_kind.is_operator()
|| prev_kind.is_keyword()
|| prev_kind.is_soft_keyword())
});
if is_binary {
if kind == TokenKind::DoubleStar {
// Enforce consistent spacing, but don't enforce whitespaces.
NeedsSpace::Optional
} else {
NeedsSpace::Yes
}
} else {
NeedsSpace::No
}
} else if is_whitespace_needed(kind) {
NeedsSpace::Yes
} else {
NeedsSpace::No
};
if needs_space != NeedsSpace::No {
let has_leading_trivia = prev_token.map_or(true, |prev| {
prev.end() < token.start() || prev.kind().is_trivia()
});
let has_trailing_trivia = tokens.peek().map_or(true, |next| {
token.end() < next.start() || next.kind().is_trivia()
});
match (has_leading_trivia, has_trailing_trivia) {
// Operator with trailing but no leading space, enforce consistent spacing
(false, true) => {
context.push(
MissingWhitespaceAroundOperator,
TextRange::empty(prev_end_aux),
TextRange::empty(token.start()),
);
}
needs_space_main = NeedsSpace::No;
needs_space_aux = NeedsSpace::Unset;
prev_end_aux = TextSize::new(0);
} else if kind == TokenKind::Greater
&& matches!(prev_type, TokenKind::Less | TokenKind::Minus)
{
// Tolerate the "<>" operator, even if running Python 3
// Deal with Python 3's annotated return value "->"
} else if prev_type == TokenKind::Slash
&& matches!(kind, TokenKind::Comma | TokenKind::Rpar | TokenKind::Colon)
|| (prev_type == TokenKind::Rpar && kind == TokenKind::Colon)
{
// Tolerate the "/" operator in function definition
// For more info see PEP570
} else {
if needs_space_main == NeedsSpace::Yes || needs_space_aux == NeedsSpace::Yes {
context.push(MissingWhitespaceAroundOperator, TextRange::empty(prev_end));
} else if prev_type != TokenKind::DoubleStar {
if prev_type == TokenKind::Percent {
// Operator with leading but no trailing space, enforce consistent spacing.
(true, false) => {
context.push(
MissingWhitespaceAroundOperator,
TextRange::empty(token.end()),
);
}
// Operator with no space, require spaces if it is required by the operator.
(false, false) => {
if needs_space == NeedsSpace::Yes {
context.push(
MissingWhitespaceAroundModuloOperator,
TextRange::empty(prev_end_aux),
);
} else if !prev_type.is_arithmetic() {
context.push(
MissingWhitespaceAroundBitwiseOrShiftOperator,
TextRange::empty(prev_end_aux),
);
} else {
context.push(
MissingWhitespaceAroundArithmeticOperator,
TextRange::empty(prev_end_aux),
diagnostic_kind_for_operator(kind),
TextRange::empty(token.start()),
);
}
}
needs_space_main = NeedsSpace::No;
needs_space_aux = NeedsSpace::Unset;
prev_end_aux = TextSize::new(0);
}
} else if (kind.is_operator() || matches!(kind, TokenKind::Name))
&& prev_end != TextSize::default()
{
if kind == TokenKind::Equal && parens > 0 {
// Allow keyword args or defaults: foo(bar=None).
} else if kind.is_whitespace_needed() {
needs_space_main = NeedsSpace::Yes;
needs_space_aux = NeedsSpace::Unset;
prev_end_aux = TextSize::new(0);
} else if kind.is_unary() {
// Check if the operator is used as a binary operator
// Allow unary operators: -123, -x, +1.
// Allow argument unpacking: foo(*args, **kwargs)
if (matches!(
prev_type,
TokenKind::Rpar | TokenKind::Rsqb | TokenKind::Rbrace
)) || (!prev_type.is_operator()
&& !prev_type.is_keyword()
&& !prev_type.is_soft_keyword())
{
needs_space_main = NeedsSpace::Unset;
needs_space_aux = NeedsSpace::Unset;
prev_end_aux = TextSize::new(0);
(true, true) => {
// Operator has leading and trailing space, all good
}
} else if kind.is_whitespace_optional() {
needs_space_main = NeedsSpace::Unset;
needs_space_aux = NeedsSpace::Unset;
prev_end_aux = TextSize::new(0);
}
if needs_space_main == NeedsSpace::Unset {
// Surrounding space is optional, but ensure that
// trailing space matches opening space
prev_end_aux = prev_end;
needs_space_aux = if token.start() == prev_end {
NeedsSpace::No
} else {
NeedsSpace::Yes
};
} else if needs_space_main == NeedsSpace::Yes && token.start() == prev_end_aux {
// A needed opening space was not found
context.push(MissingWhitespaceAroundOperator, TextRange::empty(prev_end));
needs_space_main = NeedsSpace::No;
needs_space_aux = NeedsSpace::Unset;
prev_end_aux = TextSize::new(0);
}
}
prev_type = kind;
prev_end = token.end();
prev_token = Some(token);
}
}
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
enum NeedsSpace {
/// Needs a leading and trailing space.
Yes,
    /// Doesn't need a leading or trailing space; in other words, we don't care
    /// how much leading or trailing whitespace the token has.
    No,
    /// Needs consistent leading and trailing spacing: if the operator has a
    /// space on either side, it must have one on both.
    Optional,
}
impl From<bool> for NeedsSpace {
fn from(value: bool) -> Self {
if value {
NeedsSpace::Yes
} else {
NeedsSpace::No
}
}
}
fn diagnostic_kind_for_operator(operator: TokenKind) -> DiagnosticKind {
if operator == TokenKind::Percent {
DiagnosticKind::from(MissingWhitespaceAroundModuloOperator)
} else if operator.is_bitwise_or_shift() {
DiagnosticKind::from(MissingWhitespaceAroundBitwiseOrShiftOperator)
} else if operator.is_arithmetic() {
DiagnosticKind::from(MissingWhitespaceAroundArithmeticOperator)
} else {
DiagnosticKind::from(MissingWhitespaceAroundOperator)
}
}
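The dispatch above mirrors pycodestyle's rule split by operator class. A rough Python sketch (the operator sets are an illustrative assumption, not copied from `TokenKind`):

```python
def code_for_operator(op: str) -> str:
    """Map an operator to its pycodestyle-style code:
    % -> E228, bitwise/shift -> E227, arithmetic -> E226, else E225."""
    if op == "%":
        return "E228"
    if op in {"&", "|", "^", "<<", ">>"}:
        return "E227"
    if op in {"+", "-", "*", "/", "//", "@"}:
        return "E226"
    return "E225"
```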
fn is_whitespace_needed(kind: TokenKind) -> bool {
matches!(
kind,
TokenKind::DoubleStarEqual
| TokenKind::StarEqual
| TokenKind::SlashEqual
| TokenKind::DoubleSlashEqual
| TokenKind::PlusEqual
| TokenKind::MinusEqual
| TokenKind::NotEqual
| TokenKind::Less
| TokenKind::Greater
| TokenKind::PercentEqual
| TokenKind::CircumflexEqual
| TokenKind::AmperEqual
| TokenKind::VbarEqual
| TokenKind::EqEqual
| TokenKind::LessEqual
| TokenKind::GreaterEqual
| TokenKind::LeftShiftEqual
| TokenKind::RightShiftEqual
| TokenKind::Equal
| TokenKind::And
| TokenKind::Or
| TokenKind::In
| TokenKind::Is
| TokenKind::Rarrow
| TokenKind::ColonEqual
| TokenKind::Slash
| TokenKind::Percent
) || kind.is_arithmetic()
|| kind.is_bitwise_or_shift()
}
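The decision the `(has_leading_trivia, has_trailing_trivia)` match encodes can be sketched in Python (hypothetical helper, not part of the diff; `needs_space` stands in for the `NeedsSpace` enum):

```python
def spacing_diagnostic(has_leading: bool, has_trailing: bool, needs_space: str):
    """Return the flagged code, or None, for one operator token."""
    if needs_space == "no":
        # Operator never requires surrounding whitespace.
        return None
    if has_leading != has_trailing:
        # One-sided spacing is always flagged, even for "optional" operators.
        return "E225"
    if not has_leading and not has_trailing:
        # No spacing at all: flagged only when spacing is required.
        return "E225" if needs_space == "yes" else None
    # Spaces on both sides: all good.
    return None
```

So `x ** 2` and `x**2` both pass, while `x** 2` is flagged for inconsistent spacing.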


@@ -5,8 +5,8 @@ E101.py:11:1: E101 Indentation contains mixed spaces and tabs
|
11 | def func_mixed_start_with_tab():
12 | # E101
13 | print("mixed starts with tab")
| ^^ E101
13 | print("mixed starts with tab")
| ^^^^^^ E101
14 |
15 | def func_mixed_start_with_space():
|
@@ -15,8 +15,8 @@ E101.py:15:1: E101 Indentation contains mixed spaces and tabs
|
15 | def func_mixed_start_with_space():
16 | # E101
17 | print("mixed starts with space")
| ^^^^^^^^ E101
17 | print("mixed starts with space")
| ^^^^^^^^^^^^^^^^^^^^ E101
18 |
19 | def xyz():
|
@@ -25,8 +25,8 @@ E101.py:19:1: E101 Indentation contains mixed spaces and tabs
|
19 | def xyz():
20 | # E101
21 | print("xyz");
| ^^^ E101
21 | print("xyz");
| ^^^^^^^ E101
|


@@ -25,8 +25,8 @@ E11.py:42:1: E117 Over-indented
|
42 | #: E117 W191
43 | def start():
44 | print()
| E117
44 | print()
| ^^^^^^^^ E117
|


@@ -27,34 +27,34 @@ E20.py:6:15: E201 Whitespace after '('
8 | spam(ham[1], { eggs: 2})
| E201
9 | #: E201:1:6
10 | spam( ham[1], {eggs: 2})
10 | spam( ham[1], {eggs: 2})
|
E20.py:8:6: E201 Whitespace after '('
|
8 | spam(ham[1], { eggs: 2})
9 | #: E201:1:6
10 | spam( ham[1], {eggs: 2})
10 | spam( ham[1], {eggs: 2})
| E201
11 | #: E201:1:10
12 | spam(ham[ 1], {eggs: 2})
12 | spam(ham[ 1], {eggs: 2})
|
E20.py:10:10: E201 Whitespace after '('
|
10 | spam( ham[1], {eggs: 2})
10 | spam( ham[1], {eggs: 2})
11 | #: E201:1:10
12 | spam(ham[ 1], {eggs: 2})
12 | spam(ham[ 1], {eggs: 2})
| E201
13 | #: E201:1:15
14 | spam(ham[1], { eggs: 2})
14 | spam(ham[1], { eggs: 2})
|
E20.py:12:15: E201 Whitespace after '('
|
12 | spam(ham[ 1], {eggs: 2})
12 | spam(ham[ 1], {eggs: 2})
13 | #: E201:1:15
14 | spam(ham[1], { eggs: 2})
14 | spam(ham[1], { eggs: 2})
| E201
15 | #: Okay
16 | spam(ham[1], {eggs: 2})


@@ -27,34 +27,34 @@ E20.py:23:11: E202 Whitespace before ')'
25 | spam(ham[1 ], {eggs: 2})
| E202
26 | #: E202:1:23
27 | spam(ham[1], {eggs: 2} )
27 | spam(ham[1], {eggs: 2} )
|
E20.py:25:23: E202 Whitespace before ')'
|
25 | spam(ham[1 ], {eggs: 2})
26 | #: E202:1:23
27 | spam(ham[1], {eggs: 2} )
27 | spam(ham[1], {eggs: 2} )
| E202
28 | #: E202:1:22
29 | spam(ham[1], {eggs: 2 })
29 | spam(ham[1], {eggs: 2 })
|
E20.py:27:22: E202 Whitespace before ')'
|
27 | spam(ham[1], {eggs: 2} )
27 | spam(ham[1], {eggs: 2} )
28 | #: E202:1:22
29 | spam(ham[1], {eggs: 2 })
29 | spam(ham[1], {eggs: 2 })
| E202
30 | #: E202:1:11
31 | spam(ham[1 ], {eggs: 2})
31 | spam(ham[1 ], {eggs: 2})
|
E20.py:29:11: E202 Whitespace before ')'
|
29 | spam(ham[1], {eggs: 2 })
29 | spam(ham[1], {eggs: 2 })
30 | #: E202:1:11
31 | spam(ham[1 ], {eggs: 2})
31 | spam(ham[1 ], {eggs: 2})
| E202
32 | #: Okay
33 | spam(ham[1], {eggs: 2})


@@ -14,7 +14,7 @@ E20.py:55:10: E203 Whitespace before ',', ';', or ':'
|
55 | x, y = y, x
56 | #: E203:1:10
57 | if x == 4 :
57 | if x == 4 :
| E203
58 | print x, y
59 | x, y = y, x
@@ -34,7 +34,7 @@ E20.py:63:15: E203 Whitespace before ',', ';', or ':'
|
63 | #: E203:2:15 E702:2:16
64 | if x == 4:
65 | print x, y ; x, y = y, x
65 | print x, y ; x, y = y, x
| E203
66 | #: E203:3:13
67 | if x == 4:
@@ -54,7 +54,7 @@ E20.py:71:13: E203 Whitespace before ',', ';', or ':'
|
71 | if x == 4:
72 | print x, y
73 | x, y = y , x
73 | x, y = y , x
| E203
74 | #: Okay
75 | if x == 4:


@@ -5,7 +5,7 @@ E22.py:43:2: E223 Tab before operator
|
43 | #: E223
44 | foobart = 4
45 | a = 3 # aligned with tab
45 | a = 3 # aligned with tab
| E223
46 | #:
|


@@ -4,7 +4,7 @@ source: crates/ruff/src/rules/pycodestyle/mod.rs
E22.py:48:5: E224 Tab after operator
|
48 | #: E224
49 | a += 1
49 | a += 1
| E224
50 | b += 1000
51 | #:


@@ -10,6 +10,16 @@ E22.py:54:13: E225 Missing whitespace around operator
57 | submitted+= 1
|
E22.py:56:10: E225 Missing whitespace around operator
|
56 | submitted +=1
57 | #: E225
58 | submitted+= 1
| E225
59 | #: E225
60 | c =-1
|
E22.py:58:4: E225 Missing whitespace around operator
|
58 | submitted+= 1
@@ -100,12 +110,12 @@ E22.py:74:12: E225 Missing whitespace around operator
78 | i=i+ 1
|
E22.py:76:3: E225 Missing whitespace around operator
E22.py:76:2: E225 Missing whitespace around operator
|
76 | _1kB = _1MB>> 10
77 | #: E225 E225
78 | i=i+ 1
| E225
| E225
79 | #: E225 E225
80 | i=i +1
|
@@ -120,12 +130,12 @@ E22.py:76:4: E225 Missing whitespace around operator
80 | i=i +1
|
E22.py:78:3: E225 Missing whitespace around operator
E22.py:78:2: E225 Missing whitespace around operator
|
78 | i=i+ 1
79 | #: E225 E225
80 | i=i +1
| E225
| E225
81 | #: E225
82 | i = 1and 1
|
@@ -140,6 +150,46 @@ E22.py:78:6: E225 Missing whitespace around operator
82 | i = 1and 1
|
E22.py:80:6: E225 Missing whitespace around operator
|
80 | i=i +1
81 | #: E225
82 | i = 1and 1
| E225
83 | #: E225
84 | i = 1or 0
|
E22.py:82:6: E225 Missing whitespace around operator
|
82 | i = 1and 1
83 | #: E225
84 | i = 1or 0
| E225
85 | #: E225
86 | 1is 1
|
E22.py:84:2: E225 Missing whitespace around operator
|
84 | i = 1or 0
85 | #: E225
86 | 1is 1
| E225
87 | #: E225
88 | 1in []
|
E22.py:86:2: E225 Missing whitespace around operator
|
86 | 1is 1
87 | #: E225
88 | 1in []
| E225
89 | #: E225
90 | i = 1 @2
|
E22.py:88:8: E225 Missing whitespace around operator
|
88 | 1in []
@@ -160,12 +210,12 @@ E22.py:90:6: E225 Missing whitespace around operator
94 | i=i+1
|
E22.py:92:3: E225 Missing whitespace around operator
E22.py:92:2: E225 Missing whitespace around operator
|
92 | i = 1@ 2
93 | #: E225 E226
94 | i=i+1
| E225
| E225
95 | #: E225 E226
96 | i =i+1
|
@@ -180,6 +230,16 @@ E22.py:94:4: E225 Missing whitespace around operator
98 | i= i+1
|
E22.py:96:2: E225 Missing whitespace around operator
|
96 | i =i+1
97 | #: E225 E226
98 | i= i+1
| E225
99 | #: E225 E226
100 | c = (a +b)*(a - b)
|
E22.py:98:9: E225 Missing whitespace around operator
|
98 | i= i+1


@@ -50,6 +50,15 @@ E22.py:100:11: E226 Missing whitespace around arithmetic operator
103 | #:
|
E22.py:104:6: E226 Missing whitespace around arithmetic operator
|
104 | #: E226
105 | z = 2//30
| E226
106 | #: E226 E226
107 | c = (a+b) * (a-b)
|
E22.py:106:7: E226 Missing whitespace around arithmetic operator
|
106 | z = 2//30
@@ -120,4 +129,14 @@ E22.py:116:12: E226 Missing whitespace around arithmetic operator
120 | def halves(n):
|
E22.py:119:14: E226 Missing whitespace around arithmetic operator
|
119 | #: E226
120 | def halves(n):
121 | return (i//2 for i in range(n))
| E226
122 | #: E227
123 | _1kB = _1MB>>10
|


@@ -28,12 +28,12 @@ E27.py:8:3: E271 Multiple spaces after keyword
10 | if 1:
| E271
11 | #: E273
12 | True and False
12 | True and False
|
E27.py:14:6: E271 Multiple spaces after keyword
|
14 | True and False
14 | True and False
15 | #: E271
16 | a and b
| E271


@@ -28,7 +28,7 @@ E27.py:24:5: E272 Multiple spaces before keyword
26 | this and False
| E272
27 | #: E273
28 | a and b
28 | a and b
|


@@ -5,17 +5,17 @@ E27.py:10:9: E273 Tab after keyword
|
10 | if 1:
11 | #: E273
12 | True and False
12 | True and False
| E273
13 | #: E273 E274
14 | True and False
14 | True and False
|
E27.py:12:5: E273 Tab after keyword
|
12 | True and False
12 | True and False
13 | #: E273 E274
14 | True and False
14 | True and False
| E273
15 | #: E271
16 | a and b
@@ -23,10 +23,10 @@ E27.py:12:5: E273 Tab after keyword
E27.py:12:10: E273 Tab after keyword
|
12 | True and False
12 | True and False
13 | #: E273 E274
14 | True and False
| E273
14 | True and False
| E273
15 | #: E271
16 | a and b
|
@@ -35,18 +35,18 @@ E27.py:26:6: E273 Tab after keyword
|
26 | this and False
27 | #: E273
28 | a and b
28 | a and b
| E273
29 | #: E274
30 | a and b
30 | a and b
|
E27.py:30:10: E273 Tab after keyword
|
30 | a and b
30 | a and b
31 | #: E273 E274
32 | this and False
| E273
32 | this and False
| E273
33 | #: Okay
34 | from u import (a, b)
|


@@ -3,20 +3,20 @@ source: crates/ruff/src/rules/pycodestyle/mod.rs
---
E27.py:28:3: E274 Tab before keyword
|
28 | a and b
28 | a and b
29 | #: E274
30 | a and b
| E274
30 | a and b
| E274
31 | #: E273 E274
32 | this and False
32 | this and False
|
E27.py:30:6: E274 Tab before keyword
|
30 | a and b
30 | a and b
31 | #: E273 E274
32 | this and False
| E274
32 | this and False
| E274
33 | #: Okay
34 | from u import (a, b)
|


@@ -5,8 +5,8 @@ W19.py:3:1: W191 Indentation contains tabs
|
3 | #: W191
4 | if False:
5 | print # indented with 1 tab
| W191
5 | print # indented with 1 tab
| ^^^^ W191
6 | #:
|
@@ -14,8 +14,8 @@ W19.py:9:1: W191 Indentation contains tabs
|
9 | #: W191
10 | y = x == 2 \
11 | or x == 3
| W191
11 | or x == 3
| ^^^^ W191
12 | #: E101 W191 W504
13 | if (
|
@@ -24,8 +24,8 @@ W19.py:16:1: W191 Indentation contains tabs
|
16 | ) or
17 | y == 4):
18 | pass
| W191
18 | pass
| ^^^^ W191
19 | #: E101 W191
20 | if x == 2 \
|
@@ -34,8 +34,8 @@ W19.py:21:1: W191 Indentation contains tabs
|
21 | or y > 1 \
22 | or x == 3:
23 | pass
| W191
23 | pass
| ^^^^ W191
24 | #: E101 W191
25 | if x == 2 \
|
@@ -44,8 +44,8 @@ W19.py:26:1: W191 Indentation contains tabs
|
26 | or y > 1 \
27 | or x == 3:
28 | pass
| W191
28 | pass
| ^^^^ W191
29 | #:
|
@@ -53,8 +53,8 @@ W19.py:32:1: W191 Indentation contains tabs
|
32 | if (foo == bar and
33 | baz == bop):
34 | pass
| W191
34 | pass
| ^^^^ W191
35 | #: E101 W191 W504
36 | if (
|
@@ -63,8 +63,8 @@ W19.py:38:1: W191 Indentation contains tabs
|
38 | baz == bop
39 | ):
40 | pass
| W191
40 | pass
| ^^^^ W191
41 | #:
|
@@ -72,18 +72,18 @@ W19.py:44:1: W191 Indentation contains tabs
|
44 | if start[1] > end_col and not (
45 | over_indent == 4 and indent_next):
46 | return (0, "E121 continuation line over-"
| W191
47 | "indented for visual indent")
46 | return (0, "E121 continuation line over-"
| ^^^^ W191
47 | "indented for visual indent")
48 | #:
|
W19.py:45:1: W191 Indentation contains tabs
|
45 | over_indent == 4 and indent_next):
46 | return (0, "E121 continuation line over-"
47 | "indented for visual indent")
| ^^^^^^^^ W191
46 | return (0, "E121 continuation line over-"
47 | "indented for visual indent")
| ^^^^^^^^^^^^ W191
48 | #:
|
@@ -91,8 +91,8 @@ W19.py:54:1: W191 Indentation contains tabs
|
54 | var_one, var_two, var_three,
55 | var_four):
56 | print(var_one)
| W191
56 | print(var_one)
| ^^^^ W191
57 | #: E101 W191 W504
58 | if ((row < 0 or self.moduleCount <= row or
|
@@ -101,8 +101,8 @@ W19.py:58:1: W191 Indentation contains tabs
|
58 | if ((row < 0 or self.moduleCount <= row or
59 | col < 0 or self.moduleCount <= col)):
60 | raise Exception("%s,%s - %s" % (row, col, self.moduleCount))
| W191
60 | raise Exception("%s,%s - %s" % (row, col, self.moduleCount))
| ^^^^ W191
61 | #: E101 E101 E101 E101 W191 W191 W191 W191 W191 W191
62 | if bar:
|
@@ -111,58 +111,58 @@ W19.py:61:1: W191 Indentation contains tabs
|
61 | #: E101 E101 E101 E101 W191 W191 W191 W191 W191 W191
62 | if bar:
63 | return (
| W191
64 | start, 'E121 lines starting with a '
65 | 'closing bracket should be indented '
63 | return (
| ^^^^ W191
64 | start, 'E121 lines starting with a '
65 | 'closing bracket should be indented '
|
W19.py:62:1: W191 Indentation contains tabs
|
62 | if bar:
63 | return (
64 | start, 'E121 lines starting with a '
| ^^^^ W191
65 | 'closing bracket should be indented '
66 | "to match that of the opening "
63 | return (
64 | start, 'E121 lines starting with a '
| ^^^^^^^^ W191
65 | 'closing bracket should be indented '
66 | "to match that of the opening "
|
W19.py:63:1: W191 Indentation contains tabs
|
63 | return (
64 | start, 'E121 lines starting with a '
65 | 'closing bracket should be indented '
| ^^^^ W191
66 | "to match that of the opening "
67 | "bracket's line"
63 | return (
64 | start, 'E121 lines starting with a '
65 | 'closing bracket should be indented '
| ^^^^^^^^ W191
66 | "to match that of the opening "
67 | "bracket's line"
|
W19.py:64:1: W191 Indentation contains tabs
|
64 | start, 'E121 lines starting with a '
65 | 'closing bracket should be indented '
66 | "to match that of the opening "
| ^^^^ W191
67 | "bracket's line"
68 | )
64 | start, 'E121 lines starting with a '
65 | 'closing bracket should be indented '
66 | "to match that of the opening "
| ^^^^^^^^ W191
67 | "bracket's line"
68 | )
|
W19.py:65:1: W191 Indentation contains tabs
|
65 | 'closing bracket should be indented '
66 | "to match that of the opening "
67 | "bracket's line"
| ^^^^ W191
68 | )
65 | 'closing bracket should be indented '
66 | "to match that of the opening "
67 | "bracket's line"
| ^^^^^^^^ W191
68 | )
69 | #
|
W19.py:66:1: W191 Indentation contains tabs
|
66 | "to match that of the opening "
67 | "bracket's line"
68 | )
| W191
66 | "to match that of the opening "
67 | "bracket's line"
68 | )
| ^^^^ W191
69 | #
70 | #: E101 W191 W504
|
@@ -171,8 +171,8 @@ W19.py:73:1: W191 Indentation contains tabs
|
73 | foo.bar("bop")
74 | )):
75 | print "yes"
| W191
75 | print "yes"
| ^^^^ W191
76 | #: E101 W191 W504
77 | # also ok, but starting to look like LISP
|
@@ -181,8 +181,8 @@ W19.py:78:1: W191 Indentation contains tabs
|
78 | if ((foo.bar("baz") and
79 | foo.bar("bop"))):
80 | print "yes"
| W191
80 | print "yes"
| ^^^^ W191
81 | #: E101 W191 W504
82 | if (a == 2 or
|
@@ -191,8 +191,8 @@ W19.py:83:1: W191 Indentation contains tabs
|
83 | b == "abc def ghi"
84 | "jkl mno"):
85 | return True
| W191
85 | return True
| ^^^^ W191
86 | #: E101 W191 W504
87 | if (a == 2 or
|
@@ -201,8 +201,8 @@ W19.py:88:1: W191 Indentation contains tabs
|
88 | b == """abc def ghi
89 | jkl mno"""):
90 | return True
| W191
90 | return True
| ^^^^ W191
91 | #: W191:2:1 W191:3:1 E101:3:2
92 | if length > options.max_line_length:
|
@@ -211,35 +211,35 @@ W19.py:91:1: W191 Indentation contains tabs
|
91 | #: W191:2:1 W191:3:1 E101:3:2
92 | if length > options.max_line_length:
93 | return options.max_line_length, \
| W191
94 | "E501 line too long (%d characters)" % length
93 | return options.max_line_length, \
| ^^^^ W191
94 | "E501 line too long (%d characters)" % length
|
W19.py:92:1: W191 Indentation contains tabs
|
92 | if length > options.max_line_length:
93 | return options.max_line_length, \
94 | "E501 line too long (%d characters)" % length
| ^^^^ W191
93 | return options.max_line_length, \
94 | "E501 line too long (%d characters)" % length
| ^^^^^^^^ W191
|
W19.py:98:1: W191 Indentation contains tabs
|
98 | #: E101 W191 W191 W504
99 | if os.path.exists(os.path.join(path, PEP8_BIN)):
100 | cmd = ([os.path.join(path, PEP8_BIN)] +
| W191
101 | self._pep8_options(targetfile))
100 | cmd = ([os.path.join(path, PEP8_BIN)] +
| ^^^^ W191
101 | self._pep8_options(targetfile))
102 | #: W191 - okay
|
W19.py:99:1: W191 Indentation contains tabs
|
99 | if os.path.exists(os.path.join(path, PEP8_BIN)):
100 | cmd = ([os.path.join(path, PEP8_BIN)] +
101 | self._pep8_options(targetfile))
| ^^^^^^^ W191
100 | cmd = ([os.path.join(path, PEP8_BIN)] +
101 | self._pep8_options(targetfile))
| ^^^^^^^^^^^ W191
102 | #: W191 - okay
103 | '''
|
@@ -248,36 +248,36 @@ W19.py:125:1: W191 Indentation contains tabs
|
125 | if foo is None and bar is "bop" and \
126 | blah == 'yeah':
127 | blah = 'yeahnah'
| W191
127 | blah = 'yeahnah'
| ^^^^ W191
|
W19.py:131:1: W191 Indentation contains tabs
|
131 | #: W191 W191 W191
132 | if True:
133 | foo(
| W191
134 | 1,
135 | 2)
133 | foo(
| ^^^^ W191
134 | 1,
135 | 2)
|
W19.py:132:1: W191 Indentation contains tabs
|
132 | if True:
133 | foo(
134 | 1,
| W191
135 | 2)
133 | foo(
134 | 1,
| ^^^^^^^^ W191
135 | 2)
136 | #: W191 W191 W191 W191 W191
|
W19.py:133:1: W191 Indentation contains tabs
|
133 | foo(
134 | 1,
135 | 2)
| W191
133 | foo(
134 | 1,
135 | 2)
| ^^^^^^^^ W191
136 | #: W191 W191 W191 W191 W191
137 | def test_keys(self):
|
@@ -286,48 +286,48 @@ W19.py:136:1: W191 Indentation contains tabs
|
136 | #: W191 W191 W191 W191 W191
137 | def test_keys(self):
138 | """areas.json - All regions are accounted for."""
| W191
139 | expected = set([
140 | u'Norrbotten',
138 | """areas.json - All regions are accounted for."""
| ^^^^ W191
139 | expected = set([
140 | u'Norrbotten',
|
W19.py:137:1: W191 Indentation contains tabs
|
137 | def test_keys(self):
138 | """areas.json - All regions are accounted for."""
139 | expected = set([
| W191
140 | u'Norrbotten',
141 | u'V\xe4sterbotten',
138 | """areas.json - All regions are accounted for."""
139 | expected = set([
| ^^^^ W191
140 | u'Norrbotten',
141 | u'V\xe4sterbotten',
|
W19.py:138:1: W191 Indentation contains tabs
|
138 | """areas.json - All regions are accounted for."""
139 | expected = set([
140 | u'Norrbotten',
| W191
141 | u'V\xe4sterbotten',
142 | ])
138 | """areas.json - All regions are accounted for."""
139 | expected = set([
140 | u'Norrbotten',
| ^^^^^^^^ W191
141 | u'V\xe4sterbotten',
142 | ])
|
W19.py:139:1: W191 Indentation contains tabs
|
139 | expected = set([
140 | u'Norrbotten',
141 | u'V\xe4sterbotten',
| W191
142 | ])
139 | expected = set([
140 | u'Norrbotten',
141 | u'V\xe4sterbotten',
| ^^^^^^^^ W191
142 | ])
143 | #: W191
|
W19.py:140:1: W191 Indentation contains tabs
|
140 | u'Norrbotten',
141 | u'V\xe4sterbotten',
142 | ])
| W191
140 | u'Norrbotten',
141 | u'V\xe4sterbotten',
142 | ])
| ^^^^ W191
143 | #: W191
144 | x = [
|
@@ -336,8 +336,8 @@ W19.py:143:1: W191 Indentation contains tabs
|
143 | #: W191
144 | x = [
145 | 'abc'
| W191
145 | 'abc'
| ^^^^ W191
146 | ]
147 | #: W191 - okay
|


@@ -68,6 +68,7 @@ mod tests {
#[test_case(Rule::UndocumentedPublicPackage, Path::new("D104/__init__.py"); "D104_1")]
#[test_case(Rule::SectionNameEndsInColon, Path::new("D.py"); "D416")]
#[test_case(Rule::SectionNotOverIndented, Path::new("sections.py"); "D214")]
#[test_case(Rule::SectionNotOverIndented, Path::new("D214_module.py"); "D214_module")]
#[test_case(Rule::SectionUnderlineAfterName, Path::new("sections.py"); "D408")]
#[test_case(Rule::SectionUnderlineMatchesSectionLength, Path::new("sections.py"); "D409")]
#[test_case(Rule::SectionUnderlineNotOverIndented, Path::new("sections.py"); "D215")]


@@ -619,10 +619,14 @@ fn common_section(
);
if checker.patch(diagnostic.kind.rule()) {
// Replace the existing indentation with whitespace of the appropriate length.
diagnostic.set_fix(Edit::range_replacement(
whitespace::clean(docstring.indentation),
TextRange::at(context.range().start(), leading_space.text_len()),
));
let content = whitespace::clean(docstring.indentation);
let fix_range = TextRange::at(context.range().start(), leading_space.text_len());
diagnostic.set_fix(if content.is_empty() {
Edit::range_deletion(fix_range)
} else {
Edit::range_replacement(content, fix_range)
});
};
checker.diagnostics.push(diagnostic);
}
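The change above avoids constructing a replacement edit with empty content. In effect (a sketch under assumed edit semantics, not Ruff's actual `Edit` API):

```python
def make_edit(content: str, start: int, end: int) -> dict:
    """Build an edit dict; an empty replacement is modeled as a deletion,
    which is what `Edit::range_deletion` expresses directly."""
    if not content:
        return {"kind": "delete", "range": (start, end)}
    return {"kind": "replace", "range": (start, end), "content": content}
```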


@@ -0,0 +1,59 @@
---
source: crates/ruff/src/rules/pydocstyle/mod.rs
---
D214_module.py:1:1: D214 [*] Section is over-indented ("Returns")
|
1 | / """A module docstring with D214 violations
2 | |
3 | | Returns
4 | | -----
5 | | valid returns
6 | |
7 | | Args
8 | | -----
9 | | valid args
10 | | """
| |___^ D214
11 |
12 | import os
|
= help: Remove over-indentation from "Returns"
Suggested fix
1 1 | """A module docstring with D214 violations
2 2 |
3 |- Returns
3 |+Returns
4 4 | -----
5 5 | valid returns
6 6 |
D214_module.py:1:1: D214 [*] Section is over-indented ("Args")
|
1 | / """A module docstring with D214 violations
2 | |
3 | | Returns
4 | | -----
5 | | valid returns
6 | |
7 | | Args
8 | | -----
9 | | valid args
10 | | """
| |___^ D214
11 |
12 | import os
|
= help: Remove over-indentation from "Args"
Suggested fix
4 4 | -----
5 5 | valid returns
6 6 |
7 |- Args
7 |+Args
8 8 | -----
9 9 | valid args
10 10 | """


@@ -327,6 +327,7 @@ pub fn unused_variable(checker: &mut Checker, scope: ScopeId) {
&& name != &"__tracebackhide__"
&& name != &"__traceback_info__"
&& name != &"__traceback_supplement__"
&& name != &"__debuggerskip__"
{
let mut diagnostic = Diagnostic::new(
UnusedVariable {


@@ -40,6 +40,7 @@ mod tests {
#[test_case(Rule::ContinueInFinally, Path::new("continue_in_finally.py"); "PLE0116")]
#[test_case(Rule::GlobalStatement, Path::new("global_statement.py"); "PLW0603")]
#[test_case(Rule::GlobalVariableNotAssigned, Path::new("global_variable_not_assigned.py"); "PLW0602")]
#[test_case(Rule::ImportSelf, Path::new("import_self/module.py"); "PLW0406")]
#[test_case(Rule::InvalidAllFormat, Path::new("invalid_all_format.py"); "PLE0605")]
#[test_case(Rule::InvalidAllObject, Path::new("invalid_all_object.py"); "PLE0604")]
#[test_case(Rule::InvalidCharacterBackspace, Path::new("invalid_characters.py"); "PLE2510")]


@@ -51,6 +51,30 @@ impl std::fmt::Display for EmptyStringCmpop {
}
}
/// ## What it does
/// Checks for comparisons to empty strings.
///
/// ## Why is this bad?
/// An empty string is falsy, so a comparison to `""` is unnecessary. Note,
/// however, that if the value can be something else that Python considers
/// falsy, such as `None`, `0`, or an empty container, the two checks are not
/// equivalent.
///
/// ## Example
/// ```python
/// def foo(x):
/// if x == "":
/// print("x is empty")
/// ```
///
/// Use instead:
/// ```python
/// def foo(x):
/// if not x:
/// print("x is empty")
/// ```
///
/// ## References
/// - [Python documentation](https://docs.python.org/3/library/stdtypes.html#truth-value-testing)
#[violation]
pub struct CompareToEmptyString {
pub existing: String,


@@ -0,0 +1,70 @@
use rustpython_parser::ast::Alias;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::resolve_imported_module_path;
#[violation]
pub struct ImportSelf {
pub name: String,
}
impl Violation for ImportSelf {
#[derive_message_formats]
fn message(&self) -> String {
let Self { name } = self;
format!("Module `{name}` imports itself")
}
}
/// PLW0406
pub fn import_self(alias: &Alias, module_path: Option<&[String]>) -> Option<Diagnostic> {
let Some(module_path) = module_path else {
return None;
};
if alias.node.name.split('.').eq(module_path) {
return Some(Diagnostic::new(
ImportSelf {
name: alias.node.name.clone(),
},
alias.range(),
));
}
None
}
/// PLW0406
pub fn import_from_self(
level: Option<usize>,
module: Option<&str>,
names: &[Alias],
module_path: Option<&[String]>,
) -> Option<Diagnostic> {
let Some(module_path) = module_path else {
return None;
};
let Some(imported_module_path) = resolve_imported_module_path(level, module, Some(module_path)) else {
return None;
};
if imported_module_path
.split('.')
.eq(&module_path[..module_path.len() - 1])
{
if let Some(alias) = names
.iter()
.find(|alias| alias.node.name == module_path[module_path.len() - 1])
{
return Some(Diagnostic::new(
ImportSelf {
name: format!("{}.{}", imported_module_path, alias.node.name),
},
alias.range(),
));
}
}
None
}
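The two checks above compare the imported dotted path against the current module's path. A minimal Python sketch (hypothetical helper names; the real implementation works on AST aliases and resolved relative imports):

```python
def is_self_import(imported: str, module_path: list[str]) -> bool:
    # `import a.b` inside module a/b is a self-import.
    return imported.split(".") == module_path


def is_self_import_from(resolved_module: str, name: str, module_path: list[str]) -> bool:
    # `from a import b` (or `from . import b`) inside a/b: the resolved
    # module must be the parent package, and the imported name the module itself.
    return (
        resolved_module.split(".") == module_path[:-1]
        and name == module_path[-1]
    )
```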


@@ -10,6 +10,7 @@ pub use comparison_of_constant::{comparison_of_constant, ComparisonOfConstant};
pub use continue_in_finally::{continue_in_finally, ContinueInFinally};
pub use global_statement::{global_statement, GlobalStatement};
pub use global_variable_not_assigned::GlobalVariableNotAssigned;
pub use import_self::{import_from_self, import_self, ImportSelf};
pub use invalid_all_format::{invalid_all_format, InvalidAllFormat};
pub use invalid_all_object::{invalid_all_object, InvalidAllObject};
pub use invalid_envvar_default::{invalid_envvar_default, InvalidEnvvarDefault};
@@ -57,6 +58,7 @@ mod comparison_of_constant;
mod continue_in_finally;
mod global_statement;
mod global_variable_not_assigned;
mod import_self;
mod invalid_all_format;
mod invalid_all_object;
mod invalid_envvar_default;


@@ -5,6 +5,25 @@ use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for unnecessary direct calls to lambda expressions.
///
/// ## Why is this bad?
/// Calling a lambda expression directly is unnecessary. The expression can be
/// executed inline instead to improve readability.
///
/// ## Example
/// ```python
/// area = (lambda r: 3.14 * r ** 2)(radius)
/// ```
///
/// Use instead:
/// ```python
/// area = 3.14 * radius ** 2
/// ```
///
/// ## References
/// - [Python documentation](https://docs.python.org/3/reference/expressions.html#lambda)
#[violation]
pub struct UnnecessaryDirectLambdaCall;


@@ -6,6 +6,21 @@ use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
/// ## What it does
/// Checks for import aliases that do not rename the original package.
///
/// ## Why is this bad?
/// The import alias is redundant and should be removed to avoid confusion.
///
/// ## Example
/// ```python
/// import numpy as numpy
/// ```
///
/// Use instead:
/// ```python
/// import numpy as np
/// ```
#[violation]
pub struct UselessImportAlias;


@@ -0,0 +1,28 @@
---
source: crates/ruff/src/rules/pylint/mod.rs
---
module.py:1:8: PLW0406 Module `import_self.module` imports itself
|
1 | import import_self.module
| ^^^^^^^^^^^^^^^^^^ PLW0406
2 | from import_self import module
3 | from . import module
|
module.py:2:25: PLW0406 Module `import_self.module` imports itself
|
2 | import import_self.module
3 | from import_self import module
| ^^^^^^ PLW0406
4 | from . import module
|
module.py:3:15: PLW0406 Module `import_self.module` imports itself
|
3 | import import_self.module
4 | from import_self import module
5 | from . import module
| ^^^^^^ PLW0406
|


@@ -100,12 +100,13 @@ pub fn use_pep604_annotation(checker: &mut Checker, expr: &Expr, value: &Expr, s
return;
};
// Avoid fixing forward references.
let fixable = checker
.ctx
.in_deferred_string_type_definition
.as_ref()
.map_or(true, AnnotationKind::is_simple);
// Avoid fixing forward references, or types not in an annotation.
let fixable = checker.ctx.in_type_definition
&& checker
.ctx
.in_deferred_string_type_definition
.as_ref()
.map_or(true, AnnotationKind::is_simple);
match typing_member {
TypingMember::Optional => {


@@ -200,6 +200,26 @@ UP007.py:47:8: UP007 [*] Use `X | Y` for type annotations
49 49 |
50 50 | x = Union[str, int]
UP007.py:48:9: UP007 Use `X | Y` for type annotations
|
48 | def f() -> None:
49 | x: Optional[str]
50 | x = Optional[str]
| ^^^^^^^^^^^^^ UP007
51 |
52 | x = Union[str, int]
|
UP007.py:50:9: UP007 Use `X | Y` for type annotations
|
48 | x = Optional[str]
49 |
50 | x = Union[str, int]
| ^^^^^^^^^^^^^^^ UP007
51 | x = Union["str", "int"]
52 | x: Union[str, int]
|
UP007.py:52:8: UP007 [*] Use `X | Y` for type annotations
|
52 | x = Union[str, int]

View File

@@ -5,6 +5,7 @@ use std::path::Path;
use anyhow::Result;
use itertools::Itertools;
use ruff_diagnostics::Diagnostic;
use rustc_hash::FxHashMap;
use rustpython_parser::lexer::LexResult;
@@ -15,6 +16,7 @@ use crate::directives;
use crate::linter::{check_path, LinterResult};
use crate::message::{Emitter, EmitterContext, Message, TextEmitter};
use crate::packaging::detect_package_root;
use crate::rules::pycodestyle::rules::syntax_error;
use crate::settings::{flags, Settings};
pub fn test_resource_path(path: impl AsRef<Path>) -> std::path::PathBuf {
@@ -24,6 +26,8 @@ pub fn test_resource_path(path: impl AsRef<Path>) -> std::path::PathBuf {
/// A convenient wrapper around [`check_path`] that additionally
/// asserts that autofixes converge after 10 iterations.
pub fn test_path(path: impl AsRef<Path>, settings: &Settings) -> Result<Vec<Message>> {
static MAX_ITERATIONS: usize = 10;
let path = test_resource_path("fixtures").join(path);
let contents = std::fs::read_to_string(&path)?;
let tokens: Vec<LexResult> = ruff_rustpython::tokenize(&contents);
@@ -38,7 +42,7 @@ pub fn test_path(path: impl AsRef<Path>, settings: &Settings) -> Result<Vec<Mess
);
let LinterResult {
data: (diagnostics, _imports),
..
error,
} = check_path(
&path,
path.parent()
@@ -53,19 +57,32 @@ pub fn test_path(path: impl AsRef<Path>, settings: &Settings) -> Result<Vec<Mess
flags::Autofix::Enabled,
);
let source_has_errors = error.is_some();
// Detect autofixes that don't converge after multiple iterations.
let mut iterations = 0;
if diagnostics
.iter()
.any(|diagnostic| !diagnostic.fix.is_empty())
{
let max_iterations = 10;
let mut diagnostics = diagnostics.clone();
let mut contents = contents.clone();
let mut iterations = 0;
loop {
let tokens: Vec<LexResult> = ruff_rustpython::tokenize(&contents);
let locator = Locator::new(&contents);
while let Some((fixed_contents, _)) = fix_file(&diagnostics, &Locator::new(&contents)) {
if iterations < MAX_ITERATIONS {
iterations += 1;
} else {
let output = print_diagnostics(diagnostics, &path, &contents);
panic!(
"Failed to converge after {MAX_ITERATIONS} iterations. This likely \
indicates a bug in the implementation of the fix. Last diagnostics:\n{output}"
);
}
let tokens: Vec<LexResult> = ruff_rustpython::tokenize(&fixed_contents);
let locator = Locator::new(&fixed_contents);
let stylist = Stylist::from_tokens(&tokens, &locator);
let indexer = Indexer::from_tokens(&tokens, &locator);
let directives = directives::extract_directives(
@@ -74,9 +91,10 @@ pub fn test_path(path: impl AsRef<Path>, settings: &Settings) -> Result<Vec<Mess
&locator,
&indexer,
);
let LinterResult {
data: (diagnostics, _imports),
..
data: (fixed_diagnostics, _),
error: fixed_error,
} = check_path(
&path,
None,
@@ -89,47 +107,30 @@ pub fn test_path(path: impl AsRef<Path>, settings: &Settings) -> Result<Vec<Mess
flags::Noqa::Enabled,
flags::Autofix::Enabled,
);
if let Some((fixed_contents, _)) = fix_file(&diagnostics, &locator) {
if iterations < max_iterations {
iterations += 1;
contents = fixed_contents.to_string();
} else {
let source_code = SourceFileBuilder::new(
path.file_name().unwrap().to_string_lossy().as_ref(),
contents,
)
.finish();
let messages: Vec<_> = diagnostics
.into_iter()
.map(|diagnostic| {
// Not strictly necessary but adds some coverage for this code path
let noqa = directives.noqa_line_for.resolve(diagnostic.start());
if let Some(fixed_error) = fixed_error {
if !source_has_errors {
// Previous fix introduced a syntax error, abort
let fixes = print_diagnostics(diagnostics, &path, &contents);
Message::from_diagnostic(diagnostic, source_code.clone(), noqa)
})
.collect();
let mut syntax_diagnostics = Vec::new();
syntax_error(&mut syntax_diagnostics, &fixed_error, &locator);
let syntax_errors =
print_diagnostics(syntax_diagnostics, &path, &fixed_contents);
let mut output: Vec<u8> = Vec::new();
TextEmitter::default()
.with_show_fix(true)
.with_show_source(true)
.emit(
&mut output,
&messages,
&EmitterContext::new(&FxHashMap::default()),
)
.unwrap();
let output_str = String::from_utf8(output).unwrap();
panic!(
"Failed to converge after {max_iterations} iterations. This likely \
indicates a bug in the implementation of the fix. Last diagnostics:\n{output_str}"
r#"Fixed source has a syntax error where the source document does not. This is a bug in one of the generated fixes:
{syntax_errors}
Last generated fixes:
{fixes}
Source with applied fixes:
{fixed_contents}"#
);
}
} else {
break;
}
diagnostics = fixed_diagnostics;
contents = fixed_contents.to_string();
}
}
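The loop added above is a bounded fixpoint iteration: apply fixes, re-lint, and fail if the source has not settled within `MAX_ITERATIONS`. A standalone sketch of that shape (illustrative only — `fix_until_stable` and its closure are not part of the ruff test harness):

```rust
/// Repeatedly apply `fix` until the contents stop changing, mirroring the
/// convergence check in `test_path`. Returns an error if the contents have
/// not settled within `max_iterations`, which usually indicates a buggy fix.
fn fix_until_stable<F>(mut contents: String, fix: F, max_iterations: usize) -> Result<String, String>
where
    F: Fn(&str) -> Option<String>,
{
    for _ in 0..max_iterations {
        match fix(&contents) {
            // A fix fired and changed the source: loop again on the new text.
            Some(fixed) if fixed != contents => contents = fixed,
            // No fix fired (or it was a no-op): we have converged.
            _ => return Ok(contents),
        }
    }
    Err(format!("failed to converge after {max_iterations} iterations"))
}
```

A diverging fix (one that keeps producing new output every pass) exhausts the budget and surfaces as an error instead of hanging the test.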
@@ -151,6 +152,25 @@ pub fn test_path(path: impl AsRef<Path>, settings: &Settings) -> Result<Vec<Mess
.collect())
}
fn print_diagnostics(diagnostics: Vec<Diagnostic>, file_path: &Path, source: &str) -> String {
let source_file = SourceFileBuilder::new(
file_path.file_name().unwrap().to_string_lossy().as_ref(),
source,
)
.finish();
let messages: Vec<_> = diagnostics
.into_iter()
.map(|diagnostic| {
let noqa_start = diagnostic.start();
Message::from_diagnostic(diagnostic, source_file.clone(), noqa_start)
})
.collect();
print_messages(&messages)
}
pub(crate) fn print_messages(messages: &[Message]) -> String {
let mut output = Vec::new();

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.264"
version = "0.0.265"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = { workspace = true }
rust-version = { workspace = true }

View File

@@ -7,7 +7,7 @@ use log::{debug, error};
use rayon::prelude::*;
use ruff::linter::add_noqa_to_path;
use ruff::resolver::PyprojectDiscovery;
use ruff::resolver::PyprojectConfig;
use ruff::{packaging, resolver, warn_user_once};
use crate::args::Overrides;
@@ -15,12 +15,12 @@ use crate::args::Overrides;
/// Add `noqa` directives to a collection of files.
pub fn add_noqa(
files: &[PathBuf],
pyproject_strategy: &PyprojectDiscovery,
pyproject_config: &PyprojectConfig,
overrides: &Overrides,
) -> Result<usize> {
// Collect all the files to check.
let start = Instant::now();
let (paths, resolver) = resolver::python_files_in_path(files, pyproject_strategy, overrides)?;
let (paths, resolver) = resolver::python_files_in_path(files, pyproject_config, overrides)?;
let duration = start.elapsed();
debug!("Identified files to lint in: {:?}", duration);
@@ -37,7 +37,7 @@ pub fn add_noqa(
.map(ignore::DirEntry::path)
.collect::<Vec<_>>(),
&resolver,
pyproject_strategy,
pyproject_config,
);
let start = Instant::now();
@@ -50,7 +50,7 @@ pub fn add_noqa(
.parent()
.and_then(|parent| package_roots.get(parent))
.and_then(|package| *package);
let settings = resolver.resolve(path, pyproject_strategy);
let settings = resolver.resolve(path, pyproject_config);
match add_noqa_to_path(path, package, settings) {
Ok(count) => Some(count),
Err(e) => {

View File

@@ -12,7 +12,7 @@ use ruff_text_size::{TextRange, TextSize};
use ruff::message::Message;
use ruff::registry::Rule;
use ruff::resolver::PyprojectDiscovery;
use ruff::resolver::{PyprojectConfig, PyprojectDiscoveryStrategy};
use ruff::settings::{flags, AllSettings};
use ruff::{fs, packaging, resolver, warn_user_once, IOError};
use ruff_diagnostics::Diagnostic;
@@ -27,7 +27,7 @@ use crate::panic::catch_unwind;
/// Run the linter over a collection of files.
pub fn run(
files: &[PathBuf],
pyproject_strategy: &PyprojectDiscovery,
pyproject_config: &PyprojectConfig,
overrides: &Overrides,
cache: flags::Cache,
noqa: flags::Noqa,
@@ -35,7 +35,7 @@ pub fn run(
) -> Result<Diagnostics> {
// Collect all the Python files to check.
let start = Instant::now();
let (paths, resolver) = resolver::python_files_in_path(files, pyproject_strategy, overrides)?;
let (paths, resolver) = resolver::python_files_in_path(files, pyproject_config, overrides)?;
let duration = start.elapsed();
debug!("Identified files to lint in: {:?}", duration);
@@ -52,12 +52,12 @@ pub fn run(
}
}
match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => {
init_cache(&settings.cli.cache_dir);
match pyproject_config.strategy {
PyprojectDiscoveryStrategy::Fixed => {
init_cache(&pyproject_config.settings.cli.cache_dir);
}
PyprojectDiscovery::Hierarchical(default) => {
for settings in std::iter::once(default).chain(resolver.iter()) {
PyprojectDiscoveryStrategy::Hierarchical => {
for settings in std::iter::once(&pyproject_config.settings).chain(resolver.iter()) {
init_cache(&settings.cli.cache_dir);
}
}
@@ -72,7 +72,7 @@ pub fn run(
.map(ignore::DirEntry::path)
.collect::<Vec<_>>(),
&resolver,
pyproject_strategy,
pyproject_config,
);
let start = Instant::now();
@@ -86,7 +86,7 @@ pub fn run(
.parent()
.and_then(|parent| package_roots.get(parent))
.and_then(|package| *package);
let settings = resolver.resolve_all(path, pyproject_strategy);
let settings = resolver.resolve_all(path, pyproject_config);
lint_path(path, package, settings, cache, noqa, autofix).map_err(|e| {
(Some(path.to_owned()), {
@@ -116,7 +116,7 @@ pub fn run(
fs::relativize_path(path).bold(),
":".bold()
);
let settings = resolver.resolve(path, pyproject_strategy);
let settings = resolver.resolve(path, pyproject_config);
if settings.rules.enabled(Rule::IOError) {
let file =
SourceFileBuilder::new(path.to_string_lossy().as_ref(), "").finish();
@@ -196,7 +196,7 @@ mod test {
use path_absolutize::Absolutize;
use ruff::logging::LogLevel;
use ruff::resolver::PyprojectDiscovery;
use ruff::resolver::{PyprojectConfig, PyprojectDiscoveryStrategy};
use ruff::settings::configuration::{Configuration, RuleSelection};
use ruff::settings::flags::FixMode;
use ruff::settings::flags::{Cache, Noqa};
@@ -238,7 +238,11 @@ mod test {
let diagnostics = run(
&[root_path.join("valid.ipynb")],
&PyprojectDiscovery::Fixed(AllSettings::from_configuration(configuration, &root_path)?),
&PyprojectConfig::new(
PyprojectDiscoveryStrategy::Fixed,
AllSettings::from_configuration(configuration, &root_path)?,
None,
),
&overrides,
Cache::Disabled,
Noqa::Enabled,

View File

@@ -3,7 +3,7 @@ use std::path::Path;
use anyhow::Result;
use ruff::resolver::PyprojectDiscovery;
use ruff::resolver::PyprojectConfig;
use ruff::settings::flags;
use ruff::{packaging, resolver};
@@ -20,22 +20,28 @@ fn read_from_stdin() -> Result<String> {
/// Run the linter over a single file, read from `stdin`.
pub fn run_stdin(
filename: Option<&Path>,
pyproject_strategy: &PyprojectDiscovery,
pyproject_config: &PyprojectConfig,
overrides: &Overrides,
noqa: flags::Noqa,
autofix: flags::FixMode,
) -> Result<Diagnostics> {
if let Some(filename) = filename {
if !resolver::python_file_at_path(filename, pyproject_strategy, overrides)? {
if !resolver::python_file_at_path(filename, pyproject_config, overrides)? {
return Ok(Diagnostics::default());
}
}
let settings = pyproject_strategy.top_level_settings();
let package_root = filename
.and_then(Path::parent)
.and_then(|path| packaging::detect_package_root(path, &settings.lib.namespace_packages));
let package_root = filename.and_then(Path::parent).and_then(|path| {
packaging::detect_package_root(path, &pyproject_config.settings.lib.namespace_packages)
});
let stdin = read_from_stdin()?;
let mut diagnostics = lint_stdin(filename, package_root, &stdin, &settings.lib, noqa, autofix)?;
let mut diagnostics = lint_stdin(
filename,
package_root,
&stdin,
&pyproject_config.settings.lib,
noqa,
autofix,
)?;
diagnostics.messages.sort_unstable();
Ok(diagnostics)
}

View File

@@ -4,7 +4,7 @@ use std::path::PathBuf;
use anyhow::Result;
use itertools::Itertools;
use ruff::resolver::PyprojectDiscovery;
use ruff::resolver::PyprojectConfig;
use ruff::{resolver, warn_user_once};
use crate::args::Overrides;
@@ -12,11 +12,11 @@ use crate::args::Overrides;
/// Show the list of files to be checked based on current settings.
pub fn show_files(
files: &[PathBuf],
pyproject_strategy: &PyprojectDiscovery,
pyproject_config: &PyprojectConfig,
overrides: &Overrides,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, _resolver) = resolver::python_files_in_path(files, pyproject_strategy, overrides)?;
let (paths, _resolver) = resolver::python_files_in_path(files, pyproject_config, overrides)?;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");

View File

@@ -5,18 +5,18 @@ use anyhow::{bail, Result};
use itertools::Itertools;
use ruff::resolver;
use ruff::resolver::PyprojectDiscovery;
use ruff::resolver::PyprojectConfig;
use crate::args::Overrides;
/// Print the user-facing configuration settings.
pub fn show_settings(
files: &[PathBuf],
pyproject_strategy: &PyprojectDiscovery,
pyproject_config: &PyprojectConfig,
overrides: &Overrides,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, resolver) = resolver::python_files_in_path(files, pyproject_strategy, overrides)?;
let (paths, resolver) = resolver::python_files_in_path(files, pyproject_config, overrides)?;
// Print the list of files.
let Some(entry) = paths
@@ -26,10 +26,13 @@ pub fn show_settings(
bail!("No files found under the given path");
};
let path = entry.path();
let settings = resolver.resolve(path, pyproject_strategy);
let settings = resolver.resolve(path, pyproject_config);
let mut stdout = BufWriter::new(io::stdout().lock());
writeln!(stdout, "Resolved settings for: {path:?}")?;
if let Some(settings_path) = pyproject_config.path.as_ref() {
writeln!(stdout, "Settings path: {settings_path:?}")?;
}
writeln!(stdout, "{settings:#?}")?;
Ok(())

View File

@@ -97,7 +97,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
// Construct the "default" settings. These are used when no `pyproject.toml`
// files are present, or files are injected from outside of the hierarchy.
let pyproject_strategy = resolve::resolve(
let pyproject_config = resolve::resolve(
cli.isolated,
cli.config.as_deref(),
&overrides,
@@ -105,16 +105,14 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
)?;
if cli.show_settings {
commands::show_settings::show_settings(&cli.files, &pyproject_strategy, &overrides)?;
commands::show_settings::show_settings(&cli.files, &pyproject_config, &overrides)?;
return Ok(ExitStatus::Success);
}
if cli.show_files {
commands::show_files::show_files(&cli.files, &pyproject_strategy, &overrides)?;
commands::show_files::show_files(&cli.files, &pyproject_config, &overrides)?;
return Ok(ExitStatus::Success);
}
let top_level_settings = pyproject_strategy.top_level_settings();
// Extract options that are included in `Settings`, but only apply at the top
// level.
let CliSettings {
@@ -124,7 +122,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
show_fixes,
update_check,
..
} = top_level_settings.cli.clone();
} = pyproject_config.settings.cli;
// Autofix rules are as follows:
// - If `--fix` or `--fix-only` is set, always apply fixes to the filesystem (or
@@ -155,7 +153,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
printer_flags |= PrinterFlags::SHOW_FIXES;
}
if top_level_settings.lib.show_source {
if pyproject_config.settings.lib.show_source {
printer_flags |= PrinterFlags::SHOW_SOURCE;
}
@@ -171,7 +169,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
warn_user_once!("--fix is incompatible with --add-noqa.");
}
let modifications =
commands::add_noqa::add_noqa(&cli.files, &pyproject_strategy, &overrides)?;
commands::add_noqa::add_noqa(&cli.files, &pyproject_config, &overrides)?;
if modifications > 0 && log_level >= LogLevel::Default {
let s = if modifications == 1 { "" } else { "s" };
#[allow(clippy::print_stderr)]
@@ -195,7 +193,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::run::run(
&cli.files,
&pyproject_strategy,
&pyproject_config,
&overrides,
cache.into(),
noqa.into(),
@@ -225,7 +223,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::run::run(
&cli.files,
&pyproject_strategy,
&pyproject_config,
&overrides,
cache.into(),
noqa.into(),
@@ -244,7 +242,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
let diagnostics = if is_stdin {
commands::run_stdin::run_stdin(
cli.stdin_filename.map(fs::normalize_path).as_deref(),
&pyproject_strategy,
&pyproject_config,
&overrides,
noqa.into(),
autofix,
@@ -252,7 +250,7 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
} else {
commands::run::run(
&cli.files,
&pyproject_strategy,
&pyproject_config,
&overrides,
cache.into(),
noqa.into(),
@@ -280,17 +278,33 @@ fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
}
if !cli.exit_zero {
if cli.diff || fix_only {
if cli.diff {
// If we're printing a diff, we always want to exit non-zero if there are
// any fixable violations (since we've printed the diff, but not applied the
// fixes).
if !diagnostics.fixed.is_empty() {
return Ok(ExitStatus::Failure);
}
} else if cli.exit_non_zero_on_fix {
if !diagnostics.fixed.is_empty() || !diagnostics.messages.is_empty() {
return Ok(ExitStatus::Failure);
} else if fix_only {
// If we're only fixing, we want to exit zero (since we've fixed all fixable
// violations), unless we're explicitly asked to exit non-zero on fix.
if cli.exit_non_zero_on_fix {
if !diagnostics.fixed.is_empty() {
return Ok(ExitStatus::Failure);
}
}
} else {
if !diagnostics.messages.is_empty() {
return Ok(ExitStatus::Failure);
// If we're running the linter (not just fixing), we want to exit non-zero if
// there are any violations, unless we're explicitly asked to exit zero on
// fix.
if cli.exit_non_zero_on_fix {
if !diagnostics.fixed.is_empty() || !diagnostics.messages.is_empty() {
return Ok(ExitStatus::Failure);
}
} else {
if !diagnostics.messages.is_empty() {
return Ok(ExitStatus::Failure);
}
}
}
}
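The restructured branch above gives `--fix-only` the same exit semantics as `--fix`. The decision table can be sketched as a small function (names here are illustrative stand-ins, not the actual CLI types):

```rust
/// Illustrative sketch of the exit-status decision after this change;
/// `fixed` and `messages` stand in for `diagnostics.fixed` and
/// `diagnostics.messages`.
fn is_failure(diff: bool, fix_only: bool, exit_non_zero_on_fix: bool, fixed: usize, messages: usize) -> bool {
    if diff {
        // Printed a diff without applying it: fail if anything was fixable.
        fixed > 0
    } else if fix_only {
        // Applied all fixable violations: succeed unless told to fail on fix.
        exit_non_zero_on_fix && fixed > 0
    } else if exit_non_zero_on_fix {
        // Normal lint run with --exit-non-zero-on-fix: fail on fixes too.
        fixed > 0 || messages > 0
    } else {
        // Normal lint run: fail only on remaining violations.
        messages > 0
    }
}
```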

View File

@@ -4,7 +4,8 @@ use anyhow::Result;
use path_absolutize::path_dedot;
use ruff::resolver::{
resolve_settings_with_processor, ConfigProcessor, PyprojectDiscovery, Relativity,
resolve_settings_with_processor, ConfigProcessor, PyprojectConfig, PyprojectDiscoveryStrategy,
Relativity,
};
use ruff::settings::configuration::Configuration;
use ruff::settings::{pyproject, AllSettings};
@@ -18,13 +19,17 @@ pub fn resolve(
config: Option<&Path>,
overrides: &Overrides,
stdin_filename: Option<&Path>,
) -> Result<PyprojectDiscovery> {
) -> Result<PyprojectConfig> {
// First priority: if we're running in isolated mode, use the default settings.
if isolated {
let mut config = Configuration::default();
overrides.process_config(&mut config);
let settings = AllSettings::from_configuration(config, &path_dedot::CWD)?;
return Ok(PyprojectDiscovery::Fixed(settings));
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Fixed,
settings,
None,
));
}
// Second priority: the user specified a `pyproject.toml` file. Use that
@@ -36,7 +41,11 @@ pub fn resolve(
.transpose()?
{
let settings = resolve_settings_with_processor(&pyproject, &Relativity::Cwd, overrides)?;
return Ok(PyprojectDiscovery::Fixed(settings));
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Fixed,
settings,
Some(pyproject),
));
}
// Third priority: find a `pyproject.toml` file in either an ancestor of
@@ -50,7 +59,11 @@ pub fn resolve(
.unwrap_or(&path_dedot::CWD.as_path()),
)? {
let settings = resolve_settings_with_processor(&pyproject, &Relativity::Parent, overrides)?;
return Ok(PyprojectDiscovery::Hierarchical(settings));
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
Some(pyproject),
));
}
// Fourth priority: find a user-specific `pyproject.toml`, but resolve all paths
@@ -59,7 +72,11 @@ pub fn resolve(
// these act as the "default" settings.)
if let Some(pyproject) = pyproject::find_user_settings_toml() {
let settings = resolve_settings_with_processor(&pyproject, &Relativity::Cwd, overrides)?;
return Ok(PyprojectDiscovery::Hierarchical(settings));
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
Some(pyproject),
));
}
// Fallback: load Ruff's default settings, and resolve all paths relative to the
@@ -69,5 +86,9 @@ pub fn resolve(
let mut config = Configuration::default();
overrides.process_config(&mut config);
let settings = AllSettings::from_configuration(config, &path_dedot::CWD)?;
Ok(PyprojectDiscovery::Hierarchical(settings))
Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
None,
))
}
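The function above resolves settings in five tiers: isolated mode, an explicit config file, a discovered `pyproject.toml`, user-level settings, then built-in defaults. A reduced sketch of just the strategy and path selection (the types below are stand-ins, not ruff's):

```rust
/// Stand-in for ruff's `PyprojectDiscoveryStrategy`.
#[derive(Debug, PartialEq)]
enum Strategy { Fixed, Hierarchical }

/// Mirror the tiered selection in `resolve`: the first matching source wins,
/// and only isolated mode and an explicit config yield a `Fixed` strategy.
fn select(isolated: bool, explicit: Option<&str>, discovered: Option<&str>, user: Option<&str>)
    -> (Strategy, Option<String>)
{
    if isolated {
        // Isolated mode: default settings, no settings path.
        (Strategy::Fixed, None)
    } else if let Some(path) = explicit {
        // Explicit config applies to every file, regardless of location.
        (Strategy::Fixed, Some(path.to_string()))
    } else if let Some(path) = discovered.or(user) {
        // Discovered or user-level pyproject: hierarchical resolution.
        (Strategy::Hierarchical, Some(path.to_string()))
    } else {
        // Fallback: default settings, still resolved hierarchically.
        (Strategy::Hierarchical, None)
    }
}
```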

View File

@@ -24,7 +24,7 @@ fn generate_table(table_out: &mut String, rules: impl IntoIterator<Item = Rule>,
#[allow(clippy::or_fun_call)]
table_out.push_str(&format!(
"| {}{} | {} | {} | {} |",
"| {0}{1} {{ #{0}{1} }} | {2} | {3} | {4} |",
linter.common_prefix(),
linter.code_for_rule(rule).unwrap(),
rule.explanation()

View File

@@ -167,61 +167,9 @@ pub enum TokenKind {
}
impl TokenKind {
#[inline]
pub const fn is_whitespace_needed(&self) -> bool {
matches!(
self,
TokenKind::DoubleStarEqual
| TokenKind::StarEqual
| TokenKind::SlashEqual
| TokenKind::DoubleSlashEqual
| TokenKind::PlusEqual
| TokenKind::MinusEqual
| TokenKind::NotEqual
| TokenKind::Less
| TokenKind::Greater
| TokenKind::PercentEqual
| TokenKind::CircumflexEqual
| TokenKind::AmperEqual
| TokenKind::VbarEqual
| TokenKind::EqEqual
| TokenKind::LessEqual
| TokenKind::GreaterEqual
| TokenKind::LeftShiftEqual
| TokenKind::RightShiftEqual
| TokenKind::Equal
| TokenKind::And
| TokenKind::Or
| TokenKind::In
| TokenKind::Is
| TokenKind::Rarrow
)
}
#[inline]
pub const fn is_whitespace_optional(&self) -> bool {
self.is_arithmetic()
|| matches!(
self,
TokenKind::CircumFlex
| TokenKind::Amper
| TokenKind::Vbar
| TokenKind::LeftShift
| TokenKind::RightShift
| TokenKind::Percent
)
}
#[inline]
pub const fn is_unary(&self) -> bool {
matches!(
self,
TokenKind::Plus
| TokenKind::Minus
| TokenKind::Star
| TokenKind::DoubleStar
| TokenKind::RightShift
)
matches!(self, TokenKind::Plus | TokenKind::Minus | TokenKind::Star)
}
#[inline]
@@ -315,6 +263,11 @@ impl TokenKind {
| TokenKind::Ellipsis
| TokenKind::ColonEqual
| TokenKind::Colon
| TokenKind::And
| TokenKind::Or
| TokenKind::Not
| TokenKind::In
| TokenKind::Is
)
}
@@ -324,7 +277,7 @@ impl TokenKind {
}
#[inline]
pub const fn is_skip_comment(&self) -> bool {
pub const fn is_trivia(&self) -> bool {
matches!(
self,
TokenKind::Newline
@@ -344,10 +297,29 @@ impl TokenKind {
| TokenKind::Plus
| TokenKind::Minus
| TokenKind::Slash
| TokenKind::DoubleSlash
| TokenKind::At
)
}
#[inline]
pub const fn is_bitwise_or_shift(&self) -> bool {
matches!(
self,
TokenKind::LeftShift
| TokenKind::LeftShiftEqual
| TokenKind::RightShift
| TokenKind::RightShiftEqual
| TokenKind::Amper
| TokenKind::AmperEqual
| TokenKind::Vbar
| TokenKind::VbarEqual
| TokenKind::CircumFlex
| TokenKind::CircumflexEqual
| TokenKind::Tilde
)
}
#[inline]
pub const fn is_soft_keyword(&self) -> bool {
matches!(self, TokenKind::Match | TokenKind::Case)
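This refactor narrows `is_unary` to `+`, `-`, `*` and splits the bitwise and shift operators into their own predicate. The classification style can be shown with a simplified enum (not the actual `TokenKind`):

```rust
/// Simplified stand-in for ruff's `TokenKind`.
#[derive(Debug)]
enum Tok { Plus, Minus, Star, Amper, Vbar, CircumFlex, Tilde, LeftShift, RightShift }

impl Tok {
    /// After the refactor, only `+`, `-`, and `*` count as unary candidates.
    const fn is_unary(&self) -> bool {
        matches!(self, Tok::Plus | Tok::Minus | Tok::Star)
    }

    /// Bitwise and shift operators, grouped so whitespace rules can treat
    /// them uniformly (augmented-assignment forms omitted for brevity).
    const fn is_bitwise_or_shift(&self) -> bool {
        matches!(
            self,
            Tok::Amper | Tok::Vbar | Tok::CircumFlex | Tok::Tilde | Tok::LeftShift | Tok::RightShift
        )
    }
}
```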

View File

@@ -8,6 +8,12 @@ the `line-length` setting is consistent between the two.
As a project, Ruff is designed to be used alongside Black and, as such, will defer implementing
stylistic lint rules that are obviated by autoformatting.
Note that Ruff and Black treat line-length enforcement a little differently. Black makes a
best-effort attempt to adhere to the `line-length`, but avoids automatic line-wrapping in some cases
(e.g., within comments). Ruff, on the other hand, will flag rule `E501` for any line that exceeds
the `line-length` setting. As such, if `E501` is enabled, Ruff can still flag line-length
violations on code that Black has already formatted.
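A minimal illustration of the difference (the helper below is a sketch of E501's core length check, not ruff's implementation):

```rust
/// ruff's default `line-length`.
const LINE_LENGTH: usize = 88;

/// Sketch of the check behind E501: flag any line whose character count
/// exceeds the configured limit. Black would leave a long URL inside a
/// comment untouched, but this check still fires on it.
fn exceeds_line_length(line: &str, limit: usize) -> bool {
    line.chars().count() > limit
}
```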
## How does Ruff compare to Flake8?
(Coming from Flake8? Try [`flake8-to-ruff`](https://pypi.org/project/flake8-to-ruff/) to

View File

@@ -242,7 +242,7 @@ This tutorial has focused on Ruff's command-line interface, but Ruff can also be
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.264'
rev: 'v0.0.265'
hooks:
- id: ruff
```

View File

@@ -22,7 +22,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.264'
rev: 'v0.0.265'
hooks:
- id: ruff
```
@@ -32,7 +32,7 @@ Or, to enable autofix:
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.264'
rev: 'v0.0.265'
hooks:
- id: ruff
args: [ --fix, --exit-non-zero-on-fix ]

View File

@@ -7,7 +7,7 @@ build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.264"
version = "0.0.265"
description = "An extremely fast Python linter, written in Rust."
authors = [{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" }]
maintainers = [{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" }]

ruff.schema.json (generated)
View File

@@ -2033,6 +2033,9 @@
"PLW012",
"PLW0120",
"PLW0129",
"PLW04",
"PLW040",
"PLW0406",
"PLW06",
"PLW060",
"PLW0602",
@@ -2124,9 +2127,13 @@
"PYI015",
"PYI016",
"PYI02",
"PYI020",
"PYI021",
"PYI03",
"PYI033",
"PYI04",
"PYI042",
"PYI043",
"Q",
"Q0",
"Q00",