Compare commits


30 Commits

Author SHA1 Message Date
Zanie
f83ae012b8 Restore RUF ignore 2023-08-29 13:54:38 -05:00
Zanie
e09dfabc8c Fix test that checks for black and linter compatibility 2023-08-29 13:35:42 -05:00
Charlie Marsh
5de95d7054 Reuse FormatResult and FormatterIterationError in format_stdin.rs (#6985)
## Summary

Ensures that we use the same error types and messages. Also renames
those structs to `FormatCommand*` for consistency, and removes the
`FormatCommandResult::Skipped` variant in favor of skipping in the
iterator directly.
2023-08-29 17:41:53 +00:00
Charlie Marsh
8d1610d960 Implement Display on formatter structs (#6983)
Feedback from
https://github.com/astral-sh/ruff/pull/6948#discussion_r1308260021.
2023-08-29 16:57:26 +00:00
Charlie Marsh
fad23bbe60 Add a --check flag to the formatter CLI (#6982)
## Summary

Returns an exit code of 1 if any files would be reformatted:

```
ruff on  charlie/format-check:main [$?⇡] is 📦 v0.0.286 via 🐍 v3.11.2 via 🦀 v1.72.0
❯ cargo run -p ruff_cli -- format foo.py --check
   Compiling ruff_cli v0.0.286 (/Users/crmarsh/workspace/ruff/crates/ruff_cli)
    Finished dev [unoptimized + debuginfo] target(s) in 1.69s
     Running `target/debug/ruff format foo.py --check`
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended only for experimentation.
1 file would be reformatted
ruff on  charlie/format-check:main [$?⇡] is 📦 v0.0.286 via 🐍 v3.11.2 via 🦀 v1.72.0 took 2s
❯ echo $?
1
```

Closes #6966.
2023-08-29 12:40:00 -04:00
Charlie Marsh
25c374856a Move stdin formatting to its own command file (#6981)
## Summary

This is similar to `commands::check` vs. `commands::check_stdin`, and
gets the logic out of the parent file (`lib.rs`). It also ensures that
we avoid formatting files that should be excluded when `--force-exclude`
is provided.
2023-08-29 16:06:10 +00:00
Charlie Marsh
34221346c1 Rename run.rs command to check.rs (#6980)
The CLI command is called "check", so this is more consistent (and
consistent with the pattern used in other commands).
2023-08-29 15:52:06 +00:00
Chris Pryer
924f10186f Update dynamic_text builder doc comment (#6978) 2023-08-29 16:29:11 +02:00
Charlie Marsh
b7634b6ede Fix typo in banned-from (#6977)
Oops...
2023-08-29 09:39:09 -04:00
Dhruv Manilawala
4d49d5e845 Add eat_char2 for the lexer (#6968)
## Summary

This PR adds a new helper method on the `Cursor` called `eat_char2`
which is similar to `eat_char` but accepts 2 characters instead of 1. It'll
`bump` the cursor twice if both characters are found on lookahead.
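A minimal Python sketch of that behavior (the real helper is a method on the Rust `Cursor`; this standalone class is illustrative only):

```python
class Cursor:
    """Illustrative stand-in for the lexer cursor described above."""

    def __init__(self, text: str) -> None:
        self.text = text
        self.pos = 0

    def eat_char(self, c: str) -> bool:
        # Bump once if the next character matches.
        if self.text[self.pos : self.pos + 1] == c:
            self.pos += 1
            return True
        return False

    def eat_char2(self, c1: str, c2: str) -> bool:
        # Bump twice, but only if *both* lookahead characters match;
        # on a partial match the position is left untouched.
        if self.text[self.pos : self.pos + 2] == c1 + c2:
            self.pos += 2
            return True
        return False


cursor = Cursor("**kwargs")
matched = cursor.eat_char2("*", "*")  # advances past both stars
```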

## Test Plan

`cargo test`
2023-08-29 17:18:02 +05:30
Micha Reiser
715d86dae9 Remove Comprehension priority (#6947) 2023-08-29 08:30:15 +02:00
Micha Reiser
adb48692d6 Use optional parentheses for tuples in return statements (#6875) 2023-08-29 08:30:05 +02:00
Charlie Marsh
19ccf1d073 Make RUF100 a suggested fix (#6967)
I made this automatic when I removed the deprecated "unspecified"
method, but I think suggested is actually more appropriate.
2023-08-29 04:28:52 +00:00
Charlie Marsh
e3c36465ec Base parameter-find lookup on range, rather than name (#6960)
## Summary

This simplifies the signature a bit and is more reliable in the
(unusual? invalid?) case of duplicate parameters.
2023-08-29 01:39:38 +00:00
Valeriy Savchenko
7e36284684 [refurb] Implement check-and-remove-from-set rule (FURB132) (#6904)
## Summary

This PR is a continuation of #6897 and #6702 and of my effort to replicate
`refurb` rules (#1348). It adds support for
[FURB132](https://github.com/dosisod/refurb/blob/master/refurb/checks/builtin/set_discard.py)

## Test Plan

I included a new test + checked that all other tests pass.
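For illustration (an assumed example, not taken from the upstream docs), the pattern FURB132 flags and the rewrite it suggests:

```python
s = {"a", "b", "c"}

# Before: membership test followed by remove() -- flagged by FURB132.
if "b" in s:
    s.remove("b")

# After: a single discard(), which never raises KeyError, so the
# membership guard becomes unnecessary.
s.discard("b")  # no-op here: "b" is already gone
s.discard("z")  # also safe for elements that were never present
```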
2023-08-29 01:17:26 +00:00
Valeriy Savchenko
c448b4086a [refurb] Implement delete-full-slice rule (FURB131) (#6897)
## Summary

This PR is a continuation of #6702 and of my effort to replicate `refurb`
rules (#1348). It adds support for
[FURB131](https://github.com/dosisod/refurb/blob/master/refurb/checks/builtin/no_del.py).

## Test Plan

I included a new test + checked that all other tests pass.
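For illustration (an assumed example): deleting a full slice empties the container in place, but `clear()` states the intent directly, which is what FURB131 suggests. Partial slices such as `del nums[1:]` are left alone:

```python
nums = [1, 2, 3]
names = {"key": "value"}

# Before: flagged by FURB131.
del nums[:]

# After: the suggested replacement (shown here on the dict).
names.clear()
```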
2023-08-28 20:52:38 -04:00
Charlie Marsh
3200015c06 Fix banned-from documentation (#6959)
Closes https://github.com/astral-sh/ruff/issues/6839.
2023-08-29 00:51:15 +00:00
Valeriy Savchenko
26d53c56a2 [refurb] Implement repeated-append rule (FURB113) (#6702)
## Summary

As an initial effort in replicating `refurb` rules (#1348), this PR
adds support for
[FURB113](https://github.com/dosisod/refurb/blob/master/refurb/checks/builtin/list_extend.py)
and adds a new category of checks.

## Test Plan

I included a new test + checked that all other tests pass.
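For illustration (an assumed example), the pattern FURB113 flags and its rewrite:

```python
nums = []

# Before: consecutive append() calls on the same list -- flagged by FURB113.
nums.append(1)
nums.append(2)

# After: the suggested rewrite collapses them into a single extend().
nums2 = []
nums2.extend([1, 2])
```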
2023-08-28 22:51:59 +00:00
Charlie Marsh
1439bb592e Report number of changed files on the format CLI (#6948)
## Summary

Very basic summary:

<img width="962" alt="Screen Shot 2023-08-28 at 1 17 37 PM"
src="https://github.com/astral-sh/ruff/assets/1309177/53537aca-7579-44d8-855b-f4553affae50">

If you run with `--verbose`, we'll also show you the timing:

<img width="962" alt="Screen Shot 2023-08-28 at 1 17 58 PM"
src="https://github.com/astral-sh/ruff/assets/1309177/63cbd13e-9462-4e49-b3a3-c6663a7ad41c">
2023-08-28 18:42:31 -04:00
Charlie Marsh
fc47e0dab2 Clean up some misc. code in NamedTuple and TypedDict conversion (#6957)
- Use `Option` instead of `Result` everywhere.
- Use `field` instead of `property` (to match the nomenclature of
`NamedTuple` and `TypedDict`).
- Put the violation function at the top of the file, rather than the
bottom.
2023-08-28 17:12:56 -04:00
Charlie Marsh
87aa5d6b66 Avoid panic when typename is provided as a keyword argument (#6955)
## Summary

The `typename` argument to `NamedTuple` and `TypedDict` is a required
positional argument. We assumed as much, but panicked if it was provided
as a keyword argument or otherwise omitted. This PR handles the case
gracefully.

Closes https://github.com/astral-sh/ruff/issues/6953.
2023-08-28 21:03:18 +00:00
Charlie Marsh
d1ad20c9ea Avoid invalid fix for C417 with separate keys and values (#6954)
Closes https://github.com/astral-sh/ruff/issues/6951.
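The edge case, sketched for illustration: `map()` with a multi-argument callable consumes several iterables in parallel, so a comprehension rewrite is only equivalent if it pairs them with `zip()` -- a naive single-iterable fix would not even be valid code:

```python
keys = ["a", "b"]
values = [1, 2]

# The flagged form: map() draws from two separate iterables.
via_map = dict(map(lambda k, v: (k, v), keys, values))

# An equivalent comprehension must zip the iterables together.
via_comp = {k: v for k, v in zip(keys, values)}
```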
2023-08-28 16:49:40 -04:00
Charlie Marsh
ecca125f9a Use SourceCodeSnippet for more diagnostic messages (#6950)
## Summary

This ensures that we truncate the messages when they break over multiple
lines, etc.
2023-08-28 16:22:49 -04:00
Charlie Marsh
005e21a139 Add line-length to E501 documentation (#6949)
Closes https://github.com/astral-sh/ruff/issues/6908.
2023-08-28 14:45:30 -04:00
dependabot[bot]
e1db036f90 ci(deps): bump tj-actions/changed-files from 37 to 38 (#6946)
Co-authored-by: jackton1
Co-authored-by: repo-ranger[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-28 16:50:38 +00:00
Zanie Blue
9ad67b0758 Add comment to ecosystem check re. ALL rules (#6943) 2023-08-28 11:28:36 -05:00
Jelle van der Waa
af61abc747 Re-use is_magic where possible (#6945)
## Summary

Use `is_magic` where possible

## Test Plan

Unit tests
2023-08-28 15:35:34 +00:00
Charlie Marsh
ec575188c4 Narrow the supported options on the format CLI (#6944)
## Summary

Ensures that we only show supported options:

<img width="1228" alt="Screen Shot 2023-08-28 at 11 03 16 AM"
src="https://github.com/astral-sh/ruff/assets/1309177/50fb7595-dc30-43d2-a7e4-c0103acc15b9">

For now, I'm not super focused on DRYing up the CLI.
2023-08-28 15:28:22 +00:00
Charlie Marsh
aea7500c1e Allow Locator#slice to take Ranged (#6922)
## Summary

As a small quality-of-life improvement, the locator can now slice like
`locator.slice(stmt)` instead of requiring
`locator.slice(stmt.range())`.

## Test Plan

`cargo test`
2023-08-28 11:08:39 -04:00
Charlie Marsh
58f5f27dc3 Add TOML files to SourceType (#6929)
## Summary

This PR adds a higher-level enum (`SourceType`) around `PySourceType` to
allow us to use the same detection path to handle TOML files. Right now,
we have ad hoc `is_pyproject_toml` checks littered around, and some
codepaths are omitting that logic altogether (like `add_noqa`). Instead,
we should always be required to check the source type and handle TOML
files as appropriate.

This PR will also help with our pre-commit capabilities. If we add
`toml` to pre-commit (to support `pyproject.toml`), pre-commit will
start to pass _other_ files to Ruff (along with `poetry.lock` and
`Pipfile` -- see
[identify](https://github.com/pre-commit/identify/blob/b59996304f/identify/extensions.py#L355)).
By detecting those files and handling those cases, we avoid attempting
to parse them as Python files, which would lead to pre-commit errors.
(We tried to add `toml` to pre-commit here
(https://github.com/astral-sh/ruff-pre-commit/pull/44), but had to
revert here (https://github.com/astral-sh/ruff-pre-commit/pull/45) as it
led to the pre-commit hook attempting to parse `poetry.lock` files as
Python files.)
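The detection can be sketched in Python (the real implementation is the Rust `SourceType` enum; the function name and return values here are illustrative assumptions):

```python
from pathlib import Path


def detect_source_type(path: str) -> str:
    """Route a path to the TOML or Python handler by file name."""
    name = Path(path).name
    # identify classifies these lock files as TOML despite lacking
    # the .toml extension.
    if name.endswith(".toml") or name in ("poetry.lock", "Pipfile"):
        return "toml"
    return "python"


kinds = [detect_source_type(p) for p in ("pyproject.toml", "poetry.lock", "app.py")]
```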
2023-08-28 15:01:48 +00:00
116 changed files with 3166 additions and 857 deletions


@@ -30,7 +30,7 @@ jobs:
with:
fetch-depth: 0
- uses: tj-actions/changed-files@v37
- uses: tj-actions/changed-files@v38
id: changed
with:
files_yaml: |
@@ -96,26 +96,19 @@ jobs:
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
- run: pip install black[d]==23.1.0
- uses: Swatinem/rust-cache@v2
- name: "Run tests (Ubuntu)"
if: ${{ matrix.os == 'ubuntu-latest' }}
run: >
cargo insta test --all --all-features --unreferenced reject --test-runner nextest
-- --status-level skip --failure-output immediate-final --no-fail-fast --all-targets
run: cargo insta test --all --all-features --unreferenced reject
- name: "Run tests (Windows)"
if: ${{ matrix.os == 'windows-latest' }}
shell: bash
# We can't reject unreferenced snapshots on windows because flake8_executable can't run on windows
run: cargo insta test --all --all-features
- run: cargo test --package ruff_cli --test black_compatibility_test -- --ignored
# TODO: Skipped as it's currently broken. The resources were moved from the
# ruff_cli to ruff crate, but this test was not updated.
if: false
# Check linter fix compatibility with Black
- name: "Check Black compatibility"
run: cargo test --package ruff_cli --test black_compatibility_test -- --ignored
# Check for broken links in the documentation.
- run: cargo doc --all --no-deps
env:

Cargo.lock generated

@@ -2210,6 +2210,7 @@ dependencies = [
"glob",
"ignore",
"insta",
"is-macro",
"itertools",
"itoa",
"log",


@@ -37,3 +37,6 @@ map(lambda x: lambda x: x, range(4))
map(lambda x=1: x, nums)
map(lambda *args: len(args), range(4))
map(lambda **kwargs: len(kwargs), range(4))
# Ok because multiple arguments are allowed.
dict(map(lambda k, v: (k, v), keys, values))


@@ -22,3 +22,4 @@ MyType = typing.NamedTuple("MyType", a=int, b=tuple[str, ...])
# unfixable
MyType = typing.NamedTuple("MyType", [("a", int)], [("b", str)])
MyType = typing.NamedTuple("MyType", [("a", int)], b=str)
MyType = typing.NamedTuple(typename="MyType", a=int, b=str)


@@ -0,0 +1,169 @@
from typing import List
def func():
pass
nums = []
nums2 = []
nums3: list[int] = func()
nums4: List[int]
class C:
def append(self, x):
pass
# Errors.
# FURB113
nums.append(1)
nums.append(2)
pass
# FURB113
nums3.append(1)
nums3.append(2)
pass
# FURB113
nums4.append(1)
nums4.append(2)
pass
# FURB113
nums.append(1)
nums2.append(1)
nums.append(2)
nums.append(3)
pass
# FURB113
nums.append(1)
nums2.append(1)
nums.append(2)
# FURB113
nums3.append(1)
nums.append(3)
# FURB113
nums4.append(1)
nums4.append(2)
nums3.append(2)
pass
# FURB113
nums.append(1)
nums.append(2)
nums.append(3)
if True:
# FURB113
nums.append(1)
nums.append(2)
if True:
# FURB113
nums.append(1)
nums.append(2)
pass
if True:
# FURB113
nums.append(1)
nums2.append(1)
nums.append(2)
nums.append(3)
def yes_one(x: list[int]):
# FURB113
x.append(1)
x.append(2)
def yes_two(x: List[int]):
# FURB113
x.append(1)
x.append(2)
def yes_three(*, x: list[int]):
# FURB113
x.append(1)
x.append(2)
def yes_four(x: list[int], /):
# FURB113
x.append(1)
x.append(2)
def yes_five(x: list[int], y: list[int]):
# FURB113
x.append(1)
x.append(2)
y.append(1)
x.append(3)
def yes_six(x: list):
# FURB113
x.append(1)
x.append(2)
# Non-errors.
nums.append(1)
pass
nums.append(2)
if True:
nums.append(1)
pass
nums.append(2)
nums.append(1)
pass
nums.append(1)
nums2.append(2)
nums.copy()
nums.copy()
c = C()
c.append(1)
c.append(2)
def not_one(x):
x.append(1)
x.append(2)
def not_two(x: C):
x.append(1)
x.append(2)
# redefining a list variable with a new type shouldn't confuse ruff.
nums2 = C()
nums2.append(1)
nums2.append(2)


@@ -0,0 +1,64 @@
from typing import Dict, List
names = {"key": "value"}
nums = [1, 2, 3]
x = 42
y = "hello"
# these should match
# FURB131
del nums[:]
# FURB131
del names[:]
# FURB131
del x, nums[:]
# FURB131
del y, names[:], x
def yes_one(x: list[int]):
# FURB131
del x[:]
def yes_two(x: dict[int, str]):
# FURB131
del x[:]
def yes_three(x: List[int]):
# FURB131
del x[:]
def yes_four(x: Dict[int, str]):
# FURB131
del x[:]
# these should not
del names["key"]
del nums[0]
x = 1
del x
del nums[1:2]
del nums[:2]
del nums[1:]
del nums[::2]
def no_one(param):
del param[:]


@@ -0,0 +1,80 @@
from typing import Set
from some.package.name import foo, bar
s = set()
s2 = {}
s3: set[int] = foo()
# these should match
# FURB132
if "x" in s:
s.remove("x")
# FURB132
if "x" in s2:
s2.remove("x")
# FURB132
if "x" in s3:
s3.remove("x")
var = "y"
# FURB132
if var in s:
s.remove(var)
if f"{var}:{var}" in s:
s.remove(f"{var}:{var}")
def identity(x):
return x
# TODO: FURB132
if identity("x") in s2:
s2.remove(identity("x"))
# these should not
if "x" in s:
s.remove("y")
s.discard("x")
s2 = set()
if "x" in s:
s2.remove("x")
if "x" in s:
s.remove("x")
print("removed item")
if bar() in s:
# bar() might have a side effect
s.remove(bar())
if "x" in s:
s.remove("x")
else:
print("not found")
class Container:
def remove(self, item) -> None:
return
def __contains__(self, other) -> bool:
return True
c = Container()
if "x" in c:
c.remove("x")


@@ -8,7 +8,6 @@ use libcst_native::{
use ruff_python_ast::Stmt;
use ruff_python_codegen::Stylist;
use ruff_source_file::Locator;
use ruff_text_size::Ranged;
use crate::cst::helpers::compose_module_path;
use crate::cst::matchers::match_statement;
@@ -39,7 +38,7 @@ pub(crate) fn remove_imports<'a>(
locator: &Locator,
stylist: &Stylist,
) -> Result<Option<String>> {
let module_text = locator.slice(stmt.range());
let module_text = locator.slice(stmt);
let mut tree = match_statement(module_text)?;
let Statement::Simple(body) = &mut tree else {
@@ -118,7 +117,7 @@ pub(crate) fn retain_imports(
locator: &Locator,
stylist: &Stylist,
) -> Result<String> {
let module_text = locator.slice(stmt.range());
let module_text = locator.slice(stmt);
let mut tree = match_statement(module_text)?;
let Statement::Simple(body) = &mut tree else {


@@ -9,6 +9,10 @@ impl SourceCodeSnippet {
Self(source_code)
}
pub(crate) fn from_str(source_code: &str) -> Self {
Self(source_code.to_string())
}
/// Return the full snippet for user-facing display, or `None` if the snippet should be
/// truncated.
pub(crate) fn full_display(&self) -> Option<&str> {


@@ -163,9 +163,9 @@ pub(crate) fn definitions(checker: &mut Checker) {
continue;
};
let contents = checker.locator.slice(expr.range());
let contents = checker.locator().slice(expr);
let indentation = checker.locator.slice(TextRange::new(
let indentation = checker.locator().slice(TextRange::new(
checker.locator.line_start(expr.start()),
expr.start(),
));


@@ -1039,12 +1039,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
}
}
if checker.enabled(Rule::PrintfStringFormatting) {
pyupgrade::rules::printf_string_formatting(
checker,
expr,
right,
checker.locator,
);
pyupgrade::rules::printf_string_formatting(checker, expr, right);
}
if checker.enabled(Rule::BadStringFormatCharacter) {
pylint::rules::bad_string_format_character::percent(checker, expr);


@@ -13,7 +13,7 @@ use crate::rules::{
flake8_django, flake8_errmsg, flake8_import_conventions, flake8_pie, flake8_pyi,
flake8_pytest_style, flake8_raise, flake8_return, flake8_simplify, flake8_slots,
flake8_tidy_imports, flake8_type_checking, mccabe, pandas_vet, pep8_naming, perflint,
pycodestyle, pyflakes, pygrep_hooks, pylint, pyupgrade, ruff, tryceratops,
pycodestyle, pyflakes, pygrep_hooks, pylint, pyupgrade, refurb, ruff, tryceratops,
};
use crate::settings::types::PythonVersion;
@@ -1056,6 +1056,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
checker.diagnostics.push(diagnostic);
}
}
if checker.enabled(Rule::CheckAndRemoveFromSet) {
refurb::rules::check_and_remove_from_set(checker, if_);
}
if checker.source_type.is_stub() {
if checker.any_enabled(&[
Rule::UnrecognizedVersionInfoCheck,
@@ -1459,7 +1462,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
}
Stmt::Delete(ast::StmtDelete { targets, range: _ }) => {
Stmt::Delete(delete @ ast::StmtDelete { targets, range: _ }) => {
if checker.enabled(Rule::GlobalStatement) {
for target in targets {
if let Expr::Name(ast::ExprName { id, .. }) = target {
@@ -1467,6 +1470,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
}
if checker.enabled(Rule::DeleteFullSlice) {
refurb::rules::delete_full_slice(checker, delete);
}
}
Stmt::Expr(ast::StmtExpr { value, range: _ }) => {
if checker.enabled(Rule::UselessComparison) {
@@ -1484,6 +1490,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::AsyncioDanglingTask) {
ruff::rules::asyncio_dangling_task(checker, value);
}
if checker.enabled(Rule::RepeatedAppend) {
refurb::rules::repeated_append(checker, stmt);
}
}
_ => {}
}


@@ -110,7 +110,7 @@ pub(crate) fn check_noqa(
Diagnostic::new(UnusedNOQA { codes: None }, directive.range());
if settings.rules.should_fix(diagnostic.kind.rule()) {
diagnostic
.set_fix(Fix::automatic(delete_noqa(directive.range(), locator)));
.set_fix(Fix::suggested(delete_noqa(directive.range(), locator)));
}
diagnostics.push(diagnostic);
}
@@ -174,12 +174,12 @@ pub(crate) fn check_noqa(
);
if settings.rules.should_fix(diagnostic.kind.rule()) {
if valid_codes.is_empty() {
diagnostic.set_fix(Fix::automatic(delete_noqa(
diagnostic.set_fix(Fix::suggested(delete_noqa(
directive.range(),
locator,
)));
} else {
diagnostic.set_fix(Fix::automatic(Edit::range_replacement(
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
format!("# noqa: {}", valid_codes.join(", ")),
directive.range(),
)));


@@ -865,6 +865,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Slots, "001") => (RuleGroup::Unspecified, rules::flake8_slots::rules::NoSlotsInTupleSubclass),
(Flake8Slots, "002") => (RuleGroup::Unspecified, rules::flake8_slots::rules::NoSlotsInNamedtupleSubclass),
// refurb
(Refurb, "113") => (RuleGroup::Nursery, rules::refurb::rules::RepeatedAppend),
(Refurb, "131") => (RuleGroup::Nursery, rules::refurb::rules::DeleteFullSlice),
(Refurb, "132") => (RuleGroup::Nursery, rules::refurb::rules::CheckAndRemoveFromSet),
_ => return None,
})
}


@@ -76,7 +76,7 @@ impl StatementVisitor<'_> for StringLinesVisitor<'_> {
}) = value.as_ref()
{
for line in UniversalNewlineIterator::with_offset(
self.locator.slice(value.range()),
self.locator.slice(value.as_ref()),
value.start(),
) {
self.string_lines.push(line.start());


@@ -319,7 +319,7 @@ impl<'a> Importer<'a> {
/// Add the given member to an existing `Stmt::ImportFrom` statement.
fn add_member(&self, stmt: &Stmt, member: &str) -> Result<Edit> {
let mut statement = match_statement(self.locator.slice(stmt.range()))?;
let mut statement = match_statement(self.locator.slice(stmt))?;
let import_from = match_import_from(&mut statement)?;
let aliases = match_aliases(import_from)?;
aliases.push(ImportAlias {


@@ -268,9 +268,12 @@ pub fn check_path(
const MAX_ITERATIONS: usize = 100;
/// Add any missing `# noqa` pragmas to the source code at the given `Path`.
pub fn add_noqa_to_path(path: &Path, package: Option<&Path>, settings: &Settings) -> Result<usize> {
let source_type = PySourceType::from(path);
pub fn add_noqa_to_path(
path: &Path,
package: Option<&Path>,
source_type: PySourceType,
settings: &Settings,
) -> Result<usize> {
// Read the file from disk.
let contents = std::fs::read_to_string(path)?;


@@ -196,6 +196,9 @@ pub enum Linter {
/// [Perflint](https://pypi.org/project/perflint/)
#[prefix = "PERF"]
Perflint,
/// [refurb](https://pypi.org/project/refurb/)
#[prefix = "FURB"]
Refurb,
/// Ruff-specific rules
#[prefix = "RUF"]
Ruff,


@@ -81,7 +81,7 @@ pub(crate) fn getattr_with_constant(
let mut diagnostic = Diagnostic::new(GetAttrWithConstant, expr.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
format!("{}.{}", checker.locator().slice(obj.range()), value),
format!("{}.{}", checker.locator().slice(obj), value),
expr.range(),
)));
}


@@ -84,7 +84,7 @@ pub(crate) fn unreliable_callable_check(
if id == "hasattr" {
if checker.semantic().is_builtin("callable") {
diagnostic.set_fix(Fix::automatic(Edit::range_replacement(
format!("callable({})", checker.locator().slice(obj.range())),
format!("callable({})", checker.locator().slice(obj)),
expr.range(),
)));
}


@@ -31,7 +31,7 @@ pub(crate) fn fix_unnecessary_generator_list(
expr: &Expr,
) -> Result<Edit> {
// Expr(Call(GeneratorExp)))) -> Expr(ListComp)))
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -63,7 +63,7 @@ pub(crate) fn fix_unnecessary_generator_set(checker: &Checker, expr: &Expr) -> R
let stylist = checker.stylist();
// Expr(Call(GeneratorExp)))) -> Expr(SetComp)))
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -97,7 +97,7 @@ pub(crate) fn fix_unnecessary_generator_dict(checker: &Checker, expr: &Expr) ->
let locator = checker.locator();
let stylist = checker.stylist();
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -141,7 +141,7 @@ pub(crate) fn fix_unnecessary_list_comprehension_set(
let stylist = checker.stylist();
// Expr(Call(ListComp)))) ->
// Expr(SetComp)))
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -176,7 +176,7 @@ pub(crate) fn fix_unnecessary_list_comprehension_dict(
let locator = checker.locator();
let stylist = checker.stylist();
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -261,7 +261,7 @@ pub(crate) fn fix_unnecessary_literal_set(checker: &Checker, expr: &Expr) -> Res
let stylist = checker.stylist();
// Expr(Call(List|Tuple)))) -> Expr(Set)))
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -302,7 +302,7 @@ pub(crate) fn fix_unnecessary_literal_dict(checker: &Checker, expr: &Expr) -> Re
let stylist = checker.stylist();
// Expr(Call(List|Tuple)))) -> Expr(Dict)))
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -371,7 +371,7 @@ pub(crate) fn fix_unnecessary_collection_call(checker: &Checker, expr: &Expr) ->
let stylist = checker.stylist();
// Expr(Call("list" | "tuple" | "dict")))) -> Expr(List|Tuple|Dict)
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call(&tree)?;
let name = match_name(&call.func)?;
@@ -522,7 +522,7 @@ pub(crate) fn fix_unnecessary_literal_within_tuple_call(
stylist: &Stylist,
expr: &Expr,
) -> Result<Edit> {
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -572,7 +572,7 @@ pub(crate) fn fix_unnecessary_literal_within_list_call(
stylist: &Stylist,
expr: &Expr,
) -> Result<Edit> {
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -625,7 +625,7 @@ pub(crate) fn fix_unnecessary_list_call(
expr: &Expr,
) -> Result<Edit> {
// Expr(Call(List|Tuple)))) -> Expr(List|Tuple)))
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -646,7 +646,7 @@ pub(crate) fn fix_unnecessary_call_around_sorted(
stylist: &Stylist,
expr: &Expr,
) -> Result<Edit> {
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let outer_call = match_call_mut(&mut tree)?;
let inner_call = match &outer_call.args[..] {
@@ -758,7 +758,7 @@ pub(crate) fn fix_unnecessary_double_cast_or_process(
stylist: &Stylist,
expr: &Expr,
) -> Result<Edit> {
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let outer_call = match_call_mut(&mut tree)?;
@@ -789,7 +789,7 @@ pub(crate) fn fix_unnecessary_comprehension(
stylist: &Stylist,
expr: &Expr,
) -> Result<Edit> {
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
match &tree {
@@ -878,7 +878,7 @@ pub(crate) fn fix_unnecessary_map(
parent: Option<&Expr>,
object_type: ObjectType,
) -> Result<Edit> {
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -1021,7 +1021,7 @@ pub(crate) fn fix_unnecessary_literal_within_dict_call(
stylist: &Stylist,
expr: &Expr,
) -> Result<Edit> {
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;
let arg = match_arg(call)?;
@@ -1041,7 +1041,7 @@ pub(crate) fn fix_unnecessary_comprehension_any_all(
expr: &Expr,
) -> Result<Fix> {
// Expr(ListComp) -> Expr(GeneratorExp)
let module_text = locator.slice(expr.range());
let module_text = locator.slice(expr);
let mut tree = match_expression(module_text)?;
let call = match_call_mut(&mut tree)?;


@@ -89,7 +89,7 @@ pub(crate) fn unnecessary_map(
ObjectType::Generator => {
// Exclude the parent if already matched by other arms.
if parent
.and_then(ruff_python_ast::Expr::as_call_expr)
.and_then(Expr::as_call_expr)
.and_then(|call| call.func.as_name_expr())
.is_some_and(|name| matches!(name.id.as_str(), "list" | "set" | "dict"))
{
@@ -122,7 +122,7 @@ pub(crate) fn unnecessary_map(
// Only flag, e.g., `list(map(lambda x: x + 1, iterable))`.
let [Expr::Call(ast::ExprCall {
func,
arguments: Arguments { args, .. },
arguments: Arguments { args, keywords, .. },
..
})] = args
else {
@@ -133,6 +133,10 @@ pub(crate) fn unnecessary_map(
return;
}
if !keywords.is_empty() {
return;
}
let Some(argument) = helpers::first_argument_with_matching_function("map", func, args)
else {
return;
@@ -163,13 +167,21 @@ pub(crate) fn unnecessary_map(
// Only flag, e.g., `dict(map(lambda v: (v, v ** 2), values))`.
let [Expr::Call(ast::ExprCall {
func,
arguments: Arguments { args, .. },
arguments: Arguments { args, keywords, .. },
..
})] = args
else {
return;
};
if args.len() != 2 {
return;
}
if !keywords.is_empty() {
return;
}
let Some(argument) = helpers::first_argument_with_matching_function("map", func, args)
else {
return;


@@ -55,7 +55,7 @@ pub(crate) fn duplicate_union_member<'a>(checker: &mut Checker, expr: &'a Expr)
// Replace the parent with its non-duplicate child.
let child = if expr == left.as_ref() { right } else { left };
diagnostic.set_fix(Fix::automatic(Edit::range_replacement(
checker.locator().slice(child.range()).to_string(),
checker.locator().slice(child.as_ref()).to_string(),
parent.range(),
)));
}


@@ -8,6 +8,7 @@ use ruff_python_ast::{self as ast, Expr};
use ruff_python_semantic::SemanticModel;
use ruff_text_size::Ranged;
use crate::autofix::snippet::SourceCodeSnippet;
use crate::{checkers::ast::Checker, rules::flake8_pyi::helpers::traverse_union};
/// ## What it does
@@ -38,7 +39,7 @@ use crate::{checkers::ast::Checker, rules::flake8_pyi::helpers::traverse_union};
/// ```
#[violation]
pub struct RedundantLiteralUnion {
literal: String,
literal: SourceCodeSnippet,
builtin_type: ExprType,
}
@@ -49,7 +50,11 @@ impl Violation for RedundantLiteralUnion {
literal,
builtin_type,
} = self;
format!("`Literal[{literal}]` is redundant in a union with `{builtin_type}`",)
if let Some(literal) = literal.full_display() {
format!("`Literal[{literal}]` is redundant in a union with `{builtin_type}`")
} else {
format!("`Literal` is redundant in a union with `{builtin_type}`")
}
}
}
@@ -88,7 +93,7 @@ pub(crate) fn redundant_literal_union<'a>(checker: &mut Checker, union: &'a Expr
if builtin_types_in_union.contains(&constant_type) {
checker.diagnostics.push(Diagnostic::new(
RedundantLiteralUnion {
literal: checker.locator().slice(literal_expr.range()).to_string(),
literal: SourceCodeSnippet::from_str(checker.locator().slice(literal_expr)),
builtin_type: constant_type,
},
literal_expr.range(),


@@ -249,7 +249,7 @@ fn is_valid_default_value_with_annotation(
..
}) = left.as_ref()
{
return locator.slice(left.range()).len() <= 10;
return locator.slice(left.as_ref()).len() <= 10;
} else if let Expr::UnaryOp(ast::ExprUnaryOp {
op: UnaryOp::USub,
operand,
@@ -262,7 +262,7 @@ fn is_valid_default_value_with_annotation(
..
}) = operand.as_ref()
{
return locator.slice(operand.range()).len() <= 10;
return locator.slice(operand.as_ref()).len() <= 10;
}
}
}


@@ -63,7 +63,7 @@ pub(crate) fn unnecessary_literal_union<'a>(checker: &mut Checker, expr: &'a Exp
UnnecessaryLiteralUnion {
members: literal_exprs
.into_iter()
.map(|literal_expr| checker.locator().slice(literal_expr.range()).to_string())
.map(|expr| checker.locator().slice(expr.as_ref()).to_string())
.collect(),
},
expr.range(),


@@ -78,7 +78,7 @@ pub(crate) fn unnecessary_type_union<'a>(checker: &mut Checker, union: &'a Expr)
UnnecessaryTypeUnion {
members: type_exprs
.into_iter()
.map(|type_expr| checker.locator().slice(type_expr.range()).to_string())
.map(|type_expr| checker.locator().slice(type_expr.as_ref()).to_string())
.collect(),
is_pep604_union,
},


@@ -411,7 +411,7 @@ fn to_pytest_raises_args<'a>(
"assertRaises" | "failUnlessRaises" => {
match (arguments.args.as_slice(), arguments.keywords.as_slice()) {
// Ex) `assertRaises(Exception)`
([arg], []) => Cow::Borrowed(checker.locator().slice(arg.range())),
([arg], []) => Cow::Borrowed(checker.locator().slice(arg)),
// Ex) `assertRaises(expected_exception=Exception)`
([], [kwarg])
if kwarg
@@ -429,8 +429,8 @@ fn to_pytest_raises_args<'a>(
// Ex) `assertRaisesRegex(Exception, regex)`
([arg1, arg2], []) => Cow::Owned(format!(
"{}, match={}",
checker.locator().slice(arg1.range()),
checker.locator().slice(arg2.range())
checker.locator().slice(arg1),
checker.locator().slice(arg2)
)),
// Ex) `assertRaisesRegex(Exception, expected_regex=regex)`
([arg], [kwarg])
@@ -441,7 +441,7 @@ fn to_pytest_raises_args<'a>(
{
Cow::Owned(format!(
"{}, match={}",
checker.locator().slice(arg.range()),
checker.locator().slice(arg),
checker.locator().slice(kwarg.value.range())
))
}

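The `match` above maps `assertRaises`-family calls onto `pytest.raises` arguments. A minimal Python sketch of that mapping (a hypothetical helper for illustration, not ruff's API; `method` and `args` are assumed inputs):

```python
def to_pytest_raises_args(method, args):
    """Map a unittest-style assertion's positional arguments onto the
    argument string for `pytest.raises` (illustrative sketch only)."""
    if method in ("assertRaises", "failUnlessRaises"):
        if len(args) == 1:  # Ex) assertRaises(Exception)
            return args[0]
    elif method == "assertRaisesRegex":
        if len(args) == 2:  # Ex) assertRaisesRegex(Exception, regex)
            return f"{args[0]}, match={args[1]}"
    return None  # unsupported shape: leave the call alone

print(to_pytest_raises_args("assertRaisesRegex", ["Exception", "regex"]))
```

The keyword-argument arms in the Rust code follow the same shape, matching on `kwarg` names instead of positions.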

@@ -537,7 +537,7 @@ fn unnecessary_assign(checker: &mut Checker, stack: &Stack) {
edits::delete_stmt(stmt, None, checker.locator(), checker.indexer());
// Replace the `x = 1` statement with `return 1`.
let content = checker.locator().slice(assign.range());
let content = checker.locator().slice(assign);
let equals_index = content
.find('=')
.ok_or(anyhow::anyhow!("expected '=' in assignment statement"))?;

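The fix above splits the assignment text on its first `=` to build the replacement `return` statement. A rough Python sketch of that string surgery (simplified: it assumes the source is spaced like `x = 1`):

```python
def assign_to_return(content):
    """Turn an assignment like `x = 1` into `return 1` by keeping
    everything after the first `=` (sketch of the fix above)."""
    equals_index = content.find("=")
    if equals_index == -1:
        raise ValueError("expected '=' in assignment statement")
    # The slice keeps the leading space, yielding `return <value>`.
    return "return" + content[equals_index + 1:]

assert assign_to_return("x = 1") == "return 1"
```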

@@ -1,6 +1,7 @@
use ruff_python_ast::{self as ast, Arguments, Constant, Expr};
use ruff_text_size::{Ranged, TextRange};
use crate::autofix::snippet::SourceCodeSnippet;
use ruff_diagnostics::{AlwaysAutofixableViolation, AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::is_const_none;
@@ -35,8 +36,8 @@ use crate::registry::AsRule;
/// - [Python documentation: `os.environ`](https://docs.python.org/3/library/os.html#os.environ)
#[violation]
pub struct UncapitalizedEnvironmentVariables {
expected: String,
original: String,
expected: SourceCodeSnippet,
actual: SourceCodeSnippet,
}
impl Violation for UncapitalizedEnvironmentVariables {
@@ -44,13 +45,21 @@ impl Violation for UncapitalizedEnvironmentVariables {
#[derive_message_formats]
fn message(&self) -> String {
let UncapitalizedEnvironmentVariables { expected, original } = self;
format!("Use capitalized environment variable `{expected}` instead of `{original}`")
let UncapitalizedEnvironmentVariables { expected, actual } = self;
if let (Some(expected), Some(actual)) = (expected.full_display(), actual.full_display()) {
format!("Use capitalized environment variable `{expected}` instead of `{actual}`")
} else {
format!("Use capitalized environment variable")
}
}
fn autofix_title(&self) -> Option<String> {
let UncapitalizedEnvironmentVariables { expected, original } = self;
Some(format!("Replace `{original}` with `{expected}`"))
let UncapitalizedEnvironmentVariables { expected, actual } = self;
if let (Some(expected), Some(actual)) = (expected.full_display(), actual.full_display()) {
Some(format!("Replace `{actual}` with `{expected}`"))
} else {
Some("Capitalize environment variable".to_string())
}
}
}
@@ -77,20 +86,28 @@ impl Violation for UncapitalizedEnvironmentVariables {
/// - [Python documentation: `dict.get`](https://docs.python.org/3/library/stdtypes.html#dict.get)
#[violation]
pub struct DictGetWithNoneDefault {
expected: String,
original: String,
expected: SourceCodeSnippet,
actual: SourceCodeSnippet,
}
impl AlwaysAutofixableViolation for DictGetWithNoneDefault {
#[derive_message_formats]
fn message(&self) -> String {
let DictGetWithNoneDefault { expected, original } = self;
format!("Use `{expected}` instead of `{original}`")
let DictGetWithNoneDefault { expected, actual } = self;
if let (Some(expected), Some(actual)) = (expected.full_display(), actual.full_display()) {
format!("Use `{expected}` instead of `{actual}`")
} else {
format!("Use `dict.get()` without default value")
}
}
fn autofix_title(&self) -> String {
let DictGetWithNoneDefault { expected, original } = self;
format!("Replace `{original}` with `{expected}`")
let DictGetWithNoneDefault { expected, actual } = self;
if let (Some(expected), Some(actual)) = (expected.full_display(), actual.full_display()) {
format!("Replace `{actual}` with `{expected}`")
} else {
"Remove default value".to_string()
}
}
}
@@ -141,8 +158,8 @@ pub(crate) fn use_capital_environment_variables(checker: &mut Checker, expr: &Ex
checker.diagnostics.push(Diagnostic::new(
UncapitalizedEnvironmentVariables {
expected: capital_env_var,
original: env_var.clone(),
expected: SourceCodeSnippet::new(capital_env_var),
actual: SourceCodeSnippet::new(env_var.clone()),
},
arg.range(),
));
@@ -181,8 +198,8 @@ fn check_os_environ_subscript(checker: &mut Checker, expr: &Expr) {
let mut diagnostic = Diagnostic::new(
UncapitalizedEnvironmentVariables {
expected: capital_env_var.clone(),
original: env_var.clone(),
expected: SourceCodeSnippet::new(capital_env_var.clone()),
actual: SourceCodeSnippet::new(env_var.clone()),
},
slice.range(),
);
@@ -238,15 +255,15 @@ pub(crate) fn dict_get_with_none_default(checker: &mut Checker, expr: &Expr) {
let expected = format!(
"{}({})",
checker.locator().slice(func.range()),
checker.locator().slice(key.range())
checker.locator().slice(func.as_ref()),
checker.locator().slice(key)
);
let original = checker.locator().slice(expr.range()).to_string();
let actual = checker.locator().slice(expr);
let mut diagnostic = Diagnostic::new(
DictGetWithNoneDefault {
expected: expected.clone(),
original,
expected: SourceCodeSnippet::new(expected.clone()),
actual: SourceCodeSnippet::from_str(actual),
},
expr.range(),
);

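The `DictGetWithNoneDefault` rule above rests on a simple equivalence: `dict.get` already defaults to `None`, so spelling the default out is redundant. Illustrated:

```python
d = {"present": 1}

# `dict.get(key, None)` spells out the default that `get` already
# uses, so the two forms are always equivalent.
assert d.get("present", None) == d.get("present") == 1
assert d.get("missing", None) is None
assert d.get("missing") is None
```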

@@ -155,7 +155,7 @@ pub(crate) fn explicit_true_false_in_ifexpr(
if checker.patch(diagnostic.kind.rule()) {
if test.is_compare_expr() {
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
checker.locator().slice(test.range()).to_string(),
checker.locator().slice(test).to_string(),
expr.range(),
)));
} else if checker.semantic().is_builtin("bool") {

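The branch above fires when the `if`-expression's test is itself a comparison, in which case the whole ternary can be replaced by the test. A small illustration of why that substitution is safe:

```python
a, b = 1, 1
# SIM210: `True if test else False` is just the test's own truth value;
# a comparison already evaluates to a bool, so no `bool(...)` is needed.
assert (True if a == b else False) == (a == b)
b = 2
assert (True if a == b else False) == (a == b)
```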

@@ -273,7 +273,7 @@ pub(crate) fn double_negation(checker: &mut Checker, expr: &Expr, op: UnaryOp, o
if checker.patch(diagnostic.kind.rule()) {
if checker.semantic().in_boolean_test() {
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
checker.locator().slice(operand.range()).to_string(),
checker.locator().slice(operand.as_ref()).to_string(),
expr.range(),
)));
} else if checker.semantic().is_builtin("bool") {

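The double-negation fix above drops `not not` entirely when the expression sits in a boolean context, and otherwise (per the `is_builtin("bool")` branch) wraps the operand in `bool(...)`. Both rewrites preserve the value:

```python
x = 5
# In a boolean context `not not x` can become plain `x`; elsewhere
# `bool(x)` is the equivalent explicit form.
assert (not not x) == bool(x)
assert (not not 0) == bool(0) == False
```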

@@ -13,7 +13,7 @@ pub(super) fn trailing_comma(
locator: &Locator,
source_type: PySourceType,
) -> TrailingComma {
let contents = locator.slice(stmt.range());
let contents = locator.slice(stmt);
let mut count = 0u32;
let mut trailing_comma = TrailingComma::Absent;
for (tok, _) in lexer::lex_starts_at(contents, source_type.as_mode(), stmt.start()).flatten() {


@@ -52,5 +52,6 @@ pub mod pyflakes;
pub mod pygrep_hooks;
pub mod pylint;
pub mod pyupgrade;
pub mod refurb;
pub mod ruff;
pub mod tryceratops;


@@ -3,6 +3,7 @@ use ruff_python_ast::Stmt;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::identifier::Identifier;
use ruff_python_semantic::analyze::visibility;
use ruff_python_semantic::{Scope, ScopeKind};
use crate::settings::types::IdentifierPattern;
@@ -51,7 +52,7 @@ pub(crate) fn dunder_function_name(
if matches!(scope.kind, ScopeKind::Class(_)) {
return None;
}
if !(name.starts_with("__") && name.ends_with("__")) {
if !visibility::is_magic(name) {
return None;
}
// Allowed under PEP 562 (https://peps.python.org/pep-0562/).

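The diff replaces an inline `starts_with("__") && ends_with("__")` check with `visibility::is_magic`. Based on the code it replaces, the check amounts to the following sketch (the real helper may handle additional edge cases):

```python
def is_magic(name):
    """Sketch of the dunder-name check: a leading and trailing
    double underscore, as in `__getattr__`."""
    return name.startswith("__") and name.endswith("__")

assert is_magic("__getattr__")
assert not is_magic("_private")
```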

@@ -103,7 +103,7 @@ pub(crate) fn incorrect_dict_iterator(checker: &mut Checker, stmt_for: &ast::Stm
if checker.patch(diagnostic.kind.rule()) {
let replace_attribute = Edit::range_replacement("values".to_string(), attr.range());
let replace_target = Edit::range_replacement(
checker.locator().slice(value.range()).to_string(),
checker.locator().slice(value).to_string(),
stmt_for.target.range(),
);
diagnostic.set_fix(Fix::suggested_edits(replace_attribute, [replace_target]));
@@ -121,7 +121,7 @@ pub(crate) fn incorrect_dict_iterator(checker: &mut Checker, stmt_for: &ast::Stm
if checker.patch(diagnostic.kind.rule()) {
let replace_attribute = Edit::range_replacement("keys".to_string(), attr.range());
let replace_target = Edit::range_replacement(
checker.locator().slice(key.range()).to_string(),
checker.locator().slice(key).to_string(),
stmt_for.target.range(),
);
diagnostic.set_fix(Fix::suggested_edits(replace_attribute, [replace_target]));

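The two fixes above cover the symmetric cases of iterating `.items()` while using only one element of each pair. The underlying equivalence:

```python
d = {"a": 1, "b": 2}
# When only the values are used, unpacking `.items()` does extra work;
# the fix rewrites the loop to iterate `.values()` directly.
assert [v for _, v in d.items()] == list(d.values())
# Symmetrically, when only the keys are used, `.keys()` suffices.
assert [k for k, _ in d.items()] == list(d.keys())
```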

@@ -11,7 +11,9 @@ use crate::settings::Settings;
/// ## Why is this bad?
/// For flowing long blocks of text (docstrings or comments), overlong lines
/// can hurt readability. [PEP 8], for example, recommends that such lines be
/// limited to 72 characters.
/// limited to 72 characters, while this rule enforces the limit specified by
/// the [`pycodestyle.max-doc-length`] setting. (If no value is provided, this
/// rule will be ignored, even if it's added to your `--select` list.)
///
/// In the context of this rule, a "doc line" is defined as a line consisting
/// of either a standalone comment or a standalone string, like a docstring.
@@ -22,11 +24,6 @@ use crate::settings::Settings;
/// characters), and lines that end with a URL (as long as the URL starts
/// before the line-length threshold).
///
/// Note that this rule will only be enforced after providing a value for
/// [`pycodestyle.max-doc-length`] in your Ruff configuration file. If no
/// value is provided, this rule will be ignored, even if it's added to your
/// `--select` list.
///
/// If [`pycodestyle.ignore-overlong-task-comments`] is `true`, this rule will
/// also ignore comments that start with any of the specified [`task-tags`]
/// (e.g., `# TODO:`).


@@ -10,7 +10,9 @@ use crate::settings::Settings;
///
/// ## Why is this bad?
/// Overlong lines can hurt readability. [PEP 8], for example, recommends
/// limiting lines to 79 characters.
/// limiting lines to 79 characters. By default, this rule enforces a limit
/// of 88 characters for compatibility with Black, though that limit is
/// configurable via the [`line-length`] setting.
///
/// In the interest of pragmatism, this rule makes a few exceptions when
/// determining whether a line is overlong. Namely, it ignores lines that
@@ -36,6 +38,7 @@ use crate::settings::Settings;
/// ```
///
/// ## Options
/// - `line-length`
/// - `task-tags`
/// - `pycodestyle.ignore-overlong-task-comments`
///


@@ -17,7 +17,7 @@ pub(super) fn remove_unused_format_arguments_from_dict(
locator: &Locator,
stylist: &Stylist,
) -> Result<Edit> {
let source_code = locator.slice(dict.range());
let source_code = locator.slice(dict);
transform_expression(source_code, stylist, |mut expression| {
let dict = match_dict(&mut expression)?;
@@ -41,7 +41,7 @@ pub(super) fn remove_unused_keyword_arguments_from_format_call(
locator: &Locator,
stylist: &Stylist,
) -> Result<Edit> {
let source_code = locator.slice(call.range());
let source_code = locator.slice(call);
transform_expression(source_code, stylist, |mut expression| {
let call = match_call_mut(&mut expression)?;
@@ -69,7 +69,7 @@ pub(crate) fn remove_unused_positional_arguments_from_format_call(
locator: &Locator,
stylist: &Stylist,
) -> Result<Edit> {
let source_code = locator.slice(call.range());
let source_code = locator.slice(call);
transform_expression(source_code, stylist, |mut expression| {
let call = match_call_mut(&mut expression)?;

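The three autofix helpers above all delete format arguments that the template never references, which is behavior-preserving by construction:

```python
# An argument never referenced by the template can be dropped without
# changing the result, which is exactly what the edits above do.
assert "{a}".format(a=1, b=2) == "{a}".format(a=1) == "1"
assert "{0}".format("x", "unused") == "{0}".format("x") == "x"
```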

@@ -53,7 +53,7 @@ fn find_useless_f_strings<'a>(
locator: &'a Locator,
source_type: PySourceType,
) -> impl Iterator<Item = (TextRange, TextRange)> + 'a {
let contents = locator.slice(expr.range());
let contents = locator.slice(expr);
lexer::lex_starts_at(contents, source_type.as_mode(), expr.start())
.flatten()
.filter_map(|(tok, range)| match tok {

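The helper above re-lexes the expression's source because an f-string may be one segment of an implicit concatenation; each placeholder-free segment is a candidate for the fix. The semantics at play:

```python
# F541: an f-string with no placeholders is an ordinary string with
# extra ceremony; the `f` prefix can be removed with no behavior change.
assert f"no placeholders" == "no placeholders"
# Implicit concatenation is why the lexer above re-scans the segments.
assert "start " f"middle" " end" == "start middle end"
```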

@@ -3,6 +3,7 @@ use std::hash::BuildHasherDefault;
use ruff_python_ast::Expr;
use rustc_hash::{FxHashMap, FxHashSet};
use crate::autofix::snippet::SourceCodeSnippet;
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::comparable::ComparableExpr;
@@ -43,7 +44,7 @@ use crate::registry::{AsRule, Rule};
/// - [Python documentation: Dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries)
#[violation]
pub struct MultiValueRepeatedKeyLiteral {
name: String,
name: SourceCodeSnippet,
}
impl Violation for MultiValueRepeatedKeyLiteral {
@@ -52,12 +53,20 @@ impl Violation for MultiValueRepeatedKeyLiteral {
#[derive_message_formats]
fn message(&self) -> String {
let MultiValueRepeatedKeyLiteral { name } = self;
format!("Dictionary key literal `{name}` repeated")
if let Some(name) = name.full_display() {
format!("Dictionary key literal `{name}` repeated")
} else {
format!("Dictionary key literal repeated")
}
}
fn autofix_title(&self) -> Option<String> {
let MultiValueRepeatedKeyLiteral { name } = self;
Some(format!("Remove repeated key literal `{name}`"))
if let Some(name) = name.full_display() {
Some(format!("Remove repeated key literal `{name}`"))
} else {
Some(format!("Remove repeated key literal"))
}
}
}
@@ -92,7 +101,7 @@ impl Violation for MultiValueRepeatedKeyLiteral {
/// - [Python documentation: Dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries)
#[violation]
pub struct MultiValueRepeatedKeyVariable {
name: String,
name: SourceCodeSnippet,
}
impl Violation for MultiValueRepeatedKeyVariable {
@@ -101,12 +110,20 @@ impl Violation for MultiValueRepeatedKeyVariable {
#[derive_message_formats]
fn message(&self) -> String {
let MultiValueRepeatedKeyVariable { name } = self;
format!("Dictionary key `{name}` repeated")
if let Some(name) = name.full_display() {
format!("Dictionary key `{name}` repeated")
} else {
format!("Dictionary key repeated")
}
}
fn autofix_title(&self) -> Option<String> {
let MultiValueRepeatedKeyVariable { name } = self;
Some(format!("Remove repeated key `{name}`"))
if let Some(name) = name.full_display() {
Some(format!("Remove repeated key `{name}`"))
} else {
Some(format!("Remove repeated key"))
}
}
}
@@ -135,7 +152,7 @@ pub(crate) fn repeated_keys(checker: &mut Checker, keys: &[Option<Expr>], values
if checker.enabled(Rule::MultiValueRepeatedKeyLiteral) {
let mut diagnostic = Diagnostic::new(
MultiValueRepeatedKeyLiteral {
name: checker.locator().slice(key.range()).to_string(),
name: SourceCodeSnippet::from_str(checker.locator().slice(key)),
},
key.range(),
);
@@ -154,7 +171,7 @@ pub(crate) fn repeated_keys(checker: &mut Checker, keys: &[Option<Expr>], values
if checker.enabled(Rule::MultiValueRepeatedKeyVariable) {
let mut diagnostic = Diagnostic::new(
MultiValueRepeatedKeyVariable {
name: checker.locator().slice(key.range()).to_string(),
name: SourceCodeSnippet::from_str(checker.locator().slice(key)),
},
key.range(),
);

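The repeated-key rules above flag a genuine hazard rather than a style nit, since Python silently keeps only the last value:

```python
# A repeated key is almost always a mistake: the last value silently
# wins, and the earlier entry is dead code.
d = {"key": 1, "key": 2}
assert d == {"key": 2}
assert len(d) == 1
```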

@@ -94,12 +94,8 @@ pub(crate) fn call(checker: &mut Checker, string: &str, range: TextRange) {
pub(crate) fn percent(checker: &mut Checker, expr: &Expr) {
// Grab each string segment (in case there's an implicit concatenation).
let mut strings: Vec<TextRange> = vec![];
for (tok, range) in lexer::lex_starts_at(
checker.locator().slice(expr.range()),
Mode::Module,
expr.start(),
)
.flatten()
for (tok, range) in
lexer::lex_starts_at(checker.locator().slice(expr), Mode::Module, expr.start()).flatten()
{
if tok.is_string() {
strings.push(range);


@@ -211,7 +211,7 @@ fn is_valid_dict(
/// PLE1307
pub(crate) fn bad_string_format_type(checker: &mut Checker, expr: &Expr, right: &Expr) {
// Grab each string segment (in case there's an implicit concatenation).
let content = checker.locator().slice(expr.range());
let content = checker.locator().slice(expr);
let mut strings: Vec<TextRange> = vec![];
for (tok, range) in
lexer::lex_starts_at(content, checker.source_type.as_mode(), expr.start()).flatten()

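The rule above exists because a `%`-format type mismatch is only discovered at runtime; catching it statically turns a crash into a lint:

```python
# PLE1307: a `%`-conversion whose operand has the wrong type fails at
# runtime rather than at lint time.
try:
    "%d" % "not a number"
    raised = False
except TypeError:
    raised = True
assert raised
assert "%d" % 42 == "42"
```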

@@ -1,5 +1,6 @@
use itertools::Itertools;
use crate::autofix::snippet::SourceCodeSnippet;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{CmpOp, Expr};
@@ -24,19 +25,18 @@ use crate::rules::pylint::helpers::CmpOpExt;
/// - [Python documentation: Comparisons](https://docs.python.org/3/reference/expressions.html#comparisons)
#[violation]
pub struct ComparisonWithItself {
left: String,
op: CmpOp,
right: String,
actual: SourceCodeSnippet,
}
impl Violation for ComparisonWithItself {
#[derive_message_formats]
fn message(&self) -> String {
let ComparisonWithItself { left, op, right } = self;
format!(
"Name compared with itself, consider replacing `{left} {} {right}`",
CmpOpExt::from(op)
)
let ComparisonWithItself { actual } = self;
if let Some(actual) = actual.full_display() {
format!("Name compared with itself, consider replacing `{actual}`")
} else {
format!("Name compared with itself")
}
}
}
@@ -55,11 +55,15 @@ pub(crate) fn comparison_with_itself(
match (left, right) {
// Ex) `foo == foo`
(Expr::Name(left_name), Expr::Name(right_name)) if left_name.id == right_name.id => {
let actual = format!(
"{} {} {}",
checker.locator().slice(left),
CmpOpExt::from(op),
checker.locator().slice(right)
);
checker.diagnostics.push(Diagnostic::new(
ComparisonWithItself {
left: checker.generator().expr(left),
op: *op,
right: checker.generator().expr(right),
actual: SourceCodeSnippet::new(actual),
},
left_name.range(),
));
@@ -99,11 +103,15 @@ pub(crate) fn comparison_with_itself(
"id" | "len" | "type" | "int" | "bool" | "str" | "repr" | "bytes"
) && checker.semantic().is_builtin(&left_func.id)
{
let actual = format!(
"{} {} {}",
checker.locator().slice(left),
CmpOpExt::from(op),
checker.locator().slice(right)
);
checker.diagnostics.push(Diagnostic::new(
ComparisonWithItself {
left: checker.generator().expr(left),
op: *op,
right: checker.generator().expr(right),
actual: SourceCodeSnippet::new(actual),
},
left_call.range(),
));

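Both arms above catch comparisons that are tautologies: a bare name against itself, and a side-effect-free builtin applied twice to the same argument:

```python
foo = 3
# `foo == foo` is always true; `len(x) == len(x)` evaluates a pure
# builtin twice for the same answer. Both patterns are flagged.
assert foo == foo
assert len("abc") == len("abc")
```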

@@ -46,8 +46,10 @@ impl Violation for NestedMinMax {
const AUTOFIX: AutofixKind = AutofixKind::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
format!("Nested `{}` calls can be flattened", self.func)
let NestedMinMax { func } = self;
format!("Nested `{func}` calls can be flattened")
}
fn autofix_title(&self) -> Option<String> {


@@ -187,10 +187,10 @@ fn merged_membership_test(
BoolOp::Or => "in",
BoolOp::And => "not in",
};
let left = locator.slice(left.range());
let left = locator.slice(left);
let members = comparators
.iter()
.map(|comparator| locator.slice(comparator.range()))
.map(|comparator| locator.slice(comparator))
.join(", ");
format!("{left} {op} ({members})",)
}

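The `in`/`not in` choice above follows from De Morgan-style equivalences over the chained comparisons:

```python
x = 2
# Equality checks joined with `or` collapse to `in`; inequality
# checks joined with `and` collapse to `not in`.
assert (x == 1 or x == 2) == (x in (1, 2))
assert (x != 1 and x != 3) == (x not in (1, 3))
```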

@@ -2,6 +2,7 @@ use itertools::Itertools;
use ruff_python_ast::{self as ast, Arguments, BoolOp, Expr};
use rustc_hash::{FxHashMap, FxHashSet};
use crate::autofix::snippet::SourceCodeSnippet;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::hashable::HashableExpr;
@@ -44,19 +45,27 @@ use crate::settings::types::PythonVersion;
/// - [Python documentation: `isinstance`](https://docs.python.org/3/library/functions.html#isinstance)
#[violation]
pub struct RepeatedIsinstanceCalls {
expr: String,
expression: SourceCodeSnippet,
}
impl AlwaysAutofixableViolation for RepeatedIsinstanceCalls {
#[derive_message_formats]
fn message(&self) -> String {
let RepeatedIsinstanceCalls { expr } = self;
format!("Merge `isinstance` calls: `{expr}`")
let RepeatedIsinstanceCalls { expression } = self;
if let Some(expression) = expression.full_display() {
format!("Merge `isinstance` calls: `{expression}`")
} else {
format!("Merge `isinstance` calls")
}
}
fn autofix_title(&self) -> String {
let RepeatedIsinstanceCalls { expr } = self;
format!("Replace with `{expr}`")
let RepeatedIsinstanceCalls { expression } = self;
if let Some(expression) = expression.full_display() {
format!("Replace with `{expression}`")
} else {
format!("Replace with merged `isinstance` call")
}
}
}
@@ -117,8 +126,12 @@ pub(crate) fn repeated_isinstance_calls(
.sorted(),
checker.settings.target_version,
);
let mut diagnostic =
Diagnostic::new(RepeatedIsinstanceCalls { expr: call.clone() }, expr.range());
let mut diagnostic = Diagnostic::new(
RepeatedIsinstanceCalls {
expression: SourceCodeSnippet::new(call.clone()),
},
expr.range(),
);
if checker.patch(diagnostic.kind.rule()) {
diagnostic.set_fix(Fix::automatic(Edit::range_replacement(call, expr.range())));
}

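The merge performed above is sound because `isinstance` accepts a tuple of types, making the repeated calls redundant:

```python
value = 1.5
# Successive `isinstance` checks on the same object merge into one
# call with a tuple of types (or a `|` union on newer targets).
merged = isinstance(value, (int, float))
assert (isinstance(value, int) or isinstance(value, float)) == merged
assert merged is True
```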

@@ -1,16 +1,15 @@
use anyhow::{bail, Result};
use log::debug;
use ruff_python_ast::{
self as ast, Arguments, Constant, Expr, ExprContext, Identifier, Keyword, Stmt,
};
use ruff_text_size::{Ranged, TextRange};
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::is_dunder;
use ruff_python_ast::{
self as ast, Arguments, Constant, Expr, ExprContext, Identifier, Keyword, Stmt,
};
use ruff_python_codegen::Generator;
use ruff_python_semantic::SemanticModel;
use ruff_python_stdlib::identifiers::is_identifier;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
@@ -66,14 +65,77 @@ impl Violation for ConvertNamedTupleFunctionalToClass {
}
}
/// UP014
pub(crate) fn convert_named_tuple_functional_to_class(
checker: &mut Checker,
stmt: &Stmt,
targets: &[Expr],
value: &Expr,
) {
let Some((typename, args, keywords, base_class)) =
match_named_tuple_assign(targets, value, checker.semantic())
else {
return;
};
let fields = match (args, keywords) {
// Ex) `NamedTuple("MyType")`
([_typename], []) => vec![Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
})],
// Ex) `NamedTuple("MyType", [("a", int), ("b", str)])`
([_typename, fields], []) => {
if let Some(fields) = create_fields_from_fields_arg(fields) {
fields
} else {
debug!("Skipping `NamedTuple` \"{typename}\": unable to parse fields");
return;
}
}
// Ex) `NamedTuple("MyType", a=int, b=str)`
([_typename], keywords) => {
if let Some(fields) = create_fields_from_keywords(keywords) {
fields
} else {
debug!("Skipping `NamedTuple` \"{typename}\": unable to parse keywords");
return;
}
}
// Ex) `NamedTuple()`
_ => {
debug!("Skipping `NamedTuple` \"{typename}\": mixed fields and keywords");
return;
}
};
let mut diagnostic = Diagnostic::new(
ConvertNamedTupleFunctionalToClass {
name: typename.to_string(),
},
stmt.range(),
);
if checker.patch(diagnostic.kind.rule()) {
// TODO(charlie): Preserve indentation, to remove the first-column requirement.
if checker.locator().is_at_start_of_line(stmt.start()) {
diagnostic.set_fix(convert_to_class(
stmt,
typename,
fields,
base_class,
checker.generator(),
));
}
}
checker.diagnostics.push(diagnostic);
}
/// Return the typename, args, keywords, and base class.
fn match_named_tuple_assign<'a>(
targets: &'a [Expr],
value: &'a Expr,
semantic: &SemanticModel,
) -> Option<(&'a str, &'a [Expr], &'a [Keyword], &'a Expr)> {
let target = targets.get(0)?;
let Expr::Name(ast::ExprName { id: typename, .. }) = target else {
let [Expr::Name(ast::ExprName { id: typename, .. })] = targets else {
return None;
};
let Expr::Call(ast::ExprCall {
@@ -90,13 +152,12 @@ fn match_named_tuple_assign<'a>(
Some((typename, args, keywords, func))
}
/// Generate a [`Stmt::AnnAssign`] representing the provided property
/// definition.
fn create_property_assignment_stmt(property: &str, annotation: &Expr) -> Stmt {
/// Generate a [`Stmt::AnnAssign`] representing the provided field definition.
fn create_field_assignment_stmt(field: &str, annotation: &Expr) -> Stmt {
ast::StmtAnnAssign {
target: Box::new(
ast::ExprName {
id: property.into(),
id: field.into(),
ctx: ExprContext::Load,
range: TextRange::default(),
}
@@ -110,56 +171,46 @@ fn create_property_assignment_stmt(property: &str, annotation: &Expr) -> Stmt {
.into()
}
/// Create a list of property assignments from the `NamedTuple` fields argument.
fn create_properties_from_fields_arg(fields: &Expr) -> Result<Vec<Stmt>> {
let Expr::List(ast::ExprList { elts, .. }) = &fields else {
bail!("Expected argument to be `Expr::List`");
};
/// Create a list of field assignments from the `NamedTuple` fields argument.
fn create_fields_from_fields_arg(fields: &Expr) -> Option<Vec<Stmt>> {
let ast::ExprList { elts, .. } = fields.as_list_expr()?;
if elts.is_empty() {
let node = Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
});
return Ok(vec![node]);
Some(vec![node])
} else {
elts.iter()
.map(|field| {
let ast::ExprTuple { elts, .. } = field.as_tuple_expr()?;
let [field, annotation] = elts.as_slice() else {
return None;
};
let field = field.as_constant_expr()?;
let Constant::Str(ast::StringConstant { value: field, .. }) = &field.value else {
return None;
};
if !is_identifier(field) {
return None;
}
if is_dunder(field) {
return None;
}
Some(create_field_assignment_stmt(field, annotation))
})
.collect()
}
elts.iter()
.map(|field| {
let Expr::Tuple(ast::ExprTuple { elts, .. }) = &field else {
bail!("Expected `field` to be `Expr::Tuple`")
};
let [field_name, annotation] = elts.as_slice() else {
bail!("Expected `elts` to have exactly two elements")
};
let Expr::Constant(ast::ExprConstant {
value:
Constant::Str(ast::StringConstant {
value: property, ..
}),
..
}) = &field_name
else {
bail!("Expected `field_name` to be `Constant::Str`")
};
if !is_identifier(property) {
bail!("Invalid property name: {}", property)
}
if is_dunder(property) {
bail!("Cannot use dunder property name: {}", property)
}
Ok(create_property_assignment_stmt(property, annotation))
})
.collect()
}
/// Create a list of property assignments from the `NamedTuple` keyword arguments.
fn create_properties_from_keywords(keywords: &[Keyword]) -> Result<Vec<Stmt>> {
/// Create a list of field assignments from the `NamedTuple` keyword arguments.
fn create_fields_from_keywords(keywords: &[Keyword]) -> Option<Vec<Stmt>> {
keywords
.iter()
.map(|keyword| {
let Keyword { arg, value, .. } = keyword;
let Some(arg) = arg else {
bail!("Expected `keyword` to have an `arg`")
};
Ok(create_property_assignment_stmt(arg.as_str(), value))
keyword
.arg
.as_ref()
.map(|field| create_field_assignment_stmt(field.as_str(), &keyword.value))
})
.collect()
}
@@ -195,67 +246,3 @@ fn convert_to_class(
stmt.range(),
))
}
/// UP014
pub(crate) fn convert_named_tuple_functional_to_class(
checker: &mut Checker,
stmt: &Stmt,
targets: &[Expr],
value: &Expr,
) {
let Some((typename, args, keywords, base_class)) =
match_named_tuple_assign(targets, value, checker.semantic())
else {
return;
};
let properties = match (&args[1..], keywords) {
// Ex) NamedTuple("MyType")
([], []) => vec![Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
})],
// Ex) NamedTuple("MyType", [("a", int), ("b", str)])
([fields], []) => {
if let Ok(properties) = create_properties_from_fields_arg(fields) {
properties
} else {
debug!("Skipping `NamedTuple` \"{typename}\": unable to parse fields");
return;
}
}
// Ex) NamedTuple("MyType", a=int, b=str)
([], keywords) => {
if let Ok(properties) = create_properties_from_keywords(keywords) {
properties
} else {
debug!("Skipping `NamedTuple` \"{typename}\": unable to parse keywords");
return;
}
}
// Unfixable
_ => {
debug!("Skipping `NamedTuple` \"{typename}\": mixed fields and keywords");
return;
}
};
let mut diagnostic = Diagnostic::new(
ConvertNamedTupleFunctionalToClass {
name: typename.to_string(),
},
stmt.range(),
);
if checker.patch(diagnostic.kind.rule()) {
// TODO(charlie): Preserve indentation, to remove the first-column requirement.
if checker.locator().is_at_start_of_line(stmt.start()) {
diagnostic.set_fix(convert_to_class(
stmt,
typename,
properties,
base_class,
checker.generator(),
));
}
}
checker.diagnostics.push(diagnostic);
}

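For reference, the UP014 rewrite implemented above converts between two spellings of the same type; the functional call and the class form behave identically:

```python
from typing import NamedTuple

# UP014: the functional `NamedTuple` call and the class form define
# equivalent types; ruff rewrites the former to the latter.
Functional = NamedTuple("Point", [("x", int), ("y", int)])

class Point(NamedTuple):
    x: int
    y: int

assert tuple(Functional(1, 2)) == tuple(Point(1, 2)) == (1, 2)
assert Functional._fields == Point._fields == ("x", "y")
```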

@@ -1,6 +1,3 @@
use anyhow::{bail, Result};
use log::debug;
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::is_dunder;
@@ -64,6 +61,45 @@ impl Violation for ConvertTypedDictFunctionalToClass {
}
}
/// UP013
pub(crate) fn convert_typed_dict_functional_to_class(
checker: &mut Checker,
stmt: &Stmt,
targets: &[Expr],
value: &Expr,
) {
let Some((class_name, arguments, base_class)) =
match_typed_dict_assign(targets, value, checker.semantic())
else {
return;
};
let Some((body, total_keyword)) = match_fields_and_total(arguments) else {
return;
};
let mut diagnostic = Diagnostic::new(
ConvertTypedDictFunctionalToClass {
name: class_name.to_string(),
},
stmt.range(),
);
if checker.patch(diagnostic.kind.rule()) {
// TODO(charlie): Preserve indentation, to remove the first-column requirement.
if checker.locator().is_at_start_of_line(stmt.start()) {
diagnostic.set_fix(convert_to_class(
stmt,
class_name,
body,
total_keyword,
base_class,
checker.generator(),
));
}
}
checker.diagnostics.push(diagnostic);
}
/// Return the class name, arguments, keywords and base class for a `TypedDict`
/// assignment.
fn match_typed_dict_assign<'a>(
@@ -71,8 +107,7 @@ fn match_typed_dict_assign<'a>(
value: &'a Expr,
semantic: &SemanticModel,
) -> Option<(&'a str, &'a Arguments, &'a Expr)> {
let target = targets.get(0)?;
let Expr::Name(ast::ExprName { id: class_name, .. }) = target else {
let [Expr::Name(ast::ExprName { id: class_name, .. })] = targets else {
return None;
};
let Expr::Call(ast::ExprCall {
@@ -89,13 +124,12 @@ fn match_typed_dict_assign<'a>(
Some((class_name, arguments, func))
}
/// Generate a [`Stmt::AnnAssign`] representing the provided property
/// definition.
fn create_property_assignment_stmt(property: &str, annotation: &Expr) -> Stmt {
/// Generate a [`Stmt::AnnAssign`] representing the provided field definition.
fn create_field_assignment_stmt(field: &str, annotation: &Expr) -> Stmt {
ast::StmtAnnAssign {
target: Box::new(
ast::ExprName {
id: property.into(),
id: field.into(),
ctx: ExprContext::Load,
range: TextRange::default(),
}
@@ -109,8 +143,7 @@ fn create_property_assignment_stmt(property: &str, annotation: &Expr) -> Stmt {
.into()
}
/// Generate a `StmtKind:ClassDef` statement based on the provided body,
/// keywords and base class.
/// Generate a `StmtKind:ClassDef` statement based on the provided body, keywords, and base class.
fn create_class_def_stmt(
class_name: &str,
body: Vec<Stmt>,
@@ -135,103 +168,101 @@ fn create_class_def_stmt(
.into()
}
fn properties_from_dict_literal(keys: &[Option<Expr>], values: &[Expr]) -> Result<Vec<Stmt>> {
fn fields_from_dict_literal(keys: &[Option<Expr>], values: &[Expr]) -> Option<Vec<Stmt>> {
if keys.is_empty() {
let node = Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
});
return Ok(vec![node]);
Some(vec![node])
} else {
keys.iter()
.zip(values.iter())
.map(|(key, value)| match key {
Some(Expr::Constant(ast::ExprConstant {
value: Constant::Str(ast::StringConstant { value: field, .. }),
..
})) => {
if !is_identifier(field) {
return None;
}
if is_dunder(field) {
return None;
}
Some(create_field_assignment_stmt(field, value))
}
_ => None,
})
.collect()
}
keys.iter()
.zip(values.iter())
.map(|(key, value)| match key {
Some(Expr::Constant(ast::ExprConstant {
value:
Constant::Str(ast::StringConstant {
value: property, ..
}),
..
})) => {
if !is_identifier(property) {
bail!("Invalid property name: {}", property)
}
if is_dunder(property) {
bail!("Cannot use dunder property name: {}", property)
}
Ok(create_property_assignment_stmt(property, value))
}
_ => bail!("Expected `key` to be `Constant::Str`"),
})
.collect()
}
fn properties_from_dict_call(func: &Expr, keywords: &[Keyword]) -> Result<Vec<Stmt>> {
let Expr::Name(ast::ExprName { id, .. }) = func else {
bail!("Expected `func` to be `Expr::Name`")
};
fn fields_from_dict_call(func: &Expr, keywords: &[Keyword]) -> Option<Vec<Stmt>> {
let ast::ExprName { id, .. } = func.as_name_expr()?;
if id != "dict" {
bail!("Expected `id` to be `\"dict\"`")
return None;
}
if keywords.is_empty() {
let node = Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
});
return Ok(vec![node]);
Some(vec![node])
} else {
fields_from_keywords(keywords)
}
properties_from_keywords(keywords)
}
// Deprecated in Python 3.11, removed in Python 3.13.
fn properties_from_keywords(keywords: &[Keyword]) -> Result<Vec<Stmt>> {
fn fields_from_keywords(keywords: &[Keyword]) -> Option<Vec<Stmt>> {
if keywords.is_empty() {
let node = Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
});
return Ok(vec![node]);
return Some(vec![node]);
}
keywords
.iter()
.map(|keyword| {
if let Some(property) = &keyword.arg {
Ok(create_property_assignment_stmt(property, &keyword.value))
} else {
bail!("Expected `arg` to be `Some`")
}
keyword
.arg
.as_ref()
.map(|field| create_field_assignment_stmt(field, &keyword.value))
})
.collect()
}
fn match_properties_and_total(arguments: &Arguments) -> Result<(Vec<Stmt>, Option<&Keyword>)> {
// We don't have to handle the hybrid case, since it's not possible to pass both a
// dict and keywords. For example, the following is illegal:
// ```
// MyType = TypedDict('MyType', {'a': int, 'b': str}, a=int, b=str)
// ```
if let Some(dict) = arguments.args.get(1) {
let total = arguments.find_keyword("total");
match dict {
Expr::Dict(ast::ExprDict {
keys,
values,
range: _,
}) => Ok((properties_from_dict_literal(keys, values)?, total)),
Expr::Call(ast::ExprCall {
func,
arguments: Arguments { keywords, .. },
..
}) => Ok((properties_from_dict_call(func, keywords)?, total)),
_ => bail!("Expected `arg` to be `Expr::Dict` or `Expr::Call`"),
/// Match the fields and `total` keyword from a `TypedDict` call.
fn match_fields_and_total(arguments: &Arguments) -> Option<(Vec<Stmt>, Option<&Keyword>)> {
match (arguments.args.as_slice(), arguments.keywords.as_slice()) {
// Ex) `TypedDict("MyType", {"a": int, "b": str})`
([_typename, fields], [..]) => {
let total = arguments.find_keyword("total");
match fields {
Expr::Dict(ast::ExprDict {
keys,
values,
range: _,
}) => Some((fields_from_dict_literal(keys, values)?, total)),
Expr::Call(ast::ExprCall {
func,
arguments: Arguments { keywords, .. },
range: _,
}) => Some((fields_from_dict_call(func, keywords)?, total)),
_ => None,
}
}
} else if !arguments.keywords.is_empty() {
Ok((properties_from_keywords(&arguments.keywords)?, None))
} else {
let node = Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
});
Ok((vec![node], None))
// Ex) `TypedDict("MyType")`
([_typename], []) => {
let node = Stmt::Pass(ast::StmtPass {
range: TextRange::default(),
});
Some((vec![node], None))
}
// Ex) `TypedDict("MyType", a=int, b=str)`
([_typename], fields) => Some((fields_from_keywords(fields)?, None)),
// Ex) `TypedDict()`
_ => None,
}
}
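The `Ex)` comments above enumerate the functional `TypedDict` call shapes the matcher recognizes. As a minimal sketch using the standard `typing.TypedDict` (names here are illustrative only):

```python
from typing import TypedDict

# Ex) TypedDict("MyType", {"a": int, "b": str}) -- fields via a dict literal
MyType = TypedDict("MyType", {"a": int, "b": str})
assert MyType.__annotations__ == {"a": int, "b": str}

# Ex) TypedDict("MyType", a=int, b=str) -- keyword fields
# (deprecated in Python 3.11, removed in 3.13, so not exercised here)

# Ex) TypedDict("MyType") -- no fields; the generated class body is just `pass`
Empty = TypedDict("Empty", {})
assert Empty.__annotations__ == {}
```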
@@ -254,46 +285,3 @@ fn convert_to_class(
stmt.range(),
))
}
/// UP013
pub(crate) fn convert_typed_dict_functional_to_class(
checker: &mut Checker,
stmt: &Stmt,
targets: &[Expr],
value: &Expr,
) {
let Some((class_name, arguments, base_class)) =
match_typed_dict_assign(targets, value, checker.semantic())
else {
return;
};
let (body, total_keyword) = match match_properties_and_total(arguments) {
Ok((body, total_keyword)) => (body, total_keyword),
Err(err) => {
debug!("Skipping ineligible `TypedDict` \"{class_name}\": {err}");
return;
}
};
let mut diagnostic = Diagnostic::new(
ConvertTypedDictFunctionalToClass {
name: class_name.to_string(),
},
stmt.range(),
);
if checker.patch(diagnostic.kind.rule()) {
// TODO(charlie): Preserve indentation, to remove the first-column requirement.
if checker.locator().is_at_start_of_line(stmt.start()) {
diagnostic.set_fix(convert_to_class(
stmt,
class_name,
body,
total_keyword,
base_class,
checker.generator(),
));
}
}
checker.diagnostics.push(diagnostic);
}


@@ -45,7 +45,7 @@ where
{
let mut diagnostic = Diagnostic::new(DeprecatedCElementTree, node.range());
if checker.patch(diagnostic.kind.rule()) {
let contents = checker.locator().slice(node.range());
let contents = checker.locator().slice(node);
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
contents.replacen("cElementTree", "ElementTree", 1),
node.range(),


@@ -150,7 +150,7 @@ fn format_import(
locator: &Locator,
stylist: &Stylist,
) -> Result<String> {
let module_text = locator.slice(stmt.range());
let module_text = locator.slice(stmt);
let mut tree = match_statement(module_text)?;
let import = match_import(&mut tree)?;
@@ -177,7 +177,7 @@ fn format_import_from(
locator: &Locator,
stylist: &Stylist,
) -> Result<String> {
let module_text = locator.slice(stmt.range());
let module_text = locator.slice(stmt);
let mut tree = match_statement(module_text).unwrap();
let import = match_import_from(&mut tree)?;


@@ -74,7 +74,7 @@ impl<'a> FormatSummaryValues<'a> {
for arg in &call.arguments.args {
if matches!(arg, Expr::Starred(..))
|| contains_quotes(locator.slice(arg.range()))
|| contains_quotes(locator.slice(arg))
|| locator.contains_line_break(arg.range())
{
return None;
@@ -90,9 +90,7 @@ impl<'a> FormatSummaryValues<'a> {
let Some(key) = arg else {
return None;
};
if contains_quotes(locator.slice(value.range()))
|| locator.contains_line_break(value.range())
{
if contains_quotes(locator.slice(value)) || locator.contains_line_break(value.range()) {
return None;
}
extracted_kwargs.insert(key, value);
@@ -142,7 +140,7 @@ enum FormatContext {
/// Given an [`Expr`], format it for use in a formatted expression within an f-string.
fn formatted_expr<'a>(expr: &Expr, context: FormatContext, locator: &Locator<'a>) -> Cow<'a, str> {
let text = locator.slice(expr.range());
let text = locator.slice(expr);
let parenthesize = match (context, expr) {
// E.g., `x + y` should be parenthesized in `f"{(x + y)[0]}"`.
(
@@ -373,7 +371,7 @@ pub(crate) fn f_strings(
return;
}
let mut contents = String::with_capacity(checker.locator().slice(call.range()).len());
let mut contents = String::with_capacity(checker.locator().slice(call).len());
let mut prev_end = call.start();
for (range, fstring) in patches {
contents.push_str(

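The quote check in `FormatSummaryValues` matters because, before PEP 701 (Python 3.12), an f-string replacement field may not contain the same quote character as the string itself, so arguments whose source text contains quotes cannot be inlined safely. A small illustration:

```python
name = {"key": "value"}

# `"{}".format(name["key"])` cannot become f"{name["key"]}" on Python < 3.12:
# the inner double quotes would terminate the outer string. The rule therefore
# skips arguments whose source contains quotes.
assert "{}".format(name["key"]) == "value"

# Switching quote styles is the manual workaround, valid on all versions.
assert f'{name["key"]}' == "value"
```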

@@ -206,7 +206,7 @@ fn generate_call(
locator: &Locator,
stylist: &Stylist,
) -> Result<String> {
let source_code = locator.slice(call.range());
let source_code = locator.slice(call);
let output = transform_expression_text(source_code, |source_code| {
let mut expression = match_expression(&source_code)?;


@@ -211,7 +211,7 @@ pub(crate) fn native_literals(
return;
}
let arg_code = checker.locator().slice(arg.range());
let arg_code = checker.locator().slice(arg);
// Attribute access on an integer requires the integer to be parenthesized to disambiguate from a float
// Ex) `(7).denominator` is valid but `7.denominator` is not


@@ -1,18 +1,18 @@
use std::str::FromStr;
use ruff_python_ast::{self as ast, Constant, Expr};
use ruff_python_literal::cformat::{
CConversionFlags, CFormatPart, CFormatPrecision, CFormatQuantity, CFormatString,
};
use ruff_python_parser::{lexer, AsMode, Tok};
use ruff_text_size::{Ranged, TextRange};
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::str::{leading_quote, trailing_quote};
use ruff_python_ast::whitespace::indentation;
use ruff_python_ast::{self as ast, Constant, Expr};
use ruff_python_codegen::Stylist;
use ruff_python_literal::cformat::{
CConversionFlags, CFormatPart, CFormatPrecision, CFormatQuantity, CFormatString,
};
use ruff_python_parser::{lexer, AsMode, Tok};
use ruff_python_stdlib::identifiers::is_identifier;
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
@@ -162,8 +162,8 @@ fn percent_to_format(format_string: &CFormatString) -> String {
}
/// If a tuple has one argument, remove the comma; otherwise, return it as-is.
fn clean_params_tuple(checker: &mut Checker, right: &Expr, locator: &Locator) -> String {
let mut contents = checker.locator().slice(right.range()).to_string();
fn clean_params_tuple(right: &Expr, locator: &Locator) -> String {
let mut contents = locator.slice(right).to_string();
if let Expr::Tuple(ast::ExprTuple { elts, .. }) = &right {
if elts.len() == 1 {
if !locator.contains_line_break(right.range()) {
@@ -182,11 +182,7 @@ fn clean_params_tuple(checker: &mut Checker, right: &Expr, locator: &Locator) ->
/// Converts a dictionary to a function call while preserving as much styling as
/// possible.
fn clean_params_dictionary(
checker: &mut Checker,
right: &Expr,
locator: &Locator,
) -> Option<String> {
fn clean_params_dictionary(right: &Expr, locator: &Locator, stylist: &Stylist) -> Option<String> {
let is_multi_line = locator.contains_line_break(right.range());
let mut contents = String::new();
if let Expr::Dict(ast::ExprDict {
@@ -220,11 +216,11 @@ fn clean_params_dictionary(
seen.push(key_string);
if is_multi_line {
if indent.is_none() {
indent = indentation(checker.locator(), key);
indent = indentation(locator, key);
}
}
let value_string = checker.locator().slice(value.range());
let value_string = locator.slice(value);
arguments.push(format!("{key_string}={value_string}"));
} else {
// If there are any non-string keys, abort.
@@ -232,7 +228,7 @@ fn clean_params_dictionary(
}
}
None => {
let value_string = checker.locator().slice(value.range());
let value_string = locator.slice(value);
arguments.push(format!("**{value_string}"));
}
}
@@ -248,16 +244,16 @@ fn clean_params_dictionary(
};
for item in &arguments {
contents.push_str(checker.stylist().line_ending().as_str());
contents.push_str(stylist.line_ending().as_str());
contents.push_str(indent);
contents.push_str(item);
contents.push(',');
}
contents.push_str(checker.stylist().line_ending().as_str());
contents.push_str(stylist.line_ending().as_str());
// For the ending parentheses, go back one indent.
let default_indent: &str = checker.stylist().indentation();
let default_indent: &str = stylist.indentation();
if let Some(ident) = indent.strip_prefix(default_indent) {
contents.push_str(ident);
} else {
@@ -333,17 +329,12 @@ fn convertible(format_string: &CFormatString, params: &Expr) -> bool {
}
/// UP031
pub(crate) fn printf_string_formatting(
checker: &mut Checker,
expr: &Expr,
right: &Expr,
locator: &Locator,
) {
pub(crate) fn printf_string_formatting(checker: &mut Checker, expr: &Expr, right: &Expr) {
// Grab each string segment (in case there's an implicit concatenation).
let mut strings: Vec<TextRange> = vec![];
let mut extension = None;
for (tok, range) in lexer::lex_starts_at(
checker.locator().slice(expr.range()),
checker.locator().slice(expr),
checker.source_type.as_mode(),
expr.start(),
)
@@ -404,16 +395,16 @@ pub(crate) fn printf_string_formatting(
// Parse the parameters.
let params_string = match right {
Expr::Constant(_) | Expr::FString(_) => {
format!("({})", checker.locator().slice(right.range()))
format!("({})", checker.locator().slice(right))
}
Expr::Name(_) | Expr::Attribute(_) | Expr::Subscript(_) | Expr::Call(_) => {
if num_keyword_arguments > 0 {
// If we have _any_ named fields, assume the right-hand side is a mapping.
format!("(**{})", checker.locator().slice(right.range()))
format!("(**{})", checker.locator().slice(right))
} else if num_positional_arguments > 1 {
// If we have multiple fields, but no named fields, assume the right-hand side is a
// tuple.
format!("(*{})", checker.locator().slice(right.range()))
format!("(*{})", checker.locator().slice(right))
} else {
// Otherwise, if we have a single field, we can't make any assumptions about the
// right-hand side. It _could_ be a tuple, but it could also be a single value,
@@ -427,9 +418,11 @@ pub(crate) fn printf_string_formatting(
return;
}
}
Expr::Tuple(_) => clean_params_tuple(checker, right, locator),
Expr::Tuple(_) => clean_params_tuple(right, checker.locator()),
Expr::Dict(_) => {
if let Some(params_string) = clean_params_dictionary(checker, right, locator) {
if let Some(params_string) =
clean_params_dictionary(right, checker.locator(), checker.stylist())
{
params_string
} else {
return;

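The single-field bail-out above exists because `%`-formatting is genuinely ambiguous there: `%` auto-wraps a non-tuple right-hand side, so the conversion cannot know whether a bare name holds a single value or a tuple of arguments. A short illustration:

```python
x = "hello"
assert "%s" % x == "hello"      # a single value is used as the one argument

y = ("hello",)
assert "%s" % y == "hello"      # a 1-tuple is unpacked as the argument list

# With `.format`, the two cases diverge: a tuple must be unpacked explicitly.
assert "{}".format(x) == "hello"
assert "{}".format(*y) == "hello"
assert "{}".format(y) == "('hello',)"
```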

@@ -204,7 +204,7 @@ fn create_remove_param_fix<T: Ranged>(
mode_param: &Expr,
source_type: PySourceType,
) -> Result<Edit> {
let content = locator.slice(expr.range());
let content = locator.slice(expr);
// Find the last comma before mode_param and create a deletion fix
// starting from the comma and ending after mode_param.
let mut fix_start: Option<TextSize> = None;


@@ -125,7 +125,7 @@ fn replace_with_bytes_literal(
source_type: PySourceType,
) -> Fix {
// Build up a replacement string by prefixing all string tokens with `b`.
let contents = locator.slice(call.range());
let contents = locator.slice(call);
let mut replacement = String::with_capacity(contents.len() + 1);
let mut prev = call.start();
for (tok, range) in


@@ -68,7 +68,7 @@ pub(crate) fn unpacked_list_comprehension(checker: &mut Checker, targets: &[Expr
let mut diagnostic = Diagnostic::new(UnpackedListComprehension, value.range());
if checker.patch(diagnostic.kind.rule()) {
let existing = checker.locator().slice(value.range());
let existing = checker.locator().slice(value);
let mut content = String::with_capacity(existing.len());
content.push('(');


@@ -100,7 +100,7 @@ pub(crate) fn use_pep604_annotation(
_ => {
// Single argument.
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
checker.locator().slice(slice.range()).to_string(),
checker.locator().slice(slice).to_string(),
expr.range(),
)));
}
@@ -113,7 +113,7 @@ pub(crate) fn use_pep604_annotation(
/// Format the expression as a PEP 604-style optional.
fn optional(expr: &Expr, locator: &Locator) -> String {
format!("{} | None", locator.slice(expr.range()))
format!("{} | None", locator.slice(expr))
}
/// Format the expressions as a PEP 604-style union.
@@ -128,7 +128,7 @@ fn union(elts: &[Expr], locator: &Locator) -> String {
if elts.peek().is_none() {
"()".to_string()
} else {
elts.map(|expr| locator.slice(expr.range())).join(" | ")
elts.map(|expr| locator.slice(expr)).join(" | ")
}
}
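The `optional` and `union` helpers emit PEP 604 spellings (`X | None`, `X | Y`), which are runtime-equivalent to the `typing` forms they replace. A sketch of that equivalence, sticking to `typing` so it runs on pre-3.10 interpreters too:

```python
from typing import Optional, Union, get_args

# `Optional[int]` is sugar for `Union[int, None]`; PEP 604 writes it `int | None`.
assert Optional[int] == Union[int, None]

# `union` joins the subscript elements with ` | `; the typing form carries the
# same members, flattened in order.
assert get_args(Union[int, str, None]) == (int, str, type(None))
```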


@@ -103,7 +103,7 @@ pub(crate) fn yield_in_for_loop(checker: &mut Checker, stmt_for: &ast::StmtFor)
let mut diagnostic = Diagnostic::new(YieldInForLoop, stmt_for.range());
if checker.patch(diagnostic.kind.rule()) {
let contents = checker.locator().slice(iter.range());
let contents = checker.locator().slice(iter.as_ref());
let contents = format!("yield from {contents}");
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
contents,


@@ -0,0 +1,193 @@
use ast::{ParameterWithDefault, Parameters};
use ruff_python_ast::helpers::map_subscript;
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_semantic::analyze::type_inference::{PythonType, ResolvedPythonType};
use ruff_python_semantic::{Binding, BindingKind, SemanticModel};
use ruff_text_size::Ranged;
/// Abstraction for a type checker that conservatively checks for the intended type(s).
trait TypeChecker {
/// Check annotation expression to match the intended type(s).
fn match_annotation(annotation: &Expr, semantic: &SemanticModel) -> bool;
/// Check initializer expression to match the intended type(s).
fn match_initializer(initializer: &Expr, semantic: &SemanticModel) -> bool;
}
/// Check if the type checker accepts the given binding with the given name.
///
/// NOTE: This function doesn't perform deeper type inference, so it can't tell whether
/// the value is initialized by a call to a function that always returns a list. This
/// also implies no inter-file analysis.
fn check_type<T: TypeChecker>(binding: &Binding, semantic: &SemanticModel) -> bool {
match binding.kind {
BindingKind::Assignment => match binding.statement(semantic) {
// ```python
// x = init_expr
// ```
//
// The type checker might know how to infer the type based on `init_expr`.
Some(Stmt::Assign(ast::StmtAssign { value, .. })) => {
T::match_initializer(value.as_ref(), semantic)
}
// ```python
// x: annotation = some_expr
// ```
//
// In this situation, we check only the annotation.
Some(Stmt::AnnAssign(ast::StmtAnnAssign { annotation, .. })) => {
T::match_annotation(annotation.as_ref(), semantic)
}
_ => false,
},
BindingKind::Argument => match binding.statement(semantic) {
// ```python
// def foo(x: annotation):
// ...
// ```
//
// We trust the annotation and see if the type checker matches the annotation.
Some(Stmt::FunctionDef(ast::StmtFunctionDef { parameters, .. })) => {
let Some(parameter) = find_parameter(parameters.as_ref(), binding) else {
return false;
};
let Some(ref annotation) = parameter.parameter.annotation else {
return false;
};
T::match_annotation(annotation.as_ref(), semantic)
}
_ => false,
},
BindingKind::Annotation => match binding.statement(semantic) {
// ```python
// x: annotation
// ```
//
// It's a typed declaration, type annotation is the only source of information.
Some(Stmt::AnnAssign(ast::StmtAnnAssign { annotation, .. })) => {
T::match_annotation(annotation.as_ref(), semantic)
}
_ => false,
},
_ => false,
}
}
/// Type checker for builtin types.
trait BuiltinTypeChecker {
/// Builtin type name.
const BUILTIN_TYPE_NAME: &'static str;
/// Type name as found in the `typing` module.
const TYPING_NAME: &'static str;
/// [`PythonType`] associated with the intended type.
const EXPR_TYPE: PythonType;
/// Check annotation expression to match the intended type.
fn match_annotation(annotation: &Expr, semantic: &SemanticModel) -> bool {
let value = map_subscript(annotation);
Self::match_builtin_type(value, semantic)
|| semantic.match_typing_expr(value, Self::TYPING_NAME)
}
/// Check initializer expression to match the intended type.
fn match_initializer(initializer: &Expr, semantic: &SemanticModel) -> bool {
Self::match_expr_type(initializer) || Self::match_builtin_constructor(initializer, semantic)
}
/// Check if the type can be inferred from the given expression.
fn match_expr_type(initializer: &Expr) -> bool {
let init_type: ResolvedPythonType = initializer.into();
match init_type {
ResolvedPythonType::Atom(atom) => atom == Self::EXPR_TYPE,
_ => false,
}
}
/// Check if the given expression corresponds to a constructor call of the builtin type.
fn match_builtin_constructor(initializer: &Expr, semantic: &SemanticModel) -> bool {
let Expr::Call(ast::ExprCall { func, .. }) = initializer else {
return false;
};
Self::match_builtin_type(func.as_ref(), semantic)
}
/// Check if the given expression names the builtin type.
fn match_builtin_type(type_expr: &Expr, semantic: &SemanticModel) -> bool {
let Expr::Name(ast::ExprName { id, .. }) = type_expr else {
return false;
};
id == Self::BUILTIN_TYPE_NAME && semantic.is_builtin(Self::BUILTIN_TYPE_NAME)
}
}
impl<T: BuiltinTypeChecker> TypeChecker for T {
fn match_annotation(annotation: &Expr, semantic: &SemanticModel) -> bool {
<Self as BuiltinTypeChecker>::match_annotation(annotation, semantic)
}
fn match_initializer(initializer: &Expr, semantic: &SemanticModel) -> bool {
<Self as BuiltinTypeChecker>::match_initializer(initializer, semantic)
}
}
struct ListChecker;
impl BuiltinTypeChecker for ListChecker {
const BUILTIN_TYPE_NAME: &'static str = "list";
const TYPING_NAME: &'static str = "List";
const EXPR_TYPE: PythonType = PythonType::List;
}
struct DictChecker;
impl BuiltinTypeChecker for DictChecker {
const BUILTIN_TYPE_NAME: &'static str = "dict";
const TYPING_NAME: &'static str = "Dict";
const EXPR_TYPE: PythonType = PythonType::Dict;
}
struct SetChecker;
impl BuiltinTypeChecker for SetChecker {
const BUILTIN_TYPE_NAME: &'static str = "set";
const TYPING_NAME: &'static str = "Set";
const EXPR_TYPE: PythonType = PythonType::Set;
}
/// Test whether the given binding (and the given name) can be considered a list.
/// For this, we check what value might be associated with it through its initialization and
/// what annotation it has (we consider `list` and `typing.List`).
pub(super) fn is_list(binding: &Binding, semantic: &SemanticModel) -> bool {
check_type::<ListChecker>(binding, semantic)
}
/// Test whether the given binding (and the given name) can be considered a dictionary.
/// For this, we check what value might be associated with it through its initialization and
/// what annotation it has (we consider `dict` and `typing.Dict`).
pub(super) fn is_dict(binding: &Binding, semantic: &SemanticModel) -> bool {
check_type::<DictChecker>(binding, semantic)
}
/// Test whether the given binding (and the given name) can be considered a set.
/// For this, we check what value might be associated with it through its initialization and
/// what annotation it has (we consider `set` and `typing.Set`).
pub(super) fn is_set(binding: &Binding, semantic: &SemanticModel) -> bool {
check_type::<SetChecker>(binding, semantic)
}
/// Find the [`ParameterWithDefault`] corresponding to the given [`Binding`].
#[inline]
fn find_parameter<'a>(
parameters: &'a Parameters,
binding: &Binding,
) -> Option<&'a ParameterWithDefault> {
parameters
.args
.iter()
.chain(parameters.posonlyargs.iter())
.chain(parameters.kwonlyargs.iter())
.find(|arg| arg.parameter.name.range() == binding.range())
}
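The three `BindingKind` arms in `check_type` correspond to the Python binding shapes sketched in its comments. For illustration (all names here are hypothetical), these are the forms whose annotations or initializers the checker inspects:

```python
from typing import List

def make() -> List[int]:   # hypothetical initializer function
    return []

x = []                 # BindingKind::Assignment: type inferred from the initializer
y: List[int] = make()  # annotated assignment: only the annotation is consulted
z: list                # BindingKind::Annotation: a bare typed declaration

def g(w: list):        # BindingKind::Argument: the parameter annotation is trusted
    return w

assert isinstance(x, list) and isinstance(y, list)
assert isinstance(g([1]), list)
```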


@@ -0,0 +1,29 @@
//! Rules from [refurb](https://pypi.org/project/refurb/).
mod helpers;
pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::registry::Rule;
use crate::test::test_path;
use crate::{assert_messages, settings};
#[test_case(Rule::RepeatedAppend, Path::new("FURB113.py"))]
#[test_case(Rule::DeleteFullSlice, Path::new("FURB131.py"))]
#[test_case(Rule::CheckAndRemoveFromSet, Path::new("FURB132.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("refurb").join(path).as_path(),
&settings::Settings::for_rule(rule_code),
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
}


@@ -0,0 +1,204 @@
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::helpers::contains_effect;
use ruff_python_ast::{self as ast, CmpOp, Expr, Stmt};
use ruff_python_codegen::Generator;
use ruff_text_size::{Ranged, TextRange};
use crate::autofix::snippet::SourceCodeSnippet;
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
use crate::rules::refurb::helpers::is_set;
/// ## What it does
/// Checks for uses of `set.remove` that can be replaced with `set.discard`.
///
/// ## Why is this bad?
/// If an element should be removed from a set if it is present, it is more
/// succinct and idiomatic to use `discard`.
///
/// ## Example
/// ```python
/// nums = {123, 456}
///
/// if 123 in nums:
/// nums.remove(123)
/// ```
///
/// Use instead:
/// ```python
/// nums = {123, 456}
///
/// nums.discard(123)
/// ```
///
/// ## References
/// - [Python documentation: `set.discard()`](https://docs.python.org/3/library/stdtypes.html?highlight=list#frozenset.discard)
#[violation]
pub struct CheckAndRemoveFromSet {
element: SourceCodeSnippet,
set: String,
}
impl CheckAndRemoveFromSet {
fn suggestion(&self) -> String {
let set = &self.set;
let element = self.element.truncated_display();
format!("{set}.discard({element})")
}
}
impl AlwaysAutofixableViolation for CheckAndRemoveFromSet {
#[derive_message_formats]
fn message(&self) -> String {
let suggestion = self.suggestion();
format!("Use `{suggestion}` instead of check and `remove`")
}
fn autofix_title(&self) -> String {
let suggestion = self.suggestion();
format!("Replace with `{suggestion}`")
}
}
/// FURB132
pub(crate) fn check_and_remove_from_set(checker: &mut Checker, if_stmt: &ast::StmtIf) {
// To fit the pattern, we need an `if` without `elif`/`else` clauses and with only one statement in its body.
if if_stmt.body.len() != 1 || !if_stmt.elif_else_clauses.is_empty() {
return;
}
// The `if` test should be `element in set`.
let Some((check_element, check_set)) = match_check(if_stmt) else {
return;
};
// The `if` body should be `set.remove(element)`.
let Some((remove_element, remove_set)) = match_remove(if_stmt) else {
return;
};
// `set` in the check should be the same as `set` in the body
if check_set.id != remove_set.id
// `element` in the check should be the same as `element` in the body
|| !compare(&check_element.into(), &remove_element.into())
// `element` shouldn't have a side effect, otherwise we might change the semantics of the program.
|| contains_effect(check_element, |id| checker.semantic().is_builtin(id))
{
return;
}
// Check that what we assume is a set is indeed a set.
if !checker
.semantic()
.resolve_name(check_set)
.map(|binding_id| checker.semantic().binding(binding_id))
.map_or(false, |binding| is_set(binding, checker.semantic()))
{
return;
};
let mut diagnostic = Diagnostic::new(
CheckAndRemoveFromSet {
element: SourceCodeSnippet::from_str(checker.locator().slice(check_element)),
set: check_set.id.to_string(),
},
if_stmt.range(),
);
if checker.patch(diagnostic.kind.rule()) {
diagnostic.set_fix(Fix::suggested(Edit::replacement(
make_suggestion(check_set, check_element, checker.generator()),
if_stmt.start(),
if_stmt.end(),
)));
}
checker.diagnostics.push(diagnostic);
}
fn compare(lhs: &ComparableExpr, rhs: &ComparableExpr) -> bool {
lhs == rhs
}
/// Match the `if` condition against `expr in name`; returns a tuple of (`expr`, `name`) on success.
fn match_check(if_stmt: &ast::StmtIf) -> Option<(&Expr, &ast::ExprName)> {
let ast::ExprCompare {
ops,
left,
comparators,
..
} = if_stmt.test.as_compare_expr()?;
if ops.as_slice() != [CmpOp::In] {
return None;
}
let [Expr::Name(right @ ast::ExprName { .. })] = comparators.as_slice() else {
return None;
};
Some((left.as_ref(), right))
}
/// Match the `if` body against `name.remove(expr)`; returns a tuple of (`expr`, `name`) on success.
fn match_remove(if_stmt: &ast::StmtIf) -> Option<(&Expr, &ast::ExprName)> {
let [Stmt::Expr(ast::StmtExpr { value: expr, .. })] = if_stmt.body.as_slice() else {
return None;
};
let ast::ExprCall {
func: attr,
arguments: ast::Arguments { args, keywords, .. },
..
} = expr.as_call_expr()?;
let ast::ExprAttribute {
value: receiver,
attr: func_name,
..
} = attr.as_attribute_expr()?;
let Expr::Name(ref set @ ast::ExprName { .. }) = receiver.as_ref() else {
return None;
};
let [arg] = args.as_slice() else {
return None;
};
if func_name != "remove" || !keywords.is_empty() {
return None;
}
Some((arg, set))
}
/// Construct the fix suggestion, i.e., `set.discard(element)`.
fn make_suggestion(set: &ast::ExprName, element: &Expr, generator: Generator) -> String {
// Here we construct `set.discard(element)`
//
// Let's make `set.discard`.
let attr = ast::ExprAttribute {
value: Box::new(set.clone().into()),
attr: ast::Identifier::new("discard".to_string(), TextRange::default()),
ctx: ast::ExprContext::Load,
range: TextRange::default(),
};
// Make the actual call `set.discard(element)`
let call = ast::ExprCall {
func: Box::new(attr.into()),
arguments: ast::Arguments {
args: vec![element.clone()],
keywords: vec![],
range: TextRange::default(),
},
range: TextRange::default(),
};
// And finally, turn it into a statement.
let stmt = ast::StmtExpr {
value: Box::new(call.into()),
range: TextRange::default(),
};
generator.stmt(&stmt.into())
}
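The rewrite relies on `set.discard` being the no-raise counterpart of the check-and-`remove` pattern. A minimal sketch of that equivalence:

```python
def remove_if_present(nums: set, element) -> set:
    # The pattern FURB132 flags:
    if element in nums:
        nums.remove(element)
    return nums

def discard(nums: set, element) -> set:
    # The suggested replacement:
    nums.discard(element)
    return nums

assert remove_if_present({123, 456}, 123) == {456}
assert discard({123, 456}, 123) == {456}
# `discard` is also a no-op when the element is absent, matching the guarded `remove`.
assert discard({456}, 123) == {456}
```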


@@ -0,0 +1,155 @@
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr};
use ruff_python_codegen::Generator;
use ruff_python_semantic::{Binding, SemanticModel};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
use crate::rules::refurb::helpers::{is_dict, is_list};
/// ## What it does
/// Checks for `del` statements that delete the entire slice of a list or
/// dictionary.
///
/// ## Why is this bad?
/// It is faster and more succinct to remove all items via the `clear()`
/// method.
///
/// ## Example
/// ```python
/// names = {"key": "value"}
/// nums = [1, 2, 3]
///
/// del names[:]
/// del nums[:]
/// ```
///
/// Use instead:
/// ```python
/// names = {"key": "value"}
/// nums = [1, 2, 3]
///
/// names.clear()
/// nums.clear()
/// ```
///
/// ## References
/// - [Python documentation: Mutable Sequence Types](https://docs.python.org/3/library/stdtypes.html?highlight=list#mutable-sequence-types)
/// - [Python documentation: `dict.clear()`](https://docs.python.org/3/library/stdtypes.html?highlight=list#dict.clear)
#[violation]
pub struct DeleteFullSlice;
impl Violation for DeleteFullSlice {
const AUTOFIX: AutofixKind = AutofixKind::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
format!("Prefer `clear` over deleting a full slice")
}
fn autofix_title(&self) -> Option<String> {
Some("Replace with `clear()`".to_string())
}
}
/// FURB131
pub(crate) fn delete_full_slice(checker: &mut Checker, delete: &ast::StmtDelete) {
for target in &delete.targets {
let Some(name) = match_full_slice(target, checker.semantic()) else {
continue;
};
let mut diagnostic = Diagnostic::new(DeleteFullSlice, delete.range);
// Fix is only supported for single-target deletions.
if checker.patch(diagnostic.kind.rule()) && delete.targets.len() == 1 {
let replacement = make_suggestion(name, checker.generator());
diagnostic.set_fix(Fix::suggested(Edit::replacement(
replacement,
delete.start(),
delete.end(),
)));
}
checker.diagnostics.push(diagnostic);
}
}
/// Match `del expr[:]` where `expr` is a list or a dict.
fn match_full_slice<'a>(expr: &'a Expr, semantic: &SemanticModel) -> Option<&'a str> {
// Check that it is `del expr[...]`.
let subscript = expr.as_subscript_expr()?;
// Check that it is `del expr[:]`.
if !matches!(
subscript.slice.as_ref(),
Expr::Slice(ast::ExprSlice {
lower: None,
upper: None,
step: None,
range: _,
})
) {
return None;
}
// Check that it is `del var[:]`.
let ast::ExprName { id: name, .. } = subscript.value.as_name_expr()?;
// Find the definition for `var`.
let scope = semantic.current_scope();
let bindings: Vec<&Binding> = scope
.get_all(name)
.map(|binding_id| semantic.binding(binding_id))
.collect();
// NOTE: Maybe it is too strict of a limitation, but it seems reasonable.
let [binding] = bindings.as_slice() else {
return None;
};
// It should only apply to variables that are known to be lists or dicts.
if !(is_dict(binding, semantic) || is_list(binding, semantic)) {
return None;
}
// Name is needed for the fix suggestion.
Some(name)
}
/// Make a fix suggestion for the given name, i.e., `name.clear()`.
fn make_suggestion(name: &str, generator: Generator) -> String {
// Here we construct `var.clear()`
//
// And start with construction of `var`
let var = ast::ExprName {
id: name.to_string(),
ctx: ast::ExprContext::Load,
range: TextRange::default(),
};
// Make `var.clear`.
let attr = ast::ExprAttribute {
value: Box::new(var.into()),
attr: ast::Identifier::new("clear".to_string(), TextRange::default()),
ctx: ast::ExprContext::Load,
range: TextRange::default(),
};
// Make it into a call `var.clear()`
let call = ast::ExprCall {
func: Box::new(attr.into()),
arguments: ast::Arguments {
args: vec![],
keywords: vec![],
range: TextRange::default(),
},
range: TextRange::default(),
};
// And finally, turn it into a statement.
let stmt = ast::StmtExpr {
value: Box::new(call.into()),
range: TextRange::default(),
};
generator.stmt(&stmt.into())
}
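The fix replaces a full-slice deletion with `clear()`; both empty the list in place rather than rebinding the name, so aliases observe the change either way. A small sketch:

```python
nums = [1, 2, 3]
alias = nums
del nums[:]          # the pattern FURB131 flags
assert nums == [] and alias == []   # in-place: the alias sees the change

nums = [1, 2, 3]
alias = nums
nums.clear()         # the suggested replacement, equivalent in effect
assert nums == [] and alias == []
```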


@@ -0,0 +1,7 @@
pub(crate) use check_and_remove_from_set::*;
pub(crate) use delete_full_slice::*;
pub(crate) use repeated_append::*;
mod check_and_remove_from_set;
mod delete_full_slice;
mod repeated_append;


@@ -0,0 +1,363 @@
use rustc_hash::FxHashMap;
use ast::traversal;
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_codegen::Generator;
use ruff_python_semantic::{Binding, BindingId, DefinitionId, SemanticModel};
use ruff_text_size::{Ranged, TextRange};
use crate::autofix::snippet::SourceCodeSnippet;
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
use crate::rules::refurb::helpers::is_list;
/// ## What it does
/// Checks for consecutive calls to `append`.
///
/// ## Why is this bad?
/// Consecutive calls to `append` can be less efficient than batching them into
/// a single `extend`. Each `append` resizes the list individually, whereas an
/// `extend` can resize the list once for all elements.
///
/// ## Example
/// ```python
/// nums = [1, 2, 3]
///
/// nums.append(4)
/// nums.append(5)
/// nums.append(6)
/// ```
///
/// Use instead:
/// ```python
/// nums = [1, 2, 3]
///
/// nums.extend((4, 5, 6))
/// ```
///
/// ## References
/// - [Python documentation: More on Lists](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists)
#[violation]
pub struct RepeatedAppend {
name: String,
replacement: SourceCodeSnippet,
}
impl RepeatedAppend {
fn suggestion(&self) -> String {
let name = &self.name;
self.replacement
.full_display()
.map_or(format!("{name}.extend(...)"), ToString::to_string)
}
}
impl Violation for RepeatedAppend {
const AUTOFIX: AutofixKind = AutofixKind::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let name = &self.name;
let suggestion = self.suggestion();
format!("Use `{suggestion}` instead of repeatedly calling `{name}.append()`")
}
fn autofix_title(&self) -> Option<String> {
let suggestion = self.suggestion();
Some(format!("Replace with `{suggestion}`"))
}
}
/// FURB113
pub(crate) fn repeated_append(checker: &mut Checker, stmt: &Stmt) {
let Some(appends) = match_consecutive_appends(checker.semantic(), stmt) else {
return;
};
// No need to proceed if we have at most one `append` to work with.
if appends.len() <= 1 {
return;
}
// group borrows from checker, so we can't directly push into checker.diagnostics
let diagnostics: Vec<Diagnostic> = group_appends(appends)
.iter()
.filter_map(|group| {
// Groups with just one element are fine, and shouldn't be replaced by `extend`.
if group.appends.len() <= 1 {
return None;
}
let replacement = make_suggestion(group, checker.generator());
let mut diagnostic = Diagnostic::new(
RepeatedAppend {
name: group.name().to_string(),
replacement: SourceCodeSnippet::new(replacement.clone()),
},
group.range(),
);
// We only suggest a fix when all appends in a group are clumped together. If they're
// non-consecutive, fixing them is much more difficult.
if checker.patch(diagnostic.kind.rule()) && group.is_consecutive {
diagnostic.set_fix(Fix::suggested(Edit::replacement(
replacement,
group.start(),
group.end(),
)));
}
Some(diagnostic)
})
.collect();
checker.diagnostics.extend(diagnostics);
}
#[derive(Debug, Clone)]
struct Append<'a> {
/// Receiver of the `append` call (aka `self` argument).
receiver: &'a ast::ExprName,
/// [`BindingId`] that the receiver references.
binding_id: BindingId,
/// [`Binding`] that the receiver references.
binding: &'a Binding<'a>,
/// [`Expr`] serving as a sole argument to `append`.
argument: &'a Expr,
/// The statement containing the `append` call.
stmt: &'a Stmt,
}
#[derive(Debug)]
struct AppendGroup<'a> {
/// A sequence of `appends` connected to the same binding.
appends: Vec<Append<'a>>,
/// `true` when all appends in the group follow one another and don't have other statements in
/// between. It is much easier to make fix suggestions for consecutive groups.
is_consecutive: bool,
}
impl AppendGroup<'_> {
fn name(&self) -> &str {
assert!(!self.appends.is_empty());
&self.appends.first().unwrap().receiver.id
}
}
impl Ranged for AppendGroup<'_> {
fn range(&self) -> TextRange {
assert!(!self.appends.is_empty());
TextRange::new(
self.appends.first().unwrap().stmt.start(),
self.appends.last().unwrap().stmt.end(),
)
}
}
/// Match consecutive calls to `append` on list variables starting from the given statement.
fn match_consecutive_appends<'a>(
semantic: &'a SemanticModel,
stmt: &'a Stmt,
) -> Option<Vec<Append<'a>>> {
// Match the current statement, to see if it's an append.
let append = match_append(semantic, stmt)?;
// In order to match consecutive statements, we need to go to the tree ancestor of the
// given statement, find its position there, and match all 'appends' from there.
let siblings: &[Stmt] = if semantic.at_top_level() {
// If the statement is at the top level, we should go to the parent module.
// Module is available in the definitions list.
let module = semantic.definitions[DefinitionId::module()].as_module()?;
module.python_ast
} else {
// Otherwise, go to the parent, and take its body as a sequence of siblings.
semantic
.current_statement_parent()
.and_then(|parent| traversal::suite(stmt, parent))?
};
let stmt_index = siblings.iter().position(|sibling| sibling == stmt)?;
// Avoid repeating the same work for every `append` in a row: only proceed when
// this statement is the first of such a group.
if stmt_index != 0 && match_append(semantic, &siblings[stmt_index - 1]).is_some() {
return None;
}
// Starting from the next statement, let's match all appends and make a vector.
Some(
std::iter::once(append)
.chain(
siblings
.iter()
.skip(stmt_index + 1)
.map_while(|sibling| match_append(semantic, sibling)),
)
.collect(),
)
}
/// Group the given appends by the associated bindings.
fn group_appends(appends: Vec<Append<'_>>) -> Vec<AppendGroup<'_>> {
// We want to go over the given list of appends and group them by receiver.
let mut map: FxHashMap<BindingId, AppendGroup> = FxHashMap::default();
let mut iter = appends.into_iter();
let mut last_binding = {
let first_append = iter.next().unwrap();
let binding_id = first_append.binding_id;
let _ = get_or_add(&mut map, first_append);
binding_id
};
for append in iter {
let binding_id = append.binding_id;
let group = get_or_add(&mut map, append);
if binding_id != last_binding {
// If the group is not brand new, and the previous group was different,
// we should mark it as "non-consecutive".
//
// We are catching the following situation:
// ```python
// a.append(1)
// a.append(2)
// b.append(1)
// a.append(3) # <- we are currently here
// ```
//
// So, `a` != `b` and group for `a` already contains appends 1 and 2.
// It is only possible if this group got interrupted by at least one
// other group and, thus, it is non-consecutive.
if group.appends.len() > 1 {
group.is_consecutive = false;
}
last_binding = binding_id;
}
}
map.into_values().collect()
}
#[inline]
fn get_or_add<'a, 'b>(
map: &'b mut FxHashMap<BindingId, AppendGroup<'a>>,
append: Append<'a>,
) -> &'b mut AppendGroup<'a> {
let group = map.entry(append.binding_id).or_insert(AppendGroup {
appends: vec![],
is_consecutive: true,
});
group.appends.push(append);
group
}
/// Matches that the given statement is a call to `append` on a list variable.
fn match_append<'a>(semantic: &'a SemanticModel, stmt: &'a Stmt) -> Option<Append<'a>> {
let Stmt::Expr(ast::StmtExpr { value, .. }) = stmt else {
return None;
};
let Expr::Call(ast::ExprCall {
func, arguments, ..
}) = value.as_ref()
else {
return None;
};
// `append` should have just one argument, an element to be added.
let [argument] = arguments.args.as_slice() else {
return None;
};
// The called function should be an attribute, i.e., `value.attr`.
let Expr::Attribute(ast::ExprAttribute { value, attr, .. }) = func.as_ref() else {
return None;
};
// `attr` should be `append` and it shouldn't have any keyword arguments.
if attr != "append" || !arguments.keywords.is_empty() {
return None;
}
// We match only variable references, i.e. `value` should be a name expression.
let Expr::Name(receiver @ ast::ExprName { id: name, .. }) = value.as_ref() else {
return None;
};
// Now we need to find what this variable is bound to...
let scope = semantic.current_scope();
let bindings: Vec<BindingId> = scope.get_all(name).collect();
// Requiring exactly one binding may be too strict a limitation, but it seems reasonable.
let [binding_id] = bindings.as_slice() else {
return None;
};
let binding = semantic.binding(*binding_id);
// ...and check whether that binding is a list.
if !is_list(binding, semantic) {
return None;
}
Some(Append {
receiver,
binding_id: *binding_id,
binding,
stmt,
argument,
})
}
/// Make a fix suggestion for the given group of appends.
fn make_suggestion(group: &AppendGroup, generator: Generator) -> String {
let appends = &group.appends;
assert!(!appends.is_empty());
let first = appends.first().unwrap();
assert!(appends
.iter()
.all(|append| append.binding.source == first.binding.source));
// Here we construct `var.extend((elt1, elt2, ..., eltN))`
//
// Each eltK comes from an individual `var.append(eltK)`.
let elts: Vec<Expr> = appends
.iter()
.map(|append| append.argument.clone())
.collect();
// Join all elements into a tuple: `(elt1, elt2, ..., eltN)`
let tuple = ast::ExprTuple {
elts,
ctx: ast::ExprContext::Load,
range: TextRange::default(),
};
// Make `var.extend`.
// NOTE: receiver is the same for all appends and that's why we can take the first.
let attr = ast::ExprAttribute {
value: Box::new(first.receiver.clone().into()),
attr: ast::Identifier::new("extend".to_string(), TextRange::default()),
ctx: ast::ExprContext::Load,
range: TextRange::default(),
};
// Make the actual call `var.extend((elt1, elt2, ..., eltN))`
let call = ast::ExprCall {
func: Box::new(attr.into()),
arguments: ast::Arguments {
args: vec![tuple.into()],
keywords: vec![],
range: TextRange::default(),
},
range: TextRange::default(),
};
// And finally, turn it into a statement.
let stmt = ast::StmtExpr {
value: Box::new(call.into()),
range: TextRange::default(),
};
generator.stmt(&stmt.into())
}
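`make_suggestion` above renders a group of appends as a single `extend` over a tuple. A quick Python check (hypothetical variable names) that the rewrite FURB113 suggests is equivalent to the flagged pattern:

```python
# The pattern FURB113 flags: consecutive appends to the same list...
appended = [1, 2, 3]
appended.append(4)
appended.append(5)
appended.append(6)

# ...and the single-statement replacement it suggests.
extended = [1, 2, 3]
extended.extend((4, 5, 6))

assert appended == extended == [1, 2, 3, 4, 5, 6]
```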


@@ -0,0 +1,335 @@
---
source: crates/ruff/src/rules/refurb/mod.rs
---
FURB113.py:23:1: FURB113 [*] Use `nums.extend((1, 2))` instead of repeatedly calling `nums.append()`
|
22 | # FURB113
23 | / nums.append(1)
24 | | nums.append(2)
| |______________^ FURB113
25 | pass
|
= help: Replace with `nums.extend((1, 2))`
Suggested fix
20 20 |
21 21 |
22 22 | # FURB113
23 |-nums.append(1)
24 |-nums.append(2)
23 |+nums.extend((1, 2))
25 24 | pass
26 25 |
27 26 |
FURB113.py:29:1: FURB113 [*] Use `nums3.extend((1, 2))` instead of repeatedly calling `nums3.append()`
|
28 | # FURB113
29 | / nums3.append(1)
30 | | nums3.append(2)
| |_______________^ FURB113
31 | pass
|
= help: Replace with `nums3.extend((1, 2))`
Suggested fix
26 26 |
27 27 |
28 28 | # FURB113
29 |-nums3.append(1)
30 |-nums3.append(2)
29 |+nums3.extend((1, 2))
31 30 | pass
32 31 |
33 32 |
FURB113.py:35:1: FURB113 [*] Use `nums4.extend((1, 2))` instead of repeatedly calling `nums4.append()`
|
34 | # FURB113
35 | / nums4.append(1)
36 | | nums4.append(2)
| |_______________^ FURB113
37 | pass
|
= help: Replace with `nums4.extend((1, 2))`
Suggested fix
32 32 |
33 33 |
34 34 | # FURB113
35 |-nums4.append(1)
36 |-nums4.append(2)
35 |+nums4.extend((1, 2))
37 36 | pass
38 37 |
39 38 |
FURB113.py:41:1: FURB113 Use `nums.extend((1, 2, 3))` instead of repeatedly calling `nums.append()`
|
40 | # FURB113
41 | / nums.append(1)
42 | | nums2.append(1)
43 | | nums.append(2)
44 | | nums.append(3)
| |______________^ FURB113
45 | pass
|
= help: Replace with `nums.extend((1, 2, 3))`
FURB113.py:49:1: FURB113 Use `nums.extend((1, 2, 3))` instead of repeatedly calling `nums.append()`
|
48 | # FURB113
49 | / nums.append(1)
50 | | nums2.append(1)
51 | | nums.append(2)
52 | | # FURB113
53 | | nums3.append(1)
54 | | nums.append(3)
| |______________^ FURB113
55 | # FURB113
56 | nums4.append(1)
|
= help: Replace with `nums.extend((1, 2, 3))`
FURB113.py:53:1: FURB113 Use `nums3.extend((1, 2))` instead of repeatedly calling `nums3.append()`
|
51 | nums.append(2)
52 | # FURB113
53 | / nums3.append(1)
54 | | nums.append(3)
55 | | # FURB113
56 | | nums4.append(1)
57 | | nums4.append(2)
58 | | nums3.append(2)
| |_______________^ FURB113
59 | pass
|
= help: Replace with `nums3.extend((1, 2))`
FURB113.py:56:1: FURB113 [*] Use `nums4.extend((1, 2))` instead of repeatedly calling `nums4.append()`
|
54 | nums.append(3)
55 | # FURB113
56 | / nums4.append(1)
57 | | nums4.append(2)
| |_______________^ FURB113
58 | nums3.append(2)
59 | pass
|
= help: Replace with `nums4.extend((1, 2))`
Suggested fix
53 53 | nums3.append(1)
54 54 | nums.append(3)
55 55 | # FURB113
56 |-nums4.append(1)
57 |-nums4.append(2)
56 |+nums4.extend((1, 2))
58 57 | nums3.append(2)
59 58 | pass
60 59 |
FURB113.py:62:1: FURB113 [*] Use `nums.extend((1, 2, 3))` instead of repeatedly calling `nums.append()`
|
61 | # FURB113
62 | / nums.append(1)
63 | | nums.append(2)
64 | | nums.append(3)
| |______________^ FURB113
|
= help: Replace with `nums.extend((1, 2, 3))`
Suggested fix
59 59 | pass
60 60 |
61 61 | # FURB113
62 |-nums.append(1)
63 |-nums.append(2)
64 |-nums.append(3)
62 |+nums.extend((1, 2, 3))
65 63 |
66 64 |
67 65 | if True:
FURB113.py:69:5: FURB113 [*] Use `nums.extend((1, 2))` instead of repeatedly calling `nums.append()`
|
67 | if True:
68 | # FURB113
69 | nums.append(1)
| _____^
70 | | nums.append(2)
| |__________________^ FURB113
|
= help: Replace with `nums.extend((1, 2))`
Suggested fix
66 66 |
67 67 | if True:
68 68 | # FURB113
69 |- nums.append(1)
70 |- nums.append(2)
69 |+ nums.extend((1, 2))
71 70 |
72 71 |
73 72 | if True:
FURB113.py:75:5: FURB113 [*] Use `nums.extend((1, 2))` instead of repeatedly calling `nums.append()`
|
73 | if True:
74 | # FURB113
75 | nums.append(1)
| _____^
76 | | nums.append(2)
| |__________________^ FURB113
77 | pass
|
= help: Replace with `nums.extend((1, 2))`
Suggested fix
72 72 |
73 73 | if True:
74 74 | # FURB113
75 |- nums.append(1)
76 |- nums.append(2)
75 |+ nums.extend((1, 2))
77 76 | pass
78 77 |
79 78 |
FURB113.py:82:5: FURB113 Use `nums.extend((1, 2, 3))` instead of repeatedly calling `nums.append()`
|
80 | if True:
81 | # FURB113
82 | nums.append(1)
| _____^
83 | | nums2.append(1)
84 | | nums.append(2)
85 | | nums.append(3)
| |__________________^ FURB113
|
= help: Replace with `nums.extend((1, 2, 3))`
FURB113.py:90:5: FURB113 [*] Use `x.extend((1, 2))` instead of repeatedly calling `x.append()`
|
88 | def yes_one(x: list[int]):
89 | # FURB113
90 | x.append(1)
| _____^
91 | | x.append(2)
| |_______________^ FURB113
|
= help: Replace with `x.extend((1, 2))`
Suggested fix
87 87 |
88 88 | def yes_one(x: list[int]):
89 89 | # FURB113
90 |- x.append(1)
91 |- x.append(2)
90 |+ x.extend((1, 2))
92 91 |
93 92 |
94 93 | def yes_two(x: List[int]):
FURB113.py:96:5: FURB113 [*] Use `x.extend((1, 2))` instead of repeatedly calling `x.append()`
|
94 | def yes_two(x: List[int]):
95 | # FURB113
96 | x.append(1)
| _____^
97 | | x.append(2)
| |_______________^ FURB113
|
= help: Replace with `x.extend((1, 2))`
Suggested fix
93 93 |
94 94 | def yes_two(x: List[int]):
95 95 | # FURB113
96 |- x.append(1)
97 |- x.append(2)
96 |+ x.extend((1, 2))
98 97 |
99 98 |
100 99 | def yes_three(*, x: list[int]):
FURB113.py:102:5: FURB113 [*] Use `x.extend((1, 2))` instead of repeatedly calling `x.append()`
|
100 | def yes_three(*, x: list[int]):
101 | # FURB113
102 | x.append(1)
| _____^
103 | | x.append(2)
| |_______________^ FURB113
|
= help: Replace with `x.extend((1, 2))`
Suggested fix
99 99 |
100 100 | def yes_three(*, x: list[int]):
101 101 | # FURB113
102 |- x.append(1)
103 |- x.append(2)
102 |+ x.extend((1, 2))
104 103 |
105 104 |
106 105 | def yes_four(x: list[int], /):
FURB113.py:108:5: FURB113 [*] Use `x.extend((1, 2))` instead of repeatedly calling `x.append()`
|
106 | def yes_four(x: list[int], /):
107 | # FURB113
108 | x.append(1)
| _____^
109 | | x.append(2)
| |_______________^ FURB113
|
= help: Replace with `x.extend((1, 2))`
Suggested fix
105 105 |
106 106 | def yes_four(x: list[int], /):
107 107 | # FURB113
108 |- x.append(1)
109 |- x.append(2)
108 |+ x.extend((1, 2))
110 109 |
111 110 |
112 111 | def yes_five(x: list[int], y: list[int]):
FURB113.py:114:5: FURB113 Use `x.extend((1, 2, 3))` instead of repeatedly calling `x.append()`
|
112 | def yes_five(x: list[int], y: list[int]):
113 | # FURB113
114 | x.append(1)
| _____^
115 | | x.append(2)
116 | | y.append(1)
117 | | x.append(3)
| |_______________^ FURB113
|
= help: Replace with `x.extend((1, 2, 3))`
FURB113.py:122:5: FURB113 [*] Use `x.extend((1, 2))` instead of repeatedly calling `x.append()`
|
120 | def yes_six(x: list):
121 | # FURB113
122 | x.append(1)
| _____^
123 | | x.append(2)
| |_______________^ FURB113
|
= help: Replace with `x.extend((1, 2))`
Suggested fix
119 119 |
120 120 | def yes_six(x: list):
121 121 | # FURB113
122 |- x.append(1)
123 |- x.append(2)
122 |+ x.extend((1, 2))
124 123 |
125 124 |
126 125 | # Non-errors.


@@ -0,0 +1,132 @@
---
source: crates/ruff/src/rules/refurb/mod.rs
---
FURB131.py:11:1: FURB131 [*] Prefer `clear` over deleting a full slice
|
10 | # FURB131
11 | del nums[:]
| ^^^^^^^^^^^ FURB131
|
= help: Replace with `clear()`
Suggested fix
8 8 | # these should match
9 9 |
10 10 | # FURB131
11 |-del nums[:]
11 |+nums.clear()
12 12 |
13 13 |
14 14 | # FURB131
FURB131.py:15:1: FURB131 [*] Prefer `clear` over deleting a full slice
|
14 | # FURB131
15 | del names[:]
| ^^^^^^^^^^^^ FURB131
|
= help: Replace with `clear()`
Suggested fix
12 12 |
13 13 |
14 14 | # FURB131
15 |-del names[:]
15 |+names.clear()
16 16 |
17 17 |
18 18 | # FURB131
FURB131.py:19:1: FURB131 Prefer `clear` over deleting a full slice
|
18 | # FURB131
19 | del x, nums[:]
| ^^^^^^^^^^^^^^ FURB131
|
= help: Replace with `clear()`
FURB131.py:23:1: FURB131 Prefer `clear` over deleting a full slice
|
22 | # FURB131
23 | del y, names[:], x
| ^^^^^^^^^^^^^^^^^^ FURB131
|
= help: Replace with `clear()`
FURB131.py:28:5: FURB131 [*] Prefer `clear` over deleting a full slice
|
26 | def yes_one(x: list[int]):
27 | # FURB131
28 | del x[:]
| ^^^^^^^^ FURB131
|
= help: Replace with `clear()`
Suggested fix
25 25 |
26 26 | def yes_one(x: list[int]):
27 27 | # FURB131
28 |- del x[:]
28 |+ x.clear()
29 29 |
30 30 |
31 31 | def yes_two(x: dict[int, str]):
FURB131.py:33:5: FURB131 [*] Prefer `clear` over deleting a full slice
|
31 | def yes_two(x: dict[int, str]):
32 | # FURB131
33 | del x[:]
| ^^^^^^^^ FURB131
|
= help: Replace with `clear()`
Suggested fix
30 30 |
31 31 | def yes_two(x: dict[int, str]):
32 32 | # FURB131
33 |- del x[:]
33 |+ x.clear()
34 34 |
35 35 |
36 36 | def yes_three(x: List[int]):
FURB131.py:38:5: FURB131 [*] Prefer `clear` over deleting a full slice
|
36 | def yes_three(x: List[int]):
37 | # FURB131
38 | del x[:]
| ^^^^^^^^ FURB131
|
= help: Replace with `clear()`
Suggested fix
35 35 |
36 36 | def yes_three(x: List[int]):
37 37 | # FURB131
38 |- del x[:]
38 |+ x.clear()
39 39 |
40 40 |
41 41 | def yes_four(x: Dict[int, str]):
FURB131.py:43:5: FURB131 [*] Prefer `clear` over deleting a full slice
|
41 | def yes_four(x: Dict[int, str]):
42 | # FURB131
43 | del x[:]
| ^^^^^^^^ FURB131
|
= help: Replace with `clear()`
Suggested fix
40 40 |
41 41 | def yes_four(x: Dict[int, str]):
42 42 | # FURB131
43 |- del x[:]
43 |+ x.clear()
44 44 |
45 45 |
46 46 | # these should not


@@ -0,0 +1,84 @@
---
source: crates/ruff/src/rules/refurb/mod.rs
---
FURB132.py:12:1: FURB132 [*] Use `s.discard("x")` instead of check and `remove`
|
11 | # FURB132
12 | / if "x" in s:
13 | | s.remove("x")
| |_________________^ FURB132
|
= help: Replace with `s.discard("x")`
Suggested fix
9 9 | # these should match
10 10 |
11 11 | # FURB132
12 |-if "x" in s:
13 |- s.remove("x")
12 |+s.discard("x")
14 13 |
15 14 |
16 15 | # FURB132
FURB132.py:22:1: FURB132 [*] Use `s3.discard("x")` instead of check and `remove`
|
21 | # FURB132
22 | / if "x" in s3:
23 | | s3.remove("x")
| |__________________^ FURB132
|
= help: Replace with `s3.discard("x")`
Suggested fix
19 19 |
20 20 |
21 21 | # FURB132
22 |-if "x" in s3:
23 |- s3.remove("x")
22 |+s3.discard("x")
24 23 |
25 24 |
26 25 | var = "y"
FURB132.py:28:1: FURB132 [*] Use `s.discard(var)` instead of check and `remove`
|
26 | var = "y"
27 | # FURB132
28 | / if var in s:
29 | | s.remove(var)
| |_________________^ FURB132
|
= help: Replace with `s.discard(var)`
Suggested fix
25 25 |
26 26 | var = "y"
27 27 | # FURB132
28 |-if var in s:
29 |- s.remove(var)
28 |+s.discard(var)
30 29 |
31 30 |
32 31 | if f"{var}:{var}" in s:
FURB132.py:32:1: FURB132 [*] Use `s.discard(f"{var}:{var}")` instead of check and `remove`
|
32 | / if f"{var}:{var}" in s:
33 | | s.remove(f"{var}:{var}")
| |____________________________^ FURB132
|
= help: Replace with `s.discard(f"{var}:{var}")`
Suggested fix
29 29 | s.remove(var)
30 30 |
31 31 |
32 |-if f"{var}:{var}" in s:
33 |- s.remove(f"{var}:{var}")
32 |+s.discard(f"{var}:{var}")
34 33 |
35 34 |
36 35 | def identity(x):


@@ -139,7 +139,7 @@ fn convert_call_to_conversion_flag(
locator: &Locator,
stylist: &Stylist,
) -> Result<Fix> {
let source_code = locator.slice(expr.range());
let source_code = locator.slice(expr);
transform_expression(source_code, stylist, |mut expression| {
// Replace the formatted call expression at `index` with a conversion flag.
let formatted_string_expression = match_part(index, &mut expression)?;


@@ -99,7 +99,7 @@ fn convert_to_reduce(iterable: &Expr, call: &ast::ExprCall, checker: &Checker) -
checker.semantic(),
)?;
let iterable = checker.locator().slice(iterable.range());
let iterable = checker.locator().slice(iterable);
Ok(Fix::suggested_edits(
Edit::range_replacement(


@@ -1,5 +1,6 @@
use ruff_python_ast::Expr;
use crate::autofix::snippet::SourceCodeSnippet;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::is_constant;
@@ -29,14 +30,18 @@ use crate::checkers::ast::Checker;
/// ```
#[violation]
pub struct StaticKeyDictComprehension {
key: String,
key: SourceCodeSnippet,
}
impl Violation for StaticKeyDictComprehension {
#[derive_message_formats]
fn message(&self) -> String {
let StaticKeyDictComprehension { key } = self;
format!("Dictionary comprehension uses static key: `{key}`")
if let Some(key) = key.full_display() {
format!("Dictionary comprehension uses static key: `{key}`")
} else {
format!("Dictionary comprehension uses static key")
}
}
}
@@ -45,7 +50,7 @@ pub(crate) fn static_key_dict_comprehension(checker: &mut Checker, key: &Expr) {
if is_constant(key) {
checker.diagnostics.push(Diagnostic::new(
StaticKeyDictComprehension {
key: checker.locator().slice(key.range()).to_string(),
key: SourceCodeSnippet::from_str(checker.locator().slice(key)),
},
key.range(),
));
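The diff above changes `StaticKeyDictComprehension` to carry the key as a `SourceCodeSnippet`. The underlying smell the rule reports is a comprehension whose key never varies, so each iteration overwrites the previous entry; a minimal sketch:

```python
values = [1, 2, 3]
# Static key: every iteration writes to the same slot,
# leaving only the last value.
d = {"key": v for v in values}
assert d == {"key": 3}
```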


@@ -10,7 +10,7 @@ RUF100_0.py:9:12: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
6 6 | b = 2 # noqa: F841
7 7 |
8 8 | # Invalid
@@ -30,7 +30,7 @@ RUF100_0.py:13:12: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
10 10 | print(c)
11 11 |
12 12 | # Invalid
@@ -50,7 +50,7 @@ RUF100_0.py:16:12: RUF100 [*] Unused `noqa` directive (unused: `F841`, `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
13 13 | d = 1 # noqa: E501
14 14 |
15 15 | # Invalid
@@ -70,7 +70,7 @@ RUF100_0.py:19:12: RUF100 [*] Unused `noqa` directive (unused: `F841`, `W191`; n
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
16 16 | d = 1 # noqa: F841, E501
17 17 |
18 18 | # Invalid (and unimplemented or not enabled)
@@ -90,7 +90,7 @@ RUF100_0.py:22:12: RUF100 [*] Unused `noqa` directive (unused: `F841`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
19 19 | d = 1 # noqa: F841, W191, F821
20 20 |
21 21 | # Invalid (but external)
@@ -111,7 +111,7 @@ RUF100_0.py:26:10: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
23 23 |
24 24 | # fmt: off
25 25 | # Invalid - no space before #
@@ -148,7 +148,7 @@ RUF100_0.py:29:33: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
26 26 | d = 1# noqa: E501
27 27 |
28 28 | # Invalid - many spaces before #
@@ -168,7 +168,7 @@ RUF100_0.py:55:6: RUF100 [*] Unused `noqa` directive (unused: `F841`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
52 52 | https://github.com/PyCQA/pycodestyle/pull/258/files#diff-841c622497a8033d10152bfdfb15b20b92437ecdea21a260944ea86b77b51533
53 53 |
54 54 | Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
@@ -188,7 +188,7 @@ RUF100_0.py:63:6: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
60 60 | https://github.com/PyCQA/pycodestyle/pull/258/files#diff-841c622497a8033d10152bfdfb15b20b92437ecdea21a260944ea86b77b51533
61 61 |
62 62 | Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor.
@@ -208,7 +208,7 @@ RUF100_0.py:71:6: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
68 68 | https://github.com/PyCQA/pycodestyle/pull/258/files#diff-841c622497a8033d10152bfdfb15b20b92437ecdea21a260944ea86b77b51533
69 69 |
70 70 | Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor.
@@ -254,7 +254,7 @@ RUF100_0.py:90:92: RUF100 [*] Unused `noqa` directive (unused: `F401`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
87 87 |
88 88 | print(sys.path)
89 89 |


@@ -32,7 +32,7 @@ RUF100_1.py:52:20: RUF100 [*] Unused `noqa` directive (unused: `F401`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
49 49 | def f():
50 50 | # This should ignore the error, but the inner noqa should be marked as unused.
51 51 | from typing import ( # noqa: F401
@@ -52,7 +52,7 @@ RUF100_1.py:59:20: RUF100 [*] Unused `noqa` directive (unused: `F401`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
56 56 | def f():
57 57 | # This should ignore the error, but the inner noqa should be marked as unused.
58 58 | from typing import ( # noqa
@@ -72,7 +72,7 @@ RUF100_1.py:66:16: RUF100 [*] Unused `noqa` directive (non-enabled: `F501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
63 63 | def f():
64 64 | # This should ignore the error, but mark F501 as unused.
65 65 | from typing import ( # noqa: F401
@@ -93,7 +93,7 @@ RUF100_1.py:72:27: RUF100 [*] Unused `noqa` directive (non-enabled: `F501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
69 69 |
70 70 | def f():
71 71 | # This should ignore the error, but mark F501 as unused.
@@ -144,7 +144,7 @@ RUF100_1.py:89:55: RUF100 [*] Unused `noqa` directive (non-enabled: `F501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
86 86 |
87 87 | def f():
88 88 | # This should mark F501 as unused.


@@ -8,7 +8,7 @@ RUF100_2.py:1:19: RUF100 [*] Unused `noqa` directive (unused: `F401`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
1 |-import itertools # noqa: F401
1 |+import itertools


@@ -10,7 +10,7 @@ RUF100_3.py:1:1: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
1 |-# noqa
2 1 | # noqa # comment
3 2 | print() # noqa
@@ -26,7 +26,7 @@ RUF100_3.py:2:1: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
1 1 | # noqa
2 |-# noqa # comment
2 |+# comment
@@ -45,7 +45,7 @@ RUF100_3.py:3:10: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
1 1 | # noqa
2 2 | # noqa # comment
3 |-print() # noqa
@@ -65,7 +65,7 @@ RUF100_3.py:4:10: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
1 1 | # noqa
2 2 | # noqa # comment
3 3 | print() # noqa
@@ -86,7 +86,7 @@ RUF100_3.py:5:10: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
2 2 | # noqa # comment
3 3 | print() # noqa
4 4 | print() # noqa # comment
@@ -107,7 +107,7 @@ RUF100_3.py:6:10: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
3 3 | print() # noqa
4 4 | print() # noqa # comment
5 5 | print() # noqa # comment
@@ -128,7 +128,7 @@ RUF100_3.py:7:10: RUF100 [*] Unused blanket `noqa` directive
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
4 4 | print() # noqa # comment
5 5 | print() # noqa # comment
6 6 | print() # noqa comment
@@ -149,7 +149,7 @@ RUF100_3.py:14:1: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
11 11 | print(a) # noqa comment
12 12 | print(a) # noqa comment
13 13 |
@@ -168,7 +168,7 @@ RUF100_3.py:15:1: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
12 12 | print(a) # noqa comment
13 13 |
14 14 | # noqa: E501, F821
@@ -189,7 +189,7 @@ RUF100_3.py:16:10: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
13 13 |
14 14 | # noqa: E501, F821
15 15 | # noqa: E501, F821 # comment
@@ -210,7 +210,7 @@ RUF100_3.py:17:10: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
14 14 | # noqa: E501, F821
15 15 | # noqa: E501, F821 # comment
16 16 | print() # noqa: E501, F821
@@ -231,7 +231,7 @@ RUF100_3.py:18:10: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
15 15 | # noqa: E501, F821 # comment
16 16 | print() # noqa: E501, F821
17 17 | print() # noqa: E501, F821 # comment
@@ -252,7 +252,7 @@ RUF100_3.py:19:10: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
16 16 | print() # noqa: E501, F821
17 17 | print() # noqa: E501, F821 # comment
18 18 | print() # noqa: E501, F821 # comment
@@ -273,7 +273,7 @@ RUF100_3.py:20:10: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
17 17 | print() # noqa: E501, F821 # comment
18 18 | print() # noqa: E501, F821 # comment
19 19 | print() # noqa: E501, F821 comment
@@ -294,7 +294,7 @@ RUF100_3.py:21:11: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
18 18 | print() # noqa: E501, F821 # comment
19 19 | print() # noqa: E501, F821 comment
20 20 | print() # noqa: E501, F821 comment
@@ -315,7 +315,7 @@ RUF100_3.py:22:11: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
19 19 | print() # noqa: E501, F821 comment
20 20 | print() # noqa: E501, F821 comment
21 21 | print(a) # noqa: E501, F821
@@ -336,7 +336,7 @@ RUF100_3.py:23:11: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
20 20 | print() # noqa: E501, F821 comment
21 21 | print(a) # noqa: E501, F821
22 22 | print(a) # noqa: E501, F821 # comment
@@ -355,7 +355,7 @@ RUF100_3.py:24:11: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
21 21 | print(a) # noqa: E501, F821
22 22 | print(a) # noqa: E501, F821 # comment
23 23 | print(a) # noqa: E501, F821 # comment
@@ -372,7 +372,7 @@ RUF100_3.py:25:11: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
22 22 | print(a) # noqa: E501, F821 # comment
23 23 | print(a) # noqa: E501, F821 # comment
24 24 | print(a) # noqa: E501, F821 comment


@@ -40,7 +40,7 @@ RUF100_5.py:11:13: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
= help: Remove unused `noqa` directive
Fix
Suggested fix
8 8 | }
9 9 |
10 10 |


@@ -3,7 +3,9 @@ use crate::jupyter::Notebook;
#[derive(Clone, Debug, PartialEq, is_macro::Is)]
pub enum SourceKind {
/// The source contains Python source code.
Python(String),
/// The source contains a Jupyter notebook.
IpyNotebook(Notebook),
}


@@ -47,6 +47,7 @@ colored = { workspace = true }
filetime = { workspace = true }
glob = { workspace = true }
ignore = { workspace = true }
is-macro = { workspace = true }
itertools = { workspace = true }
itoa = { version = "1.0.6" }
log = { workspace = true }


@@ -34,7 +34,7 @@ pub struct Args {
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Run Ruff on the given files or directories (default).
Check(CheckArgs),
Check(CheckCommand),
/// Explain a rule (or all rules).
#[clap(alias = "--explain")]
#[command(group = clap::ArgGroup::new("selector").multiple(false).required(true))]
@@ -51,9 +51,9 @@ pub enum Command {
#[arg(long, value_enum, default_value = "text")]
format: HelpFormat,
},
/// List or describe the available configuration options
/// List or describe the available configuration options.
Config { option: Option<String> },
/// List all supported upstream linters
/// List all supported upstream linters.
Linter {
/// Output format
#[arg(long, value_enum, default_value = "text")]
@@ -65,19 +65,16 @@ pub enum Command {
/// Generate shell completion.
#[clap(alias = "--generate-shell-completion", hide = true)]
GenerateShellCompletion { shell: clap_complete_command::Shell },
/// Format the given files, or stdin when using `-`.
/// Run the Ruff formatter on the given files or directories.
#[doc(hidden)]
#[clap(hide = true)]
Format {
/// List of files or directories to format or `-` for stdin
files: Vec<PathBuf>,
},
Format(FormatCommand),
}
// The `Parser` derive is for ruff_dev, for ruff_cli `Args` would be sufficient
#[derive(Clone, Debug, clap::Parser)]
#[allow(clippy::struct_excessive_bools, clippy::module_name_repetitions)]
pub struct CheckArgs {
#[allow(clippy::struct_excessive_bools)]
pub struct CheckCommand {
/// List of files or directories to check.
pub files: Vec<PathBuf>,
/// Attempt to automatically fix lint violations.
@@ -95,15 +92,13 @@ pub struct CheckArgs {
show_fixes: bool,
#[clap(long, overrides_with("show_fixes"), hide = true)]
no_show_fixes: bool,
/// Avoid writing any fixed files back; instead, output a diff for each
/// changed file to stdout. Implies `--fix-only`.
/// Avoid writing any fixed files back; instead, output a diff for each changed file to stdout. Implies `--fix-only`.
#[arg(long, conflicts_with = "show_fixes")]
pub diff: bool,
/// Run in watch mode by re-running whenever files change.
#[arg(short, long)]
pub watch: bool,
/// Fix any fixable lint violations, but don't report on leftover
/// violations. Implies `--fix`.
/// Fix any fixable lint violations, but don't report on leftover violations. Implies `--fix`.
#[arg(long, overrides_with("no_fix_only"))]
fix_only: bool,
#[clap(long, overrides_with("fix_only"), hide = true)]
@@ -124,8 +119,7 @@ pub struct CheckArgs {
/// configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Comma-separated list of rule codes to enable (or ALL, to enable all
/// rules).
/// Comma-separated list of rule codes to enable (or ALL, to enable all rules).
#[arg(
long,
value_delimiter = ',',
@@ -145,8 +139,7 @@ pub struct CheckArgs {
hide_possible_values = true
)]
pub ignore: Option<Vec<RuleSelector>>,
/// Like --select, but adds additional rule codes on top of those already
/// specified.
/// Like --select, but adds additional rule codes on top of those already specified.
#[arg(
long,
value_delimiter = ',',
@@ -169,8 +162,7 @@ pub struct CheckArgs {
/// List of mappings from file pattern to code to exclude.
#[arg(long, value_delimiter = ',', help_heading = "Rule selection")]
pub per_file_ignores: Option<Vec<PatternPrefixPair>>,
/// Like `--per-file-ignores`, but adds additional ignores on top of
/// those already specified.
/// Like `--per-file-ignores`, but adds additional ignores on top of those already specified.
#[arg(long, value_delimiter = ',', help_heading = "Rule selection")]
pub extend_per_file_ignores: Option<Vec<PatternPrefixPair>>,
/// List of paths, used to omit files and/or directories from analysis.
@@ -181,8 +173,7 @@ pub struct CheckArgs {
help_heading = "File selection"
)]
pub exclude: Option<Vec<FilePattern>>,
/// Like --exclude, but adds additional files and directories on top of
/// those already excluded.
/// Like --exclude, but adds additional files and directories on top of those already excluded.
#[arg(
long,
value_delimiter = ',',
@@ -190,8 +181,7 @@ pub struct CheckArgs {
help_heading = "File selection"
)]
pub extend_exclude: Option<Vec<FilePattern>>,
/// List of rule codes to treat as eligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
/// List of rule codes to treat as eligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`).
#[arg(
long,
value_delimiter = ',',
@@ -201,8 +191,7 @@ pub struct CheckArgs {
hide_possible_values = true
)]
pub fixable: Option<Vec<RuleSelector>>,
/// List of rule codes to treat as ineligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
/// List of rule codes to treat as ineligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`).
#[arg(
long,
value_delimiter = ',',
@@ -212,8 +201,7 @@ pub struct CheckArgs {
hide_possible_values = true
)]
pub unfixable: Option<Vec<RuleSelector>>,
/// Like --fixable, but adds additional rule codes on top of those already
/// specified.
/// Like --fixable, but adds additional rule codes on top of those already specified.
#[arg(
long,
value_delimiter = ',',
@@ -233,8 +221,7 @@ pub struct CheckArgs {
hide = true
)]
pub extend_unfixable: Option<Vec<RuleSelector>>,
/// Respect file exclusions via `.gitignore` and other standard ignore
/// files.
/// Respect file exclusions via `.gitignore` and other standard ignore files.
#[arg(
long,
overrides_with("no_respect_gitignore"),
@@ -243,8 +230,7 @@ pub struct CheckArgs {
respect_gitignore: bool,
#[clap(long, overrides_with("respect_gitignore"), hide = true)]
no_respect_gitignore: bool,
/// Enforce exclusions, even for paths passed to Ruff directly on the
/// command-line.
/// Enforce exclusions, even for paths passed to Ruff directly on the command-line.
#[arg(
long,
overrides_with("no_force_exclude"),
@@ -253,8 +239,7 @@ pub struct CheckArgs {
force_exclude: bool,
#[clap(long, overrides_with("force_exclude"), hide = true)]
no_force_exclude: bool,
/// Set the line-length for length-associated rules and automatic
/// formatting.
/// Set the line-length for length-associated rules and automatic formatting.
#[arg(long, help_heading = "Rule configuration", hide = true)]
pub line_length: Option<LineLength>,
/// Regular expression matching the name of dummy variables.
@@ -280,8 +265,7 @@ pub struct CheckArgs {
conflicts_with = "exit_non_zero_on_fix"
)]
pub exit_zero: bool,
/// Exit with a non-zero status code if any files were modified via
/// autofix, even if no lint violations remain.
/// Exit with a non-zero status code if any files were modified via autofix, even if no lint violations remain.
#[arg(long, help_heading = "Miscellaneous", conflicts_with = "exit_zero")]
pub exit_non_zero_on_fix: bool,
/// Show counts for every rule with at least one violation.
@@ -340,6 +324,53 @@ pub struct CheckArgs {
pub ecosystem_ci: bool,
}
#[derive(Clone, Debug, clap::Parser)]
#[allow(clippy::struct_excessive_bools)]
pub struct FormatCommand {
/// List of files or directories to format.
pub files: Vec<PathBuf>,
/// Avoid writing any formatted files back; instead, exit with a non-zero status code if any
/// files would have been modified, and zero otherwise.
#[arg(long)]
pub check: bool,
/// Specify file to write the formatter output to (default: stdout).
#[arg(short, long)]
pub output_file: Option<PathBuf>,
/// The minimum Python version that should be supported.
#[arg(long, value_enum)]
pub target_version: Option<PythonVersion>,
/// Path to the `pyproject.toml` or `ruff.toml` file to use for configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Respect file exclusions via `.gitignore` and other standard ignore files.
#[arg(
long,
overrides_with("no_respect_gitignore"),
help_heading = "File selection"
)]
respect_gitignore: bool,
#[clap(long, overrides_with("respect_gitignore"), hide = true)]
no_respect_gitignore: bool,
/// Enforce exclusions, even for paths passed to Ruff directly on the command-line.
#[arg(
long,
overrides_with("no_force_exclude"),
help_heading = "File selection"
)]
force_exclude: bool,
#[clap(long, overrides_with("force_exclude"), hide = true)]
no_force_exclude: bool,
/// Set the line-length.
#[arg(long, help_heading = "Rule configuration", hide = true)]
pub line_length: Option<LineLength>,
/// Ignore all configuration files.
#[arg(long, conflicts_with = "config", help_heading = "Miscellaneous")]
pub isolated: bool,
/// The name of the file when passing it through stdin.
#[arg(long, help_heading = "Miscellaneous")]
pub stdin_filename: Option<PathBuf>,
}
#[derive(Debug, Clone, Copy, clap::ValueEnum)]
pub enum HelpFormat {
Text,
@@ -367,8 +398,7 @@ pub struct LogLevelArgs {
help_heading = "Log levels"
)]
pub quiet: bool,
/// Disable all logging (but still exit with status code "1" upon detecting
/// lint violations).
/// Disable all logging (but still exit with status code "1" upon detecting lint violations).
#[arg(
short,
long,
@@ -393,12 +423,12 @@ impl From<&LogLevelArgs> for LogLevel {
}
}
impl CheckArgs {
impl CheckCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> (Arguments, Overrides) {
pub fn partition(self) -> (CheckArguments, Overrides) {
(
Arguments {
CheckArguments {
add_noqa: self.add_noqa,
config: self.config,
diff: self.diff,
@@ -448,6 +478,34 @@ impl CheckArgs {
}
}
impl FormatCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> (FormatArguments, Overrides) {
(
FormatArguments {
check: self.check,
config: self.config,
files: self.files,
isolated: self.isolated,
output_file: self.output_file,
stdin_filename: self.stdin_filename,
},
Overrides {
line_length: self.line_length,
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
self.no_respect_gitignore,
),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
target_version: self.target_version,
// Unsupported on the formatter CLI, but required on `Overrides`.
..Overrides::default()
},
)
}
}
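The `partition` above funnels each paired `--flag` / `--no-flag` option (e.g. `respect_gitignore` / `no_respect_gitignore`) through `resolve_bool_arg`. A minimal sketch of that three-state resolution, assuming clap's `overrides_with` attributes keep the two flags mutually exclusive as the derive attributes above suggest:

```rust
/// Collapse a `--flag` / `--no-flag` pair into an optional override:
/// `Some(true)` or `Some(false)` when one flag was passed, `None` when
/// neither was, so the configuration file's value wins.
fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
    match (yes, no) {
        (true, false) => Some(true),
        (false, true) => Some(false),
        (false, false) => None,
        // clap's `overrides_with` should make this state impossible.
        (true, true) => unreachable!("mutually exclusive arguments"),
    }
}
```

The `None` case is what lets the CLI act as a pure override layer: `Overrides` only carries values the user explicitly set.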
fn parse_rule_selector(env: &str) -> Result<RuleSelector, std::io::Error> {
RuleSelector::from_str(env)
.map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidInput, e))
@@ -465,7 +523,7 @@ fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
/// CLI settings that are distinct from configuration (commands, lists of files,
/// etc.).
#[allow(clippy::struct_excessive_bools)]
pub struct Arguments {
pub struct CheckArguments {
pub add_noqa: bool,
pub config: Option<PathBuf>,
pub diff: bool,
@@ -484,6 +542,18 @@ pub struct Arguments {
pub watch: bool,
}
/// CLI settings that are distinct from configuration (commands, lists of files,
/// etc.).
#[allow(clippy::struct_excessive_bools)]
pub struct FormatArguments {
pub check: bool,
pub config: Option<PathBuf>,
pub files: Vec<PathBuf>,
pub isolated: bool,
pub output_file: Option<PathBuf>,
pub stdin_filename: Option<PathBuf>,
}
/// CLI settings that function as configuration overrides.
#[derive(Clone, Default)]
#[allow(clippy::struct_excessive_bools)]


@@ -8,7 +8,7 @@ use rayon::prelude::*;
use ruff::linter::add_noqa_to_path;
use ruff::warn_user_once;
use ruff_python_stdlib::path::{is_jupyter_notebook, is_project_toml};
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig};
use crate::args::Overrides;
@@ -46,15 +46,17 @@ pub(crate) fn add_noqa(
.flatten()
.filter_map(|entry| {
let path = entry.path();
if is_project_toml(path) || is_jupyter_notebook(path) {
let SourceType::Python(source_type @ (PySourceType::Python | PySourceType::Stub)) =
SourceType::from(path)
else {
return None;
}
};
let package = path
.parent()
.and_then(|parent| package_roots.get(parent))
.and_then(|package| *package);
let settings = resolver.resolve(path, pyproject_config);
match add_noqa_to_path(path, package, settings) {
match add_noqa_to_path(path, package, source_type, settings) {
Ok(count) => Some(count),
Err(e) => {
error!("Failed to add noqa to {}: {e}", path.display());


@@ -28,7 +28,7 @@ use crate::diagnostics::Diagnostics;
use crate::panic::catch_unwind;
/// Run the linter over a collection of files.
pub(crate) fn run(
pub(crate) fn check(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
overrides: &Overrides,
@@ -245,7 +245,7 @@ mod test {
use crate::args::Overrides;
use super::run;
use super::check;
/// We check that regular python files, pyproject.toml and jupyter notebooks all handle io
/// errors gracefully
@@ -278,7 +278,7 @@ mod test {
PyprojectConfig::new(PyprojectDiscoveryStrategy::Fixed, settings, None);
// Run
let diagnostics = run(
let diagnostics = check(
// Notebooks are not included by default
&[tempdir.path().to_path_buf(), notebook],
&pyproject_config,


@@ -1,4 +1,3 @@
use std::io::{self, Read};
use std::path::Path;
use anyhow::Result;
@@ -9,16 +8,10 @@ use ruff_workspace::resolver::{python_file_at_path, PyprojectConfig};
use crate::args::Overrides;
use crate::diagnostics::{lint_stdin, Diagnostics};
/// Read a `String` from `stdin`.
pub(crate) fn read_from_stdin() -> Result<String> {
let mut buffer = String::new();
io::stdin().lock().read_to_string(&mut buffer)?;
Ok(buffer)
}
use crate::stdin::read_from_stdin;
/// Run the linter over a single file, read from `stdin`.
pub(crate) fn run_stdin(
pub(crate) fn check_stdin(
filename: Option<&Path>,
pyproject_config: &PyprojectConfig,
overrides: &Overrides,


@@ -1,33 +1,56 @@
use std::fmt::{Display, Formatter};
use std::fs::File;
use std::io;
use std::io::{BufWriter, Write};
use std::num::NonZeroU16;
use std::path::{Path, PathBuf};
use std::time::Instant;
use anyhow::Result;
use colored::Colorize;
use log::warn;
use log::{debug, warn};
use rayon::iter::Either::{Left, Right};
use rayon::iter::{IntoParallelIterator, ParallelIterator};
use thiserror::Error;
use tracing::{span, Level};
use ruff::fs;
use ruff::logging::LogLevel;
use ruff::warn_user_once;
use ruff_formatter::LineWidth;
use ruff_python_ast::PySourceType;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_python_formatter::{format_module, FormatModuleError, PyFormatOptions};
use ruff_workspace::resolver::python_files_in_path;
use crate::args::{Arguments, Overrides};
use crate::args::{FormatArguments, Overrides};
use crate::resolve::resolve;
use crate::ExitStatus;
#[derive(Debug, Copy, Clone, is_macro::Is)]
pub(crate) enum FormatMode {
/// Write the formatted contents back to the file.
Write,
/// Check if the file is formatted, but do not write the formatted contents back.
Check,
}
/// Format a set of files, and return the exit status.
pub(crate) fn format(cli: &Arguments, overrides: &Overrides) -> Result<ExitStatus> {
pub(crate) fn format(
cli: &FormatArguments,
overrides: &Overrides,
log_level: LogLevel,
) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
let mode = if cli.check {
FormatMode::Check
} else {
FormatMode::Write
};
let (paths, resolver) = python_files_in_path(&cli.files, &pyproject_config, overrides)?;
if paths.is_empty() {
@@ -35,115 +58,251 @@ pub(crate) fn format(cli: &Arguments, overrides: &Overrides) -> Result<ExitStatu
return Ok(ExitStatus::Success);
}
let result = paths
let start = Instant::now();
let (results, errors): (Vec<_>, Vec<_>) = paths
.into_par_iter()
.map(|dir_entry| {
let dir_entry = dir_entry?;
let path = dir_entry.path();
let source_type = PySourceType::from(path);
if !(source_type.is_python() || source_type.is_stub())
|| path
.extension()
.is_some_and(|extension| extension == "toml")
{
return Ok(());
.filter_map(|entry| match entry {
Ok(entry) => {
let path = entry.path();
let SourceType::Python(source_type @ (PySourceType::Python | PySourceType::Stub)) =
SourceType::from(path)
else {
// Ignore any non-Python files.
return None;
};
let line_length = resolver.resolve(path, &pyproject_config).line_length;
let options = PyFormatOptions::from_source_type(source_type)
.with_line_width(LineWidth::from(NonZeroU16::from(line_length)));
Some(format_path(path, options, mode))
}
let line_length = resolver.resolve(path, &pyproject_config).line_length;
let options = PyFormatOptions::from_extension(path)
.with_line_width(LineWidth::from(NonZeroU16::from(line_length)));
format_path(path, options)
Err(err) => Some(Err(FormatCommandError::Ignore(err))),
})
.map(|result| {
result.map_err(|err| {
err.show_user();
err
})
})
.collect::<Result<Vec<_>, _>>();
.partition_map(|result| match result {
Ok(diagnostic) => Left(diagnostic),
Err(err) => Right(err),
});
let duration = start.elapsed();
debug!("Formatted files in: {:?}", duration);
if result.is_ok() {
Ok(ExitStatus::Success)
} else {
Ok(ExitStatus::Error)
let summary = FormatResultSummary::new(results, mode);
// Report on any errors.
if !errors.is_empty() {
warn!("Encountered {} errors while formatting:", errors.len());
for error in &errors {
warn!("{error}");
}
}
// Report on the formatting changes.
if log_level >= LogLevel::Default {
let mut writer: Box<dyn Write> = match &cli.output_file {
Some(path) => {
colored::control::set_override(false);
let file = File::create(path)?;
Box::new(BufWriter::new(file))
}
_ => Box::new(BufWriter::new(io::stdout())),
};
writeln!(writer, "{summary}")?;
}
match mode {
FormatMode::Write => {
if errors.is_empty() {
Ok(ExitStatus::Success)
} else {
Ok(ExitStatus::Error)
}
}
FormatMode::Check => {
if errors.is_empty() {
if summary.formatted > 0 {
Ok(ExitStatus::Failure)
} else {
Ok(ExitStatus::Success)
}
} else {
Ok(ExitStatus::Error)
}
}
}
}
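The nested `match mode { ... }` at the end of `format` condenses to a small decision table: errors always win, and `--check` turns "files would change" into a failure (exit code 1, per the PR description above). A sketch with an illustrative function name:

```rust
#[derive(Debug, PartialEq)]
enum ExitStatus {
    Success, // nothing to do, or all writes succeeded
    Failure, // `--check` found at least one file that would be reformatted
    Error,   // the formatter itself hit an error
}

/// Condensed form of the exit-status branching in `format`.
fn resolve_exit_status(check_mode: bool, errors: usize, reformatted: usize) -> ExitStatus {
    if errors > 0 {
        ExitStatus::Error
    } else if check_mode && reformatted > 0 {
        ExitStatus::Failure
    } else {
        ExitStatus::Success
    }
}
```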
/// Format the file at the given [`Path`].
#[tracing::instrument(skip_all, fields(path = %path.display()))]
fn format_path(path: &Path, options: PyFormatOptions) -> Result<(), FormatterIterationError> {
fn format_path(
path: &Path,
options: PyFormatOptions,
mode: FormatMode,
) -> Result<FormatCommandResult, FormatCommandError> {
let unformatted = std::fs::read_to_string(path)
.map_err(|err| FormatterIterationError::Read(path.to_path_buf(), err))?;
.map_err(|err| FormatCommandError::Read(Some(path.to_path_buf()), err))?;
let formatted = {
let span = span!(Level::TRACE, "format_path_without_io", path = %path.display());
let _enter = span.enter();
format_module(&unformatted, options)
.map_err(|err| FormatterIterationError::FormatModule(path.to_path_buf(), err))?
.map_err(|err| FormatCommandError::FormatModule(Some(path.to_path_buf()), err))?
};
std::fs::write(path, formatted.as_code().as_bytes())
.map_err(|err| FormatterIterationError::Write(path.to_path_buf(), err))?;
Ok(())
let formatted = formatted.as_code();
if formatted.len() == unformatted.len() && formatted == unformatted {
Ok(FormatCommandResult::Unchanged)
} else {
if mode.is_write() {
std::fs::write(path, formatted.as_bytes())
.map_err(|err| FormatCommandError::Write(Some(path.to_path_buf()), err))?;
}
Ok(FormatCommandResult::Formatted)
}
}
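The read/compare/conditionally-write flow of `format_path` can be sketched with the formatter abstracted away. Names here are illustrative; the real code calls `format_module` and returns `FormatCommandResult` with path-carrying `FormatCommandError`s:

```rust
use std::fs;
use std::io;
use std::path::Path;

#[derive(Debug, PartialEq)]
enum Outcome {
    Formatted,
    Unchanged,
}

/// Read a file, run a formatter over it, and write back only when the
/// mode allows it and the contents actually changed. `fmt` stands in
/// for the real `format_module` call.
fn format_file(path: &Path, write: bool, fmt: impl Fn(&str) -> String) -> io::Result<Outcome> {
    let unformatted = fs::read_to_string(path)?;
    let formatted = fmt(&unformatted);
    if formatted == unformatted {
        return Ok(Outcome::Unchanged);
    }
    if write {
        // Check mode (`--check`) skips this write and only reports.
        fs::write(path, formatted.as_bytes())?;
    }
    Ok(Outcome::Formatted)
}
```

Skipping the write for unchanged files also preserves file mtimes, which keeps downstream build tools from seeing spurious changes.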
#[derive(Debug, Clone, Copy, is_macro::Is)]
pub(crate) enum FormatCommandResult {
/// The file was formatted.
Formatted,
/// The file was unchanged, as the formatted contents matched the existing contents.
Unchanged,
}
#[derive(Debug)]
struct FormatResultSummary {
/// The format mode that was used.
mode: FormatMode,
/// The number of files that were formatted.
formatted: usize,
/// The number of files that were unchanged.
unchanged: usize,
}
impl FormatResultSummary {
fn new(diagnostics: Vec<FormatCommandResult>, mode: FormatMode) -> Self {
let mut summary = Self {
mode,
formatted: 0,
unchanged: 0,
};
for diagnostic in diagnostics {
match diagnostic {
FormatCommandResult::Formatted => summary.formatted += 1,
FormatCommandResult::Unchanged => summary.unchanged += 1,
}
}
summary
}
}
impl Display for FormatResultSummary {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
if self.formatted > 0 && self.unchanged > 0 {
write!(
f,
"{} file{} {}, {} file{} left unchanged",
self.formatted,
if self.formatted == 1 { "" } else { "s" },
match self.mode {
FormatMode::Write => "reformatted",
FormatMode::Check => "would be reformatted",
},
self.unchanged,
if self.unchanged == 1 { "" } else { "s" },
)
} else if self.formatted > 0 {
write!(
f,
"{} file{} {}",
self.formatted,
if self.formatted == 1 { "" } else { "s" },
match self.mode {
FormatMode::Write => "reformatted",
FormatMode::Check => "would be reformatted",
}
)
} else if self.unchanged > 0 {
write!(
f,
"{} file{} left unchanged",
self.unchanged,
if self.unchanged == 1 { "" } else { "s" },
)
} else {
Ok(())
}
}
}
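The three `write!` arms of the `Display` impl above differ only in which counts are present and in the check-mode verb. The pluralization logic reduces to one helper; a condensed sketch (function name is illustrative):

```rust
/// Build the one-line summary, e.g. "2 files would be reformatted, 1 file left unchanged".
fn summary_line(formatted: usize, unchanged: usize, check: bool) -> String {
    let verb = if check { "would be reformatted" } else { "reformatted" };
    // English pluralization: "1 file" vs "2 files".
    let s = |n: usize| if n == 1 { "" } else { "s" };
    match (formatted, unchanged) {
        (0, 0) => String::new(),
        (f, 0) => format!("{f} file{} {verb}", s(f)),
        (0, u) => format!("{u} file{} left unchanged", s(u)),
        (f, u) => format!(
            "{f} file{} {verb}, {u} file{} left unchanged",
            s(f),
            s(u)
        ),
    }
}
```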
/// An error that can occur while formatting a set of files.
#[derive(Error, Debug)]
enum FormatterIterationError {
#[error("Failed to traverse: {0}")]
pub(crate) enum FormatCommandError {
Ignore(#[from] ignore::Error),
#[error("Failed to read {0}: {1}")]
Read(PathBuf, io::Error),
#[error("Failed to write {0}: {1}")]
Write(PathBuf, io::Error),
#[error("Failed to format {0}: {1}")]
FormatModule(PathBuf, FormatModuleError),
Read(Option<PathBuf>, io::Error),
Write(Option<PathBuf>, io::Error),
FormatModule(Option<PathBuf>, FormatModuleError),
}
impl FormatterIterationError {
/// Pretty-print a [`FormatterIterationError`] for user-facing display.
fn show_user(&self) {
impl Display for FormatCommandError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Self::Ignore(err) => {
if let ignore::Error::WithPath { path, .. } = err {
warn!(
write!(
f,
"{}{}{} {}",
"Failed to format ".bold(),
fs::relativize_path(path).bold(),
":".bold(),
err.io_error()
.map_or_else(|| err.to_string(), std::string::ToString::to_string)
);
)
} else {
warn!(
write!(
f,
"{} {}",
"Encountered error:".bold(),
err.io_error()
.map_or_else(|| err.to_string(), std::string::ToString::to_string)
);
)
}
}
Self::Read(path, err) => {
warn!(
"{}{}{} {err}",
"Failed to read ".bold(),
fs::relativize_path(path).bold(),
":".bold()
);
if let Some(path) = path {
write!(
f,
"{}{}{} {err}",
"Failed to read ".bold(),
fs::relativize_path(path).bold(),
":".bold()
)
} else {
write!(f, "{}{} {err}", "Failed to read".bold(), ":".bold())
}
}
Self::Write(path, err) => {
warn!(
"{}{}{} {err}",
"Failed to write ".bold(),
fs::relativize_path(path).bold(),
":".bold()
);
if let Some(path) = path {
write!(
f,
"{}{}{} {err}",
"Failed to write ".bold(),
fs::relativize_path(path).bold(),
":".bold()
)
} else {
write!(f, "{}{} {err}", "Failed to write".bold(), ":".bold())
}
}
Self::FormatModule(path, err) => {
warn!(
"{}{}{} {err}",
"Failed to format ".bold(),
fs::relativize_path(path).bold(),
":".bold()
);
if let Some(path) = path {
write!(
f,
"{}{}{} {err}",
"Failed to format ".bold(),
fs::relativize_path(path).bold(),
":".bold()
)
} else {
write!(f, "{}{} {err}", "Failed to format".bold(), ":".bold())
}
}
}
}


@@ -0,0 +1,81 @@
use std::io::{stdout, Write};
use std::path::Path;
use anyhow::Result;
use log::warn;
use ruff_python_formatter::{format_module, PyFormatOptions};
use ruff_workspace::resolver::python_file_at_path;
use crate::args::{FormatArguments, Overrides};
use crate::commands::format::{FormatCommandError, FormatCommandResult, FormatMode};
use crate::resolve::resolve;
use crate::stdin::read_from_stdin;
use crate::ExitStatus;
/// Run the formatter over a single file, read from `stdin`.
pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &Overrides) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
let mode = if cli.check {
FormatMode::Check
} else {
FormatMode::Write
};
if let Some(filename) = cli.stdin_filename.as_deref() {
if !python_file_at_path(filename, &pyproject_config, overrides)? {
return Ok(ExitStatus::Success);
}
}
// Format the file.
let path = cli.stdin_filename.as_deref();
let options = path
.map(PyFormatOptions::from_extension)
.unwrap_or_default();
match format_source(path, options, mode) {
Ok(result) => match mode {
FormatMode::Write => Ok(ExitStatus::Success),
FormatMode::Check => {
if result.is_formatted() {
Ok(ExitStatus::Failure)
} else {
Ok(ExitStatus::Success)
}
}
},
Err(err) => {
warn!("{err}");
Ok(ExitStatus::Error)
}
}
}
/// Format source code read from `stdin`.
fn format_source(
path: Option<&Path>,
options: PyFormatOptions,
mode: FormatMode,
) -> Result<FormatCommandResult, FormatCommandError> {
let unformatted = read_from_stdin()
.map_err(|err| FormatCommandError::Read(path.map(Path::to_path_buf), err))?;
let formatted = format_module(&unformatted, options)
.map_err(|err| FormatCommandError::FormatModule(path.map(Path::to_path_buf), err))?;
let formatted = formatted.as_code();
if formatted.len() == unformatted.len() && formatted == unformatted {
Ok(FormatCommandResult::Unchanged)
} else {
if mode.is_write() {
stdout()
.lock()
.write_all(formatted.as_bytes())
.map_err(|err| FormatCommandError::Write(path.map(Path::to_path_buf), err))?;
}
Ok(FormatCommandResult::Formatted)
}
}


@@ -1,10 +1,11 @@
pub(crate) mod add_noqa;
pub(crate) mod check;
pub(crate) mod check_stdin;
pub(crate) mod clean;
pub(crate) mod config;
pub(crate) mod format;
pub(crate) mod format_stdin;
pub(crate) mod linter;
pub(crate) mod rule;
pub(crate) mod run;
pub(crate) mod run_stdin;
pub(crate) mod show_files;
pub(crate) mod show_settings;


@@ -1,5 +1,5 @@
---
source: crates/ruff_cli/src/commands/run.rs
source: crates/ruff_cli/src/commands/check.rs
---
/home/ferris/project/code.py:1:1: E902 Permission denied (os error 13)
/home/ferris/project/notebook.ipynb:1:1: E902 Permission denied (os error 13)


@@ -27,8 +27,7 @@ use ruff::{fs, IOError};
use ruff_diagnostics::Diagnostic;
use ruff_macros::CacheKey;
use ruff_python_ast::imports::ImportMap;
use ruff_python_ast::PySourceType;
use ruff_python_stdlib::path::is_project_toml;
use ruff_python_ast::{PySourceType, SourceType, TomlSourceType};
use ruff_source_file::{LineIndex, SourceCode, SourceFileBuilder};
use ruff_text_size::{TextRange, TextSize};
@@ -228,36 +227,36 @@ pub(crate) fn lint_path(
debug!("Checking: {}", path.display());
// We have to special case this here since the Python tokenizer doesn't work with TOML.
if is_project_toml(path) {
let messages = if settings
.lib
.rules
.iter_enabled()
.any(|rule_code| rule_code.lint_source().is_pyproject_toml())
{
let contents = match std::fs::read_to_string(path) {
Ok(contents) => contents,
Err(err) => {
return Ok(Diagnostics::from_io_error(&err, path, &settings.lib));
}
let source_type = match SourceType::from(path) {
SourceType::Toml(TomlSourceType::Pyproject) => {
let messages = if settings
.lib
.rules
.iter_enabled()
.any(|rule_code| rule_code.lint_source().is_pyproject_toml())
{
let contents = match std::fs::read_to_string(path) {
Ok(contents) => contents,
Err(err) => {
return Ok(Diagnostics::from_io_error(&err, path, &settings.lib));
}
};
let source_file = SourceFileBuilder::new(path.to_string_lossy(), contents).finish();
lint_pyproject_toml(source_file, &settings.lib)
} else {
vec![]
};
let source_file = SourceFileBuilder::new(path.to_string_lossy(), contents).finish();
lint_pyproject_toml(source_file, &settings.lib)
} else {
vec![]
};
return Ok(Diagnostics {
messages,
..Diagnostics::default()
});
}
return Ok(Diagnostics {
messages,
..Diagnostics::default()
});
}
SourceType::Toml(_) => return Ok(Diagnostics::default()),
SourceType::Python(source_type) => source_type,
};
// Extract the sources from the file.
let LintSources {
source_type,
source_kind,
} = match LintSources::try_from_path(path) {
let LintSource(source_kind) = match LintSource::try_from_path(path, source_type) {
Ok(sources) => sources,
Err(SourceExtractionError::Io(err)) => {
return Ok(Diagnostics::from_io_error(&err, path, &settings.lib));
@@ -438,21 +437,24 @@ pub(crate) fn lint_stdin(
noqa: flags::Noqa,
autofix: flags::FixMode,
) -> Result<Diagnostics> {
// Extract the sources from the file.
let LintSources {
source_type,
source_kind,
} = match LintSources::try_from_source_code(contents, path) {
Ok(sources) => sources,
Err(SourceExtractionError::Io(err)) => {
// SAFETY: An `io::Error` can only occur if we're reading from a path.
return Ok(Diagnostics::from_io_error(&err, path.unwrap(), settings));
}
Err(SourceExtractionError::Diagnostics(diagnostics)) => {
return Ok(*diagnostics);
}
// TODO(charlie): Support `pyproject.toml`.
let SourceType::Python(source_type) = path.map(SourceType::from).unwrap_or_default() else {
return Ok(Diagnostics::default());
};
// Extract the sources from the file.
let LintSource(source_kind) =
match LintSource::try_from_source_code(contents, path, source_type) {
Ok(sources) => sources,
Err(SourceExtractionError::Io(err)) => {
// SAFETY: An `io::Error` can only occur if we're reading from a path.
return Ok(Diagnostics::from_io_error(&err, path.unwrap(), settings));
}
Err(SourceExtractionError::Diagnostics(diagnostics)) => {
return Ok(*diagnostics);
}
};
// Lint the inputs.
let (
LinterResult {
@@ -554,58 +556,40 @@ pub(crate) fn lint_stdin(
}
#[derive(Debug)]
struct LintSources {
/// The "type" of source code, e.g. `.py`, `.pyi`, `.ipynb`, etc.
source_type: PySourceType,
/// The "kind" of source, e.g. Python file, Jupyter Notebook, etc.
source_kind: SourceKind,
}
struct LintSource(SourceKind);
impl LintSources {
/// Extract the lint [`LintSources`] from the given file path.
fn try_from_path(path: &Path) -> Result<LintSources, SourceExtractionError> {
let source_type = PySourceType::from(path);
// Read the file from disk.
impl LintSource {
/// Extract the lint [`LintSource`] from the given file path.
fn try_from_path(
path: &Path,
source_type: PySourceType,
) -> Result<LintSource, SourceExtractionError> {
if source_type.is_ipynb() {
let notebook = notebook_from_path(path).map_err(SourceExtractionError::Diagnostics)?;
let source_kind = SourceKind::IpyNotebook(notebook);
Ok(LintSources {
source_type,
source_kind,
})
Ok(LintSource(source_kind))
} else {
// This is tested by ruff_cli integration test `unreadable_file`
let contents = std::fs::read_to_string(path).map_err(SourceExtractionError::Io)?;
Ok(LintSources {
source_type,
source_kind: SourceKind::Python(contents),
})
Ok(LintSource(SourceKind::Python(contents)))
}
}
/// Extract the lint [`LintSources`] from the raw string contents, optionally accompanied by a
/// Extract the lint [`LintSource`] from the raw string contents, optionally accompanied by a
/// file path indicating the path to the file from which the contents were read. If provided,
/// the file path should be used for diagnostics, but not for reading the file from disk.
fn try_from_source_code(
source_code: String,
path: Option<&Path>,
) -> Result<LintSources, SourceExtractionError> {
let source_type = path.map(PySourceType::from).unwrap_or_default();
source_type: PySourceType,
) -> Result<LintSource, SourceExtractionError> {
if source_type.is_ipynb() {
let notebook = notebook_from_source_code(&source_code, path)
.map_err(SourceExtractionError::Diagnostics)?;
let source_kind = SourceKind::IpyNotebook(notebook);
Ok(LintSources {
source_type,
source_kind,
})
Ok(LintSource(source_kind))
} else {
Ok(LintSources {
source_type,
source_kind: SourceKind::Python(source_code),
})
Ok(LintSource(SourceKind::Python(source_code)))
}
}
}


@@ -6,7 +6,6 @@ use std::sync::mpsc::channel;
use anyhow::Result;
use clap::CommandFactory;
use clap::FromArgMatches;
use log::warn;
use notify::{recommended_watcher, RecursiveMode, Watcher};
@@ -14,10 +13,8 @@ use ruff::logging::{set_up_logging, LogLevel};
use ruff::settings::types::SerializationFormat;
use ruff::settings::{flags, CliSettings};
use ruff::{fs, warn_user_once};
use ruff_python_formatter::{format_module, PyFormatOptions};
use crate::args::{Args, CheckArgs, Command};
use crate::commands::run_stdin::read_from_stdin;
use crate::args::{Args, CheckCommand, Command, FormatCommand};
use crate::printer::{Flags as PrinterFlags, Printer};
pub mod args;
@@ -27,6 +24,7 @@ mod diagnostics;
mod panic;
mod printer;
pub mod resolve;
mod stdin;
#[derive(Copy, Clone)]
pub enum ExitStatus {
@@ -78,7 +76,7 @@ fn change_detected(paths: &[PathBuf]) -> Option<ChangeKind> {
None
}
/// Returns true if the linter should read from standard input.
/// Returns true if the command should read from standard input.
fn is_stdin(files: &[PathBuf], stdin_filename: Option<&Path>) -> bool {
// If the user provided a `--stdin-filename`, always read from standard input.
if stdin_filename.is_some() {
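The hunk above shows only the first branch of `is_stdin`. A plausible completion, assuming the usual CLI convention that a single `-` path argument also selects standard input:

```rust
use std::path::{Path, PathBuf};

/// Returns true if the command should read from standard input.
fn is_stdin(files: &[PathBuf], stdin_filename: Option<&Path>) -> bool {
    // If the user provided a `--stdin-filename`, always read from standard input.
    if stdin_filename.is_some() {
        return true;
    }
    // Otherwise, only when `-` is the sole path argument.
    let [file] = files else {
        return false;
    };
    file == Path::new("-")
}
```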
@@ -149,44 +147,28 @@ quoting the executed command, along with the relevant file contents and `pyproje
shell.generate(&mut Args::command(), &mut stdout());
}
Command::Check(args) => return check(args, log_level),
Command::Format { files } => return format(&files),
Command::Format(args) => return format(args, log_level),
}
Ok(ExitStatus::Success)
}
fn format(paths: &[PathBuf]) -> Result<ExitStatus> {
fn format(args: FormatCommand, log_level: LogLevel) -> Result<ExitStatus> {
warn_user_once!(
"`ruff format` is a work-in-progress, subject to change at any time, and intended only for \
experimentation."
);
// We want to use the same file resolution as `ruff check <files>`, but we don't actually want to allow
// any of the linter settings.
// TODO(konstin): Refactor this to allow getting config and resolver without going
// though clap.
let args_matches = CheckArgs::command()
.no_binary_name(true)
.get_matches_from(paths);
let check_args: CheckArgs = CheckArgs::from_arg_matches(&args_matches)?;
let (cli, overrides) = check_args.partition();
let (cli, overrides) = args.partition();
if is_stdin(&cli.files, cli.stdin_filename.as_deref()) {
let unformatted = read_from_stdin()?;
let options = cli
.stdin_filename
.as_deref()
.map(PyFormatOptions::from_extension)
.unwrap_or_default();
let formatted = format_module(&unformatted, options)?;
stdout().lock().write_all(formatted.as_code().as_bytes())?;
Ok(ExitStatus::Success)
commands::format_stdin::format_stdin(&cli, &overrides)
} else {
commands::format::format(&cli, &overrides)
commands::format::format(&cli, &overrides, log_level)
}
}
pub fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let (cli, overrides) = args.partition();
// Construct the "default" settings. These are used when no `pyproject.toml`
@@ -309,7 +291,7 @@ pub fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
Printer::clear_screen()?;
printer.write_to_user("Starting linter in watch mode...\n");
let messages = commands::run::run(
let messages = commands::check::check(
&cli.files,
&pyproject_config,
&overrides,
@@ -341,7 +323,7 @@ pub fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
Printer::clear_screen()?;
printer.write_to_user("File change detected...\n");
let messages = commands::run::run(
let messages = commands::check::check(
&cli.files,
&pyproject_config,
&overrides,
@@ -359,7 +341,7 @@ pub fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
// Generate lint violations.
let diagnostics = if is_stdin {
commands::run_stdin::run_stdin(
commands::check_stdin::check_stdin(
cli.stdin_filename.map(fs::normalize_path).as_deref(),
&pyproject_config,
&overrides,
@@ -367,7 +349,7 @@ pub fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitStatus> {
autofix,
)?
} else {
commands::run::run(
commands::check::check(
&cli.files,
&pyproject_config,
&overrides,


@@ -0,0 +1,9 @@
use std::io;
use std::io::Read;
/// Read a string from `stdin`.
pub(crate) fn read_from_stdin() -> Result<String, io::Error> {
let mut buffer = String::new();
io::stdin().lock().read_to_string(&mut buffer)?;
Ok(buffer)
}


@@ -8,9 +8,10 @@ use std::thread::sleep;
use std::time::Duration;
use std::{fs, process, str};
use anyhow::{anyhow, Context, Result};
use anyhow::{anyhow, bail, Context, Result};
use assert_cmd::Command;
use log::info;
use similar::TextDiff;
use walkdir::WalkDir;
use ruff::logging::{set_up_logging, LogLevel};
@@ -133,12 +134,17 @@ fn run_test(path: &Path, blackd: &Blackd, ruff_args: &[&str]) -> Result<()> {
}
let step_3_output = step_3.get_output().stdout.clone();
assert_eq!(
str::from_utf8(&step_2_output),
str::from_utf8(&step_3_output),
"Mismatch found for {}",
path.display()
);
let step_2_output_str = str::from_utf8(&step_2_output)?;
let step_3_output_str = str::from_utf8(&step_3_output)?;
if step_2_output_str != step_3_output_str {
let text_diff = TextDiff::from_lines(step_2_output_str, step_3_output_str);
let unified_diff = text_diff.unified_diff();
println!("Mismatch found for {}", path.display());
println!("---------- FIRST ----------\n{}\n", step_2_output_str);
println!("---------- SECOND ----------\n{}\n", step_3_output_str);
println!("---------- DIFF ----------\n{}\n", unified_diff);
};
Ok(())
}
@@ -151,6 +157,10 @@ fn test_ruff_black_compatibility() -> Result<()> {
let blackd = Blackd::new()?;
let fixtures_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
// Get the workspace directory https://github.com/rust-lang/cargo/issues/3946
.parent()
.expect("parent should be available for crate")
.join("ruff")
.join("resources")
.join("test")
.join("fixtures");
@@ -177,16 +187,25 @@ fn test_ruff_black_compatibility() -> Result<()> {
"--fix",
"--line-length",
"88",
"--select ALL",
"--select",
"ALL",
// Exclude ruff codes, specifically RUF100, because it causes differences that are not a
// problem. Ruff would add a `# noqa: W292` after the first run, black introduces a
// newline, and ruff removes the `# noqa: W292` again.
"--ignore RUF",
"--ignore",
"RUF",
];
let mut results = vec![];
for entry in paths {
let path = entry.path();
run_test(path, &blackd, &ruff_args).context(format!("Testing {}", path.display()))?;
results.push(
run_test(path, &blackd, &ruff_args).context(format!("Testing {}", path.display())),
);
}
let errors = results.iter().filter(|result| result.is_err()).count();
if errors > 0 {
bail!("{} mismatches found", errors);
}
Ok(())


@@ -1,23 +1,3 @@
use anyhow::{bail, format_err, Context, Error};
use clap::{CommandFactory, FromArgMatches};
use ignore::DirEntry;
use imara_diff::intern::InternedInput;
use imara_diff::sink::Counter;
use imara_diff::{diff, Algorithm};
use indicatif::ProgressStyle;
#[cfg_attr(feature = "singlethreaded", allow(unused_imports))]
use rayon::iter::{IntoParallelIterator, ParallelIterator};
use ruff::logging::LogLevel;
use ruff::settings::types::{FilePattern, FilePatternSet};
use ruff_cli::args::{CheckArgs, LogLevelArgs};
use ruff_cli::resolve::resolve;
use ruff_formatter::{FormatError, LineWidth, PrintError};
use ruff_python_formatter::{
format_module, FormatModuleError, MagicTrailingComma, PyFormatOptions,
};
use ruff_workspace::resolver::python_files_in_path;
use serde::Deserialize;
use similar::{ChangeTag, TextDiff};
use std::fmt::{Display, Formatter};
use std::fs::File;
use std::io::{BufWriter, Write};
@@ -28,6 +8,18 @@ use std::path::{Path, PathBuf};
use std::process::ExitCode;
use std::time::{Duration, Instant};
use std::{fmt, fs, io, iter};
use anyhow::{bail, format_err, Context, Error};
use clap::{CommandFactory, FromArgMatches};
use ignore::DirEntry;
use imara_diff::intern::InternedInput;
use imara_diff::sink::Counter;
use imara_diff::{diff, Algorithm};
use indicatif::ProgressStyle;
#[cfg_attr(feature = "singlethreaded", allow(unused_imports))]
use rayon::iter::{IntoParallelIterator, ParallelIterator};
use serde::Deserialize;
use similar::{ChangeTag, TextDiff};
use tempfile::NamedTempFile;
use tracing::{debug, error, info, info_span};
use tracing_indicatif::span_ext::IndicatifSpanExt;
@@ -36,13 +28,23 @@ use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::EnvFilter;
use ruff::logging::LogLevel;
use ruff::settings::types::{FilePattern, FilePatternSet};
use ruff_cli::args::{FormatCommand, LogLevelArgs};
use ruff_cli::resolve::resolve;
use ruff_formatter::{FormatError, LineWidth, PrintError};
use ruff_python_formatter::{
format_module, FormatModuleError, MagicTrailingComma, PyFormatOptions,
};
use ruff_workspace::resolver::python_files_in_path;
/// Find files that ruff would check so we can format them. Adapted from `ruff_cli`.
fn ruff_check_paths(dirs: &[PathBuf]) -> anyhow::Result<Vec<Result<DirEntry, ignore::Error>>> {
let args_matches = CheckArgs::command()
let args_matches = FormatCommand::command()
.no_binary_name(true)
.get_matches_from(dirs);
let check_args: CheckArgs = CheckArgs::from_arg_matches(&args_matches)?;
let (cli, overrides) = check_args.partition();
let arguments: FormatCommand = FormatCommand::from_arg_matches(&args_matches)?;
let (cli, overrides) = arguments.partition();
let mut pyproject_config = resolve(
cli.isolated,
cli.config.as_deref(),


@@ -56,7 +56,7 @@ enum Command {
/// Run a ruff command n times for profiling/benchmarking
Repeat {
#[clap(flatten)]
args: ruff_cli::args::CheckArgs,
args: ruff_cli::args::CheckCommand,
#[clap(flatten)]
log_level_args: ruff_cli::args::LogLevelArgs,
/// Run this many times


@@ -334,7 +334,8 @@ impl<Context> Format<Context> for SourcePosition {
}
}
/// Creates a text from a dynamic string with its optional start-position in the source document
/// Creates a text from a dynamic string with its optional start-position in the source document.
/// This is done by allocating a new string internally.
pub fn dynamic_text(text: &str, position: Option<TextSize>) -> DynamicText {
debug_assert_no_newlines(text);


@@ -24,32 +24,63 @@ pub mod types;
pub mod visitor;
pub mod whitespace;
#[derive(Clone, Copy, Debug, Default, PartialEq)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub enum PySourceType {
#[default]
Python,
Stub,
Ipynb,
/// The type of a source file.
#[derive(Clone, Copy, Debug, PartialEq, is_macro::Is)]
pub enum SourceType {
/// The file contains Python source code.
Python(PySourceType),
/// The file contains TOML.
Toml(TomlSourceType),
}
impl PySourceType {
pub const fn is_python(&self) -> bool {
matches!(self, PySourceType::Python)
impl Default for SourceType {
fn default() -> Self {
Self::Python(PySourceType::Python)
}
}
pub const fn is_stub(&self) -> bool {
matches!(self, PySourceType::Stub)
impl From<&Path> for SourceType {
fn from(path: &Path) -> Self {
match path.file_name() {
Some(filename) if filename == "pyproject.toml" => Self::Toml(TomlSourceType::Pyproject),
Some(filename) if filename == "Pipfile" => Self::Toml(TomlSourceType::Pipfile),
Some(filename) if filename == "poetry.lock" => Self::Toml(TomlSourceType::Poetry),
_ => match path.extension() {
Some(ext) if ext == "toml" => Self::Toml(TomlSourceType::Unrecognized),
_ => Self::Python(PySourceType::from(path)),
},
}
}
}
pub const fn is_ipynb(&self) -> bool {
matches!(self, PySourceType::Ipynb)
}
#[derive(Clone, Copy, Debug, PartialEq, is_macro::Is)]
pub enum TomlSourceType {
/// The source is a `pyproject.toml`.
Pyproject,
/// The source is a `Pipfile`.
Pipfile,
/// The source is a `poetry.lock`.
Poetry,
/// The source is an unrecognized TOML file.
Unrecognized,
}
#[derive(Clone, Copy, Debug, Default, PartialEq, is_macro::Is)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub enum PySourceType {
/// The source is a Python file (`.py`).
#[default]
Python,
/// The source is a Python stub file (`.pyi`).
Stub,
/// The source is a Jupyter notebook (`.ipynb`).
Ipynb,
}
impl From<&Path> for PySourceType {
fn from(path: &Path) -> Self {
match path.extension() {
Some(ext) if ext == "py" => PySourceType::Python,
Some(ext) if ext == "pyi" => PySourceType::Stub,
Some(ext) if ext == "ipynb" => PySourceType::Ipynb,
_ => PySourceType::Python,


@@ -3,6 +3,7 @@ use crate::{self as ast, ExceptHandler, Stmt, Suite};
/// Given a [`Stmt`] and its parent, return the [`Suite`] that contains the [`Stmt`].
pub fn suite<'a>(stmt: &'a Stmt, parent: &'a Stmt) -> Option<&'a Suite> {
// TODO: refactor this to work without a parent, i.e. when `stmt` is at the top level
match parent {
Stmt::FunctionDef(ast::StmtFunctionDef { body, .. }) => Some(body),
Stmt::ClassDef(ast::StmtClassDef { body, .. }) => Some(body),


@@ -94,3 +94,11 @@ if not (
isinstance(aaaaaaaaaaaaaaaaaaaaaaa, bbbbbbbbb) and (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx + yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy) or isinstance(ccccccccccc, dddddd)
):
pass
def test():
return (
isinstance(other, Mapping)
and {k.lower(): v for k, v in self.items()}
== {k.lower(): v for k, v in other.items()}
)


@@ -0,0 +1,16 @@
return len(self.nodeseeeeeeeee), sum(
len(node.parents) for node in self.node_map.values()
)
return len(self.nodeseeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee), sum(
len(node.parents) for node in self.node_map.values()
)
return (
len(self.nodeseeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee), sum(
len(node.parents) for node in self.node_map.values()
)
)


@@ -31,6 +31,20 @@ pub enum TupleParentheses {
/// ```
Preserve,
/// The same as [`Self::Default`] except that it uses [`optional_parentheses`] rather than
/// [`parenthesize_if_expands`]. This avoids adding parentheses if breaking any containing parenthesized
/// expression makes the tuple fit.
///
/// Avoids adding parentheses around the tuple because breaking the `sum` call expression is sufficient
/// to make it fit.
///
/// ```python
/// return len(self.nodeseeeeeeeee), sum(
///     len(node.parents) for node in self.node_map.values()
/// )
/// ```
OptionalParentheses,
/// Handle the special cases where we don't include parentheses at all.
///
/// Black never formats tuple targets of for loops with parentheses if inside a comprehension.
@@ -158,7 +172,7 @@ impl FormatNodeRule<ExprTuple> for FormatExprTuple {
.finish()
}
TupleParentheses::Preserve => group(&ExprSequence::new(item)).fmt(f),
TupleParentheses::NeverPreserve => {
TupleParentheses::NeverPreserve | TupleParentheses::OptionalParentheses => {
optional_parentheses(&ExprSequence::new(item)).fmt(f)
}
TupleParentheses::Default => {


@@ -434,16 +434,17 @@ impl<'input> CanOmitOptionalParenthesesVisitor<'input> {
// Visits a subexpression, ignoring whether it is parenthesized or not
fn visit_subexpression(&mut self, expr: &'input Expr) {
match expr {
Expr::Dict(_) | Expr::List(_) | Expr::Tuple(_) | Expr::Set(_) => {
Expr::Dict(_)
| Expr::List(_)
| Expr::Tuple(_)
| Expr::Set(_)
| Expr::ListComp(_)
| Expr::SetComp(_)
| Expr::DictComp(_) => {
self.any_parenthesized_expressions = true;
// The values are always parenthesized, don't visit.
return;
}
Expr::ListComp(_) | Expr::SetComp(_) | Expr::DictComp(_) => {
self.any_parenthesized_expressions = true;
self.update_max_priority(OperatorPriority::Comprehension);
return;
}
// It's impossible for a file smaller than or equal to 4GB to contain more than 2^32 comparisons
// because each comparison requires a left operand plus `n` operators and `n` right-hand operands.
#[allow(clippy::cast_possible_truncation)]
@@ -789,7 +790,6 @@ enum OperatorPriority {
String,
BooleanOperation,
Conditional,
Comprehension,
}
impl From<ast::Operator> for OperatorPriority {

Some files were not shown because too many files have changed in this diff.