Compare commits

14 Commits

Author SHA1 Message Date
Zanie Blue
5892c691ea Bump version to 0.0.285 (#6660)
Requires
- https://github.com/astral-sh/ruff/pull/6655
- https://github.com/astral-sh/ruff/pull/6657
2023-08-17 15:46:28 -05:00
Zanie Blue
82e0a97b34 Clarify behavior of PLW3201 (#6657)
Otherwise it is unclear that violations will be raised for methods like
`_foo_`
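A minimal illustration (the `Widget` class is hypothetical, not from the rule's fixtures): PLW3201 treats single-underscore names like `_foo_` as misspelled dunders, while real dunders pass.

```python
class Widget:
    def __repr__(self) -> str:  # OK: a recognized dunder
        return "Widget()"

    def _foo_(self):  # flagged by PLW3201 (bad-dunder-method-name)
        return None
```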
2023-08-17 14:41:55 -05:00
Zanie Blue
a8d7bbae6f Remove experimental label from Jupyter docs (#6655) 2023-08-17 14:40:50 -05:00
Charlie Marsh
1050142a58 Expand expressions to include parentheses in E712 (#6575)
## Summary

This PR exposes our `is_expression_parenthesized` logic such that we can
use it to expand expressions when autofixing to include their
parenthesized ranges.

This solution has a few drawbacks: (1) we need to compute parenthesized
ranges in more places, which also relies on backwards lexing; and (2) we
need to make use of this in any relevant fixes.

However, I still think it's worth pursuing. On (1), the implementation
is very contained, so IMO we can easily swap this out for a more
performant solution in the future if needed. On (2), this improves
correctness and fixes some bad syntax errors detected by fuzzing, which
means it has value even if it's not as robust as an _actual_
`ParenthesizedExpression` node in the AST itself.

Closes https://github.com/astral-sh/ruff/issues/4925.

## Test Plan

`cargo test` with new cases that previously failed the fuzzer.
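The idea can be sketched in Python (a simplified stand-in for the Rust `is_expression_parenthesized` logic; it scans only whitespace and ignores comments and string contents):

```python
def parenthesized_range(source: str, start: int, end: int) -> tuple[int, int]:
    """Expand the half-open range [start, end) to cover any balanced
    parentheses that immediately enclose the expression."""
    while True:
        i = start - 1
        while i >= 0 and source[i].isspace():
            i -= 1  # scan backwards over whitespace
        j = end
        while j < len(source) and source[j].isspace():
            j += 1  # scan forwards over whitespace
        if i >= 0 and source[i] == "(" and j < len(source) and source[j] == ")":
            start, end = i, j + 1  # grow the range to include the parentheses
        else:
            return start, end

# An E712 fix must replace "(  y  )", not just "y", to stay syntactically valid:
src = "if (  y  ) == True: pass"
assert src[slice(*parenthesized_range(src, 6, 7))] == "(  y  )"
```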
2023-08-17 15:51:09 +00:00
Charlie Marsh
db1c556508 Implement Ranged on more structs (#6639)
## Summary

I noticed some inconsistencies around uses of `.range.start()`, structs
that have a `TextRange` field but don't implement `Ranged`, etc.

## Test Plan

`cargo test`
2023-08-17 11:22:39 -04:00
Charlie Marsh
a70807e1e1 Expand NamedExpr range to include full range of parenthesized value (#6632)
## Summary

Given:

```python
if (
    x
    :=
    (  # 4
        y # 5
    )  # 6
):
    pass
```

It turns out the parser ended the range of the `NamedExpr` at the end of
`y`, rather than the end of the parenthesis that encloses `y`. This just
seems like a bug -- the range should be from the start of the name on
the left, to the end of the parenthesized node on the right.

## Test Plan

`cargo test`
2023-08-17 14:34:05 +00:00
dependabot[bot]
d9bb51dee4 ci(deps): bump cloudflare/wrangler-action from 3.0.0 to 3.0.2 (#6565)
Bumps
[cloudflare/wrangler-action](https://github.com/cloudflare/wrangler-action)
from 3.0.0 to 3.0.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/cloudflare/wrangler-action/releases">cloudflare/wrangler-action's
releases</a>.</em></p>
<blockquote>
<h2>v3.0.2</h2>
<h3>Patch Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/147">#147</a>
<a
href="58f274b9f7"><code>58f274b</code></a>
Thanks <a
href="https://github.com/JacobMGEvans"><code>@​JacobMGEvans</code></a>!
- Added more error logging when a command fails to execute
Previously, we prevented any error logs from propagating too far to
prevent leaking of any potentially sensitive information. However, this
made it difficult for developers to debug their code.</p>
<p>In this release, we have updated our error handling to allow for more
error messaging from pre/post and custom commands. We still discourage
the use of these commands for secrets or other sensitive information,
but we believe this change will make it easier for developers to debug
their code.</p>
<p>Relates to <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/137">#137</a></p>
</li>
<li>
<p><a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/147">#147</a>
<a
href="58f274b9f7"><code>58f274b</code></a>
Thanks <a
href="https://github.com/JacobMGEvans"><code>@​JacobMGEvans</code></a>!
- Adding Changesets</p>
</li>
<li>
<p><a
href="https://github.com/cloudflare/wrangler-action/blob/HEAD/#version-300">Version
3.0.0</a></p>
</li>
<li>
<p><a
href="https://github.com/cloudflare/wrangler-action/blob/HEAD/#version-200">Version
2.0.0</a></p>
</li>
</ul>
<h2>v3.0.1</h2>
<p>Automating Build &amp; Release</p>
<h2>What's Changed</h2>
<ul>
<li>Artifacts are now ESM supported with <code>.mjs</code></li>
<li>Update publish to deploy by <a
href="https://github.com/lrapoport-cf"><code>@​lrapoport-cf</code></a>
in <a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/124">cloudflare/wrangler-action#124</a></li>
<li>Automatically add issues to workers-sdk GH project by <a
href="https://github.com/lrapoport-cf"><code>@​lrapoport-cf</code></a>
in <a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/127">cloudflare/wrangler-action#127</a></li>
<li>Automate Action Release by <a
href="https://github.com/JacobMGEvans"><code>@​JacobMGEvans</code></a>
in <a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/128">cloudflare/wrangler-action#128</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/lrapoport-cf"><code>@​lrapoport-cf</code></a>
made their first contribution in <a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/124">cloudflare/wrangler-action#124</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/cloudflare/wrangler-action/compare/3.0.0...3.0.1">https://github.com/cloudflare/wrangler-action/compare/3.0.0...3.0.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/cloudflare/wrangler-action/blob/main/CHANGELOG.md">cloudflare/wrangler-action's
changelog</a>.</em></p>
<blockquote>
<h2>3.0.2</h2>
<h3>Patch Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/147">#147</a>
<a
href="58f274b9f7"><code>58f274b</code></a>
Thanks <a
href="https://github.com/JacobMGEvans"><code>@​JacobMGEvans</code></a>!
- Added more error logging when a command fails to execute
Previously, we prevented any error logs from propagating too far to
prevent leaking of any potentially sensitive information. However, this
made it difficult for developers to debug their code.</p>
<p>In this release, we have updated our error handling to allow for more
error messaging from pre/post and custom commands. We still discourage
the use of these commands for secrets or other sensitive information,
but we believe this change will make it easier for developers to debug
their code.</p>
<p>Relates to <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/137">#137</a></p>
</li>
<li>
<p><a
href="https://redirect.github.com/cloudflare/wrangler-action/pull/147">#147</a>
<a
href="58f274b9f7"><code>58f274b</code></a>
Thanks <a
href="https://github.com/JacobMGEvans"><code>@​JacobMGEvans</code></a>!
- Adding Changesets</p>
</li>
<li>
<p><a
href="https://github.com/cloudflare/wrangler-action/blob/main/#version-300">Version
3.0.0</a></p>
</li>
<li>
<p><a
href="https://github.com/cloudflare/wrangler-action/blob/main/#version-200">Version
2.0.0</a></p>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="80501a7b4d"><code>80501a7</code></a>
Automatic compilation</li>
<li><a
href="3252711404"><code>3252711</code></a>
hotfix</li>
<li><a
href="2faabf36a2"><code>2faabf3</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/150">#150</a>
from cloudflare/jacobmgevans/changesets-tag-cli</li>
<li><a
href="b734d85d74"><code>b734d85</code></a>
Changesets needs a tag created for release</li>
<li><a
href="8ca2ff1612"><code>8ca2ff1</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/148">#148</a>
from cloudflare/changeset-release/main</li>
<li><a
href="7d08a8657e"><code>7d08a86</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/149">#149</a>
from cloudflare/jacobmgevans/changesets-needs-publish</li>
<li><a
href="e193627f19"><code>e193627</code></a>
Spoofing publish for Changeset Action</li>
<li><a
href="55b80c5f62"><code>55b80c5</code></a>
Version Packages</li>
<li><a
href="f61dc4d5a9"><code>f61dc4d</code></a>
Merge pull request <a
href="https://redirect.github.com/cloudflare/wrangler-action/issues/147">#147</a>
from cloudflare/revert-146-changeset-release/main</li>
<li><a
href="58f274b9f7"><code>58f274b</code></a>
Revert &quot;Version Packages&quot;</li>
<li>Additional commits viewable in <a
href="https://github.com/cloudflare/wrangler-action/compare/3.0.0...v3.0.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cloudflare/wrangler-action&package-manager=github_actions&previous-version=3.0.0&new-version=3.0.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-17 09:11:04 -05:00
Zanie Blue
d0f2a8e424 Add support for nested replacements inside format specifications (#6616)
Closes https://github.com/astral-sh/ruff/issues/6442

Python string formatting like `"hello {place}".format(place="world")`
supports format specifications for replaced content, such as `"hello
{place:>10}".format(place="world")`, which right-aligns the text within
a field padded to ten characters.

Ruff parses formatted strings into `FormatPart`s each of which is either
a `Field` (content in `{...}`) or a `Literal` (the normal content).
Fields are parsed into name and format specifier sections (we'll ignore
conversion specifiers for now).

There are a myriad of specifiers that can be used in a `FormatSpec`.
Unfortunately for linters, the specifier values can be dynamically set.
For example, `"hello {place:{align}{width}}".format(place="world",
align=">", width=10)` and `"hello {place:{fmt}}".format(place="world",
fmt=">10")` will yield the same string as before but variables can be
used to determine the formatting. In this case, when parsing the format
specifier we can't know what _kind_ of specifier is being used as their
meaning is determined by both position and value.

Ruff does not support nested replacements and our current data model
does not support the concept. Here the data model is updated to support
this concept, although linting of specifications with replacements will
be inherently limited. We could split format specifications into two
types, one without any replacements that we can perform lints with and
one with replacements that we cannot inspect. However, it seems
excessive to drop all parsing of format specifiers due to the presence
of a replacement. Instead, I've opted to parse replacements eagerly and
ignore their possible effect on other format specifiers. This will allow
us to retain a simple interface for `FormatSpec` and most syntax checks.
We may need to add some handling to relax errors if a replacement was
seen previously.

It's worth noting that the nested replacement _can_ also include a
format specification although it may fail at runtime if you produce an
invalid outer format specification. For example, `"hello
{place:{fmt:<2}}".format(place="world", fmt=">10")` is valid so we need
to represent each nested replacement as a full `FormatPart`.

## Test Plan

Adding unit tests for `FormatSpec` parsing and snapshots for PLE1300
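The equivalences described above can be checked directly in Python:

```python
# A literal format spec...
literal = "hello {place:>10}".format(place="world")
# ...versus the same spec assembled from nested replacement fields:
nested = "hello {place:{align}{width}}".format(place="world", align=">", width=10)
assert literal == nested == "hello " + "world".rjust(10)

# A nested replacement may itself carry a format spec; the inner `<2`
# is applied to `fmt` before the outer spec is interpreted:
inner = "hello {place:{fmt:<2}}".format(place="world", fmt=">10")
assert inner == literal
```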
2023-08-17 09:07:30 -05:00
Charlie Marsh
1334232168 Introduce ExpressionRef (#6637)
## Summary

This PR revives the `ExpressionRef` concept introduced in
https://github.com/astral-sh/ruff/pull/5644, motivated by the change we
want to make in https://github.com/astral-sh/ruff/pull/6575 to narrow
the type of the expression that can be passed to `parenthesized_range`.

## Test Plan

`cargo test`
2023-08-17 10:07:16 -04:00
Micha Reiser
fa7442da2f Support fmt: skip on compound statements (#6593) 2023-08-17 06:05:41 +00:00
Micha Reiser
4dc32a00d0 Support fmt: skip for simple-statements and decorators (#6561) 2023-08-17 05:58:19 +00:00
Evan Rittenhouse
e3ecbe660e [ruff] Implement quadratic-list-summation rule (RUF017) (#6489)
## Summary

Adds `RUF017`. Closes #5073.

## Test Plan

`cargo t`
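A sketch of what the rule targets: `sum` with a list `start` concatenates repeatedly, allocating a fresh intermediate list each time, which is O(n²); one linear alternative extends a single accumulator in place:

```python
import functools
import operator

x = [1, 2, 3]
y = [4, 5, 6]

# Quadratic: each `+` allocates a new intermediate list (flagged by RUF017).
quadratic = sum([x, y], [])

# Linear: `iadd` extends one accumulator in place; the explicit `[]`
# initializer keeps `x` itself from being mutated.
linear = functools.reduce(operator.iadd, [x, y], [])

assert quadratic == linear == [1, 2, 3, 4, 5, 6]
```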
2023-08-16 23:13:05 -04:00
Harutaka Kawamura
8c3a8c4fc6 Support glob patterns for raises_require_match_for and raises_extend_require_match_for (#6635)
## Summary


Support glob patterns for `raises_require_match_for` and
`raises_extend_require_match_for`. Resolves #6473

## Test Plan

New tests + existing tests
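The new matching behavior can be approximated with `fnmatch` (a sketch only; the actual implementation compiles the patterns with the Rust `glob` crate, and the setting values here are hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical setting: require `match=` for exceptions matching these patterns.
raises_require_match_for = ["ZeroDivisionError", "pickle.*"]

def requires_match(qualified_name: str) -> bool:
    """Return True if `pytest.raises(exc)` should carry a `match=` argument."""
    return any(fnmatch(qualified_name, pattern) for pattern in raises_require_match_for)

assert requires_match("pickle.PicklingError")   # matched by the glob
assert requires_match("ZeroDivisionError")      # exact name
assert not requires_match("socket.error")       # no pattern matches
```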
2023-08-17 02:15:50 +00:00
Charlie Marsh
dcc7226685 Make lambda-assignment fix always-manual in class bodies (#6626)
## Summary

Related to https://github.com/astral-sh/ruff/issues/6620 (although that
will only be truly closed once we respect manual fixes on the CLI).
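A hypothetical example (the `Adder` class is not from the issue) of why the fix stays manual in class bodies: the mechanical `def` rewrite would drop the annotation, and class-body function semantics (method binding, special handling in bodies like `Enum` subclasses) make the transformation unsafe to automate:

```python
from typing import Callable

class Adder:
    # E731 flags this lambda assignment, but rewriting it to
    # `def add(a, b): ...` would silently drop the `Callable[...]`
    # annotation, so Ruff only offers the fix as a manual one here.
    add: Callable[[int, int], int] = lambda a, b: a + b

# Accessed via the class, the lambda is still a plain function:
assert Adder.add(2, 3) == 5
```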
2023-08-16 21:24:48 -04:00
174 changed files with 22262 additions and 19937 deletions

View File

@@ -40,7 +40,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.generated.yml
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
-uses: cloudflare/wrangler-action@3.0.0
+uses: cloudflare/wrangler-action@v3.0.2
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

View File

@@ -40,7 +40,7 @@ jobs:
working-directory: playground
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
-uses: cloudflare/wrangler-action@3.0.0
+uses: cloudflare/wrangler-action@v3.0.2
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

Cargo.lock (generated)
View File

@@ -812,7 +812,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
-version = "0.0.284"
+version = "0.0.285"
dependencies = [
"anyhow",
"clap",
@@ -2064,7 +2064,7 @@ dependencies = [
[[package]]
name = "ruff"
-version = "0.0.284"
+version = "0.0.285"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2164,7 +2164,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
-version = "0.0.284"
+version = "0.0.285"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",

View File

@@ -140,7 +140,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
-rev: v0.0.284
+rev: v0.0.285
hooks:
- id: ruff
```

View File

@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
-version = "0.0.284"
+version = "0.0.285"
description = """
Convert Flake8 configuration files to Ruff configuration files.
"""

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
-version = "0.0.284"
+version = "0.0.285"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -1,3 +1,4 @@
from pickle import PicklingError, UnpicklingError
import socket
import pytest
@@ -20,6 +21,12 @@ def test_error_no_argument_given():
with pytest.raises(socket.error):
raise ValueError("Can't divide 1 by 0")
with pytest.raises(PicklingError):
raise PicklingError("Can't pickle")
with pytest.raises(UnpicklingError):
raise UnpicklingError("Can't unpickle")
def test_error_match_is_empty():
with pytest.raises(ValueError, match=None):

View File

@@ -25,6 +25,12 @@ if (True) == TrueElement or x == TrueElement:
if res == True != False:
pass
if(True) == TrueElement or x == TrueElement:
pass
if (yield i) == True:
print("even")
#: Okay
if x not in y:
pass

View File

@@ -133,3 +133,8 @@ def scope():
from collections.abc import Callable
f: Callable[[str, int, list[str]], list[str]] = lambda a, b, /, c: [*c, a * b]
class TemperatureScales(Enum):
CELSIUS = (lambda deg_c: deg_c)
FAHRENHEIT = (lambda deg_c: deg_c * 9 / 5 + 32)

View File

@@ -14,7 +14,10 @@
"{:s} {:y}".format("hello", "world") # [bad-format-character]
-"{:*^30s}".format("centered")
+"{:*^30s}".format("centered") # OK
+"{:{s}}".format("hello", s="s") # OK (nested replacement value not checked)
+"{:{s:y}}".format("hello", s="s") # [bad-format-character] (nested replacement format spec checked)
## f-strings

View File

@@ -0,0 +1,14 @@
x = [1, 2, 3]
y = [4, 5, 6]
# RUF017
sum([x, y], start=[])
sum([x, y], [])
sum([[1, 2, 3], [4, 5, 6]], start=[])
sum([[1, 2, 3], [4, 5, 6]], [])
sum([[1, 2, 3], [4, 5, 6]],
[])
# OK
sum([x, y])
sum([[1, 2, 3], [4, 5, 6]])

View File

@@ -1,4 +1,5 @@
use ruff_diagnostics::{Diagnostic, Fix};
use ruff_python_ast::Ranged;
use crate::checkers::ast::Checker;
use crate::codes::Rule;
@@ -29,7 +30,7 @@ pub(crate) fn bindings(checker: &mut Checker) {
pyflakes::rules::UnusedVariable {
name: binding.name(checker.locator).to_string(),
},
-binding.range,
+binding.range(),
);
if checker.patch(Rule::UnusedVariable) {
diagnostic.try_set_fix(|| {

View File

@@ -1,4 +1,5 @@
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::Ranged;
use ruff_python_semantic::analyze::{branch_detection, visibility};
use ruff_python_semantic::{Binding, BindingKind, ScopeKind};
@@ -81,7 +82,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
pylint::rules::GlobalVariableNotAssigned {
name: (*name).to_string(),
},
-binding.range,
+binding.range(),
));
}
}
@@ -122,14 +123,14 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
#[allow(deprecated)]
-let line = checker.locator.compute_line_index(shadowed.range.start());
+let line = checker.locator.compute_line_index(shadowed.start());
checker.diagnostics.push(Diagnostic::new(
pyflakes::rules::ImportShadowedByLoopVar {
name: name.to_string(),
line,
},
-binding.range,
+binding.range(),
));
}
}
@@ -218,13 +219,13 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
#[allow(deprecated)]
-let line = checker.locator.compute_line_index(shadowed.range.start());
+let line = checker.locator.compute_line_index(shadowed.start());
let mut diagnostic = Diagnostic::new(
pyflakes::rules::RedefinedWhileUnused {
name: (*name).to_string(),
line,
},
-binding.range,
+binding.range(),
);
if let Some(range) = binding.parent_range(&checker.semantic) {
diagnostic.set_parent(range.start());

View File

@@ -873,6 +873,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UnsupportedMethodCallOnAll) {
flake8_pyi::rules::unsupported_method_call_on_all(checker, func);
}
if checker.enabled(Rule::QuadraticListSummation) {
ruff::rules::quadratic_list_summation(checker, call);
}
}
Expr::Dict(ast::ExprDict {
keys,

View File

@@ -1855,7 +1855,7 @@ impl<'a> Checker<'a> {
.map(|binding_id| &self.semantic.bindings[binding_id])
.filter_map(|binding| match &binding.kind {
BindingKind::Export(Export { names }) => {
-Some(names.iter().map(|name| (*name, binding.range)))
+Some(names.iter().map(|name| (*name, binding.range())))
}
_ => None,
})

View File

@@ -1,10 +1,10 @@
use ruff_python_parser::lexer::LexResult;
use ruff_text_size::TextRange;
use ruff_diagnostics::{Diagnostic, DiagnosticKind};
use ruff_python_ast::Ranged;
use ruff_python_codegen::Stylist;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::TokenKind;
use ruff_source_file::Locator;
use ruff_text_size::TextRange;
use crate::registry::{AsRule, Rule};
use crate::rules::pycodestyle::rules::logical_lines::{

View File

@@ -816,6 +816,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "014") => (RuleGroup::Nursery, rules::ruff::rules::UnreachableCode),
(Ruff, "015") => (RuleGroup::Unspecified, rules::ruff::rules::UnnecessaryIterableAllocationForFirstElement),
(Ruff, "016") => (RuleGroup::Unspecified, rules::ruff::rules::InvalidIndexType),
(Ruff, "017") => (RuleGroup::Nursery, rules::ruff::rules::QuadraticListSummation),
(Ruff, "100") => (RuleGroup::Unspecified, rules::ruff::rules::UnusedNOQA),
(Ruff, "200") => (RuleGroup::Unspecified, rules::ruff::rules::InvalidPyprojectToml),

View File

@@ -2,9 +2,8 @@ use std::fmt::{Debug, Formatter};
use std::ops::Deref;
use ruff_python_ast::{Expr, Ranged};
use ruff_text_size::{TextRange, TextSize};
use ruff_python_semantic::Definition;
use ruff_text_size::TextRange;
pub(crate) mod extraction;
pub(crate) mod google;
@@ -28,43 +27,34 @@ impl<'a> Docstring<'a> {
DocstringBody { docstring: self }
}
pub(crate) fn start(&self) -> TextSize {
self.expr.start()
}
pub(crate) fn end(&self) -> TextSize {
self.expr.end()
}
pub(crate) fn range(&self) -> TextRange {
self.expr.range()
}
pub(crate) fn leading_quote(&self) -> &'a str {
&self.contents[TextRange::up_to(self.body_range.start())]
}
}
impl Ranged for Docstring<'_> {
fn range(&self) -> TextRange {
self.expr.range()
}
}
#[derive(Copy, Clone)]
pub(crate) struct DocstringBody<'a> {
docstring: &'a Docstring<'a>,
}
impl<'a> DocstringBody<'a> {
#[inline]
pub(crate) fn start(self) -> TextSize {
self.range().start()
}
pub(crate) fn range(self) -> TextRange {
self.docstring.body_range + self.docstring.start()
}
pub(crate) fn as_str(self) -> &'a str {
&self.docstring.contents[self.docstring.body_range]
}
}
impl Ranged for DocstringBody<'_> {
fn range(&self) -> TextRange {
self.docstring.body_range + self.docstring.start()
}
}
impl Deref for DocstringBody<'_> {
type Target = str;

View File

@@ -2,6 +2,7 @@ use std::fmt::{Debug, Formatter};
use std::iter::FusedIterator;
use ruff_python_ast::docstrings::{leading_space, leading_words};
use ruff_python_ast::Ranged;
use ruff_text_size::{TextLen, TextRange, TextSize};
use strum_macros::EnumIter;
@@ -366,6 +367,12 @@ impl<'a> SectionContext<'a> {
}
}
impl Ranged for SectionContext<'_> {
fn range(&self) -> TextRange {
self.range()
}
}
impl Debug for SectionContext<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("SectionContext")

View File

@@ -194,7 +194,7 @@ impl<'a> Importer<'a> {
// import and the current location, and thus the symbol would not be available). It's also
// unclear whether should add an import statement at the start of the file, since it could
// be shadowed between the import and the current location.
-if imported_name.range().start() > at {
+if imported_name.start() > at {
return Some(Err(ResolutionError::ImportAfterUsage));
}

View File

@@ -4,6 +4,7 @@ use anyhow::{anyhow, Result};
use itertools::Itertools;
use ruff_diagnostics::Edit;
use ruff_python_ast::Ranged;
use ruff_python_semantic::{Binding, BindingKind, Scope, ScopeId, SemanticModel};
pub(crate) struct Renamer;
@@ -220,12 +221,12 @@ impl Renamer {
BindingKind::Import(_) | BindingKind::FromImport(_) => {
if binding.is_alias() {
// Ex) Rename `import pandas as alias` to `import pandas as pd`.
-Some(Edit::range_replacement(target.to_string(), binding.range))
+Some(Edit::range_replacement(target.to_string(), binding.range()))
} else {
// Ex) Rename `import pandas` to `import pandas as pd`.
Some(Edit::range_replacement(
format!("{name} as {target}"),
-binding.range,
+binding.range(),
))
}
}
@@ -234,7 +235,7 @@ impl Renamer {
let module_name = import.call_path.first().unwrap();
Some(Edit::range_replacement(
format!("{module_name} as {target}"),
-binding.range,
+binding.range(),
))
}
// Avoid renaming builtins and other "special" bindings.
@@ -254,7 +255,7 @@ impl Renamer {
| BindingKind::FunctionDefinition(_)
| BindingKind::Deletion
| BindingKind::UnboundException(_) => {
-Some(Edit::range_replacement(target.to_string(), binding.range))
+Some(Edit::range_replacement(target.to_string(), binding.range()))
}
}
}

View File

@@ -163,7 +163,7 @@ pub(crate) fn unused_loop_control_variable(checker: &mut Checker, target: &Expr,
if scope
.get_all(name)
.map(|binding_id| checker.semantic().binding(binding_id))
-.filter(|binding| binding.range.start() >= expr.range().start())
+.filter(|binding| binding.start() >= expr.start())
.all(|binding| !binding.is_used())
{
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(

View File

@@ -2,6 +2,7 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::{AutofixKind, Diagnostic, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::{Binding, Imported};
use crate::checkers::ast::Checker;
@@ -76,7 +77,7 @@ pub(crate) fn unconventional_import_alias(
name: qualified_name,
asname: expected_alias.to_string(),
},
-binding.range,
+binding.range(),
);
if checker.patch(diagnostic.kind.rule()) {
if checker.semantic().is_available(expected_alias) {

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{AutofixKind, Diagnostic, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::Imported;
use ruff_python_semantic::{Binding, BindingKind};
@@ -63,7 +64,7 @@ pub(crate) fn unaliased_collections_abc_set_import(
return None;
}
-let mut diagnostic = Diagnostic::new(UnaliasedCollectionsAbcSetImport, binding.range);
+let mut diagnostic = Diagnostic::new(UnaliasedCollectionsAbcSetImport, binding.range());
if checker.patch(diagnostic.kind.rule()) {
if checker.semantic().is_available("AbstractSet") {
diagnostic.try_set_fix(|| {

View File

@@ -1,6 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
-use ruff_python_ast::{self as ast, Expr, Stmt};
+use ruff_python_ast::{self as ast, Expr, Ranged, Stmt};
use ruff_python_semantic::Scope;
use crate::checkers::ast::Checker;
@@ -192,7 +192,7 @@ pub(crate) fn unused_private_type_var(
UnusedPrivateTypeVar {
name: id.to_string(),
},
-binding.range,
+binding.range(),
));
}
}
@@ -234,7 +234,7 @@ pub(crate) fn unused_private_protocol(
UnusedPrivateProtocol {
name: class_def.name.to_string(),
},
-binding.range,
+binding.range(),
));
}
}
@@ -280,7 +280,7 @@ pub(crate) fn unused_private_type_alias(
UnusedPrivateTypeAlias {
name: id.to_string(),
},
-binding.range,
+binding.range(),
));
}
}
@@ -321,7 +321,7 @@ pub(crate) fn unused_private_typed_dict(
UnusedPrivateTypedDict {
name: class_def.name.to_string(),
},
-binding.range,
+binding.range(),
));
}
}

View File

@@ -11,6 +11,7 @@ mod tests {
use test_case::test_case;
use crate::registry::Rule;
use crate::settings::types::IdentifierPattern;
use crate::test::test_path;
use crate::{assert_messages, settings};
@@ -143,7 +144,7 @@ mod tests {
Rule::PytestRaisesTooBroad,
Path::new("PT011.py"),
Settings {
-raises_extend_require_match_for: vec!["ZeroDivisionError".to_string()],
+raises_extend_require_match_for: vec![IdentifierPattern::new("ZeroDivisionError").unwrap()],
..Settings::default()
},
"PT011_extend_broad_exceptions"
@@ -152,11 +153,29 @@ mod tests {
Rule::PytestRaisesTooBroad,
Path::new("PT011.py"),
Settings {
-raises_require_match_for: vec!["ZeroDivisionError".to_string()],
+raises_require_match_for: vec![IdentifierPattern::new("ZeroDivisionError").unwrap()],
..Settings::default()
},
"PT011_replace_broad_exceptions"
)]
#[test_case(
Rule::PytestRaisesTooBroad,
Path::new("PT011.py"),
Settings {
raises_require_match_for: vec![IdentifierPattern::new("*").unwrap()],
..Settings::default()
},
"PT011_glob_all"
)]
#[test_case(
Rule::PytestRaisesTooBroad,
Path::new("PT011.py"),
Settings {
raises_require_match_for: vec![IdentifierPattern::new("pickle.*").unwrap()],
..Settings::default()
},
"PT011_glob_prefix"
)]
#[test_case(
Rule::PytestRaisesWithMultipleStatements,
Path::new("PT012.py"),

View File

@@ -1,7 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::format_call_path;
-use ruff_python_ast::call_path::from_qualified_name;
use ruff_python_ast::helpers::is_compound_statement;
use ruff_python_ast::{self as ast, Expr, Ranged, Stmt, WithItem};
use ruff_python_semantic::SemanticModel;
@@ -231,7 +230,8 @@ fn exception_needs_match(checker: &mut Checker, exception: &Expr) {
.semantic()
.resolve_call_path(exception)
.and_then(|call_path| {
-let is_broad_exception = checker
+let call_path = format_call_path(&call_path);
+checker
.settings
.flake8_pytest_style
.raises_require_match_for
@@ -242,12 +242,8 @@ fn exception_needs_match(checker: &mut Checker, exception: &Expr) {
.flake8_pytest_style
.raises_extend_require_match_for,
)
-.any(|target| call_path == from_qualified_name(target));
-if is_broad_exception {
-Some(format_call_path(&call_path))
-} else {
-None
-}
+.any(|pattern| pattern.matches(&call_path))
+.then_some(call_path)
})
{
checker.diagnostics.push(Diagnostic::new(

View File

@@ -1,12 +1,15 @@
//! Settings for the `flake8-pytest-style` plugin.
use std::error::Error;
use std::fmt;
use serde::{Deserialize, Serialize};
use crate::settings::types::IdentifierPattern;
use ruff_macros::{CacheKey, CombineOptions, ConfigurationOptions};
use super::types;
-fn default_broad_exceptions() -> Vec<String> {
+fn default_broad_exceptions() -> Vec<IdentifierPattern> {
[
"BaseException",
"Exception",
@@ -16,7 +19,7 @@ fn default_broad_exceptions() -> Vec<String> {
"EnvironmentError",
"socket.error",
]
-.map(ToString::to_string)
+.map(|pattern| IdentifierPattern::new(pattern).expect("invalid default exception pattern"))
.to_vec()
}
@@ -86,6 +89,9 @@ pub struct Options {
)]
/// List of exception names that require a match= parameter in a
/// `pytest.raises()` call.
///
/// Supports glob patterns. For more information on the glob syntax, refer
/// to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
pub raises_require_match_for: Option<Vec<String>>,
#[option(
default = "[]",
@@ -100,6 +106,9 @@ pub struct Options {
/// the entire list.
/// Note that this option does not remove any exceptions from the default
/// list.
///
/// Supports glob patterns. For more information on the glob syntax, refer
/// to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
pub raises_extend_require_match_for: Option<Vec<String>>,
#[option(
default = "true",
@@ -120,26 +129,44 @@ pub struct Settings {
pub parametrize_names_type: types::ParametrizeNameType,
pub parametrize_values_type: types::ParametrizeValuesType,
pub parametrize_values_row_type: types::ParametrizeValuesRowType,
-pub raises_require_match_for: Vec<String>,
-pub raises_extend_require_match_for: Vec<String>,
+pub raises_require_match_for: Vec<IdentifierPattern>,
+pub raises_extend_require_match_for: Vec<IdentifierPattern>,
pub mark_parentheses: bool,
}
-impl From<Options> for Settings {
-fn from(options: Options) -> Self {
-Self {
+impl TryFrom<Options> for Settings {
+type Error = SettingsError;
+fn try_from(options: Options) -> Result<Self, Self::Error> {
+Ok(Self {
fixture_parentheses: options.fixture_parentheses.unwrap_or(true),
parametrize_names_type: options.parametrize_names_type.unwrap_or_default(),
parametrize_values_type: options.parametrize_values_type.unwrap_or_default(),
parametrize_values_row_type: options.parametrize_values_row_type.unwrap_or_default(),
raises_require_match_for: options
.raises_require_match_for
.map(|patterns| {
patterns
.into_iter()
.map(|pattern| IdentifierPattern::new(&pattern))
.collect()
})
.transpose()
.map_err(SettingsError::InvalidRaisesRequireMatchFor)?
.unwrap_or_else(default_broad_exceptions),
raises_extend_require_match_for: options
.raises_extend_require_match_for
.map(|patterns| {
patterns
.into_iter()
.map(|pattern| IdentifierPattern::new(&pattern))
.collect()
})
.transpose()
.map_err(SettingsError::InvalidRaisesExtendRequireMatchFor)?
.unwrap_or_default(),
mark_parentheses: options.mark_parentheses.unwrap_or(true),
}
})
}
}
impl From<Settings> for Options {
@@ -149,8 +176,20 @@ impl From<Settings> for Options {
parametrize_names_type: Some(settings.parametrize_names_type),
parametrize_values_type: Some(settings.parametrize_values_type),
parametrize_values_row_type: Some(settings.parametrize_values_row_type),
raises_require_match_for: Some(settings.raises_require_match_for),
raises_extend_require_match_for: Some(settings.raises_extend_require_match_for),
raises_require_match_for: Some(
settings
.raises_require_match_for
.iter()
.map(ToString::to_string)
.collect(),
),
raises_extend_require_match_for: Some(
settings
.raises_extend_require_match_for
.iter()
.map(ToString::to_string)
.collect(),
),
mark_parentheses: Some(settings.mark_parentheses),
}
}
@@ -169,3 +208,32 @@ impl Default for Settings {
}
}
}
/// Error returned by the [`TryFrom`] implementation of [`Settings`].
#[derive(Debug)]
pub enum SettingsError {
InvalidRaisesRequireMatchFor(glob::PatternError),
InvalidRaisesExtendRequireMatchFor(glob::PatternError),
}
impl fmt::Display for SettingsError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
SettingsError::InvalidRaisesRequireMatchFor(err) => {
write!(f, "invalid raises-require-match-for pattern: {err}")
}
SettingsError::InvalidRaisesExtendRequireMatchFor(err) => {
write!(f, "invalid raises-extend-require-match-for pattern: {err}")
}
}
}
}
impl Error for SettingsError {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match self {
SettingsError::InvalidRaisesRequireMatchFor(err) => Some(err),
SettingsError::InvalidRaisesExtendRequireMatchFor(err) => Some(err),
}
}
}
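
The diff above turns settings conversion fallible: `From<Options>` becomes `TryFrom<Options>` with a dedicated `SettingsError` enum, and each pattern list is validated via `map`/`transpose`/`map_err`. The shape of that pattern can be sketched in isolation. This is a std-only sketch, not ruff's actual types: `Options`, `Settings`, and `validate` here are simplified stand-ins (the real code validates with `IdentifierPattern::new`, which wraps `glob::Pattern`).

```rust
use std::error::Error;
use std::fmt;

// Hypothetical, stripped-down stand-ins for ruff's Options/Settings.
#[derive(Debug)]
struct Options {
    raises_require_match_for: Option<Vec<String>>,
}

#[derive(Debug)]
struct Settings {
    raises_require_match_for: Vec<String>,
}

#[derive(Debug)]
enum SettingsError {
    InvalidRaisesRequireMatchFor(String),
}

impl fmt::Display for SettingsError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SettingsError::InvalidRaisesRequireMatchFor(err) => {
                write!(f, "invalid raises-require-match-for pattern: {err}")
            }
        }
    }
}

impl Error for SettingsError {}

// Toy validator standing in for `IdentifierPattern::new`.
fn validate(pattern: &str) -> Result<String, String> {
    if pattern.contains("**") {
        Err(format!("`{pattern}` is not a valid pattern"))
    } else {
        Ok(pattern.to_string())
    }
}

impl TryFrom<Options> for Settings {
    type Error = SettingsError;

    fn try_from(options: Options) -> Result<Self, Self::Error> {
        Ok(Self {
            raises_require_match_for: options
                .raises_require_match_for
                // Validate each pattern; collect short-circuits on the first error.
                .map(|patterns| {
                    patterns
                        .iter()
                        .map(|p| validate(p))
                        .collect::<Result<Vec<_>, _>>()
                })
                // Option<Result<_, _>> -> Result<Option<_>, _>, so `?` can bail out.
                .transpose()
                .map_err(SettingsError::InvalidRaisesRequireMatchFor)?
                .unwrap_or_else(|| vec!["BaseException".to_string()]),
        })
    }
}

fn main() {
    let ok = Settings::try_from(Options {
        raises_require_match_for: Some(vec!["socket.error".to_string()]),
    });
    assert!(ok.is_ok());

    let err = Settings::try_from(Options {
        raises_require_match_for: Some(vec!["bad**glob".to_string()]),
    });
    assert!(err.is_err());
    println!("ok");
}
```

The `transpose` step is what lets a per-item validation error propagate out of an `Option<Vec<_>>` field with a single `?`, while `unwrap_or_else` still supplies the default list when the option was `None`.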

View File

@@ -1,47 +1,47 @@
---
source: crates/ruff/src/rules/flake8_pytest_style/mod.rs
---
PT011.py:17:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:18:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
16 | def test_error_no_argument_given():
17 | with pytest.raises(ValueError):
17 | def test_error_no_argument_given():
18 | with pytest.raises(ValueError):
| ^^^^^^^^^^ PT011
18 | raise ValueError("Can't divide 1 by 0")
19 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:20:24: PT011 `pytest.raises(socket.error)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:21:24: PT011 `pytest.raises(socket.error)` is too broad, set the `match` parameter or use a more specific exception
|
18 | raise ValueError("Can't divide 1 by 0")
19 |
20 | with pytest.raises(socket.error):
19 | raise ValueError("Can't divide 1 by 0")
20 |
21 | with pytest.raises(socket.error):
| ^^^^^^^^^^^^ PT011
21 | raise ValueError("Can't divide 1 by 0")
22 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:25:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:32:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
24 | def test_error_match_is_empty():
25 | with pytest.raises(ValueError, match=None):
31 | def test_error_match_is_empty():
32 | with pytest.raises(ValueError, match=None):
| ^^^^^^^^^^ PT011
26 | raise ValueError("Can't divide 1 by 0")
33 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:28:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:35:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
26 | raise ValueError("Can't divide 1 by 0")
27 |
28 | with pytest.raises(ValueError, match=""):
33 | raise ValueError("Can't divide 1 by 0")
34 |
35 | with pytest.raises(ValueError, match=""):
| ^^^^^^^^^^ PT011
29 | raise ValueError("Can't divide 1 by 0")
36 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:31:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:38:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
29 | raise ValueError("Can't divide 1 by 0")
30 |
31 | with pytest.raises(ValueError, match=f""):
36 | raise ValueError("Can't divide 1 by 0")
37 |
38 | with pytest.raises(ValueError, match=f""):
| ^^^^^^^^^^ PT011
32 | raise ValueError("Can't divide 1 by 0")
39 | raise ValueError("Can't divide 1 by 0")
|

View File

@@ -1,55 +1,55 @@
---
source: crates/ruff/src/rules/flake8_pytest_style/mod.rs
---
PT011.py:12:24: PT011 `pytest.raises(ZeroDivisionError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:13:24: PT011 `pytest.raises(ZeroDivisionError)` is too broad, set the `match` parameter or use a more specific exception
|
11 | def test_ok_different_error_from_config():
12 | with pytest.raises(ZeroDivisionError):
12 | def test_ok_different_error_from_config():
13 | with pytest.raises(ZeroDivisionError):
| ^^^^^^^^^^^^^^^^^ PT011
13 | raise ZeroDivisionError("Can't divide by 0")
14 | raise ZeroDivisionError("Can't divide by 0")
|
PT011.py:17:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:18:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
16 | def test_error_no_argument_given():
17 | with pytest.raises(ValueError):
17 | def test_error_no_argument_given():
18 | with pytest.raises(ValueError):
| ^^^^^^^^^^ PT011
18 | raise ValueError("Can't divide 1 by 0")
19 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:20:24: PT011 `pytest.raises(socket.error)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:21:24: PT011 `pytest.raises(socket.error)` is too broad, set the `match` parameter or use a more specific exception
|
18 | raise ValueError("Can't divide 1 by 0")
19 |
20 | with pytest.raises(socket.error):
19 | raise ValueError("Can't divide 1 by 0")
20 |
21 | with pytest.raises(socket.error):
| ^^^^^^^^^^^^ PT011
21 | raise ValueError("Can't divide 1 by 0")
22 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:25:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:32:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
24 | def test_error_match_is_empty():
25 | with pytest.raises(ValueError, match=None):
31 | def test_error_match_is_empty():
32 | with pytest.raises(ValueError, match=None):
| ^^^^^^^^^^ PT011
26 | raise ValueError("Can't divide 1 by 0")
33 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:28:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:35:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
26 | raise ValueError("Can't divide 1 by 0")
27 |
28 | with pytest.raises(ValueError, match=""):
33 | raise ValueError("Can't divide 1 by 0")
34 |
35 | with pytest.raises(ValueError, match=""):
| ^^^^^^^^^^ PT011
29 | raise ValueError("Can't divide 1 by 0")
36 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:31:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:38:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
29 | raise ValueError("Can't divide 1 by 0")
30 |
31 | with pytest.raises(ValueError, match=f""):
36 | raise ValueError("Can't divide 1 by 0")
37 |
38 | with pytest.raises(ValueError, match=f""):
| ^^^^^^^^^^ PT011
32 | raise ValueError("Can't divide 1 by 0")
39 | raise ValueError("Can't divide 1 by 0")
|

View File

@@ -0,0 +1,73 @@
---
source: crates/ruff/src/rules/flake8_pytest_style/mod.rs
---
PT011.py:13:24: PT011 `pytest.raises(ZeroDivisionError)` is too broad, set the `match` parameter or use a more specific exception
|
12 | def test_ok_different_error_from_config():
13 | with pytest.raises(ZeroDivisionError):
| ^^^^^^^^^^^^^^^^^ PT011
14 | raise ZeroDivisionError("Can't divide by 0")
|
PT011.py:18:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
17 | def test_error_no_argument_given():
18 | with pytest.raises(ValueError):
| ^^^^^^^^^^ PT011
19 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:21:24: PT011 `pytest.raises(socket.error)` is too broad, set the `match` parameter or use a more specific exception
|
19 | raise ValueError("Can't divide 1 by 0")
20 |
21 | with pytest.raises(socket.error):
| ^^^^^^^^^^^^ PT011
22 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:24:24: PT011 `pytest.raises(pickle.PicklingError)` is too broad, set the `match` parameter or use a more specific exception
|
22 | raise ValueError("Can't divide 1 by 0")
23 |
24 | with pytest.raises(PicklingError):
| ^^^^^^^^^^^^^ PT011
25 | raise PicklingError("Can't pickle")
|
PT011.py:27:24: PT011 `pytest.raises(pickle.UnpicklingError)` is too broad, set the `match` parameter or use a more specific exception
|
25 | raise PicklingError("Can't pickle")
26 |
27 | with pytest.raises(UnpicklingError):
| ^^^^^^^^^^^^^^^ PT011
28 | raise UnpicklingError("Can't unpickle")
|
PT011.py:32:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
31 | def test_error_match_is_empty():
32 | with pytest.raises(ValueError, match=None):
| ^^^^^^^^^^ PT011
33 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:35:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
33 | raise ValueError("Can't divide 1 by 0")
34 |
35 | with pytest.raises(ValueError, match=""):
| ^^^^^^^^^^ PT011
36 | raise ValueError("Can't divide 1 by 0")
|
PT011.py:38:24: PT011 `pytest.raises(ValueError)` is too broad, set the `match` parameter or use a more specific exception
|
36 | raise ValueError("Can't divide 1 by 0")
37 |
38 | with pytest.raises(ValueError, match=f""):
| ^^^^^^^^^^ PT011
39 | raise ValueError("Can't divide 1 by 0")
|

View File

@@ -0,0 +1,22 @@
---
source: crates/ruff/src/rules/flake8_pytest_style/mod.rs
---
PT011.py:24:24: PT011 `pytest.raises(pickle.PicklingError)` is too broad, set the `match` parameter or use a more specific exception
|
22 | raise ValueError("Can't divide 1 by 0")
23 |
24 | with pytest.raises(PicklingError):
| ^^^^^^^^^^^^^ PT011
25 | raise PicklingError("Can't pickle")
|
PT011.py:27:24: PT011 `pytest.raises(pickle.UnpicklingError)` is too broad, set the `match` parameter or use a more specific exception
|
25 | raise PicklingError("Can't pickle")
26 |
27 | with pytest.raises(UnpicklingError):
| ^^^^^^^^^^^^^^^ PT011
28 | raise UnpicklingError("Can't unpickle")
|

View File

@@ -1,12 +1,12 @@
---
source: crates/ruff/src/rules/flake8_pytest_style/mod.rs
---
PT011.py:12:24: PT011 `pytest.raises(ZeroDivisionError)` is too broad, set the `match` parameter or use a more specific exception
PT011.py:13:24: PT011 `pytest.raises(ZeroDivisionError)` is too broad, set the `match` parameter or use a more specific exception
|
11 | def test_ok_different_error_from_config():
12 | with pytest.raises(ZeroDivisionError):
12 | def test_ok_different_error_from_config():
13 | with pytest.raises(ZeroDivisionError):
| ^^^^^^^^^^^^^^^^^ PT011
13 | raise ZeroDivisionError("Can't divide by 0")
14 | raise ZeroDivisionError("Can't divide by 0")
|

View File

@@ -558,7 +558,7 @@ fn unnecessary_assign(checker: &mut Checker, stack: &Stack) {
// Replace from the start of the assignment statement to the end of the equals
// sign.
TextRange::new(
assign.range().start(),
assign.start(),
assign
.range()
.start()

View File

@@ -693,7 +693,7 @@ pub(crate) fn use_ternary_operator(checker: &mut Checker, stmt: &Stmt) {
fn body_range(branch: &IfElifBranch, locator: &Locator) -> TextRange {
TextRange::new(
locator.line_end(branch.test.end()),
locator.line_end(branch.range.end()),
locator.line_end(branch.end()),
)
}
@@ -731,7 +731,7 @@ pub(crate) fn if_with_same_arms(checker: &mut Checker, locator: &Locator, stmt_i
checker.diagnostics.push(Diagnostic::new(
IfWithSameArms,
TextRange::new(current_branch.range.start(), following_branch.range.end()),
TextRange::new(current_branch.start(), following_branch.end()),
));
}
}

View File

@@ -5,6 +5,7 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::{AutofixKind, Diagnostic, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::{AnyImport, Imported, ResolvedReferenceId, Scope, StatementId};
use ruff_text_size::TextRange;
@@ -101,11 +102,11 @@ pub(crate) fn runtime_import_in_type_checking_block(
let import = ImportBinding {
import,
reference_id,
range: binding.range,
range: binding.range(),
parent_range: binding.parent_range(checker.semantic()),
};
if checker.rule_is_ignored(Rule::RuntimeImportInTypeCheckingBlock, import.range.start())
if checker.rule_is_ignored(Rule::RuntimeImportInTypeCheckingBlock, import.start())
|| import.parent_range.is_some_and(|parent_range| {
checker.rule_is_ignored(
Rule::RuntimeImportInTypeCheckingBlock,
@@ -192,6 +193,12 @@ struct ImportBinding<'a> {
parent_range: Option<TextRange>,
}
impl Ranged for ImportBinding<'_> {
fn range(&self) -> TextRange {
self.range
}
}
/// Generate a [`Fix`] to remove runtime imports from a type-checking block.
fn fix_imports(
checker: &Checker,
@@ -211,7 +218,7 @@ fn fix_imports(
let at = imports
.iter()
.map(|ImportBinding { reference_id, .. }| {
checker.semantic().reference(*reference_id).range().start()
checker.semantic().reference(*reference_id).start()
})
.min()
.expect("Expected at least one import");

View File

@@ -5,6 +5,7 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::{AutofixKind, Diagnostic, DiagnosticKind, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::{AnyImport, Binding, Imported, ResolvedReferenceId, Scope, StatementId};
use ruff_text_size::TextRange;
@@ -308,11 +309,11 @@ pub(crate) fn typing_only_runtime_import(
let import = ImportBinding {
import,
reference_id,
range: binding.range,
range: binding.range(),
parent_range: binding.parent_range(checker.semantic()),
};
if checker.rule_is_ignored(rule_for(import_type), import.range.start())
if checker.rule_is_ignored(rule_for(import_type), import.start())
|| import.parent_range.is_some_and(|parent_range| {
checker.rule_is_ignored(rule_for(import_type), parent_range.start())
})
@@ -390,6 +391,12 @@ struct ImportBinding<'a> {
parent_range: Option<TextRange>,
}
impl Ranged for ImportBinding<'_> {
fn range(&self) -> TextRange {
self.range
}
}
/// Return the [`Rule`] for the given import type.
fn rule_for(import_type: ImportType) -> Rule {
match import_type {
@@ -456,7 +463,7 @@ fn fix_imports(
let at = imports
.iter()
.map(|ImportBinding { reference_id, .. }| {
checker.semantic().reference(*reference_id).range().start()
checker.semantic().reference(*reference_id).start()
})
.min()
.expect("Expected at least one import");

View File

@@ -2,7 +2,7 @@ use std::iter;
use regex::Regex;
use ruff_python_ast as ast;
use ruff_python_ast::{Parameter, Parameters};
use ruff_python_ast::{Parameter, Parameters, Ranged};
use ruff_diagnostics::DiagnosticKind;
use ruff_diagnostics::{Diagnostic, Violation};
@@ -303,7 +303,7 @@ fn call<'a>(
{
Some(Diagnostic::new(
argumentable.check_for(arg.name.to_string()),
binding.range,
binding.range(),
))
} else {
None

View File

@@ -1,10 +1,9 @@
use std::borrow::Cow;
use ruff_python_ast::PySourceType;
use ruff_python_ast::{PySourceType, Ranged};
use ruff_python_parser::{lexer, AsMode, Tok};
use ruff_text_size::{TextRange, TextSize};
use ruff_source_file::Locator;
use ruff_text_size::TextRange;
#[derive(Debug)]
pub(crate) struct Comment<'a> {
@@ -12,13 +11,9 @@ pub(crate) struct Comment<'a> {
pub(crate) range: TextRange,
}
impl Comment<'_> {
pub(crate) const fn start(&self) -> TextSize {
self.range.start()
}
pub(crate) const fn end(&self) -> TextSize {
self.range.end()
impl Ranged for Comment<'_> {
fn range(&self) -> TextRange {
self.range
}
}
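
Several diffs in this commit replace hand-written `start()`/`end()` methods on structs like `Comment` and `LogicalLineToken` with a single `impl Ranged`. The payoff of that consolidation is that `start` and `end` become default trait methods derived from one required `range` method. A minimal std-only sketch (using a hypothetical `TextRange` stand-in rather than `ruff_text_size::TextRange`):

```rust
// Minimal stand-in for `ruff_text_size::TextRange`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct TextRange {
    start: usize,
    end: usize,
}

// Implementors supply `range`; `start` and `end` come for free as
// default methods instead of being duplicated on every struct.
trait Ranged {
    fn range(&self) -> TextRange;

    fn start(&self) -> usize {
        self.range().start
    }

    fn end(&self) -> usize {
        self.range().end
    }
}

struct Comment<'a> {
    text: &'a str,
    range: TextRange,
}

impl Ranged for Comment<'_> {
    fn range(&self) -> TextRange {
        self.range
    }
}

fn main() {
    let comment = Comment {
        text: "# noqa",
        range: TextRange { start: 10, end: 16 },
    };
    // Callers use the trait methods uniformly across all ranged structs.
    assert_eq!(comment.start(), 10);
    assert_eq!(comment.end(), 16);
    assert_eq!(comment.end() - comment.start(), comment.text.len());
    println!("ok");
}
```

This also explains the call-site churn in the diffs: `binding.range.start()` and `diagnostic.range().start()` shrink to `binding.start()` and `diagnostic.start()` once every such struct implements the trait.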

View File

@@ -173,7 +173,7 @@ fn is_unused(expr: &Expr, semantic: &SemanticModel) -> bool {
scope
.get_all(id)
.map(|binding_id| semantic.binding(binding_id))
.filter(|binding| binding.range.start() >= expr.range().start())
.filter(|binding| binding.start() >= expr.start())
.all(|binding| !binding.is_used())
}
_ => false,

View File

@@ -1,8 +1,10 @@
use ruff_python_ast::{CmpOp, Expr, Ranged};
use ruff_text_size::{TextLen, TextRange};
use unicode_width::UnicodeWidthStr;
use ruff_python_ast::node::AnyNodeRef;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::{CmpOp, Expr, Ranged};
use ruff_source_file::{Line, Locator};
use ruff_text_size::{TextLen, TextRange};
use crate::line_width::{LineLength, LineWidth, TabSize};
@@ -14,6 +16,7 @@ pub(super) fn generate_comparison(
left: &Expr,
ops: &[CmpOp],
comparators: &[Expr],
parent: AnyNodeRef,
locator: &Locator,
) -> String {
let start = left.start();
@@ -21,7 +24,9 @@ pub(super) fn generate_comparison(
let mut contents = String::with_capacity(usize::from(end - start));
// Add the left side of the comparison.
contents.push_str(locator.slice(left.range()));
contents.push_str(locator.slice(
parenthesized_range(left.into(), parent, locator.contents()).unwrap_or(left.range()),
));
for (op, comparator) in ops.iter().zip(comparators) {
// Add the operator.
@@ -39,7 +44,12 @@ pub(super) fn generate_comparison(
});
// Add the right side of the comparison.
contents.push_str(locator.slice(comparator.range()));
contents.push_str(
locator.slice(
parenthesized_range(comparator.into(), parent, locator.contents())
.unwrap_or(comparator.range()),
),
);
}
contents
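
The `parenthesized_range(...).unwrap_or(expr.range())` calls above are what let the E712 autofix slice the *parenthesized* text of an operand instead of just the bare expression, avoiding syntax errors like rewriting `if(True) == x` into `ifTrue is x`. The core idea can be sketched at the character level. This is a simplified illustration, not ruff's implementation (which lexes backwards and handles comments); in particular, this sketch does not distinguish call parentheses in `f(x)` from grouping parentheses, which a real implementation must.

```rust
// Expand `start..end` (byte offsets into `source`) to include any
// parentheses that immediately wrap the expression, repeatedly, skipping
// only whitespace between the expression and the candidate parens.
fn parenthesized_range(source: &str, mut start: usize, mut end: usize) -> (usize, usize) {
    let bytes = source.as_bytes();
    loop {
        // Walk left over whitespace to the nearest non-space byte.
        let mut left = start;
        while left > 0 && bytes[left - 1].is_ascii_whitespace() {
            left -= 1;
        }
        // Walk right over whitespace.
        let mut right = end;
        while right < bytes.len() && bytes[right].is_ascii_whitespace() {
            right += 1;
        }
        // Expand only if a `(`/`)` pair directly surrounds the expression.
        if left > 0 && bytes[left - 1] == b'(' && right < bytes.len() && bytes[right] == b')' {
            start = left - 1;
            end = right + 1;
        } else {
            return (start, end);
        }
    }
}

fn main() {
    let source = "if ( ( x ) ) == True:";
    // Byte range of the bare `x` inside the nested parentheses.
    let x = source.find('x').unwrap();
    let (start, end) = parenthesized_range(source, x, x + 1);
    // The fix can now replace the fully parenthesized operand.
    assert_eq!(&source[start..end], "( ( x ) )");
    println!("ok");
}
```

With the expanded range in hand, a fix like E712's can replace `( ( x ) ) == True` as a unit, which is why the snapshot above now emits `if (True) is TrueElement` rather than dropping the parentheses.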

View File

@@ -129,7 +129,7 @@ pub(crate) fn invalid_escape_sequence(
for diagnostic in &mut invalid_escape_sequence {
diagnostic.set_fix(Fix::automatic(Edit::insertion(
r"\".to_string(),
diagnostic.range().start() + TextSize::from(1),
diagnostic.start() + TextSize::from(1),
)));
}
} else {

View File

@@ -112,11 +112,14 @@ pub(crate) fn lambda_assignment(
// If the assignment is in a class body, it might not be safe to replace it because the
// assignment might be carrying a type annotation that will be used by some package like
// dataclasses, which wouldn't consider the rewritten function definition to be
// equivalent. Similarly, if the lambda is shadowing a variable in the current scope,
// rewriting it as a function declaration may break type-checking.
// equivalent. Even if it _doesn't_ have an annotation, rewriting safely would require
// making this a static method.
// See: https://github.com/astral-sh/ruff/issues/3046
//
// Similarly, if the lambda is shadowing a variable in the current scope,
// rewriting it as a function declaration may break type-checking.
// See: https://github.com/astral-sh/ruff/issues/5421
if (annotation.is_some() && checker.semantic().current_scope().kind.is_class())
if checker.semantic().current_scope().kind.is_class()
|| checker
.semantic()
.current_scope()

View File

@@ -279,8 +279,13 @@ pub(crate) fn literal_comparisons(checker: &mut Checker, compare: &ast::ExprComp
.map(|(idx, op)| bad_ops.get(&idx).unwrap_or(op))
.copied()
.collect::<Vec<_>>();
let content =
generate_comparison(&compare.left, &ops, &compare.comparators, checker.locator());
let content = generate_comparison(
&compare.left,
&ops,
&compare.comparators,
compare.into(),
checker.locator(),
);
for diagnostic in &mut diagnostics {
diagnostic.set_fix(Fix::suggested(Edit::range_replacement(
content.to_string(),

View File

@@ -3,6 +3,7 @@ use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Edit;
use ruff_diagnostics::Fix;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use ruff_text_size::TextRange;

View File

@@ -1,9 +1,9 @@
use ruff_text_size::TextSize;
use ruff_diagnostics::Edit;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use ruff_text_size::TextSize;
use crate::checkers::logical_lines::LogicalLinesContext;

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use crate::checkers::logical_lines::LogicalLinesContext;

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{DiagnosticKind, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use crate::checkers::logical_lines::LogicalLinesContext;

View File

@@ -13,6 +13,7 @@ use std::fmt::{Debug, Formatter};
use std::iter::FusedIterator;
use bitflags::bitflags;
use ruff_python_ast::Ranged;
use ruff_python_parser::lexer::LexResult;
use ruff_text_size::{TextLen, TextRange, TextSize};
@@ -310,22 +311,11 @@ impl LogicalLineToken {
pub(crate) const fn kind(&self) -> TokenKind {
self.kind
}
}
/// Returns the token's start location
#[inline]
pub(crate) const fn start(&self) -> TextSize {
self.range.start()
}
/// Returns the token's end location
#[inline]
pub(crate) const fn end(&self) -> TextSize {
self.range.end()
}
impl Ranged for LogicalLineToken {
/// Returns a tuple with the token's `(start, end)` locations
#[inline]
pub(crate) const fn range(&self) -> TextRange {
fn range(&self) -> TextRange {
self.range
}
}

View File

@@ -1,8 +1,8 @@
use ruff_text_size::TextRange;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use ruff_text_size::TextRange;
use crate::checkers::logical_lines::LogicalLinesContext;

View File

@@ -1,7 +1,7 @@
use ruff_text_size::TextRange;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_text_size::TextRange;
use crate::checkers::logical_lines::LogicalLinesContext;

View File

@@ -1,8 +1,8 @@
use ruff_text_size::{TextRange, TextSize};
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use ruff_text_size::{TextRange, TextSize};
use crate::checkers::logical_lines::LogicalLinesContext;
use crate::rules::pycodestyle::rules::logical_lines::{LogicalLine, LogicalLineToken};

View File

@@ -1,10 +1,10 @@
use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use ruff_python_trivia::PythonWhitespace;
use ruff_source_file::Locator;
use ruff_text_size::{TextLen, TextRange, TextSize};
use crate::checkers::logical_lines::LogicalLinesContext;
use crate::rules::pycodestyle::rules::logical_lines::LogicalLine;

View File

@@ -1,8 +1,8 @@
use ruff_text_size::{TextRange, TextSize};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_parser::TokenKind;
use ruff_text_size::{TextRange, TextSize};
use crate::checkers::logical_lines::LogicalLinesContext;
use crate::rules::pycodestyle::rules::logical_lines::LogicalLine;

View File

@@ -94,7 +94,13 @@ pub(crate) fn not_tests(checker: &mut Checker, unary_op: &ast::ExprUnaryOp) {
let mut diagnostic = Diagnostic::new(NotInTest, unary_op.operand.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.set_fix(Fix::automatic(Edit::range_replacement(
generate_comparison(left, &[CmpOp::NotIn], comparators, checker.locator()),
generate_comparison(
left,
&[CmpOp::NotIn],
comparators,
unary_op.into(),
checker.locator(),
),
unary_op.range(),
)));
}
@@ -106,7 +112,13 @@ pub(crate) fn not_tests(checker: &mut Checker, unary_op: &ast::ExprUnaryOp) {
let mut diagnostic = Diagnostic::new(NotIsTest, unary_op.operand.range());
if checker.patch(diagnostic.kind.rule()) {
diagnostic.set_fix(Fix::automatic(Edit::range_replacement(
generate_comparison(left, &[CmpOp::IsNot], comparators, checker.locator()),
generate_comparison(
left,
&[CmpOp::IsNot],
comparators,
unary_op.into(),
checker.locator(),
),
unary_op.range(),
)));
}

View File

@@ -181,7 +181,7 @@ E712.py:22:5: E712 [*] Comparison to `True` should be `cond is True` or `if cond
20 20 | var = 1 if cond == True else -1 if cond == False else cond
21 21 | #: E712
22 |-if (True) == TrueElement or x == TrueElement:
22 |+if True is TrueElement or x == TrueElement:
22 |+if (True) is TrueElement or x == TrueElement:
23 23 | pass
24 24 |
25 25 | if res == True != False:
@@ -204,7 +204,7 @@ E712.py:25:11: E712 [*] Comparison to `True` should be `cond is True` or `if con
25 |+if res is True is not False:
26 26 | pass
27 27 |
28 28 | #: Okay
28 28 | if(True) == TrueElement or x == TrueElement:
E712.py:25:19: E712 [*] Comparison to `False` should be `cond is not False` or `if cond:`
|
@@ -224,6 +224,46 @@ E712.py:25:19: E712 [*] Comparison to `False` should be `cond is not False` or `
25 |+if res is True is not False:
26 26 | pass
27 27 |
28 28 | #: Okay
28 28 | if(True) == TrueElement or x == TrueElement:
E712.py:28:4: E712 [*] Comparison to `True` should be `cond is True` or `if cond:`
|
26 | pass
27 |
28 | if(True) == TrueElement or x == TrueElement:
| ^^^^ E712
29 | pass
|
= help: Replace with `cond is True`
Suggested fix
25 25 | if res == True != False:
26 26 | pass
27 27 |
28 |-if(True) == TrueElement or x == TrueElement:
28 |+if(True) is TrueElement or x == TrueElement:
29 29 | pass
30 30 |
31 31 | if (yield i) == True:
E712.py:31:17: E712 [*] Comparison to `True` should be `cond is True` or `if cond:`
|
29 | pass
30 |
31 | if (yield i) == True:
| ^^^^ E712
32 | print("even")
|
= help: Replace with `cond is True`
Suggested fix
28 28 | if(True) == TrueElement or x == TrueElement:
29 29 | pass
30 30 |
31 |-if (yield i) == True:
31 |+if (yield i) is True:
32 32 | print("even")
33 33 |
34 34 | #: Okay

View File

@@ -109,7 +109,7 @@ E731.py:57:5: E731 [*] Do not assign a `lambda` expression, use a `def`
|
= help: Rewrite `f` as a `def`
Suggested fix
Possible fix
54 54 |
55 55 | class Scope:
56 56 | # E731
@@ -318,5 +318,43 @@ E731.py:135:5: E731 [*] Do not assign a `lambda` expression, use a `def`
135 |- f: Callable[[str, int, list[str]], list[str]] = lambda a, b, /, c: [*c, a * b]
135 |+ def f(a: str, b: int, /, c: list[str]) -> list[str]:
136 |+ return [*c, a * b]
136 137 |
137 138 |
138 139 | class TemperatureScales(Enum):
E731.py:139:5: E731 [*] Do not assign a `lambda` expression, use a `def`
|
138 | class TemperatureScales(Enum):
139 | CELSIUS = (lambda deg_c: deg_c)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E731
140 | FAHRENHEIT = (lambda deg_c: deg_c * 9 / 5 + 32)
|
= help: Rewrite `CELSIUS` as a `def`
Possible fix
136 136 |
137 137 |
138 138 | class TemperatureScales(Enum):
139 |- CELSIUS = (lambda deg_c: deg_c)
139 |+ def CELSIUS(deg_c):
140 |+ return deg_c
140 141 | FAHRENHEIT = (lambda deg_c: deg_c * 9 / 5 + 32)
E731.py:140:5: E731 [*] Do not assign a `lambda` expression, use a `def`
|
138 | class TemperatureScales(Enum):
139 | CELSIUS = (lambda deg_c: deg_c)
140 | FAHRENHEIT = (lambda deg_c: deg_c * 9 / 5 + 32)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E731
|
= help: Rewrite `FAHRENHEIT` as a `def`
Possible fix
137 137 |
138 138 | class TemperatureScales(Enum):
139 139 | CELSIUS = (lambda deg_c: deg_c)
140 |- FAHRENHEIT = (lambda deg_c: deg_c * 9 / 5 + 32)
140 |+ def FAHRENHEIT(deg_c):
141 |+ return deg_c * 9 / 5 + 32

View File

@@ -2,6 +2,7 @@ use memchr::memchr_iter;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use crate::checkers::ast::Checker;
use crate::docstrings::Docstring;

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_source_file::{UniversalNewlineIterator, UniversalNewlines};
use crate::checkers::ast::Checker;

View File

@@ -3,6 +3,7 @@ use strum::IntoEnumIterator;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_source_file::{UniversalNewlineIterator, UniversalNewlines};
use crate::checkers::ast::Checker;

View File

@@ -3,6 +3,7 @@ use strum::IntoEnumIterator;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_source_file::{UniversalNewlineIterator, UniversalNewlines};
use crate::checkers::ast::Checker;

View File

@@ -1,10 +1,10 @@
use ruff_text_size::{TextLen, TextRange};
use ruff_diagnostics::{AlwaysAutofixableViolation, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::docstrings::{clean_space, leading_space};
use ruff_python_ast::Ranged;
use ruff_source_file::NewlineWithTrailingNewline;
use ruff_text_size::{TextLen, TextRange};
use crate::checkers::ast::Checker;
use crate::docstrings::Docstring;

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_source_file::UniversalNewlines;
use crate::checkers::ast::Checker;

View File

@@ -1,8 +1,8 @@
use ruff_text_size::{TextLen, TextRange};
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_source_file::NewlineWithTrailingNewline;
use ruff_text_size::{TextLen, TextRange};
use crate::checkers::ast::Checker;
use crate::docstrings::Docstring;

View File

@@ -6,6 +6,7 @@ use once_cell::sync::Lazy;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::{from_qualified_name, CallPath};
use ruff_python_ast::Ranged;
use ruff_python_semantic::analyze::visibility::{is_property, is_test};
use ruff_source_file::UniversalNewlines;

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use crate::checkers::ast::Checker;
use crate::docstrings::Docstring;

View File

@@ -1,6 +1,7 @@
use ruff_diagnostics::{AutofixKind, Diagnostic, Edit, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::str::{leading_quote, trailing_quote};
use ruff_python_ast::Ranged;
use ruff_source_file::NewlineWithTrailingNewline;
use crate::checkers::ast::Checker;

View File

@@ -8,7 +8,7 @@ use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::docstrings::{clean_space, leading_space};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::ParameterWithDefault;
use ruff_python_ast::{ParameterWithDefault, Ranged};
use ruff_python_semantic::analyze::visibility::is_staticmethod;
use ruff_python_trivia::{textwrap::dedent, PythonWhitespace};
use ruff_source_file::NewlineWithTrailingNewline;
@@ -1640,7 +1640,7 @@ fn common_section(
if checker.patch(diagnostic.kind.rule()) {
// Replace the existing indentation with whitespace of the appropriate length.
let content = clean_space(docstring.indentation);
let fix_range = TextRange::at(context.range().start(), leading_space.text_len());
let fix_range = TextRange::at(context.start(), leading_space.text_len());
diagnostic.set_fix(Fix::automatic(if content.is_empty() {
Edit::range_deletion(fix_range)
@@ -1667,7 +1667,7 @@ fn common_section(
// Add a newline at the beginning of the next section.
diagnostic.set_fix(Fix::automatic(Edit::insertion(
line_end.to_string(),
next.range().start(),
next.start(),
)));
}
checker.diagnostics.push(diagnostic);
@@ -1684,7 +1684,7 @@ fn common_section(
// Add a newline after the section.
diagnostic.set_fix(Fix::automatic(Edit::insertion(
format!("{}{}", line_end, docstring.indentation),
context.range().end(),
context.end(),
)));
}
checker.diagnostics.push(diagnostic);
@@ -1704,7 +1704,7 @@ fn common_section(
// Add a blank line before the section.
diagnostic.set_fix(Fix::automatic(Edit::insertion(
line_end.to_string(),
context.range().start(),
context.start(),
)));
}
checker.diagnostics.push(diagnostic);

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use crate::checkers::ast::Checker;
use crate::docstrings::Docstring;

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_codegen::Quote;
use crate::checkers::ast::Checker;

View File

@@ -96,11 +96,9 @@ pub(crate) fn remove_exception_handler_assignment(
locator: &Locator,
) -> Result<Edit> {
// Lex backwards, to the token just before the `as`.
let mut tokenizer = SimpleTokenizer::up_to_without_back_comment(
bound_exception.range.start(),
locator.contents(),
)
.skip_trivia();
let mut tokenizer =
SimpleTokenizer::up_to_without_back_comment(bound_exception.start(), locator.contents())
.skip_trivia();
// Eat the `as` token.
let preceding = tokenizer
@@ -114,7 +112,7 @@ pub(crate) fn remove_exception_handler_assignment(
.context("expected the exception name to be preceded by a token")?;
// Lex forwards, to the `:` token.
let following = SimpleTokenizer::starts_at(bound_exception.range.end(), locator.contents())
let following = SimpleTokenizer::starts_at(bound_exception.end(), locator.contents())
.skip_trivia()
.next()
.context("expected the exception name to be followed by a colon")?;

View File

@@ -11,7 +11,9 @@ pub(crate) fn error_to_string(err: &FormatParseError) -> String {
FormatParseError::InvalidCharacterAfterRightBracket => {
"Only '.' or '[' may follow ']' in format field specifier"
}
FormatParseError::InvalidFormatSpecifier => "Max string recursion exceeded",
FormatParseError::PlaceholderRecursionExceeded => {
"Max format placeholder recursion exceeded"
}
FormatParseError::MissingStartBracket => "Single '}' encountered in format string",
FormatParseError::MissingRightBracket => "Expected '}' before end of string",
FormatParseError::UnmatchedBracket => "Single '{' encountered in format string",
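The renamed `PlaceholderRecursionExceeded` variant mirrors a real CPython limit: `str.format` allows exactly one level of nesting inside a format spec, and a placeholder-in-placeholder-in-placeholder fails at runtime. A small sketch of the behavior the F521 message describes:

```python
# One nested placeholder in the format spec is allowed...
ok = "{:{}}".format(1, ">3")  # spec ">3" right-aligns 1 in width 3

# ...but a second level of nesting raises ValueError at runtime.
try:
    "{:{:{}}}".format(1, 2, 3)
    message = None
except ValueError as err:
    message = str(err)

print(ok, message)
```

The error string is the same one the old ruff message quoted verbatim.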

View File

@@ -2,6 +2,7 @@ use std::string::ToString;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::{Scope, ScopeId};
use crate::checkers::ast::Checker;

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::Scope;
use crate::checkers::ast::Checker;
@@ -44,7 +45,7 @@ pub(crate) fn unused_annotation(
&& !binding.is_used()
&& !checker.settings.dummy_variable_rgx.is_match(name)
{
Some((name.to_string(), binding.range))
Some((name.to_string(), binding.range()))
} else {
None
}

View File

@@ -5,6 +5,7 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::{AutofixKind, Diagnostic, Fix, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::{AnyImport, Exceptions, Imported, Scope, StatementId};
use ruff_text_size::TextRange;
@@ -124,11 +125,11 @@ pub(crate) fn unused_import(checker: &Checker, scope: &Scope, diagnostics: &mut
let import = ImportBinding {
import,
range: binding.range,
range: binding.range(),
parent_range: binding.parent_range(checker.semantic()),
};
if checker.rule_is_ignored(Rule::UnusedImport, import.range.start())
if checker.rule_is_ignored(Rule::UnusedImport, import.start())
|| import.parent_range.is_some_and(|parent_range| {
checker.rule_is_ignored(Rule::UnusedImport, parent_range.start())
})
@@ -226,6 +227,12 @@ struct ImportBinding<'a> {
parent_range: Option<TextRange>,
}
impl Ranged for ImportBinding<'_> {
fn range(&self) -> TextRange {
self.range
}
}
/// Generate a [`Fix`] to remove unused imports from a statement.
fn fix_imports(
checker: &Checker,

View File

@@ -320,7 +320,7 @@ pub(crate) fn unused_variable(checker: &Checker, scope: &Scope, diagnostics: &mu
| "__debuggerskip__"
)
{
return Some((name.to_string(), binding.range, binding.source));
return Some((name.to_string(), binding.range(), binding.source));
}
None

View File

@@ -28,7 +28,7 @@ F521.py:3:1: F521 `.format` call has invalid format string: Expected '}' before
5 | "{:{:{}}}".format(1, 2, 3)
|
F521.py:5:1: F521 `.format` call has invalid format string: Max string recursion exceeded
F521.py:5:1: F521 `.format` call has invalid format string: Max format placeholder recursion exceeded
|
3 | "{foo[}".format(foo=1)
4 | # too much string recursion (placeholder-in-placeholder)

View File

@@ -7,11 +7,14 @@ use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for any misspelled dunder name method and for any method
/// defined with `__...__` that's not one of the pre-defined methods.
/// defined with `_..._` that's not one of the pre-defined methods.
///
/// The pre-defined methods encompass all of Python's standard dunder
/// methods.
///
/// Note this includes all methods starting and ending with at least
/// one underscore to detect mistakes.
///
/// ## Why is this bad?
/// Misspelled dunder name methods may cause your code to not function
/// as expected.
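The rule's rationale is easy to demonstrate: a misspelled dunder silently becomes an ordinary method that Python never invokes. A hypothetical illustration (the class name is made up):

```python
class Config:
    def __init_(self, value):  # typo: one trailing underscore short of __init__
        self.value = value

# The misspelled method is never called as the constructor, so
# instantiation silently succeeds without setting `value`.
c = Config()
has_value = hasattr(c, "value")
print(has_value)
```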

View File

@@ -49,16 +49,39 @@ pub(crate) fn call(checker: &mut Checker, string: &str, range: TextRange) {
continue;
};
if let Err(FormatSpecError::InvalidFormatType) = FormatSpec::parse(format_spec) {
checker.diagnostics.push(Diagnostic::new(
BadStringFormatCharacter {
// The format type character is always the last one.
// More info in the official spec:
// https://docs.python.org/3/library/string.html#format-specification-mini-language
format_char: format_spec.chars().last().unwrap(),
},
range,
));
match FormatSpec::parse(format_spec) {
Err(FormatSpecError::InvalidFormatType) => {
checker.diagnostics.push(Diagnostic::new(
BadStringFormatCharacter {
// The format type character is always the last one.
// More info in the official spec:
// https://docs.python.org/3/library/string.html#format-specification-mini-language
format_char: format_spec.chars().last().unwrap(),
},
range,
));
}
Err(_) => {}
Ok(format_spec) => {
for replacement in format_spec.replacements() {
let FormatPart::Field { format_spec, .. } = replacement else {
continue;
};
if let Err(FormatSpecError::InvalidFormatType) =
FormatSpec::parse(format_spec)
{
checker.diagnostics.push(Diagnostic::new(
BadStringFormatCharacter {
// The format type character is always the last one.
// More info in the official spec:
// https://docs.python.org/3/library/string.html#format-specification-mini-language
format_char: format_spec.chars().last().unwrap(),
},
range,
));
}
}
}
}
}
}
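The new `match` walks one level into the format spec: when the outer spec parses, its own replacement fields are checked in turn. The same one-level recursion can be sketched with `string.Formatter.parse`, which splits a template into `(literal, field_name, format_spec, conversion)` tuples (variable names here are illustrative):

```python
import string

# Parse the outer template: one field whose format spec is itself a template.
outer = list(string.Formatter().parse("{:{s:y}}"))
literal, field_name, format_spec, conversion = outer[0]

# Recurse one level into the nested replacement field, as the rule now does;
# the nested spec "y" is the part that would be validated.
inner = list(string.Formatter().parse(format_spec))
_, inner_name, inner_spec, _ = inner[0]

print(inner_name, inner_spec)
```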

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::Binding;
/// ## What it does
@@ -37,7 +38,7 @@ impl Violation for InvalidAllFormat {
/// PLE0605
pub(crate) fn invalid_all_format(binding: &Binding) -> Option<Diagnostic> {
if binding.is_invalid_all_format() {
Some(Diagnostic::new(InvalidAllFormat, binding.range))
Some(Diagnostic::new(InvalidAllFormat, binding.range()))
} else {
None
}

View File

@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Ranged;
use ruff_python_semantic::Binding;
/// ## What it does
@@ -37,7 +38,7 @@ impl Violation for InvalidAllObject {
/// PLE0604
pub(crate) fn invalid_all_object(binding: &Binding) -> Option<Diagnostic> {
if binding.is_invalid_all_object() {
Some(Diagnostic::new(InvalidAllObject, binding.range))
Some(Diagnostic::new(InvalidAllObject, binding.range()))
} else {
None
}

View File

@@ -48,7 +48,17 @@ bad_string_format_character.py:15:1: PLE1300 Unsupported format character 'y'
15 | "{:s} {:y}".format("hello", "world") # [bad-format-character]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLE1300
16 |
17 | "{:*^30s}".format("centered")
17 | "{:*^30s}".format("centered") # OK
|
bad_string_format_character.py:20:1: PLE1300 Unsupported format character 'y'
|
18 | "{:{s}}".format("hello", s="s") # OK (nested replacement value not checked)
19 |
20 | "{:{s:y}}".format("hello", s="s") # [bad-format-character] (nested replacement format spec checked)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLE1300
21 |
22 | ## f-strings
|
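The snapshot above flags format characters that would fail when the string is actually formatted: `s` is a valid presentation type, `y` is not, and CPython raises `ValueError` for the latter at runtime:

```python
# 's' is a valid presentation type for str; 'y' is not.
ok = "{:s}".format("hello")

try:
    "{:y}".format("hello")
    failed = False
except ValueError:
    failed = True

print(ok, failed)
```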

View File

@@ -201,10 +201,10 @@ fn fix_py2_block(checker: &Checker, stmt_if: &StmtIf, branch: &IfElifBranch) ->
.elif_else_clauses
.iter()
.map(Ranged::start)
.find(|start| *start > branch.range.start());
.find(|start| *start > branch.start());
Some(Fix::suggested(Edit::deletion(
branch.range.start(),
next_start.unwrap_or(branch.range.end()),
branch.start(),
next_start.unwrap_or(branch.end()),
)))
}
}
@@ -256,7 +256,7 @@ fn fix_py3_block(checker: &mut Checker, stmt_if: &StmtIf, branch: &IfElifBranch)
.slice(TextRange::new(branch.test.end(), end.end()));
Some(Fix::suggested(Edit::range_replacement(
format!("else{text}"),
TextRange::new(branch.range.start(), stmt_if.end()),
TextRange::new(branch.start(), stmt_if.end()),
)))
}
}

View File

@@ -40,6 +40,7 @@ mod tests {
feature = "unreachable-code",
test_case(Rule::UnreachableCode, Path::new("RUF014.py"))
)]
#[test_case(Rule::QuadraticListSummation, Path::new("RUF017.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -40,3 +40,6 @@ pub(crate) enum Context {
Docstring,
Comment,
}
pub(crate) use quadratic_list_summation::*;
mod quadratic_list_summation;

View File

@@ -0,0 +1,133 @@
use anyhow::Result;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Arguments, Expr, Ranged};
use ruff_python_semantic::SemanticModel;
use crate::importer::ImportRequest;
use crate::{checkers::ast::Checker, registry::Rule};
/// ## What it does
/// Checks for the use of `sum()` to flatten lists of lists, which has
/// quadratic complexity.
///
/// ## Why is this bad?
/// The use of `sum()` to flatten lists of lists is quadratic in the number of
/// lists, as `sum()` creates a new list for each element in the summation.
///
/// Instead, consider using another method of flattening lists to avoid
/// quadratic complexity. The following methods are all linear in the number of
/// lists:
///
/// - `functools.reduce(operator.iconcat, lists, [])`
/// - `list(itertools.chain.from_iterable(lists))`
/// - `[item for sublist in lists for item in sublist]`
///
/// ## Example
/// ```python
/// lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
/// joined = sum(lists, [])
/// ```
///
/// Use instead:
/// ```python
/// import functools
/// import operator
///
///
/// lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
/// functools.reduce(operator.iconcat, lists, [])
/// ```
///
/// ## References
/// - [_How Not to Flatten a List of Lists in Python_](https://mathieularose.com/how-not-to-flatten-a-list-of-lists-in-python)
/// - [_How do I make a flat list out of a list of lists?_](https://stackoverflow.com/questions/952914/how-do-i-make-a-flat-list-out-of-a-list-of-lists/953097#953097)
#[violation]
pub struct QuadraticListSummation;
impl AlwaysAutofixableViolation for QuadraticListSummation {
#[derive_message_formats]
fn message(&self) -> String {
format!("Avoid quadratic list summation")
}
fn autofix_title(&self) -> String {
format!("Replace with `functools.reduce`")
}
}
/// RUF017
pub(crate) fn quadratic_list_summation(checker: &mut Checker, call: &ast::ExprCall) {
let ast::ExprCall {
func,
arguments,
range,
} = call;
if !func_is_builtin(func, "sum", checker.semantic()) {
return;
}
if !start_is_empty_list(arguments, checker.semantic()) {
return;
};
let Some(iterable) = arguments.args.first() else {
return;
};
let mut diagnostic = Diagnostic::new(QuadraticListSummation, *range);
if checker.patch(Rule::QuadraticListSummation) {
diagnostic.try_set_fix(|| convert_to_reduce(iterable, call, checker));
}
checker.diagnostics.push(diagnostic);
}
/// Generate a [`Fix`] to convert a `sum()` call to a `functools.reduce()` call.
fn convert_to_reduce(iterable: &Expr, call: &ast::ExprCall, checker: &Checker) -> Result<Fix> {
let (reduce_edit, reduce_binding) = checker.importer().get_or_import_symbol(
&ImportRequest::import("functools", "reduce"),
call.start(),
checker.semantic(),
)?;
let (iadd_edit, iadd_binding) = checker.importer().get_or_import_symbol(
&ImportRequest::import("operator", "iadd"),
iterable.start(),
checker.semantic(),
)?;
let iterable = checker.locator().slice(iterable.range());
Ok(Fix::suggested_edits(
Edit::range_replacement(
format!("{reduce_binding}({iadd_binding}, {iterable}, [])"),
call.range(),
),
[reduce_edit, iadd_edit],
))
}
/// Check if a function is a builtin with a given name.
fn func_is_builtin(func: &Expr, name: &str, semantic: &SemanticModel) -> bool {
let Expr::Name(ast::ExprName { id, .. }) = func else {
return false;
};
id == name && semantic.is_builtin(id)
}
/// Returns `true` if the `start` argument to a `sum()` call is an empty list.
fn start_is_empty_list(arguments: &Arguments, semantic: &SemanticModel) -> bool {
let Some(keyword) = arguments.find_keyword("start") else {
return false;
};
match &keyword.value {
Expr::Call(ast::ExprCall {
func, arguments, ..
}) => arguments.is_empty() && func_is_builtin(func, "list", semantic),
Expr::List(ast::ExprList { elts, ctx, .. }) => elts.is_empty() && ctx.is_load(),
_ => false,
}
}
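The fix this new rule suggests is behavior-preserving: `functools.reduce(operator.iadd, lists, [])` yields the same flattened list as `sum(lists, [])`, but runs in linear time because `iadd` extends the accumulator in place instead of building a fresh list at every step:

```python
import functools
import operator

lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

quadratic = sum(lists, [])                           # new list per addition
# The fresh [] initializer matters: it is the (mutated) accumulator,
# so the sublists themselves are left untouched.
linear = functools.reduce(operator.iadd, lists, [])

print(quadratic == linear, linear)
```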

View File

@@ -0,0 +1,53 @@
---
source: crates/ruff/src/rules/ruff/mod.rs
---
RUF017.py:5:1: RUF017 [*] Avoid quadratic list summation
|
4 | # RUF017
5 | sum([x, y], start=[])
| ^^^^^^^^^^^^^^^^^^^^^ RUF017
6 | sum([x, y], [])
7 | sum([[1, 2, 3], [4, 5, 6]], start=[])
|
= help: Replace with `functools.reduce`
Suggested fix
1 |+import functools
2 |+import operator
1 3 | x = [1, 2, 3]
2 4 | y = [4, 5, 6]
3 5 |
4 6 | # RUF017
5 |-sum([x, y], start=[])
7 |+functools.reduce(operator.iadd, [x, y], [])
6 8 | sum([x, y], [])
7 9 | sum([[1, 2, 3], [4, 5, 6]], start=[])
8 10 | sum([[1, 2, 3], [4, 5, 6]], [])
RUF017.py:7:1: RUF017 [*] Avoid quadratic list summation
|
5 | sum([x, y], start=[])
6 | sum([x, y], [])
7 | sum([[1, 2, 3], [4, 5, 6]], start=[])
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF017
8 | sum([[1, 2, 3], [4, 5, 6]], [])
9 | sum([[1, 2, 3], [4, 5, 6]],
|
= help: Replace with `functools.reduce`
Suggested fix
1 |+import functools
2 |+import operator
1 3 | x = [1, 2, 3]
2 4 | y = [4, 5, 6]
3 5 |
4 6 | # RUF017
5 7 | sum([x, y], start=[])
6 8 | sum([x, y], [])
7 |-sum([[1, 2, 3], [4, 5, 6]], start=[])
9 |+functools.reduce(operator.iadd, [[1, 2, 3], [4, 5, 6]], [])
8 10 | sum([[1, 2, 3], [4, 5, 6]], [])
9 11 | sum([[1, 2, 3], [4, 5, 6]],
10 12 | [])

View File

@@ -232,7 +232,8 @@ impl Settings {
.unwrap_or_default(),
flake8_pytest_style: config
.flake8_pytest_style
.map(flake8_pytest_style::settings::Settings::from)
.map(flake8_pytest_style::settings::Settings::try_from)
.transpose()?
.unwrap_or_default(),
flake8_quotes: config
.flake8_quotes

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.284"
version = "0.0.285"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -0,0 +1,291 @@
use ruff_text_size::TextRange;
use crate::node::AnyNodeRef;
use crate::{self as ast, Expr, Ranged};
/// Unowned counterpart to [`ast::Expr`] that stores a reference instead of an owned value.
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum ExpressionRef<'a> {
BoolOp(&'a ast::ExprBoolOp),
NamedExpr(&'a ast::ExprNamedExpr),
BinOp(&'a ast::ExprBinOp),
UnaryOp(&'a ast::ExprUnaryOp),
Lambda(&'a ast::ExprLambda),
IfExp(&'a ast::ExprIfExp),
Dict(&'a ast::ExprDict),
Set(&'a ast::ExprSet),
ListComp(&'a ast::ExprListComp),
SetComp(&'a ast::ExprSetComp),
DictComp(&'a ast::ExprDictComp),
GeneratorExp(&'a ast::ExprGeneratorExp),
Await(&'a ast::ExprAwait),
Yield(&'a ast::ExprYield),
YieldFrom(&'a ast::ExprYieldFrom),
Compare(&'a ast::ExprCompare),
Call(&'a ast::ExprCall),
FormattedValue(&'a ast::ExprFormattedValue),
FString(&'a ast::ExprFString),
Constant(&'a ast::ExprConstant),
Attribute(&'a ast::ExprAttribute),
Subscript(&'a ast::ExprSubscript),
Starred(&'a ast::ExprStarred),
Name(&'a ast::ExprName),
List(&'a ast::ExprList),
Tuple(&'a ast::ExprTuple),
Slice(&'a ast::ExprSlice),
IpyEscapeCommand(&'a ast::ExprIpyEscapeCommand),
}
impl<'a> From<&'a Box<Expr>> for ExpressionRef<'a> {
fn from(value: &'a Box<Expr>) -> Self {
ExpressionRef::from(value.as_ref())
}
}
impl<'a> From<&'a Expr> for ExpressionRef<'a> {
fn from(value: &'a Expr) -> Self {
match value {
Expr::BoolOp(value) => ExpressionRef::BoolOp(value),
Expr::NamedExpr(value) => ExpressionRef::NamedExpr(value),
Expr::BinOp(value) => ExpressionRef::BinOp(value),
Expr::UnaryOp(value) => ExpressionRef::UnaryOp(value),
Expr::Lambda(value) => ExpressionRef::Lambda(value),
Expr::IfExp(value) => ExpressionRef::IfExp(value),
Expr::Dict(value) => ExpressionRef::Dict(value),
Expr::Set(value) => ExpressionRef::Set(value),
Expr::ListComp(value) => ExpressionRef::ListComp(value),
Expr::SetComp(value) => ExpressionRef::SetComp(value),
Expr::DictComp(value) => ExpressionRef::DictComp(value),
Expr::GeneratorExp(value) => ExpressionRef::GeneratorExp(value),
Expr::Await(value) => ExpressionRef::Await(value),
Expr::Yield(value) => ExpressionRef::Yield(value),
Expr::YieldFrom(value) => ExpressionRef::YieldFrom(value),
Expr::Compare(value) => ExpressionRef::Compare(value),
Expr::Call(value) => ExpressionRef::Call(value),
Expr::FormattedValue(value) => ExpressionRef::FormattedValue(value),
Expr::FString(value) => ExpressionRef::FString(value),
Expr::Constant(value) => ExpressionRef::Constant(value),
Expr::Attribute(value) => ExpressionRef::Attribute(value),
Expr::Subscript(value) => ExpressionRef::Subscript(value),
Expr::Starred(value) => ExpressionRef::Starred(value),
Expr::Name(value) => ExpressionRef::Name(value),
Expr::List(value) => ExpressionRef::List(value),
Expr::Tuple(value) => ExpressionRef::Tuple(value),
Expr::Slice(value) => ExpressionRef::Slice(value),
Expr::IpyEscapeCommand(value) => ExpressionRef::IpyEscapeCommand(value),
}
}
}
impl<'a> From<&'a ast::ExprBoolOp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprBoolOp) -> Self {
Self::BoolOp(value)
}
}
impl<'a> From<&'a ast::ExprNamedExpr> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprNamedExpr) -> Self {
Self::NamedExpr(value)
}
}
impl<'a> From<&'a ast::ExprBinOp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprBinOp) -> Self {
Self::BinOp(value)
}
}
impl<'a> From<&'a ast::ExprUnaryOp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprUnaryOp) -> Self {
Self::UnaryOp(value)
}
}
impl<'a> From<&'a ast::ExprLambda> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprLambda) -> Self {
Self::Lambda(value)
}
}
impl<'a> From<&'a ast::ExprIfExp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprIfExp) -> Self {
Self::IfExp(value)
}
}
impl<'a> From<&'a ast::ExprDict> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprDict) -> Self {
Self::Dict(value)
}
}
impl<'a> From<&'a ast::ExprSet> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprSet) -> Self {
Self::Set(value)
}
}
impl<'a> From<&'a ast::ExprListComp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprListComp) -> Self {
Self::ListComp(value)
}
}
impl<'a> From<&'a ast::ExprSetComp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprSetComp) -> Self {
Self::SetComp(value)
}
}
impl<'a> From<&'a ast::ExprDictComp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprDictComp) -> Self {
Self::DictComp(value)
}
}
impl<'a> From<&'a ast::ExprGeneratorExp> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprGeneratorExp) -> Self {
Self::GeneratorExp(value)
}
}
impl<'a> From<&'a ast::ExprAwait> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprAwait) -> Self {
Self::Await(value)
}
}
impl<'a> From<&'a ast::ExprYield> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprYield) -> Self {
Self::Yield(value)
}
}
impl<'a> From<&'a ast::ExprYieldFrom> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprYieldFrom) -> Self {
Self::YieldFrom(value)
}
}
impl<'a> From<&'a ast::ExprCompare> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprCompare) -> Self {
Self::Compare(value)
}
}
impl<'a> From<&'a ast::ExprCall> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprCall) -> Self {
Self::Call(value)
}
}
impl<'a> From<&'a ast::ExprFormattedValue> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprFormattedValue) -> Self {
Self::FormattedValue(value)
}
}
impl<'a> From<&'a ast::ExprFString> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprFString) -> Self {
Self::FString(value)
}
}
impl<'a> From<&'a ast::ExprConstant> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprConstant) -> Self {
Self::Constant(value)
}
}
impl<'a> From<&'a ast::ExprAttribute> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprAttribute) -> Self {
Self::Attribute(value)
}
}
impl<'a> From<&'a ast::ExprSubscript> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprSubscript) -> Self {
Self::Subscript(value)
}
}
impl<'a> From<&'a ast::ExprStarred> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprStarred) -> Self {
Self::Starred(value)
}
}
impl<'a> From<&'a ast::ExprName> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprName) -> Self {
Self::Name(value)
}
}
impl<'a> From<&'a ast::ExprList> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprList) -> Self {
Self::List(value)
}
}
impl<'a> From<&'a ast::ExprTuple> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprTuple) -> Self {
Self::Tuple(value)
}
}
impl<'a> From<&'a ast::ExprSlice> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprSlice) -> Self {
Self::Slice(value)
}
}
impl<'a> From<&'a ast::ExprIpyEscapeCommand> for ExpressionRef<'a> {
fn from(value: &'a ast::ExprIpyEscapeCommand) -> Self {
Self::IpyEscapeCommand(value)
}
}
impl<'a> From<ExpressionRef<'a>> for AnyNodeRef<'a> {
fn from(value: ExpressionRef<'a>) -> Self {
match value {
ExpressionRef::BoolOp(expression) => AnyNodeRef::ExprBoolOp(expression),
ExpressionRef::NamedExpr(expression) => AnyNodeRef::ExprNamedExpr(expression),
ExpressionRef::BinOp(expression) => AnyNodeRef::ExprBinOp(expression),
ExpressionRef::UnaryOp(expression) => AnyNodeRef::ExprUnaryOp(expression),
ExpressionRef::Lambda(expression) => AnyNodeRef::ExprLambda(expression),
ExpressionRef::IfExp(expression) => AnyNodeRef::ExprIfExp(expression),
ExpressionRef::Dict(expression) => AnyNodeRef::ExprDict(expression),
ExpressionRef::Set(expression) => AnyNodeRef::ExprSet(expression),
ExpressionRef::ListComp(expression) => AnyNodeRef::ExprListComp(expression),
ExpressionRef::SetComp(expression) => AnyNodeRef::ExprSetComp(expression),
ExpressionRef::DictComp(expression) => AnyNodeRef::ExprDictComp(expression),
ExpressionRef::GeneratorExp(expression) => AnyNodeRef::ExprGeneratorExp(expression),
ExpressionRef::Await(expression) => AnyNodeRef::ExprAwait(expression),
ExpressionRef::Yield(expression) => AnyNodeRef::ExprYield(expression),
ExpressionRef::YieldFrom(expression) => AnyNodeRef::ExprYieldFrom(expression),
ExpressionRef::Compare(expression) => AnyNodeRef::ExprCompare(expression),
ExpressionRef::Call(expression) => AnyNodeRef::ExprCall(expression),
ExpressionRef::FormattedValue(expression) => AnyNodeRef::ExprFormattedValue(expression),
ExpressionRef::FString(expression) => AnyNodeRef::ExprFString(expression),
ExpressionRef::Constant(expression) => AnyNodeRef::ExprConstant(expression),
ExpressionRef::Attribute(expression) => AnyNodeRef::ExprAttribute(expression),
ExpressionRef::Subscript(expression) => AnyNodeRef::ExprSubscript(expression),
ExpressionRef::Starred(expression) => AnyNodeRef::ExprStarred(expression),
ExpressionRef::Name(expression) => AnyNodeRef::ExprName(expression),
ExpressionRef::List(expression) => AnyNodeRef::ExprList(expression),
ExpressionRef::Tuple(expression) => AnyNodeRef::ExprTuple(expression),
ExpressionRef::Slice(expression) => AnyNodeRef::ExprSlice(expression),
ExpressionRef::IpyEscapeCommand(expression) => {
AnyNodeRef::ExprIpyEscapeCommand(expression)
}
}
}
}
impl Ranged for ExpressionRef<'_> {
fn range(&self) -> TextRange {
match self {
ExpressionRef::BoolOp(expression) => expression.range(),
ExpressionRef::NamedExpr(expression) => expression.range(),
ExpressionRef::BinOp(expression) => expression.range(),
ExpressionRef::UnaryOp(expression) => expression.range(),
ExpressionRef::Lambda(expression) => expression.range(),
ExpressionRef::IfExp(expression) => expression.range(),
ExpressionRef::Dict(expression) => expression.range(),
ExpressionRef::Set(expression) => expression.range(),
ExpressionRef::ListComp(expression) => expression.range(),
ExpressionRef::SetComp(expression) => expression.range(),
ExpressionRef::DictComp(expression) => expression.range(),
ExpressionRef::GeneratorExp(expression) => expression.range(),
ExpressionRef::Await(expression) => expression.range(),
ExpressionRef::Yield(expression) => expression.range(),
ExpressionRef::YieldFrom(expression) => expression.range(),
ExpressionRef::Compare(expression) => expression.range(),
ExpressionRef::Call(expression) => expression.range(),
ExpressionRef::FormattedValue(expression) => expression.range(),
ExpressionRef::FString(expression) => expression.range(),
ExpressionRef::Constant(expression) => expression.range(),
ExpressionRef::Attribute(expression) => expression.range(),
ExpressionRef::Subscript(expression) => expression.range(),
ExpressionRef::Starred(expression) => expression.range(),
ExpressionRef::Name(expression) => expression.range(),
ExpressionRef::List(expression) => expression.range(),
ExpressionRef::Tuple(expression) => expression.range(),
ExpressionRef::Slice(expression) => expression.range(),
ExpressionRef::IpyEscapeCommand(expression) => expression.range(),
}
}
}

View File

@@ -5,12 +5,14 @@ pub mod all;
pub mod call_path;
pub mod comparable;
pub mod docstrings;
mod expression;
pub mod hashable;
pub mod helpers;
pub mod identifier;
pub mod imports;
pub mod node;
mod nodes;
pub mod parenthesize;
pub mod relocate;
pub mod statement_visitor;
pub mod stmt_if;
@@ -20,6 +22,7 @@ pub mod types;
pub mod visitor;
pub mod whitespace;
pub use expression::*;
pub use nodes::*;
pub trait Ranged {

View File

@@ -0,0 +1,47 @@
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::{TextRange, TextSize};
use crate::node::AnyNodeRef;
use crate::{ExpressionRef, Ranged};
/// Returns the [`TextRange`] of a given expression including parentheses, if the expression is
/// parenthesized; or `None`, if the expression is not parenthesized.
pub fn parenthesized_range(
expr: ExpressionRef,
parent: AnyNodeRef,
contents: &str,
) -> Option<TextRange> {
// If the parent is a node that brings its own parentheses, exclude the closing parenthesis
// from our search range. Otherwise, we risk matching on calls, like `func(x)`, for which
// the open and close parentheses are part of the `Arguments` node.
//
// There are a few other nodes that may have their own parentheses, but are fine to exclude:
// - `Parameters`: The parameters to a function definition. Any expressions would represent
// default arguments, and so must be preceded by _at least_ the parameter name. As such,
// we won't mistake any parentheses for the opening and closing parentheses on the
// `Parameters` node itself.
// - `Tuple`: The elements of a tuple. The only risk is a single-element tuple (e.g., `(x,)`),
// which must have a trailing comma anyway.
let exclusive_parent_end = if parent.is_arguments() {
parent.end() - TextSize::new(1)
} else {
parent.end()
};
// First, test if there's a closing parenthesis because it tends to be cheaper.
let tokenizer =
SimpleTokenizer::new(contents, TextRange::new(expr.end(), exclusive_parent_end));
let right = tokenizer.skip_trivia().next()?;
if right.kind == SimpleTokenKind::RParen {
// Next, test for the opening parenthesis.
let mut tokenizer =
SimpleTokenizer::up_to_without_back_comment(expr.start(), contents).skip_trivia();
let left = tokenizer.next_back()?;
if left.kind == SimpleTokenKind::LParen {
return Some(TextRange::new(left.start(), right.end()));
}
}
None
}
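The Rust helper scans outward from the expression: one token forward looking for `)` (the cheaper test), then backward from the start looking for `(`, skipping trivia in both directions. A simplified Python sketch of the same idea, skipping only whitespace rather than full trivia (the function name is illustrative):

```python
def parenthesized_range(source, start, end):
    """Return (left, right) bounds including parentheses, or None."""
    # Scan forward from the expression's end for a closing parenthesis.
    right = end
    while right < len(source) and source[right].isspace():
        right += 1
    if right >= len(source) or source[right] != ")":
        return None
    # Only then scan backward from the start for the opening parenthesis.
    left = start - 1
    while left >= 0 and source[left].isspace():
        left -= 1
    if left < 0 or source[left] != "(":
        return None
    return (left, right + 1)

src = "(x) + 1"
bounds = parenthesized_range(src, 1, 2)  # `x` occupies source[1:2]
print(bounds)
```

As in the Rust version, the forward probe short-circuits most calls before any backward lexing happens.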

View File

@@ -24,6 +24,12 @@ pub struct IfElifBranch<'a> {
pub range: TextRange,
}
impl Ranged for IfElifBranch<'_> {
fn range(&self) -> TextRange {
self.range
}
}
pub fn if_elif_branches(stmt_if: &StmtIf) -> impl Iterator<Item = IfElifBranch> {
iter::once(IfElifBranch {
kind: BranchKind::If,
@@ -40,6 +46,3 @@ pub fn if_elif_branches(stmt_if: &StmtIf) -> impl Iterator<Item = IfElifBranch>
})
}))
}
#[cfg(test)]
mod test {}

View File

@@ -0,0 +1,76 @@
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_parser::parse_expression;

#[test]
fn test_parenthesized_name() {
    let source_code = r#"(x) + 1"#;
    let expr = parse_expression(source_code, "<filename>").unwrap();
    let bin_op = expr.as_bin_op_expr().unwrap();
    let name = bin_op.left.as_ref();
    let parenthesized = parenthesized_range(name.into(), bin_op.into(), source_code);
    assert!(parenthesized.is_some());
}

#[test]
fn test_non_parenthesized_name() {
    let source_code = r#"x + 1"#;
    let expr = parse_expression(source_code, "<filename>").unwrap();
    let bin_op = expr.as_bin_op_expr().unwrap();
    let name = bin_op.left.as_ref();
    let parenthesized = parenthesized_range(name.into(), bin_op.into(), source_code);
    assert!(parenthesized.is_none());
}

#[test]
fn test_parenthesized_argument() {
    let source_code = r#"f((a))"#;
    let expr = parse_expression(source_code, "<filename>").unwrap();
    let call = expr.as_call_expr().unwrap();
    let arguments = &call.arguments;
    let argument = arguments.args.first().unwrap();
    let parenthesized = parenthesized_range(argument.into(), arguments.into(), source_code);
    assert!(parenthesized.is_some());
}

#[test]
fn test_non_parenthesized_argument() {
    let source_code = r#"f(a)"#;
    let expr = parse_expression(source_code, "<filename>").unwrap();
    let call = expr.as_call_expr().unwrap();
    let arguments = &call.arguments;
    let argument = arguments.args.first().unwrap();
    let parenthesized = parenthesized_range(argument.into(), arguments.into(), source_code);
    assert!(parenthesized.is_none());
}

#[test]
fn test_parenthesized_tuple_member() {
    let source_code = r#"(a, (b))"#;
    let expr = parse_expression(source_code, "<filename>").unwrap();
    let tuple = expr.as_tuple_expr().unwrap();
    let member = tuple.elts.last().unwrap();
    let parenthesized = parenthesized_range(member.into(), tuple.into(), source_code);
    assert!(parenthesized.is_some());
}

#[test]
fn test_non_parenthesized_tuple_member() {
    let source_code = r#"(a, b)"#;
    let expr = parse_expression(source_code, "<filename>").unwrap();
    let tuple = expr.as_tuple_expr().unwrap();
    let member = tuple.elts.last().unwrap();
    let parenthesized = parenthesized_range(member.into(), tuple.into(), source_code);
    assert!(parenthesized.is_none());
}
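The tests above exercise `parenthesized_range`, which widens an expression's range to include any balanced parentheses around it, bounded by its parent node. As a rough illustration of the idea only — not ruff's implementation, which lexes backwards through the token stream and respects comments and the parent boundary — a naive sketch in Python:

```python
def parenthesized_range(source, start, end):
    """Expand the [start, end) span of an expression to cover parentheses
    immediately around it, skipping whitespace. Return the widened
    (start, end), or None if the expression is not parenthesized.

    Naive sketch: ignores comments and the parent-node boundary that the
    real implementation respects (so it would mis-handle e.g. ``f(a)``,
    treating the call's parentheses as belonging to the argument).
    """
    original = (start, end)
    while True:
        # Scan left over whitespace for an opening paren.
        i = start - 1
        while i >= 0 and source[i].isspace():
            i -= 1
        # Scan right over whitespace for the matching closing paren.
        j = end
        while j < len(source) and source[j].isspace():
            j += 1
        if i >= 0 and source[i] == "(" and j < len(source) and source[j] == ")":
            # Parenthesized: widen the span and look for another layer.
            start, end = i, j + 1
        else:
            return (start, end) if (start, end) != original else None
```

For example, `parenthesized_range("(x) + 1", 1, 2)` widens the span of `x` to cover `(x)`, mirroring `test_parenthesized_name` above, while an unparenthesized name yields `None`.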


@@ -0,0 +1,19 @@
@FormattedDecorator(a =b)
# leading comment
@MyDecorator( # dangling comment
    list = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12], x = some_other_function_call({ "test": "value", "more": "other"})) # fmt: skip
# leading class comment
class Test:
    pass


@FormattedDecorator(a =b)
# leading comment
@MyDecorator( # dangling comment
    list = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12], x = some_other_function_call({ "test": "value", "more": "other"})) # fmt: skip
# leading class comment
def test():
    pass


@@ -0,0 +1,13 @@
def test():
    # leading comment
    """ This docstring does not
    get formatted
    """ # fmt: skip
    # trailing comment


def test():
    # leading comment
    """ This docstring gets formatted
    """ # trailing comment
    and_this + gets + formatted + too


@@ -0,0 +1,73 @@
def http_error(status):
    match status : # fmt: skip
        case 400 : # fmt: skip
            return "Bad request"
        case 404:
            return "Not found"
        case 418:
            return "I'm a teapot"
        case _:
            return "Something's wrong with the internet"


# point is an (x, y) tuple
match point:
    case (0, 0): # fmt: skip
        print("Origin")
    case (0, y):
        print(f"Y={y}")
    case (x, 0):
        print(f"X={x}")
    case (x, y):
        print(f"X={x}, Y={y}")
    case _:
        raise ValueError("Not a point")


class Point:
    x: int
    y: int


def location(point):
    match point:
        case Point(x=0, y =0 ) : # fmt: skip
            print("Origin is the point's location.")
        case Point(x=0, y=y):
            print(f"Y={y} and the point is on the y-axis.")
        case Point(x=x, y=0):
            print(f"X={x} and the point is on the x-axis.")
        case Point():
            print("The point is located somewhere else on the plane.")
        case _:
            print("Not a point")


match points:
    case []:
        print("No points in the list.")
    case [
        Point(0, 0)
    ]: # fmt: skip
        print("The origin is the only point in the list.")
    case [Point(x, y)]:
        print(f"A single point {x}, {y} is in the list.")
    case [Point(0, y1), Point(0, y2)]:
        print(f"Two points on the Y axis at {y1}, {y2} are in the list.")
    case _:
        print("Something else is found in the list.")


match test_variable:
    case (
        'warning',
        code,
        40
    ): # fmt: skip
        print("A warning has been received.")
    case ('error', code, _):
        print(f"An error {code} occurred.")


match point:
    case Point(x, y) if x == y: # fmt: skip
        print(f"The point is located on the diagonal Y=X at {x}.")
    case Point(x, y):
        print(f"Point is not on the diagonal.")


@@ -0,0 +1,31 @@
for item in container:
    if search_something(item):
        # Found it!
        process(item)
        break
# leading comment
else : #fmt: skip
    # Didn't find anything..
    not_found_in_container()


while i < 10:
    print(i)
# leading comment
else : #fmt: skip
    # Didn't find anything..
    print("I was already larger than 9")


try : # fmt: skip
    some_call()
except Exception : # fmt: skip
    pass
except : # fmt: skip
    handle_exception()
else : # fmt: skip
    pass
finally : # fmt: skip
    finally_call()


@@ -0,0 +1,20 @@
if (
    # a leading condition comment
    len([1, 23, 3, 4, 5]) > 2 # trailing condition comment
    # trailing own line comment
): # fmt: skip
    pass


if ( # trailing open parentheses comment
    # a leading condition comment
    len([1, 23, 3, 4, 5]) > 2
) and ((((y)))): # fmt: skip
    pass


if ( # trailing open parentheses comment
    # a leading condition comment
    len([1, 23, 3, 4, 5]) > 2
) and y: # fmt: skip
    pass


@@ -0,0 +1,28 @@
class TestTypeParam[ T ]: # fmt: skip
    pass


class TestTypeParam [ # trailing open paren comment
    # leading comment
    T # trailing type param comment
    # trailing type param own line comment
]: # fmt: skip
    pass


class TestTrailingComment4[
    T
] ( # trailing arguments open parenthesis comment
    # leading argument comment
    A # trailing argument comment
    # trailing argument own line comment
): # fmt: skip
    pass


def test [
    # comment
    A,
    # another
    B,
] (): # fmt: skip
    ...
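All of the fixtures above rely on the formatter noticing a trailing `# fmt: skip` comment on a statement's last line and leaving that statement's formatting untouched. As a loose illustration of that detection only — a hypothetical helper, not ruff's actual rule, and naive about `#` characters inside string literals — a sketch in Python:

```python
def has_fmt_skip(line: str) -> bool:
    """Return True if the line carries a trailing `fmt: skip` comment.

    Hypothetical sketch: tolerates optional whitespace around the colon
    (both `# fmt: skip` and `#fmt: skip` appear in the fixtures above),
    and naively assumes the first `#` starts a comment.
    """
    comment = line.partition("#")[2]
    # Normalize away whitespace so `fmt: skip` and `fmt:skip` both match.
    normalized = comment.strip().replace(" ", "")
    return normalized.endswith("fmt:skip")
```

For instance, both `case 400 : # fmt: skip` and `else : #fmt: skip` from the fixtures would be recognized, while an ordinary line would not.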

Some files were not shown because too many files have changed in this diff.