Compare commits


133 Commits

Author SHA1 Message Date
Micha Reiser
d3e160dcb7 Indent expanded binary expressions 2024-02-14 18:53:11 +01:00
Micha Reiser
003851b54c Beautify 2024-02-14 18:10:38 +01:00
Micha Reiser
bb8d2034e2 Use atomic write when persisting cache (#9981) 2024-02-14 15:09:21 +01:00
Charlie Marsh
f40e012b4e Use name directly in RUF006 (#9979) 2024-02-14 00:00:47 +00:00
Asger Hautop Drewsen
3e9d761b13 Expand asyncio-dangling-task (RUF006) to include new_event_loop (#9976)
## Summary

Fixes #9974

## Test Plan

I added some new test cases.
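
For illustration, a rough sketch of the expanded coverage (the names below are mine, not taken from the test suite):

```python
import asyncio


async def do_work() -> None:
    ...


loop = asyncio.new_event_loop()
loop.create_task(do_work())         # now flagged: the task reference is discarded
task = loop.create_task(do_work())  # unchanged: keeping a reference avoids the warning
```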
2024-02-13 18:28:06 +00:00
Micha Reiser
46db3f96ac Add example demonstrating that fmt: skip on expression level is not supported (#9973) 2024-02-13 15:35:27 +00:00
Dhruv Manilawala
6f9c128d77 Separate StringNormalizer from StringPart (#9954)
## Summary

This PR is a small refactor to extract out the logic for normalizing
string in the formatter from the `StringPart` struct. It also separates
the quote selection into a separate method on the new
`StringNormalizer`. Both of these will help in the f-string formatting
to use `StringPart` and `choose_quotes` irrespective of normalization.

The reason for having separate quote selection and normalization step is
so that the f-string formatting can perform quote selection on its own.

Unlike string and byte literals, the f-string formatting would require
that the normalization happens only for the literal elements of it i.e.,
the "foo" and "bar" in `f"foo {x + y} bar"`. This will automatically be
handled by the already separate `normalize_string` function.

Another use-case in the f-string formatting is to extract out the
relevant information from the `StringPart` like quotes and prefix which
is to be passed as context while formatting each element of an f-string.

## Test Plan

Ensure that clippy is happy and all tests pass.
2024-02-13 18:14:56 +05:30
Micha Reiser
6380c90031 Run isort CRLF tests (#9970) 2024-02-13 09:25:22 +01:00
Charlie Marsh
d96a0dbe57 Respect tuple assignments in typing analyzer (#9969)
## Summary

Just addressing some discrepancies between the analyzers like `is_dict`
and the logic that's matured in `find_binding_value`.
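
As a loose illustration (my own example, not from the PR), the kind of pattern this is about is binding analysis through tuple unpacking:

```python
# With tuple assignments respected, an `is_dict`-style analyzer can tell that
# `settings` is bound to a dict literal even though it came from unpacking.
settings, flags = {}, []

if "debug" in settings:
    print("debug enabled")
```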
2024-02-13 05:02:52 +00:00
Dhruv Manilawala
180920fdd9 Make semantic model aware of docstring (#9960)
## Summary

This PR introduces a new semantic model flag `DOCSTRING` which suggests
that the model is currently in a module / class / function docstring.
This is the first step in eliminating the docstring detection state
machine which is prone to bugs as stated in #7595.

## Test Plan

~TODO: Is there a way to add a test case for this?~

I tested this using the following code snippet and adding a print
statement in the `string_like` analyzer to print if we're currently in a
docstring or not.

<details><summary>Test code snippet:</summary>
<p>

```python
"Docstring" ", still a docstring"
"Not a docstring"


def foo():
    "Docstring"
    "Not a docstring"
    if foo:
        "Not a docstring"
        pass


class Foo:
    "Docstring"
    "Not a docstring"

    foo: int
    "Unofficial variable docstring"

    def method():
        "Docstring"
        "Not a docstring"
        pass


def bar():
    "Not a docstring".strip()


def baz():
    _something_else = 1
    """Not a docstring"""
```

</p>
</details>
2024-02-13 04:26:08 +00:00
konsti
1ccd8354c1 Don't forget to set your cpu to performance mode (#9700)
Since I just spent quite some time wondering why my benchmarks were the
opposite of what they should be, here's a reminder to check your CPU governor.
Setting mine to performance mode was crucial.
2024-02-13 03:36:11 +00:00
Aleksei Latyshev
dd0ba16a79 [refurb] Implement readlines_in_for lint (FURB129) (#9880)
## Summary
Implement [implicit readlines
(FURB129)](https://github.com/dosisod/refurb/blob/master/refurb/checks/iterable/implicit_readlines.py)
lint.

## Notes
I need some help/an opinion about the suggested implementation.

This implementation differs from the original one in `refurb` in the
following way: it checks, purely syntactically, for a call to a method named
`readlines()` inside a `for` loop or generator expression. The refurb
implementation also
[checks](https://github.com/dosisod/refurb/blob/master/refurb/checks/iterable/implicit_readlines.py#L43)
that the callee is a variable of type `io.TextIOWrapper` or
`io.BufferedReader`.

- I do not see a simple way to implement the same logic.
- The best I can do is something like
```rust
checker.semantic().binding(checker.semantic().resolve_name(attr_expr.value.as_name_expr()?)?).statement(checker.semantic())
```
and then analyze the cases. But that would not be based on types; it would be
guessing the type from an assignment (or `with`) expression.
- This logic also has several false negatives when the callee is not a
variable but the result of a function call (e.g. `open(...)`).
- On the other hand, maybe it is fine to lint this on other objects where the
suggestion is not safe, and nudge developers to make their interfaces less
surprising compared with the standard library.
- Anyway, since the current implementation has false positives (I mention some
of them in the tests), I marked the fixes as unsafe.
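
To make the rule concrete, a small sketch of what it flags (the file name is arbitrary):

```python
with open("log.txt") as f:
    for line in f.readlines():  # FURB129: `readlines()` materializes the whole file as a list
        print(line)

with open("log.txt") as f:
    for line in f:              # suggested alternative: iterate over the file object lazily
        print(line)
```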
2024-02-12 22:28:35 -05:00
Charlie Marsh
609d0a9a65 Remove symbol from type-matching API (#9968)
## Summary

These should be no-op refactors to remove some redundant data from the
type analysis APIs.
2024-02-12 20:57:19 -05:00
Auguste Lalande
8fba97f72f PLR2004: Accept 0.0 and 1.0 as common magic values (#9964)
## Summary

Accept 0.0 and 1.0 as common magic values. This is in line with the
pylint behaviour, and I think makes sense conceptually.


## Test Plan

Test cases were added to
`crates/ruff_linter/resources/test/fixtures/pylint/magic_value_comparison.py`
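
Roughly, the effect of the change (example values are mine):

```python
def compute_ratio() -> float:
    return 0.5


ratio = compute_ratio()

if ratio == 1.0:   # no longer flagged: 1.0 and 0.0 are treated as common values
    print("complete")

if ratio == 0.75:  # still flagged as a magic-value comparison (PLR2004)
    print("three quarters")
```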
2024-02-13 01:21:06 +00:00
Charlie Marsh
5bc0d9c324 Add a binding kind for comprehension targets (#9967)
## Summary

I was surprised to learn that we treat `x` in `[_ for x in y]` as an
"assignment" binding kind, rather than a dedicated comprehension
variable.
2024-02-12 20:09:39 -05:00
Hashem
cf77eeb913 unused_imports/F401: Explain when imports are preserved (#9963)
The docs previously mentioned an irrelevant config option, but were
missing a link to the relevant `ignore-init-module-imports` config
option which _is_ actually used.

Additionally, this commit adds a link to the documentation to explain
the conventions around a module interface which includes using a
redundant import alias to preserve an unused import.

(noticed this while filing  #9962)
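
For context, the redundant-alias convention referenced above looks roughly like this (module names hypothetical):

```python
# mypackage/__init__.py
from mypackage import utils as utils  # redundant alias: marks `utils` as an intentional re-export
from mypackage import helpers         # plain import: F401 can still report it as unused
```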
2024-02-12 19:07:20 -05:00
Dhruv Manilawala
3f4dd01e7a Rename semantic model flag to MODULE_DOCSTRING_BOUNDARY (#9959)
## Summary

This PR renames the semantic model flag `MODULE_DOCSTRING` to
`MODULE_DOCSTRING_BOUNDARY`. The main reason is for readability and for
the new semantic model flag `DOCSTRING` which tracks that the model is
in a module / class / function docstring.

I got confused earlier with the name until I looked at the use case and
it seems that the `_BOUNDARY` prefix is more appropriate for the
use-case and is consistent with other flags.
2024-02-13 00:47:12 +05:30
Micha Reiser
edfe8421ec Disable top-level docstring formatting for notebooks (#9957) 2024-02-12 18:14:02 +00:00
Charlie Marsh
ab2253db03 [pylint] Avoid suggesting set rewrites for non-hashable types (#9956)
## Summary

Ensures that `x in [y, z]` does not trigger if `x`, `y`, or `z` are
known _not_ to be hashable.

Closes https://github.com/astral-sh/ruff/issues/9928.
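
A small sketch of the distinction (my own example; the exact conditions live in the rule's hashability check):

```python
def lookup(key: str) -> bool:
    if key in ["a", "b"]:      # may still be rewritten to `key in {"a", "b"}`
        return True
    if key in [["x"], ["y"]]:  # no longer suggested: the list members are unhashable
        return True
    return False
```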
2024-02-12 13:05:54 -05:00
Dhruv Manilawala
33ac2867b7 Use non-parenthesized range for DebugText (#9953)
## Summary

This PR fixes the `DebugText` implementation to use the expression range
instead of the parenthesized range.

Taking the following code snippet as an example:
```python
x = 1
print(f"{  ( x  ) = }")
```

The output of running it would be:
```
  ( x  ) = 1
```

Notice that the whitespace between the parentheses and the expression is
preserved as is.

Currently, we don't preserve this information in the AST which defeats
the purpose of `DebugText` as the main purpose of the struct is to
preserve whitespaces _around_ the expression.

This is also problematic when generating code from the AST node, as the
generator then has no information about the parentheses or the whitespace
between them and the expression, which leads to the parentheses being removed
in the generated code.

I noticed this while working on f-string formatting, where the debug
text would be used to preserve the text surrounding the expression in
the presence of a debug expression. The parentheses were being dropped,
which made me realize that the problem is actually in the parser.

## Test Plan

1. Add a test case for the parser
2. Add a test case for the generator
2024-02-12 23:00:02 +05:30
Charlie Marsh
0304623878 [perflint] Catch a wider range of mutations in PERF101 (#9955)
## Summary

This PR ensures that if a list `x` is modified within a `for` loop, we
avoid flagging `list(x)` as unnecessary. Previously, we only detected
calls to exactly `.append`, and they couldn't be nested within other
statements.

Closes https://github.com/astral-sh/ruff/issues/9925.
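
For illustration, the kind of case the wider detection now covers (my own example):

```python
rows = [1, 2, 3]

for row in list(rows):  # not flagged: `rows` is mutated inside the loop, so the copy is intentional
    if row == 2:
        rows.append(4)

for row in list(rows):  # flagged (PERF101): `rows` is not modified, so the `list(...)` copy is unnecessary
    print(row)
```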
2024-02-12 12:17:55 -05:00
Charlie Marsh
e2785f3fb6 [flake8-pyi] Ignore 'unused' private type dicts in class scopes (#9952)
## Summary

If these are defined within class scopes, they're actually attributes of
the class, and can be accessed through the class itself.

(We preserve our existing behavior for `.pyi` files.)

Closes https://github.com/astral-sh/ruff/issues/9948.
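
Loosely, the kind of pattern now exempted in regular Python files (my own example; the exact rule conditions may differ):

```python
from typing import TypedDict


class Settings:
    class _Defaults(TypedDict):  # defined in a class scope: reachable as `Settings._Defaults`,
        verbose: bool            # so it is no longer reported as an unused private TypedDict
```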
2024-02-12 17:06:20 +00:00
dependabot[bot]
90f8e4baf4 Bump the actions group with 1 update (#9943) 2024-02-12 12:05:31 -05:00
Micha Reiser
8657a392ff Docstring formatting: Preserve tab indentation when using indent-style=tabs (#9915) 2024-02-12 16:09:13 +01:00
Micha Reiser
4946a1876f Stabilize quote-style preserve (#9922) 2024-02-12 09:30:07 +00:00
dependabot[bot]
6dc1b21917 Bump indicatif from 0.17.7 to 0.17.8 (#9942)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-12 10:25:47 +01:00
dependabot[bot]
2e1160e74c Bump thiserror from 1.0.56 to 1.0.57 (#9941)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-12 10:24:40 +01:00
dependabot[bot]
37ff436e4e Bump chrono from 0.4.33 to 0.4.34 (#9940)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-12 10:24:16 +01:00
Micha Reiser
341c2698a7 Run doctests as part of CI pipeline (#9939) 2024-02-12 10:18:58 +01:00
Owen Lamont
a50e2787df Fixed nextest install line in CONTRIBUTING.md (#9929)
## Summary

I noticed that the example line in CONTRIBUTING.md:

```shell
cargo install nextest
```

didn't appear to install the intended package, `cargo-nextest`.


![nextest](https://github.com/astral-sh/ruff/assets/12672027/7bbdd9c3-c35a-464a-b586-3e9f777f8373)

So I checked what it [should
be](https://nexte.st/book/installing-from-source.html) and replaced the
line with:

```shell
cargo install cargo-nextest --locked
```

## Test Plan

Just checked that the `cargo install` invocation appeared to give sane-looking results.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-02-11 15:22:17 +00:00
wzy
25868d0371 docs: add mdformat-ruff to integrations.md (#9924)
Can [mdformat-ruff](https://github.com/Freed-Wu/mdformat-ruff) be hosted
in <https://github.com/astral-sh> like other integrations of ruff? TIA!
2024-02-11 03:39:15 +00:00
Charlie Marsh
af2cba7c0a Migrate to nextest (#9921)
## Summary

We've had success with `nextest` in other projects, so let's migrate
Ruff.

The Linux tests look a little bit faster (from 2m32s down to 2m8s); the
Windows tests look a little bit slower, but not dramatically so.
2024-02-10 18:58:56 +00:00
Alex Waygood
8ec56277e9 Allow arbitrary configuration options to be overridden via the CLI (#9599)
Fixes #8368
Fixes https://github.com/astral-sh/ruff/issues/9186

## Summary

Arbitrary TOML strings can be provided via the command-line to override
configuration options in `pyproject.toml` or `ruff.toml`. As an example:
to run over typeshed and respect typeshed's `pyproject.toml`, but
override a specific isort setting and enable an additional pep8-naming
setting:

```
cargo run -- check ../typeshed --no-cache --config ../typeshed/pyproject.toml --config "lint.isort.combine-as-imports=false" --config "lint.extend-select=['N801']"
```

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-02-09 21:56:37 +00:00
Charlie Marsh
b21ba71ef4 Run cargo update (#9917)
Mostly removes dependencies.
2024-02-09 16:30:31 -05:00
Alex Waygood
d387d0ba82 RUF022, RUF023: Ensure closing parentheses for multiline sequences are always on their own line (#9793)
## Summary

Currently these rules apply the heuristic that if the original sequence
doesn't have a newline in between the final sequence item and the
closing parenthesis, the autofix won't add one for you. The feedback
from @ThiefMaster, however, was that this was producing slightly unusual
formatting -- things like this:

```py
__all__ = [
    "b", "c",
    "a", "d"]
```

were being autofixed to this:

```py
__all__ = [
    "a",
    "b",
    "c",
    "d"]
```

When, if it was _going_ to be exploded anyway, they'd prefer something
like this (with the closing parenthesis on its own line, and a trailing comma added):

```py
__all__ = [
    "a",
    "b",
    "c",
    "d",
]
```

I'm still pretty skeptical that we'll be able to please everybody here
with the formatting choices we make; _but_, on the other hand, this
_specific_ change is pretty easy to make.

## Test Plan

`cargo test`. I also ran the autofixes for RUF022 and RUF023 on CPython
to check how they looked; they looked fine to me.
2024-02-09 21:27:44 +00:00
Charlie Marsh
6f0e4ad332 Remove unnecessary string cloning from the parser (#9884)
Closes https://github.com/astral-sh/ruff/issues/9869.
2024-02-09 16:03:27 -05:00
trag1c
7ca515c0aa Corrected PTH203–PTH205 rule descriptions (#9914)
## Summary
Closes #9898.

## Test Plan
```sh
python scripts/generate_mkdocs.py && mkdocs serve -f mkdocs.public.yml
```
2024-02-09 15:47:07 -05:00
Micha Reiser
1ce07d65bd Use usize instead of TextSize for indent_len (#9903) 2024-02-09 20:41:36 +00:00
Micha Reiser
00ef01d035 Update pyproject-toml to 0.9 (#9916) 2024-02-09 20:38:34 +00:00
Charlie Marsh
52ebfc9718 Respect duplicates when rewriting type aliases (#9905)
## Summary

If a generic appears multiple times on the right-hand side, we should
only include it once on the left-hand side when rewriting.

Closes https://github.com/astral-sh/ruff/issues/9904.
2024-02-09 14:02:41 +00:00
Hoël Bagard
12a91f4e90 Fix E30X panics on blank lines with trailing white spaces (#9907) 2024-02-09 14:00:26 +00:00
Mikko Leppänen
b4f2882b72 [pydocstyle-D405] Allow using parameters as a sub-section header (#9894)
## Summary

This PR contains a fix for
[D405](https://docs.astral.sh/ruff/rules/capitalize-section-name/)
(capitalize-section-name).
The problem is that Ruff treats a sub-section header as a normal
section if it has the same name as a known section name, for instance when a
function/method has an argument named "parameters". This only applies when
using NumPy-style docstrings.

See: [ISSUE](https://github.com/astral-sh/ruff/issues/9806)

The following will not raise D405 after the fix:
```python  
def some_function(parameters: list[str]):
    """A function with a parameters parameter

    Parameters
    ----------

    parameters:
        A list of string parameters
    """
    ...
```


## Test Plan

```bash
cargo test
```

---------

Co-authored-by: Mikko Leppänen <mikko.leppanen@vaisala.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-02-08 21:54:32 -05:00
Charlie Marsh
49fe1b85f2 Reduce size of Expr from 80 to 64 bytes (#9900)
## Summary

This PR reduces the size of `Expr` from 80 to 64 bytes, by reducing the
sizes of...

- `ExprCall` from 72 to 56 bytes, by using boxed slices for `Arguments`.
- `ExprCompare` from 64 to 48 bytes, by using boxed slices for its
various vectors.

In testing, the parser gets a bit faster, and the linter benchmarks
improve quite a bit.
2024-02-09 02:53:13 +00:00
Micha Reiser
bd8123c0d8 Fix clippy unused variable warning (#9902) 2024-02-08 22:13:31 +00:00
Micha Reiser
49c5e715f9 Filter out test rules in RuleSelector JSON schema (#9901) 2024-02-08 21:06:51 +00:00
Micha Reiser
fe7d965334 Reduce Result<Tok, LexicalError> size by using Box<str> instead of String (#9885) 2024-02-08 20:36:22 +00:00
Hoël Bagard
9027169125 [pycodestyle] Add blank line(s) rules (E301, E302, E303, E304, E305, E306) (#9266)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-02-08 18:35:08 +00:00
Micha Reiser
688177ff6a Use Rust 1.76 (#9897) 2024-02-08 18:20:08 +00:00
trag1c
eb2784c495 Corrected Path symlink method name (PTH114) (#9896)
## Summary
Corrects mentions of `Path.is_link` to `Path.is_symlink` (the former
doesn't exist).

## Test Plan
```sh
python scripts/generate_mkdocs.py && mkdocs serve -f mkdocs.public.yml
```
2024-02-08 13:09:28 -05:00
Charlie Marsh
6fffde72e7 Use memchr for string lexing (#9888)
## Summary

On `main`, string lexing consists of walking through the string
character-by-character to search for the closing quote (with some
nuance: we also need to skip escaped characters, and error if we see
newlines in non-triple-quoted strings). This PR rewrites `lex_string` to
instead use `memchr` to search for the closing quote, which is
significantly faster. On my machine, at least, the `globals.py`
benchmark (which contains a lot of docstrings) gets 40% faster...

```text
lexer/numpy/globals.py  time:   [3.6410 µs 3.6496 µs 3.6585 µs]
                        thrpt:  [806.53 MiB/s 808.49 MiB/s 810.41 MiB/s]
                 change:
                        time:   [-40.413% -40.185% -39.984%] (p = 0.00 < 0.05)
                        thrpt:  [+66.623% +67.181% +67.822%]
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild
lexer/unicode/pypinyin.py
                        time:   [12.422 µs 12.445 µs 12.467 µs]
                        thrpt:  [337.03 MiB/s 337.65 MiB/s 338.27 MiB/s]
                 change:
                        time:   [-9.4213% -9.1930% -8.9586%] (p = 0.00 < 0.05)
                        thrpt:  [+9.8401% +10.124% +10.401%]
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe
lexer/pydantic/types.py time:   [107.45 µs 107.50 µs 107.56 µs]
                        thrpt:  [237.11 MiB/s 237.24 MiB/s 237.35 MiB/s]
                 change:
                        time:   [-4.0108% -3.7005% -3.3787%] (p = 0.00 < 0.05)
                        thrpt:  [+3.4968% +3.8427% +4.1784%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  2 (2.00%) high mild
  5 (5.00%) high severe
lexer/numpy/ctypeslib.py
                        time:   [46.123 µs 46.165 µs 46.208 µs]
                        thrpt:  [360.36 MiB/s 360.69 MiB/s 361.01 MiB/s]
                 change:
                        time:   [-19.313% -18.996% -18.710%] (p = 0.00 < 0.05)
                        thrpt:  [+23.016% +23.451% +23.935%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  3 (3.00%) low mild
  1 (1.00%) high mild
  4 (4.00%) high severe
lexer/large/dataset.py  time:   [231.07 µs 231.19 µs 231.33 µs]
                        thrpt:  [175.87 MiB/s 175.97 MiB/s 176.06 MiB/s]
                 change:
                        time:   [-2.0437% -1.7663% -1.4922%] (p = 0.00 < 0.05)
                        thrpt:  [+1.5148% +1.7981% +2.0864%]
                        Performance has improved.
Found 10 outliers among 100 measurements (10.00%)
  5 (5.00%) high mild
  5 (5.00%) high severe
```
2024-02-08 17:23:06 +00:00
Jane Lewis
ad313b9089 RUF027 no longer has false negatives with string literals inside of method calls (#9865)
Fixes #9857.

## Summary

Statements like `logging.info("Today it is: {day}")` will no longer be
ignored by RUF027. As before, statements like `"Today it is:
{day}".format(day="Tuesday")` will continue to be ignored.

## Test Plan

The snapshot tests were expanded to include new cases. Additionally, the
snapshot tests have been split in two to separate positive cases from
negative cases.
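
In code, the behavior described above (my own example):

```python
import logging

day = "Tuesday"

logging.info("Today it is: {day}")               # now flagged: likely a missing `f` prefix
greeting = "Today it is: {day}".format(day=day)  # still ignored: `.format(...)` is called on the literal
```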
2024-02-08 10:00:20 -05:00
Charlie Marsh
f76a3e8502 Detect mark_safe usages in decorators (#9887)
## Summary

Django's `mark_safe` can also be used as a decorator, so we should
detect usages of `@mark_safe` for the purpose of the relevant Bandit
rule.

Closes https://github.com/astral-sh/ruff/issues/9780.
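
A minimal sketch of the decorator form now detected (assumes Django is installed; the function name is mine):

```python
from django.utils.safestring import mark_safe


@mark_safe                      # now detected, just like a direct `mark_safe(...)` call
def render_badge(label: str) -> str:
    return f"<span>{label}</span>"
```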
2024-02-07 23:10:46 -05:00
Tom Kuson
ed07fa08bd Fix list formatting in documentation (#9886)
## Summary

Adds a blank line to render the list correctly.

## Test Plan

Ocular inspection
2024-02-07 20:01:21 -05:00
Charlie Marsh
45937426c7 Fix blank-line docstring rules for module-level docstrings (#9878)
## Summary

Given:

```python
"""Make a summary line.

Note:
----
  Per the code comment the next two lines are blank. "// The first blank line is the line containing the closing
      triple quotes, so we need at least two."

"""
```

It turns out we excluded the line ending in `"""`, because it's empty
(unlike for functions, where it consists of the indent). This PR changes
the `following_lines` iterator to always include the trailing newline,
which gives us correct and consistent handling between function and
module-level docstrings.

Closes https://github.com/astral-sh/ruff/issues/9877.
2024-02-07 16:48:28 -05:00
Charlie Marsh
533dcfb114 Add a note regarding ignore-without-code (#9879)
Closes https://github.com/astral-sh/ruff/issues/9863.
2024-02-07 21:20:18 +00:00
Hugo van Kemenade
bc023f47a1 Fix typo in option name: output_format -> output-format (#9874) 2024-02-07 16:17:58 +00:00
Jack McIvor
aa38307415 Add more NPY002 violations (#9862) 2024-02-07 09:54:11 -05:00
Charlie Marsh
e9ddd4819a Make show-settings filters directory-agnostic (#9866)
Closes https://github.com/astral-sh/ruff/issues/9864.
2024-02-07 03:20:27 +00:00
Micha Reiser
fdb5eefb33 Improve trailing comma rule performance (#9867) 2024-02-06 23:04:36 +00:00
Charlie Marsh
daae28efc7 Respect async with in timeout-without-await (#9859)
Closes https://github.com/astral-sh/ruff/issues/9855.
2024-02-06 12:04:24 -05:00
Charlie Marsh
75553ab1c0 Remove ecosystem failures (#9854)
## Summary

These are kinda disruptive, I'd prefer to TODO unless someone is
interested in solving them ASAP.
2024-02-06 09:45:13 -05:00
Charlie Marsh
c34908f5ad Use memchr for tab-indentation detection (#9853)
## Summary

The benchmarks show a pretty consistent 1% speedup here for all-rules,
though not enough to trigger our threshold of course:

![Screenshot 2024-02-05 at 11 55
59 PM](https://github.com/astral-sh/ruff/assets/1309177/317dca3f-f25f-46f5-8ea8-894a1747d006)
2024-02-06 09:44:56 -05:00
Charlie Marsh
a662c2447c Ignore builtins when detecting missing f-strings (#9849)
## Summary

Reported on Discord: if the name maps to a builtin, it's not bound
locally, so it is very unlikely to be intended as an f-string expression.
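
In code, roughly (my own example):

```python
greeting = "world"

message = "hello {greeting}"  # candidate for the missing-f-string check: `greeting` is bound locally
template = "hello {print}"    # ignored: `print` resolves to a builtin, so an f-string is unlikely to be intended
```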
2024-02-05 23:49:56 -05:00
Seo Sanghyeon
df7fb95cbc Index multiline f-strings (#9837)
Fix #9777.
2024-02-05 21:25:33 -05:00
Adrian
83195a6030 ruff-ecosystem: Add indico/indico repo (#9850)
It's a pretty big codebase using lots of different stuff, so a good
candidate for finding obscure problems.

I didn't look more closely at which options are used (I have the feeling
`--select ALL` is not implied, since I see you adding it via
`check_options` for certain entries but not for others), and the repo itself
has a pretty large ruff.toml. But assuming the ecosystem check just cares
about differences between the base and head of a PR, `ALL` most likely makes
sense.
2024-02-06 00:37:58 +00:00
Daniël van Noord
d31d09d7cd Add `--preview` to instruction for running newly added tests (#9846)
## Summary

This surprised me while working on adding a test. I thought about adding
an additional `note`, but how often is this incorrect? In general,
people reading the contributing guidelines probably want to enable this
flag and those who don't will know enough about the testing setup to
have their own commands/aliases.

## Test Plan

Ran CI on a local fork and got all green.
2024-02-05 19:33:22 -05:00
Tyler C Laprade, CFA
0f436b71f3 Typo in 0.2.1 changelog (#9847)
`refurn` -> `refurb`
2024-02-05 17:51:27 -05:00
Eero Vaher
cd5bcd815d Mention a related setting in C408 description (#9839)
#2977 added the `allow-dict-calls-with-keyword-arguments` configuration
option for the `unnecessary-collection-call (C408)` rule, but it did not
update the rule description.
2024-02-06 03:57:53 +05:30
Charlie Marsh
0ccca4083a Bump version to v0.2.1 (#9843) 2024-02-05 15:31:05 -05:00
Charlie Marsh
041ce1e166 Respect generic Protocol in ellipsis removal (#9841)
Closes https://github.com/astral-sh/ruff/issues/9840.
2024-02-05 19:36:16 +00:00
Dhruv Manilawala
36b752876e Implement AnyNode/AnyNodeRef for FStringFormatSpec (#9836)
## Summary

This PR adds the `AnyNode` and `AnyNodeRef` implementation for
`FStringFormatSpec` node which will be required in the f-string
formatting.

The main usage for this is so that we can pass the node directly to
`suppressed_node` in case a debug expression is used, so that it is formatted
as verbatim text.
2024-02-05 19:23:43 +00:00
Micha Reiser
b3dc565473 Add --range option to ruff format (#9733)
Co-authored-by: T-256 <132141463+T-256@users.noreply.github.com>
2024-02-05 19:21:45 +00:00
Thomas M Kehrenberg
e708c08b64 Fix default for max-positional-args (#9838)

## Summary
`max-positional-args` defaults to `max-args` if it's not specified, and
the default for `max-args` is 5, so saying that the default is 3 is
definitely wrong. Ideally, we wouldn't specify a default at all for this
config option, but I don't think that's possible?

## Test Plan

Not sure.
2024-02-05 16:58:14 +00:00
Charlie Marsh
73902323d5 Revert "Use publicly available Apple Silicon runners (#9726)" (#9834)
## Summary

Sadly, the Apple Silicon runners use macOS 14 and produce binaries that
segfault when run on macOS 11 (at least), and possibly on macOS 12
and/or macOS 13.

macOS 11 is EOL, but it doesn't seem like a good tradeoff to speed up
our release builds at the expense of user support and compatibility.

This reverts commit f0066e1b89.

Closes https://github.com/astral-sh/ruff/issues/9823.
2024-02-05 11:24:51 -05:00
Charlie Marsh
9781563ef6 Add fast-path for comment detection (#9808)
## Summary

When we fall through to parsing, the comment-detection rule is a
significant portion of lint time. This PR adds an additional fast
heuristic whereby we abort if a comment contains two consecutive name
tokens (via the zero-allocation lexer). For the `ctypeslib.py`, which
has a few cases that are now caught by this, it's a 2.5x speedup for the
rule (and a 20% speedup for token-based rules).
2024-02-05 11:00:18 -05:00
Zanie Blue
84aea7f0c8 Drop __get__ and __set__ from unnecessary-dunder-call (#9791)
These are for descriptors, which affect the behavior of the object _as a
property_; I do not think they should be called directly, but there is no
alternative when working with the descriptor object directly.

Closes https://github.com/astral-sh/ruff/issues/9789
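
A small, self-contained example of the descriptor situation described above (the names are mine):

```python
class UpperCase:
    def __get__(self, obj, objtype=None):
        return obj.value.upper()


class Widget:
    name = UpperCase()

    def __init__(self, value: str) -> None:
        self.value = value


w = Widget("hi")
# When working with the descriptor object itself, there is no non-dunder way
# to invoke the protocol, so this `__get__` call is no longer reported.
print(Widget.__dict__["name"].__get__(w, Widget))  # prints "HI"
```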
2024-02-05 10:54:29 -05:00
Shaygan Hooshyari
b47f85eb69 Preview Style: Format module level docstring (#9725)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-02-05 15:03:34 +00:00
Micha Reiser
80fc02e7d5 Don't trim last empty line in docstrings (#9813) 2024-02-05 13:29:24 +00:00
dependabot[bot]
55d0e1148c Bump memchr from 2.6.4 to 2.7.1 (#9827)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-05 13:24:23 +00:00
dependabot[bot]
1de945e3eb Bump is-macro from 0.3.4 to 0.3.5 (#9829) 2024-02-05 13:11:15 +00:00
dependabot[bot]
e277ba20da Bump pyproject-toml from 0.8.1 to 0.8.2 (#9826) 2024-02-05 14:05:52 +01:00
dependabot[bot]
2e836a4cbe Bump toml from 0.8.8 to 0.8.9 (#9828) 2024-02-05 14:05:27 +01:00
dependabot[bot]
57d6cdb8d3 Bump itertools from 0.12.0 to 0.12.1 (#9830) 2024-02-05 14:03:06 +01:00
Charlie Marsh
602f8b8250 Remove CST-based fixer for C408 (#9822)
## Summary

We have to keep the fixer for a specific case: `dict` calls that include
keyword-argument members.
2024-02-04 22:26:51 -05:00
Charlie Marsh
a6bc4b2e48 Remove CST-based fixers for C405 and C409 (#9821) 2024-02-05 02:17:34 +00:00
Charlie Marsh
c5fa0ccffb Remove CST-based fixers for C400, C401, C410, and C418 (#9819) 2024-02-04 21:00:11 -05:00
Charlie Marsh
dd77d29d0e Remove LibCST-based fixer for C403 (#9818)
## Summary

Experimenting with rewriting one of the comprehension fixes _without_
LibCST.
2024-02-04 20:08:19 -05:00
Charlie Marsh
ad0121660e Run dunder method rule on methods directly (#9815)
This stood out in the flamegraph and I realized it requires us to
traverse over all statements in the class (unnecessarily).
2024-02-04 14:24:57 -05:00
Charlie Marsh
5c99967c4d Short-circuit typing matches based on imports (#9800) 2024-02-04 14:06:44 -05:00
Charlie Marsh
c53aae0b6f Add our own ignored-names abstractions (#9802)
## Summary

These run over nearly every identifier. It's rare to override them, so
when not provided, we can just use a match against the hardcoded default
set.
2024-02-03 09:48:07 -05:00
Charlie Marsh
2352de2277 Slight speed-up for lowercase and uppercase identifier checks (#9798)
It turns out that for ASCII identifiers, this is nearly 2x faster:

```
Parser/before     time:   [15.388 ns 15.395 ns 15.406 ns]
Parser/after      time:   [8.3786 ns 8.5821 ns 8.7715 ns]
```
2024-02-03 14:40:41 +00:00
Jane Lewis
e0a6034cbb Implement RUF027: Missing F-String Syntax lint (#9728)

## Summary

Fixes #8151

This PR implements a new rule, `RUF027`.

## What it does
Checks for strings that contain f-string syntax but are not f-strings.

### Why is this bad?
An f-string missing an `f` at the beginning won't format anything, and will
instead treat the interpolation syntax as literal text.

### Example

```python
name = "Sarah"
dayofweek = "Tuesday"
msg = "Hello {name}! It is {dayofweek} today!"
```

It should instead be:
```python
name = "Sarah"
dayofweek = "Tuesday"
msg = f"Hello {name}! It is {dayofweek} today!"
```

## Heuristics
Since there are many possible string literals which contain syntax
similar to f-strings yet are not intended to be f-strings,
this lint will disqualify any literal that satisfies any of the
following conditions:
1. The string literal is a standalone expression. For example, a
docstring.
2. The literal is part of a function call with keyword arguments that
match at least one variable (for example: `format("Message: {value}",
value = "Hello World")`)
3. The literal (or a parent expression of the literal) has a direct
method call on it (for example: `"{value}".format(...)`)
4. The string has no `{...}` expression sections, or uses invalid
f-string syntax.
5. The string references variables that are not in scope, or it doesn't
capture variables at all.
6. Any format specifiers in the potential f-string are invalid.

## Test Plan

I created a new test file, `RUF027.py`, which is both an example of what
the lint should catch and a way to test edge cases that may trigger
false positives.
2024-02-03 00:21:03 +00:00
Emil Telstad
25d93053da Update max-pos-args example to max-positional-args. (#9797) 2024-02-02 20:29:13 +00:00
Charlie Marsh
ee5b07d4ca Skip empty lines when determining base indentation (#9795)
## Summary

It turns out we saw a panic in cases when dedenting blocks like the `def
wrapper` here:

```python
def instrument_url(f: UrlFuncT) -> UrlFuncT:
    # TODO: Type this with ParamSpec to preserve the function signature.
    if not INSTRUMENTING:  # nocoverage -- option is always enabled; should we remove?
        return f
    else:

        def wrapper(
            self: "ZulipTestCase", url: str, info: object = {}, **kwargs: Union[bool, str]
        ) -> HttpResponseBase:
```

This happened because we relied on the first line to determine the
indentation, instead of the first non-empty line.

## Test Plan

`cargo test`
2024-02-02 19:42:47 +00:00
Charlie Marsh
e50603caf6 Track top-level module imports in the semantic model (#9775)
## Summary

This is a simple idea to avoid unnecessary work in the linter,
especially for rules that run on all name and/or all attribute nodes.
Imagine a rule like the NumPy deprecation check. If the user never
imported `numpy`, we should be able to skip that rule entirely --
whereas today, we do a `resolve_call_path` check on _every_ name in the
file. It turns out that there's basically a finite set of modules that
we care about, so we now track imports on those modules as explicit
flags on the semantic model. In rules that can _only_ ever trigger if
those modules were imported, we add a dedicated and extremely cheap
check to the top of the rule.

We could consider generalizing this to all modules, but I would expect
that not to be much faster than `resolve_call_path`, which is just a
hash map lookup on `TextSize` anyway.

It would also be nice to make this declarative, such that rules could
declare the modules they care about and the analyzers could call the rules
as appropriate. But I don't think such a design should block merging
this.
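
A loose, Python-flavored sketch of the idea (Ruff's actual implementation is in Rust and its APIs differ):

```python
class SemanticModel:
    """Tracks, as cheap boolean flags, whether modules of interest were imported."""

    def __init__(self) -> None:
        self.seen_numpy = False

    def record_import(self, module: str) -> None:
        if module == "numpy" or module.startswith("numpy."):
            self.seen_numpy = True


def check_numpy_deprecation(model: SemanticModel, name: str) -> None:
    if not model.seen_numpy:
        # Dedicated, extremely cheap check: this rule can never fire if `numpy`
        # was never imported, so skip the expensive name resolution entirely.
        return
    ...  # fall back to full, resolve_call_path-style resolution
```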
2024-02-02 14:37:20 -05:00
Charlie Marsh
c3ca34543f Skip LibCST parsing for standard dedent adjustments (#9769)
## Summary

Often, when fixing, we need to dedent a block of code (e.g., if we
remove an `if` and dedent its body). Today, we use LibCST to parse and
adjust the indentation, which is really expensive -- but this is only
really necessary if the block contains a multiline string, since naively
adjusting the indentation for such a string can change the whitespace
_within_ the string.

This PR uses a simple dedent implementation for cases in which the block
doesn't intersect with a multi-line string (or an f-string, since we
don't support tracking multi-line strings for f-strings right now).

We could improve this even further by using the ranges to guide the
dedent function, such that we don't apply the dedent if the line starts
within a multiline string. But that would also need to take f-strings
into account, which is a little tricky.

## Test Plan

`cargo test`
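
A tiny illustration (not Ruff's implementation) of why the simple path is only safe when no multiline string is involved:

```python
# Body of an `if` that a fix wants to dedent by one level (four spaces):
block = '    x = """a\n    b"""\n    y = 1\n'

naive = "\n".join(line.removeprefix("    ") for line in block.splitlines())
print(naive)
# The `y = 1` line dedents correctly, but the second line of the triple-quoted
# string also lost four spaces, silently changing the string's value.
```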
2024-02-02 18:13:46 +00:00
Micha Reiser
4f7fb566f0 Range formatting: Fix invalid syntax after parenthesizing expression (#9751) 2024-02-02 17:56:25 +01:00
Jordan Danford
50bfbcf568 README.md: add missing "your" in support section, add alt text to Astral logo (#9787) 2024-02-02 09:09:19 -06:00
Charlie Marsh
ea1c089652 Use AhoCorasick to speed up quote match (#9773)

## Summary

When I was looking at the v0.2.0 release, this method showed up in a
CodSpeed regression (we were calling it more), so I decided to quickly
look at speeding it up. @BurntSushi suggested using Aho-Corasick, and it
looks like it's about 7 or 8x faster:

```text
Parser/AhoCorasick      time:   [8.5646 ns 8.5914 ns 8.6191 ns]
Parser/Iterator         time:   [64.992 ns 65.124 ns 65.271 ns]
```

## Test Plan

`cargo test`
2024-02-02 09:57:39 -05:00
Mikael Arguedas
b947dde8ad [flake8-bugbear][B006] remove outdated comment (#9776)
I noticed that the comment doesn't match the behavior:
- the `zip` function is not used anymore
- parameters are not scanned in reverse

## Summary

## Test Plan

No need

Signed-off-by: Mikael Arguedas <mikael.arguedas@gmail.com>
2024-02-02 09:32:46 -05:00
trag1c
d259cd0d32 Made hyperlink on homepage correctly redirect to GitHub (#9784)
## Summary

Closes #9783. Feels hacky because of the different key so there *might*
be a nicer way to do this 😄

## Test Plan
Tested locally with `mkdocs serve`.
2024-02-02 09:32:23 -05:00
Thomas Grainger
af4db39205 Add pytest to who's using ruff (#9782) 2024-02-02 11:26:30 +00:00
Alex Gaynor
467c091382 Fixed example code in weak_cryptographic_key.rs (#9774)
The proper way to use these APIs is to instantiate the curve classes.
2024-02-01 22:42:31 -05:00
Charlie Marsh
92d99a72d9 Fix references to deprecated ANN rules in changelog (#9771)
We deprecated `ANN101` and `ANN102`, but the changelog says `ANN001` and
`ANN002`. (I've confirmed that the code reflects the correct
deprecation; it's just a documentation error.)
2024-02-02 02:27:30 +00:00
Charlie Marsh
66d2c1e1c4 Move adjust_indentation to a shared home (#9768)
Now that this method is used in multiple linters, it should be moved out
of the `pyupgrade` module.
2024-02-02 00:53:59 +00:00
Charlie Marsh
ded8c7629f Invert order of checks in zero-sleep-call (#9766)
The other conditions are cheaper and should eliminate the vast majority
of these checks.
2024-02-01 23:30:03 +00:00
Zanie Blue
1fadefa67b Bump version to 0.2.0 (#9762)
Follows https://github.com/astral-sh/ruff/pull/9680
2024-02-01 17:10:33 -06:00
Charlie Marsh
06ad687efd Deduplicate deprecation warnings for v0.2.0 release (#9764)
## Summary

Adds an additional warning macro (we should consolidate these later)
that shows a warning once based on the content of the warning itself.
This is less efficient than `warn_user_once!` and `warn_user_by_id!`,
but this is so expensive that it doesn't matter at all.

Applies this macro to the various warnings for the v0.2.0 release, and
also includes the filename in said warnings, so the FastAPI case is now:

```text
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in /Users/crmarsh/workspace/fastapi/pyproject.toml:
  - 'ignore' -> 'lint.ignore'
  - 'select' -> 'lint.select'
  - 'isort' -> 'lint.isort'
  - 'pyupgrade' -> 'lint.pyupgrade'
  - 'per-file-ignores' -> 'lint.per-file-ignores'
```

---------

Co-authored-by: Zanie <contact@zanie.dev>
2024-02-01 17:10:24 -06:00
Jane Lewis
148b64ead3 Fix issue where output format mode would not change to full if preview mode was set in configuration file (#9763)
## Summary

This was causing build failures for #9599. We were referencing the
command line overrides instead of the merged configuration data, hence
the issue.

## Test Plan

A snapshot test was added.
2024-02-01 16:07:21 -06:00
Charlie Marsh
99eddbd2a0 Remove stale preview documentation from stabilized rule behaviors (#9759)
These behaviors were stabilized, so the docs referring to them as
preview-only are incorrect.
2024-02-01 13:35:02 -06:00
Zanie Blue
836d2eaa01 Restore RUF011 documentation (#9758)
For consistency with other redirected rules as in
https://github.com/astral-sh/ruff/pull/9755

Follow-up to #9428
2024-02-01 13:35:02 -06:00
Zanie Blue
994514d686 Redirect PHG001 to S307 and PGH002 to G010 (#9756)
Follow-up to #9754 and #9689. Alternative to #9714.
Replaces #7506 and #7507
Same ideas as #9755
Part of #8931
2024-02-01 13:35:02 -06:00
Zanie Blue
a578414246 Redirect TRY200 to B904 (#9755)
Follow-up to #9754 and #9689. Alternative to #9714.

Marks `TRY200` as removed and redirects to `B904` instead of marking as
deprecated and suggesting `B904` instead.
2024-02-01 13:35:02 -06:00
Zanie Blue
0d752e56cd Add tests for redirected rules (#9754)
Extends https://github.com/astral-sh/ruff/pull/9752 adding internal test
rules for redirection

Fixes a bug where we did not see warnings for exact codes that are
redirected (just prefixes)
2024-02-01 13:35:02 -06:00
Zanie Blue
46c0937bfa Use fake rules for testing deprecation and removal infrastructure (#9752)
Updates #9689 and #9691 to use rule testing infrastructure from #9747
2024-02-01 13:35:02 -06:00
Zanie
e5008ca714 Fix bug where selection included deprecated rules during preview (#9746)
Cherry-picked from https://github.com/astral-sh/ruff/pull/9714 which is
being abandoned for now because we need to invest more into our
redirection infrastructure before it is feasible.

Fixes a bug in the implementation where we improperly included
deprecated rules in `RuleSelector.rules()` when preview is on. Includes
some clean-up of error messages and the implementation.
# Conflicts:
#	crates/ruff/tests/integration_test.rs
2024-02-01 13:35:02 -06:00
Charlie Marsh
85a7edcc70 Recategorize runtime-string-union to TCH010 (#9721)
## Summary

This rule was added to `flake8-type-checking` as `TC010`. We're about to
stabilize it, so we might as well use the correct code.

See: https://github.com/astral-sh/ruff/issues/9573.
2024-02-01 13:35:02 -06:00
Charlie Marsh
7db3aea1c6 Stabilize some rules for v0.2.0 release (#9712)
## Summary

This PR stabilizes the preview rules from:

- `flake8-trio` (6 rules)
- `flake8-quotes` (1 rule)
- `pyupgrade` (1 rule)
- `flake8-pyi` (1 rule)
- `flake8-simplify` (2 rules)
- `flake8-bandit` (9 rules; 14 remain in preview)
- `flake8-type-checking` (1 rule)
- `numpy` (1 rule)
- `ruff` (4 rules, one elevated from nursery; 6 remain in preview as
they were added within the last 30 days)
- `flake8-logging` (4 rules)

I see these are largely uncontroversial.
2024-02-01 13:35:02 -06:00
Zanie Blue
e0bc08a758 Add rule removal infrastructure (#9691)
Similar to https://github.com/astral-sh/ruff/pull/9689 — retains removed
rules for better error messages and documentation but removed rules
_cannot_ be used in any context.

Removes PLR1706, which serves as a useful test case and is something we want
to accomplish in #9680 anyway. The rule was in preview, so we do not need to
deprecate it first.

Closes https://github.com/astral-sh/ruff/issues/9007

## Test plan

<img width="1110" alt="Rules table"
src="https://github.com/astral-sh/ruff/assets/2586601/ac9fa682-623c-44aa-8e51-d8ab0d308355">

<img width="1110" alt="Rule page"
src="https://github.com/astral-sh/ruff/assets/2586601/05850b2d-7ca5-49bb-8df8-bb931bab25cd">
2024-02-01 13:35:02 -06:00
Zanie Blue
a0ef087e73 Add rule deprecation infrastructure (#9689)
Adds a new `Deprecated` rule group in addition to `Stable` and
`Preview`.

Deprecated rules:
- Warn on explicit selection without preview
- Error on explicit selection with preview
- Are excluded when selected by prefix with preview

Deprecates `TRY200`, `ANN101`, and `ANN102` as a proof of concept. We
can consider deprecating them separately.
2024-02-01 13:35:02 -06:00
Zanie Blue
c86e14d1d4 Remove the NURSERY selector from the json schema (#9695) 2024-02-01 13:35:02 -06:00
Zanie
e37b3b0742 Always request the concise output format during ecosystem checks (#9708)
Fixes a regression in the ecosystem checks from
https://github.com/astral-sh/ruff/pull/9687 which was causing them to
run for multiple hours due to the size of the output.

We need the concise format for comparisons.

We should probably update the ecosystem checks to actually diff the full
output in the future because that'd be nice.
# Conflicts:
#	python/ruff-ecosystem/ruff_ecosystem/projects.py
2024-02-01 13:35:02 -06:00
Zanie
a0f32dfa55 Error if nursery rules are selected without preview (#9683)
Extends #9682 to error if the nursery selector is used or nursery rules
are selected without preview.

Part of #7992 — we will remove this in 0.3.0 instead so we can provide
nice errors in 0.2.0.
# Conflicts:
#	crates/ruff/tests/integration_test.rs
#	crates/ruff_workspace/src/configuration.rs
2024-02-01 13:35:02 -06:00
Zanie
6aa643346f Replace --show-source and --no-show-source with --output_format=<full|concise> (#9687)
Fixes #7350

## Summary

* `--show-source` and `--no-show-source` are now deprecated.
* `output-format` supports two new variants, `full` and `concise`.
`text` is now a deprecated variant, and any use of it is treated as the
default serialization format.
* `--output-format` now defaults to `concise`
* In preview mode, `--output-format` defaults to `full`
* `--show-source` will still set `--output-format` to `full` if the
output format is not otherwise specified.
* likewise, `--no-show-source` can override an output format that was
set in a file-based configuration, though it will also be overridden by
`--output-format`

## Test Plan

A lot of tests were updated to use `--output-format=full`. Additional
tests were added to ensure the correct deprecation warnings appeared,
and that deprecated options behaved as intended.
# Conflicts:
#	crates/ruff/tests/integration_test.rs
2024-02-01 13:35:02 -06:00
Charlie Marsh
ae13d8fddf Remove preview gating for flake8-simplify rules (#9686)
## Summary

Un-gates detecting `dict.get` rewrites in `if` expressions (rather than
just `if` statements).
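
Roughly, the expression-level case that is now handled as well (my own example):

```python
data = {"name": "ruff"}

# Previously only the statement form (`if "name" in data: ...`) was detected;
# the expression form below is now also rewritten to `data.get("name", "unknown")`.
name = data["name"] if "name" in data else "unknown"
```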
2024-02-01 13:35:02 -06:00
Charlie Marsh
2d6fd0fc91 Remove preview gating for flake8-pie rules (#9684)
## Summary

Both of the preview behaviors gated here seem like improvements, so
let's make them stable in v0.2.0
2024-02-01 13:35:02 -06:00
Charlie Marsh
33fe988cfc Remove preview gating for pycodestyle rules (#9685)
## Summary

Un-gates the behavior to allow `sys.path` modifications between imports,
which removed a bunch of false positives in the ecosystem CI at the
time.
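
For illustration, the pattern that the stabilized behavior tolerates (my own example):

```python
import sys

sys.path.insert(0, "./vendored")  # modifying `sys.path` between imports is now allowed

import os  # this later import is no longer flagged as "not at top of file"
```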
2024-02-01 13:35:02 -06:00
Zanie
0f674d1d90 Remove preview gating for newly-added stable fixes (#9681)
## Summary

At present, our versioning policy forbids the addition of safe fixes to
stable rules outside of a minor release, so we've accumulated a bunch of
new fixes that are behind `--preview`, and can be ungated in v0.2.0.

To find these, I just grepped for `preview.is_enabled()` and identified
all such cases. I then audited the `preview_rules` test fixtures and
removed any tests that existed only to test this autofix behavior.
# Conflicts:
#	crates/ruff_linter/src/rules/flake8_simplify/snapshots/ruff_linter__rules__flake8_simplify__tests__SIM114_SIM114.py.snap
#	crates/ruff_linter/src/rules/flake8_simplify/snapshots/ruff_linter__rules__flake8_simplify__tests__preview__SIM114_SIM114.py.snap
2024-02-01 13:35:02 -06:00
Zanie
7962bca40a Recategorize static-key-dict-comprehension from RUF011 to B035 (#9428)
## Summary

This rule was added to flake8-bugbear. In general, we tend to prefer
redirecting to prominent plugins when our own rules are reimplemented
(since more projects have `B` activated than `RUF`).

## Test Plan

`cargo test`
# Conflicts:
#	crates/ruff_linter/src/rules/ruff/rules/mod.rs
2024-02-01 13:35:02 -06:00
Charlie Marsh
b81fc5ed11 [flake8-pyi] Mark unaliased-collections-abc-set-import fix as safe (#9679)
## Summary

Prompted by
https://github.com/astral-sh/ruff/issues/8482#issuecomment-1859299411.
The rename is only unsafe when the symbol is exported, so we can narrow
the conditions.
2024-02-01 13:35:02 -06:00
Micha Reiser
c2bf725086 Add deprecation message for top-level lint settings (#9582) 2024-02-01 13:35:02 -06:00
Micha Reiser
c3b33e9c4d Promote lint. settings over top-level settings (#9476) 2024-02-01 13:35:02 -06:00
Charlie Marsh
6996ff7b1e Use consistent method to detect preview enablement (#9760)
I missed these two in the v0.2.0 stabilizations because they use a match
instead of the dedicated method.
2024-02-01 18:58:05 +00:00
507 changed files with 16304 additions and 4798 deletions

.config/nextest.toml (new file, 8 lines)

@@ -0,0 +1,8 @@
[profile.ci]
# Print out output for failing tests as soon as they fail, and also at the end
# of the run (for easy scrollability).
failure-output = "immediate-final"
# Do not cancel the test run on the first failure.
fail-fast = false
status-level = "skip"

@@ -111,16 +111,23 @@ jobs:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
run: cargo insta test --all --exclude ruff_dev --all-features --unreferenced reject
- name: "Run dev tests"
# e.g. generating the schema — these should not run with all features enabled
run: cargo insta test -p ruff_dev --unreferenced reject
shell: bash
env:
NEXTEST_PROFILE: "ci"
run: cargo insta test --all-features --unreferenced reject --test-runner nextest
# Check for broken links in the documentation.
- run: cargo doc --all --no-deps
env:
@@ -141,15 +148,16 @@ jobs:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo insta"
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
tool: cargo-nextest
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
# We can't reject unreferenced snapshots on windows because flake8_executable can't run on windows
run: cargo insta test --all --exclude ruff_dev --all-features
run: |
cargo nextest run --all-features --profile ci
cargo test --all-features --doc
cargo-test-wasm:
name: "cargo test (wasm)"
@@ -410,7 +418,7 @@ jobs:
- uses: actions/setup-python@v5
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@v0.8.0
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"

@@ -23,7 +23,7 @@ jobs:
- uses: actions/setup-python@v5
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@v0.8.0
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"

@@ -58,7 +58,7 @@ jobs:
path: dist
macos-x86_64:
runs-on: macos-14
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
with:
@@ -74,6 +74,11 @@ jobs:
with:
target: x86_64
args: --release --locked --out dist
- name: "Test wheel - x86_64"
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -93,7 +98,7 @@ jobs:
*.sha256
macos-universal:
runs-on: macos-14
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
with:

@@ -1,5 +1,215 @@
# Changelog
## 0.2.1
This release includes support for range formatting (i.e., the ability to format specific lines
within a source file).
### Preview features
- \[`ruff`\] Implement `missing-f-string-syntax` (`RUF027`) ([#9728](https://github.com/astral-sh/ruff/pull/9728))
- Format module-level docstrings ([#9725](https://github.com/astral-sh/ruff/pull/9725))
### Formatter
- Add `--range` option to `ruff format` ([#9733](https://github.com/astral-sh/ruff/pull/9733))
- Don't trim last empty line in docstrings ([#9813](https://github.com/astral-sh/ruff/pull/9813))
### Bug fixes
- Skip empty lines when determining base indentation ([#9795](https://github.com/astral-sh/ruff/pull/9795))
- Drop `__get__` and `__set__` from `unnecessary-dunder-call` ([#9791](https://github.com/astral-sh/ruff/pull/9791))
- Respect generic `Protocol` in ellipsis removal ([#9841](https://github.com/astral-sh/ruff/pull/9841))
- Revert "Use publicly available Apple Silicon runners (#9726)" ([#9834](https://github.com/astral-sh/ruff/pull/9834))
### Performance
- Skip LibCST parsing for standard dedent adjustments ([#9769](https://github.com/astral-sh/ruff/pull/9769))
- Remove CST-based fixer for `C408` ([#9822](https://github.com/astral-sh/ruff/pull/9822))
- Add our own ignored-names abstractions ([#9802](https://github.com/astral-sh/ruff/pull/9802))
- Remove CST-based fixers for `C400`, `C401`, `C410`, and `C418` ([#9819](https://github.com/astral-sh/ruff/pull/9819))
- Use `AhoCorasick` to speed up quote match ([#9773](https://github.com/astral-sh/ruff/pull/9773))
- Remove CST-based fixers for `C405` and `C409` ([#9821](https://github.com/astral-sh/ruff/pull/9821))
- Add fast-path for comment detection ([#9808](https://github.com/astral-sh/ruff/pull/9808))
- Invert order of checks in `zero-sleep-call` ([#9766](https://github.com/astral-sh/ruff/pull/9766))
- Short-circuit typing matches based on imports ([#9800](https://github.com/astral-sh/ruff/pull/9800))
- Run dunder method rule on methods directly ([#9815](https://github.com/astral-sh/ruff/pull/9815))
- Track top-level module imports in the semantic model ([#9775](https://github.com/astral-sh/ruff/pull/9775))
- Slight speed-up for lowercase and uppercase identifier checks ([#9798](https://github.com/astral-sh/ruff/pull/9798))
- Remove LibCST-based fixer for `C403` ([#9818](https://github.com/astral-sh/ruff/pull/9818))
### Documentation
- Update `max-pos-args` example to `max-positional-args` ([#9797](https://github.com/astral-sh/ruff/pull/9797))
- Fixed example code in `weak_cryptographic_key.rs` ([#9774](https://github.com/astral-sh/ruff/pull/9774))
- Fix references to deprecated `ANN` rules in changelog ([#9771](https://github.com/astral-sh/ruff/pull/9771))
- Fix default for `max-positional-args` ([#9838](https://github.com/astral-sh/ruff/pull/9838))
## 0.2.0
### Breaking changes
- The `NURSERY` selector cannot be used anymore
- Legacy selection of nursery rules by exact codes is no longer allowed without preview enabled
See also, the "Remapped rules" section which may result in disabled rules.
### Deprecations
The following rules are now deprecated:
- [`missing-type-self`](https://docs.astral.sh/ruff/rules/missing-type-self/) (`ANN101`)
- [`missing-type-cls`](https://docs.astral.sh/ruff/rules/missing-type-cls/) (`ANN102`)
The following command line options are now deprecated:
- `--show-source`; use `--output-format full` instead
- `--no-show-source`; use `--output-format concise` instead
- `--output-format text`; use `full` or `concise` instead
The following settings have moved and the previous name is deprecated:
- `ruff.allowed-confusables` → [`ruff.lint.allowed-confusables`](https://docs.astral.sh//ruff/settings/#lint_allowed-confusables)
- `ruff.dummy-variable-rgx` → [`ruff.lint.dummy-variable-rgx`](https://docs.astral.sh//ruff/settings/#lint_dummy-variable-rgx)
- `ruff.explicit-preview-rules` → [`ruff.lint.explicit-preview-rules`](https://docs.astral.sh//ruff/settings/#lint_explicit-preview-rules)
- `ruff.extend-fixable` → [`ruff.lint.extend-fixable`](https://docs.astral.sh//ruff/settings/#lint_extend-fixable)
- `ruff.extend-ignore` → [`ruff.lint.extend-ignore`](https://docs.astral.sh//ruff/settings/#lint_extend-ignore)
- `ruff.extend-per-file-ignores` → [`ruff.lint.extend-per-file-ignores`](https://docs.astral.sh//ruff/settings/#lint_extend-per-file-ignores)
- `ruff.extend-safe-fixes` → [`ruff.lint.extend-safe-fixes`](https://docs.astral.sh//ruff/settings/#lint_extend-safe-fixes)
- `ruff.extend-select` → [`ruff.lint.extend-select`](https://docs.astral.sh//ruff/settings/#lint_extend-select)
- `ruff.extend-unfixable` → [`ruff.lint.extend-unfixable`](https://docs.astral.sh//ruff/settings/#lint_extend-unfixable)
- `ruff.extend-unsafe-fixes` → [`ruff.lint.extend-unsafe-fixes`](https://docs.astral.sh//ruff/settings/#lint_extend-unsafe-fixes)
- `ruff.external` → [`ruff.lint.external`](https://docs.astral.sh//ruff/settings/#lint_external)
- `ruff.fixable` → [`ruff.lint.fixable`](https://docs.astral.sh//ruff/settings/#lint_fixable)
- `ruff.flake8-annotations` → [`ruff.lint.flake8-annotations`](https://docs.astral.sh//ruff/settings/#lint_flake8-annotations)
- `ruff.flake8-bandit` → [`ruff.lint.flake8-bandit`](https://docs.astral.sh//ruff/settings/#lint_flake8-bandit)
- `ruff.flake8-bugbear` → [`ruff.lint.flake8-bugbear`](https://docs.astral.sh//ruff/settings/#lint_flake8-bugbear)
- `ruff.flake8-builtins` → [`ruff.lint.flake8-builtins`](https://docs.astral.sh//ruff/settings/#lint_flake8-builtins)
- `ruff.flake8-comprehensions` → [`ruff.lint.flake8-comprehensions`](https://docs.astral.sh//ruff/settings/#lint_flake8-comprehensions)
- `ruff.flake8-copyright` → [`ruff.lint.flake8-copyright`](https://docs.astral.sh//ruff/settings/#lint_flake8-copyright)
- `ruff.flake8-errmsg` → [`ruff.lint.flake8-errmsg`](https://docs.astral.sh//ruff/settings/#lint_flake8-errmsg)
- `ruff.flake8-gettext` → [`ruff.lint.flake8-gettext`](https://docs.astral.sh//ruff/settings/#lint_flake8-gettext)
- `ruff.flake8-implicit-str-concat` → [`ruff.lint.flake8-implicit-str-concat`](https://docs.astral.sh//ruff/settings/#lint_flake8-implicit-str-concat)
- `ruff.flake8-import-conventions` → [`ruff.lint.flake8-import-conventions`](https://docs.astral.sh//ruff/settings/#lint_flake8-import-conventions)
- `ruff.flake8-pytest-style` → [`ruff.lint.flake8-pytest-style`](https://docs.astral.sh//ruff/settings/#lint_flake8-pytest-style)
- `ruff.flake8-quotes` → [`ruff.lint.flake8-quotes`](https://docs.astral.sh//ruff/settings/#lint_flake8-quotes)
- `ruff.flake8-self` → [`ruff.lint.flake8-self`](https://docs.astral.sh//ruff/settings/#lint_flake8-self)
- `ruff.flake8-tidy-imports` → [`ruff.lint.flake8-tidy-imports`](https://docs.astral.sh//ruff/settings/#lint_flake8-tidy-imports)
- `ruff.flake8-type-checking` → [`ruff.lint.flake8-type-checking`](https://docs.astral.sh//ruff/settings/#lint_flake8-type-checking)
- `ruff.flake8-unused-arguments` → [`ruff.lint.flake8-unused-arguments`](https://docs.astral.sh//ruff/settings/#lint_flake8-unused-arguments)
- `ruff.ignore` → [`ruff.lint.ignore`](https://docs.astral.sh//ruff/settings/#lint_ignore)
- `ruff.ignore-init-module-imports` → [`ruff.lint.ignore-init-module-imports`](https://docs.astral.sh//ruff/settings/#lint_ignore-init-module-imports)
- `ruff.isort` → [`ruff.lint.isort`](https://docs.astral.sh//ruff/settings/#lint_isort)
- `ruff.logger-objects` → [`ruff.lint.logger-objects`](https://docs.astral.sh//ruff/settings/#lint_logger-objects)
- `ruff.mccabe` → [`ruff.lint.mccabe`](https://docs.astral.sh//ruff/settings/#lint_mccabe)
- `ruff.pep8-naming` → [`ruff.lint.pep8-naming`](https://docs.astral.sh//ruff/settings/#lint_pep8-naming)
- `ruff.per-file-ignores` → [`ruff.lint.per-file-ignores`](https://docs.astral.sh//ruff/settings/#lint_per-file-ignores)
- `ruff.pycodestyle` → [`ruff.lint.pycodestyle`](https://docs.astral.sh//ruff/settings/#lint_pycodestyle)
- `ruff.pydocstyle` → [`ruff.lint.pydocstyle`](https://docs.astral.sh//ruff/settings/#lint_pydocstyle)
- `ruff.pyflakes` → [`ruff.lint.pyflakes`](https://docs.astral.sh//ruff/settings/#lint_pyflakes)
- `ruff.pylint` → [`ruff.lint.pylint`](https://docs.astral.sh//ruff/settings/#lint_pylint)
- `ruff.pyupgrade` → [`ruff.lint.pyupgrade`](https://docs.astral.sh//ruff/settings/#lint_pyupgrade)
- `ruff.select` → [`ruff.lint.select`](https://docs.astral.sh//ruff/settings/#lint_select)
- `ruff.task-tags` → [`ruff.lint.task-tags`](https://docs.astral.sh//ruff/settings/#lint_task-tags)
- `ruff.typing-modules` → [`ruff.lint.typing-modules`](https://docs.astral.sh//ruff/settings/#lint_typing-modules)
- `ruff.unfixable` → [`ruff.lint.unfixable`](https://docs.astral.sh//ruff/settings/#lint_unfixable)
### Remapped rules
The following rules have been remapped to new codes:
- [`raise-without-from-inside-except`](https://docs.astral.sh/ruff/rules/raise-without-from-inside-except/): `TRY200` to `B904`
- [`suspicious-eval-usage`](https://docs.astral.sh/ruff/rules/suspicious-eval-usage/): `PGH001` to `S307`
- [`logging-warn`](https://docs.astral.sh/ruff/rules/logging-warn/): `PGH002` to `G010`
- [`static-key-dict-comprehension`](https://docs.astral.sh/ruff/rules/static-key-dict-comprehension): `RUF011` to `B035`
- [`runtime-string-union`](https://docs.astral.sh/ruff/rules/runtime-string-union): `TCH006` to `TCH010`
### Stabilizations
The following rules have been stabilized and are no longer in preview:
- [`trio-timeout-without-await`](https://docs.astral.sh/ruff/rules/trio-timeout-without-await) (`TRIO100`)
- [`trio-sync-call`](https://docs.astral.sh/ruff/rules/trio-sync-call) (`TRIO105`)
- [`trio-async-function-with-timeout`](https://docs.astral.sh/ruff/rules/trio-async-function-with-timeout) (`TRIO109`)
- [`trio-unneeded-sleep`](https://docs.astral.sh/ruff/rules/trio-unneeded-sleep) (`TRIO110`)
- [`trio-zero-sleep-call`](https://docs.astral.sh/ruff/rules/trio-zero-sleep-call) (`TRIO115`)
- [`unnecessary-escaped-quote`](https://docs.astral.sh/ruff/rules/unnecessary-escaped-quote) (`Q004`)
- [`enumerate-for-loop`](https://docs.astral.sh/ruff/rules/enumerate-for-loop) (`SIM113`)
- [`zip-dict-keys-and-values`](https://docs.astral.sh/ruff/rules/zip-dict-keys-and-values) (`SIM911`)
- [`timeout-error-alias`](https://docs.astral.sh/ruff/rules/timeout-error-alias) (`UP041`)
- [`flask-debug-true`](https://docs.astral.sh/ruff/rules/flask-debug-true) (`S201`)
- [`tarfile-unsafe-members`](https://docs.astral.sh/ruff/rules/tarfile-unsafe-members) (`S202`)
- [`ssl-insecure-version`](https://docs.astral.sh/ruff/rules/ssl-insecure-version) (`S502`)
- [`ssl-with-bad-defaults`](https://docs.astral.sh/ruff/rules/ssl-with-bad-defaults) (`S503`)
- [`ssl-with-no-version`](https://docs.astral.sh/ruff/rules/ssl-with-no-version) (`S504`)
- [`weak-cryptographic-key`](https://docs.astral.sh/ruff/rules/weak-cryptographic-key) (`S505`)
- [`ssh-no-host-key-verification`](https://docs.astral.sh/ruff/rules/ssh-no-host-key-verification) (`S507`)
- [`django-raw-sql`](https://docs.astral.sh/ruff/rules/django-raw-sql) (`S611`)
- [`mako-templates`](https://docs.astral.sh/ruff/rules/mako-templates) (`S702`)
- [`generator-return-from-iter-method`](https://docs.astral.sh/ruff/rules/generator-return-from-iter-method) (`PYI058`)
- [`runtime-string-union`](https://docs.astral.sh/ruff/rules/runtime-string-union) (`TCH006`)
- [`numpy2-deprecation`](https://docs.astral.sh/ruff/rules/numpy2-deprecation) (`NPY201`)
- [`quadratic-list-summation`](https://docs.astral.sh/ruff/rules/quadratic-list-summation) (`RUF017`)
- [`assignment-in-assert`](https://docs.astral.sh/ruff/rules/assignment-in-assert) (`RUF018`)
- [`unnecessary-key-check`](https://docs.astral.sh/ruff/rules/unnecessary-key-check) (`RUF019`)
- [`never-union`](https://docs.astral.sh/ruff/rules/never-union) (`RUF020`)
- [`direct-logger-instantiation`](https://docs.astral.sh/ruff/rules/direct-logger-instantiation) (`LOG001`)
- [`invalid-get-logger-argument`](https://docs.astral.sh/ruff/rules/invalid-get-logger-argument) (`LOG002`)
- [`exception-without-exc-info`](https://docs.astral.sh/ruff/rules/exception-without-exc-info) (`LOG007`)
- [`undocumented-warn`](https://docs.astral.sh/ruff/rules/undocumented-warn) (`LOG009`)
Fixes for the following rules have been stabilized and are now available without preview:
- [`triple-single-quotes`](https://docs.astral.sh/ruff/rules/triple-single-quotes) (`D300`)
- [`non-pep604-annotation`](https://docs.astral.sh/ruff/rules/non-pep604-annotation) (`UP007`)
- [`dict-get-with-none-default`](https://docs.astral.sh/ruff/rules/dict-get-with-none-default) (`SIM910`)
- [`in-dict-keys`](https://docs.astral.sh/ruff/rules/in-dict-keys) (`SIM118`)
- [`collapsible-else-if`](https://docs.astral.sh/ruff/rules/collapsible-else-if) (`PLR5501`)
- [`if-with-same-arms`](https://docs.astral.sh/ruff/rules/if-with-same-arms) (`SIM114`)
- [`useless-else-on-loop`](https://docs.astral.sh/ruff/rules/useless-else-on-loop) (`PLW0120`)
- [`unnecessary-literal-union`](https://docs.astral.sh/ruff/rules/unnecessary-literal-union) (`PYI030`)
- [`unnecessary-spread`](https://docs.astral.sh/ruff/rules/unnecessary-spread) (`PIE800`)
- [`error-instead-of-exception`](https://docs.astral.sh/ruff/rules/error-instead-of-exception) (`TRY400`)
- [`redefined-while-unused`](https://docs.astral.sh/ruff/rules/redefined-while-unused) (`F811`)
- [`duplicate-value`](https://docs.astral.sh/ruff/rules/duplicate-value) (`B033`)
- [`multiple-imports-on-one-line`](https://docs.astral.sh/ruff/rules/multiple-imports-on-one-line) (`E401`)
- [`non-pep585-annotation`](https://docs.astral.sh/ruff/rules/non-pep585-annotation) (`UP006`)
Fixes for the following rules have been promoted from unsafe to safe:
- [`unaliased-collections-abc-set-import`](https://docs.astral.sh/ruff/rules/unaliased-collections-abc-set-import) (`PYI025`)
The following behaviors have been stabilized:
- [`module-import-not-at-top-of-file`](https://docs.astral.sh/ruff/rules/module-import-not-at-top-of-file/) (`E402`) allows `sys.path` modifications between imports
- [`reimplemented-container-builtin`](https://docs.astral.sh/ruff/rules/reimplemented-container-builtin/) (`PIE807`) includes lambdas that can be replaced with `dict`
- [`unnecessary-placeholder`](https://docs.astral.sh/ruff/rules/unnecessary-placeholder/) (`PIE790`) applies to unnecessary ellipses (`...`)
- [`if-else-block-instead-of-dict-get`](https://docs.astral.sh/ruff/rules/if-else-block-instead-of-dict-get/) (`SIM401`) applies to `if-else` expressions
### Preview features
- \[`refurb`\] Implement `metaclass_abcmeta` (`FURB180`) ([#9658](https://github.com/astral-sh/ruff/pull/9658))
- Implement `blank_line_after_nested_stub_class` preview style ([#9155](https://github.com/astral-sh/ruff/pull/9155))
- The preview rule [`and-or-ternary`](https://docs.astral.sh/ruff/rules/and-or-ternary) (`PLR1706`) was removed
### Bug fixes
- \[`flake8-async`\] Take `pathlib.Path` into account when analyzing async functions ([#9703](https://github.com/astral-sh/ruff/pull/9703))
- \[`flake8-return`\] Fix indentation syntax error (`RET505`) ([#9705](https://github.com/astral-sh/ruff/pull/9705))
- Detect multi-statement lines in else removal ([#9748](https://github.com/astral-sh/ruff/pull/9748))
- `RUF022`, `RUF023`: never add two trailing commas to the end of a sequence ([#9698](https://github.com/astral-sh/ruff/pull/9698))
- `RUF023`: Don't sort `__match_args__`, only `__slots__` ([#9724](https://github.com/astral-sh/ruff/pull/9724))
- \[`flake8-simplify`\] Fix syntax error in autofix (`SIM114`) ([#9704](https://github.com/astral-sh/ruff/pull/9704))
- \[`pylint`\] Show verbatim constant in `magic-value-comparison` (`PLR2004`) ([#9694](https://github.com/astral-sh/ruff/pull/9694))
- Removing trailing whitespace inside multiline strings is unsafe ([#9744](https://github.com/astral-sh/ruff/pull/9744))
- Support `IfExp` with dual string arms in `invalid-envvar-default` ([#9734](https://github.com/astral-sh/ruff/pull/9734))
- \[`pylint`\] Add `__mro_entries__` to known dunder methods (`PLW3201`) ([#9706](https://github.com/astral-sh/ruff/pull/9706))
### Documentation
- Removed rules are now retained in the documentation ([#9691](https://github.com/astral-sh/ruff/pull/9691))
- Deprecated rules are now indicated in the documentation ([#9689](https://github.com/astral-sh/ruff/pull/9689))
## 0.1.15
### Preview features


@@ -26,6 +26,10 @@ Welcome! We're happy to have you here. Thank you in advance for your contributio
- [`cargo dev`](#cargo-dev)
- [Subsystems](#subsystems)
- [Compilation Pipeline](#compilation-pipeline)
- [Import Categorization](#import-categorization)
- [Project root](#project-root)
- [Package root](#package-root)
- [Import categorization](#import-categorization-1)
## The Basics
@@ -63,7 +67,7 @@ You'll also need [Insta](https://insta.rs/docs/) to update snapshot tests:
cargo install cargo-insta
```
and pre-commit to run some validation checks:
And you'll need pre-commit to run some validation checks:
```shell
pipx install pre-commit # or `pip install pre-commit` if you have a virtualenv
@@ -76,6 +80,16 @@ when making a commit:
pre-commit install
```
We recommend [nextest](https://nexte.st/) to run Ruff's test suite (via `cargo nextest run`),
though it's not strictly necessary:
```shell
cargo install cargo-nextest --locked
```
Throughout this guide, any usages of `cargo test` can be replaced with `cargo nextest run`,
if you choose to install `nextest`.
### Development
After cloning the repository, run Ruff locally from the repository root with:
@@ -231,7 +245,7 @@ Once you've completed the code for the rule itself, you can define tests with th
For example, if you're adding a new rule named `E402`, you would run:
```shell
cargo run -p ruff -- check crates/ruff_linter/resources/test/fixtures/pycodestyle/E402.py --no-cache --select E402
cargo run -p ruff -- check crates/ruff_linter/resources/test/fixtures/pycodestyle/E402.py --no-cache --preview --select E402
```
**Note:** Only a subset of rules are enabled by default. When testing a new rule, ensure that
@@ -373,6 +387,11 @@ We have several ways of benchmarking and profiling Ruff:
- Microbenchmarks which run the linter or the formatter on individual files. These run on pull requests.
- Profiling the linter on either the microbenchmarks or entire projects
> \[!NOTE\]
> When running benchmarks, ensure that your CPU is otherwise idle (e.g., close any background
> applications, like web browsers). You may also want to switch your CPU to a "performance"
> mode, if it exists, especially when benchmarking short-lived processes.
### CPython Benchmark
First, clone [CPython](https://github.com/python/cpython). It's a large and diverse Python codebase,

Cargo.lock (generated): 740 changes; file diff suppressed because it is too large.


@@ -19,8 +19,9 @@ argfile = { version = "0.1.6" }
assert_cmd = { version = "2.0.13" }
bincode = { version = "1.3.3" }
bitflags = { version = "2.4.1" }
bstr = { version = "1.9.0" }
cachedir = { version = "0.3.1" }
chrono = { version = "0.4.33", default-features = false, features = ["clock"] }
chrono = { version = "0.4.34", default-features = false, features = ["clock"] }
clap = { version = "4.4.18", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clearscreen = { version = "2.0.0" }
@@ -43,19 +44,19 @@ hexf-parse = { version ="0.2.1"}
ignore = { version = "0.4.22" }
imara-diff ={ version = "0.1.5"}
imperative = { version = "1.0.4" }
indicatif ={ version = "0.17.7"}
indicatif ={ version = "0.17.8"}
indoc ={ version = "2.0.4"}
insta = { version = "1.34.0", feature = ["filters", "glob"] }
insta-cmd = { version = "0.4.0" }
is-macro = { version = "0.3.4" }
is-macro = { version = "0.3.5" }
is-wsl = { version = "0.4.0" }
itertools = { version = "0.12.0" }
itertools = { version = "0.12.1" }
js-sys = { version = "0.3.67" }
lalrpop-util = { version = "0.20.0", default-features = false }
lexical-parse-float = { version = "0.8.0", features = ["format"] }
libcst = { version = "1.1.0", default-features = false }
log = { version = "0.4.17" }
memchr = { version = "2.6.4" }
memchr = { version = "2.7.1" }
mimalloc = { version ="0.1.39"}
natord = { version = "1.0.9" }
notify = { version = "6.1.1" }
@@ -65,7 +66,7 @@ pathdiff = { version = "0.2.1" }
pep440_rs = { version = "0.4.0", features = ["serde"] }
pretty_assertions = "1.3.0"
proc-macro2 = { version = "1.0.78" }
pyproject-toml = { version = "0.8.1" }
pyproject-toml = { version = "0.9.0" }
quick-junit = { version = "0.3.5" }
quote = { version = "1.0.23" }
rand = { version = "0.8.5" }
@@ -91,9 +92,9 @@ strum_macros = { version = "0.25.3" }
syn = { version = "2.0.40" }
tempfile = { version ="3.9.0"}
test-case = { version = "3.3.1" }
thiserror = { version = "1.0.51" }
thiserror = { version = "1.0.57" }
tikv-jemallocator = { version ="0.5.0"}
toml = { version = "0.8.8" }
toml = { version = "0.8.9" }
tracing = { version = "0.1.40" }
tracing-indicatif = { version = "0.3.6" }
tracing-subscriber = { version = "0.3.18", features = ["env-filter"] }


@@ -150,7 +150,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.1.15
rev: v0.2.1
hooks:
# Run the linter.
- id: ruff
@@ -433,6 +433,7 @@ Ruff is used by a number of major open-source projects and companies, including:
- [PyInstaller](https://github.com/pyinstaller/pyinstaller)
- [PyMC](https://github.com/pymc-devs/pymc/)
- [PyMC-Marketing](https://github.com/pymc-labs/pymc-marketing)
- [pytest](https://github.com/pytest-dev/pytest)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Pydantic](https://github.com/pydantic/pydantic)
- [Pylint](https://github.com/PyCQA/pylint)
@@ -463,7 +464,7 @@ Ruff is used by a number of major open-source projects and companies, including:
### Show Your Support
If you're using Ruff, consider adding the Ruff badge to project's `README.md`:
If you're using Ruff, consider adding the Ruff badge to your project's `README.md`:
```md
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
@@ -489,6 +490,6 @@ MIT
<div align="center">
<a target="_blank" href="https://astral.sh" style="background:none">
<img src="https://raw.githubusercontent.com/astral-sh/ruff/main/assets/svg/Astral.svg">
<img src="https://raw.githubusercontent.com/astral-sh/ruff/main/assets/svg/Astral.svg" alt="Made by Astral">
</a>
</div>


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.1.15"
version = "0.2.1"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -48,7 +48,9 @@ serde = { workspace = true }
serde_json = { workspace = true }
shellexpand = { workspace = true }
strum = { workspace = true, features = [] }
tempfile = { workspace = true }
thiserror = { workspace = true }
toml = { workspace = true }
tracing = { workspace = true, features = ["log"] }
walkdir = { workspace = true }
wild = { workspace = true }


@@ -1,8 +1,18 @@
use std::path::PathBuf;
use std::cmp::Ordering;
use std::fmt::Formatter;
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::sync::Arc;
use anyhow::bail;
use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{command, Parser};
use colored::Colorize;
use path_absolutize::path_dedot;
use regex::Regex;
use rustc_hash::FxHashMap;
use toml;
use ruff_linter::line_width::LineLength;
use ruff_linter::logging::LogLevel;
@@ -12,8 +22,10 @@ use ruff_linter::settings::types::{
SerializationFormat, UnsafeFixes,
};
use ruff_linter::{warn_user, RuleParser, RuleSelector, RuleSelectorParser};
use ruff_source_file::{LineIndex, OneIndexed};
use ruff_text_size::TextRange;
use ruff_workspace::configuration::{Configuration, RuleSelection};
use ruff_workspace::options::PycodestyleOptions;
use ruff_workspace::options::{Options, PycodestyleOptions};
use ruff_workspace::resolver::ConfigurationTransformer;
#[derive(Debug, Parser)]
@@ -149,10 +161,20 @@ pub struct CheckCommand {
preview: bool,
#[clap(long, overrides_with("preview"), hide = true)]
no_preview: bool,
/// Path to the `pyproject.toml` or `ruff.toml` file to use for
/// configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Either a path to a TOML configuration file (`pyproject.toml` or `ruff.toml`),
/// or a TOML `<KEY> = <VALUE>` pair
/// (such as you might find in a `ruff.toml` configuration file)
/// overriding a specific configuration option.
/// Overrides of individual settings using this option always take precedence
/// over all configuration files, including configuration files that were also
/// specified using `--config`.
#[arg(
long,
action = clap::ArgAction::Append,
value_name = "CONFIG_OPTION",
value_parser = ConfigArgumentParser,
)]
pub config: Vec<SingleConfigArgument>,
/// Comma-separated list of rule codes to enable (or ALL, to enable all rules).
#[arg(
long,
@@ -285,7 +307,15 @@ pub struct CheckCommand {
#[arg(short, long, env = "RUFF_NO_CACHE", help_heading = "Miscellaneous")]
pub no_cache: bool,
/// Ignore all configuration files.
#[arg(long, conflicts_with = "config", help_heading = "Miscellaneous")]
//
// Note: We can't mark this as conflicting with `--config` here
// as `--config` can be used for specifying configuration overrides
// as well as configuration files.
// Specifying a configuration file conflicts with `--isolated`;
// specifying a configuration override does not.
// If a user specifies `ruff check --isolated --config=ruff.toml`,
// we emit an error later on, after the initial parsing by clap.
#[arg(long, help_heading = "Miscellaneous")]
pub isolated: bool,
/// Path to the cache directory.
#[arg(long, env = "RUFF_CACHE_DIR", help_heading = "Miscellaneous")]
@@ -378,9 +408,20 @@ pub struct FormatCommand {
/// difference between the current file and how the formatted file would look like.
#[arg(long)]
pub diff: bool,
/// Path to the `pyproject.toml` or `ruff.toml` file to use for configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Either a path to a TOML configuration file (`pyproject.toml` or `ruff.toml`),
/// or a TOML `<KEY> = <VALUE>` pair
/// (such as you might find in a `ruff.toml` configuration file)
/// overriding a specific configuration option.
/// Overrides of individual settings using this option always take precedence
/// over all configuration files, including configuration files that were also
/// specified using `--config`.
#[arg(
long,
action = clap::ArgAction::Append,
value_name = "CONFIG_OPTION",
value_parser = ConfigArgumentParser,
)]
pub config: Vec<SingleConfigArgument>,
/// Disable cache reads.
#[arg(short, long, env = "RUFF_NO_CACHE", help_heading = "Miscellaneous")]
@@ -422,7 +463,15 @@ pub struct FormatCommand {
#[arg(long, help_heading = "Format configuration")]
pub line_length: Option<LineLength>,
/// Ignore all configuration files.
#[arg(long, conflicts_with = "config", help_heading = "Miscellaneous")]
//
// Note: We can't mark this as conflicting with `--config` here
// as `--config` can be used for specifying configuration overrides
// as well as configuration files.
// Specifying a configuration file conflicts with `--isolated`;
// specifying a configuration override does not.
// If a user specifies `ruff check --isolated --config=ruff.toml`,
// we emit an error later on, after the initial parsing by clap.
#[arg(long, help_heading = "Miscellaneous")]
pub isolated: bool,
/// The name of the file when passing it through stdin.
#[arg(long, help_heading = "Miscellaneous")]
@@ -440,6 +489,21 @@ pub struct FormatCommand {
preview: bool,
#[clap(long, overrides_with("preview"), hide = true)]
no_preview: bool,
/// When specified, Ruff will try to only format the code in the given range.
/// It might be necessary to extend the start backwards or the end forwards, to fully enclose a logical line.
/// The `<RANGE>` uses the format `<start_line>:<start_column>-<end_line>:<end_column>`.
///
/// - The line and column numbers are 1 based.
/// - The column specifies the nth-unicode codepoint on that line.
/// - The end offset is exclusive.
/// - The column numbers are optional. You can write `--range=1-2` instead of `--range=1:1-2:1`.
/// - The end position is optional. You can write `--range=2` to format the entire document starting from the second line.
/// - The start position is optional. You can write `--range=-3` to format the first three lines of the document.
///
/// The option can only be used when formatting a single file. Range formatting of notebooks is unsupported.
#[clap(long, help_heading = "Editor options", verbatim_doc_comment)]
pub range: Option<FormatRange>,
}
#[derive(Debug, Clone, Copy, clap::ValueEnum)]
@@ -494,100 +558,181 @@ impl From<&LogLevelArgs> for LogLevel {
}
}
/// Configuration-related arguments passed via the CLI.
#[derive(Default)]
pub struct ConfigArguments {
/// Path to a pyproject.toml or ruff.toml configuration file (etc.).
/// Either 0 or 1 configuration file paths may be provided on the command line.
config_file: Option<PathBuf>,
/// Overrides provided via the `--config "KEY=VALUE"` option.
/// An arbitrary number of these overrides may be provided on the command line.
/// These overrides take precedence over all configuration files,
/// even configuration files that were also specified using `--config`.
overrides: Configuration,
/// Overrides provided via dedicated flags such as `--line-length` etc.
/// These overrides take precedence over all configuration files,
/// and also over all overrides specified using any `--config "KEY=VALUE"` flags.
per_flag_overrides: ExplicitConfigOverrides,
}
impl ConfigArguments {
pub fn config_file(&self) -> Option<&Path> {
self.config_file.as_deref()
}
fn from_cli_arguments(
config_options: Vec<SingleConfigArgument>,
per_flag_overrides: ExplicitConfigOverrides,
isolated: bool,
) -> anyhow::Result<Self> {
let mut new = Self {
per_flag_overrides,
..Self::default()
};
for option in config_options {
match option {
SingleConfigArgument::SettingsOverride(overridden_option) => {
let overridden_option = Arc::try_unwrap(overridden_option)
.unwrap_or_else(|option| option.deref().clone());
new.overrides = new.overrides.combine(Configuration::from_options(
overridden_option,
None,
&path_dedot::CWD,
)?);
}
SingleConfigArgument::FilePath(path) => {
if isolated {
bail!(
"\
The argument `--config={}` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
",
path.display()
);
}
if let Some(ref config_file) = new.config_file {
let (first, second) = (config_file.display(), path.display());
bail!(
"\
You cannot specify more than one configuration file on the command line.
tip: remove either `--config={first}` or `--config={second}`.
For more information, try `--help`.
"
);
}
new.config_file = Some(path);
}
}
}
Ok(new)
}
}
impl ConfigurationTransformer for ConfigArguments {
fn transform(&self, config: Configuration) -> Configuration {
let with_config_overrides = self.overrides.clone().combine(config);
self.per_flag_overrides.transform(with_config_overrides)
}
}
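The precedence chain implemented by `transform` above is: configuration files first, then any `--config "KEY=VALUE"` overrides, then dedicated flags such as `--line-length`. A toy sketch of that layering (the `ToyConfig` type is hypothetical, not Ruff's actual `Configuration`):
```rust
/// Toy illustration of layered overrides: later layers win, field by field.
#[derive(Clone, Debug, Default, PartialEq)]
struct ToyConfig {
    line_length: Option<u16>,
    preview: Option<bool>,
}

impl ToyConfig {
    /// Keep `self`'s value when set, otherwise fall back to `other`
    /// (mirrors the "combine" direction used in `transform` above).
    fn combine(self, other: ToyConfig) -> ToyConfig {
        ToyConfig {
            line_length: self.line_length.or(other.line_length),
            preview: self.preview.or(other.preview),
        }
    }
}

fn main() {
    let from_file = ToyConfig { line_length: Some(88), preview: Some(false) };
    let config_flag = ToyConfig { preview: Some(true), ..Default::default() };
    let per_flag = ToyConfig { line_length: Some(100), ..Default::default() };

    // `--config "KEY=VALUE"` overrides beat the file; dedicated flags beat both.
    let resolved = per_flag.combine(config_flag.combine(from_file));
    assert_eq!(resolved, ToyConfig { line_length: Some(100), preview: Some(true) });
}
```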
impl CheckCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> (CheckArguments, CliOverrides) {
(
CheckArguments {
add_noqa: self.add_noqa,
config: self.config,
diff: self.diff,
ecosystem_ci: self.ecosystem_ci,
exit_non_zero_on_fix: self.exit_non_zero_on_fix,
exit_zero: self.exit_zero,
files: self.files,
ignore_noqa: self.ignore_noqa,
isolated: self.isolated,
no_cache: self.no_cache,
output_file: self.output_file,
show_files: self.show_files,
show_settings: self.show_settings,
statistics: self.statistics,
stdin_filename: self.stdin_filename,
watch: self.watch,
},
CliOverrides {
dummy_variable_rgx: self.dummy_variable_rgx,
exclude: self.exclude,
extend_exclude: self.extend_exclude,
extend_fixable: self.extend_fixable,
extend_ignore: self.extend_ignore,
extend_per_file_ignores: self.extend_per_file_ignores,
extend_select: self.extend_select,
extend_unfixable: self.extend_unfixable,
fixable: self.fixable,
ignore: self.ignore,
line_length: self.line_length,
per_file_ignores: self.per_file_ignores,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
self.no_respect_gitignore,
),
select: self.select,
target_version: self.target_version,
unfixable: self.unfixable,
// TODO(charlie): Included in `pyproject.toml`, but not inherited.
cache_dir: self.cache_dir,
fix: resolve_bool_arg(self.fix, self.no_fix),
fix_only: resolve_bool_arg(self.fix_only, self.no_fix_only),
unsafe_fixes: resolve_bool_arg(self.unsafe_fixes, self.no_unsafe_fixes)
.map(UnsafeFixes::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
output_format: resolve_output_format(
self.output_format,
resolve_bool_arg(self.show_source, self.no_show_source),
resolve_bool_arg(self.preview, self.no_preview).unwrap_or_default(),
),
show_fixes: resolve_bool_arg(self.show_fixes, self.no_show_fixes),
extension: self.extension,
},
)
pub fn partition(self) -> anyhow::Result<(CheckArguments, ConfigArguments)> {
let check_arguments = CheckArguments {
add_noqa: self.add_noqa,
diff: self.diff,
ecosystem_ci: self.ecosystem_ci,
exit_non_zero_on_fix: self.exit_non_zero_on_fix,
exit_zero: self.exit_zero,
files: self.files,
ignore_noqa: self.ignore_noqa,
isolated: self.isolated,
no_cache: self.no_cache,
output_file: self.output_file,
show_files: self.show_files,
show_settings: self.show_settings,
statistics: self.statistics,
stdin_filename: self.stdin_filename,
watch: self.watch,
};
let cli_overrides = ExplicitConfigOverrides {
dummy_variable_rgx: self.dummy_variable_rgx,
exclude: self.exclude,
extend_exclude: self.extend_exclude,
extend_fixable: self.extend_fixable,
extend_ignore: self.extend_ignore,
extend_per_file_ignores: self.extend_per_file_ignores,
extend_select: self.extend_select,
extend_unfixable: self.extend_unfixable,
fixable: self.fixable,
ignore: self.ignore,
line_length: self.line_length,
per_file_ignores: self.per_file_ignores,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
respect_gitignore: resolve_bool_arg(self.respect_gitignore, self.no_respect_gitignore),
select: self.select,
target_version: self.target_version,
unfixable: self.unfixable,
// TODO(charlie): Included in `pyproject.toml`, but not inherited.
cache_dir: self.cache_dir,
fix: resolve_bool_arg(self.fix, self.no_fix),
fix_only: resolve_bool_arg(self.fix_only, self.no_fix_only),
unsafe_fixes: resolve_bool_arg(self.unsafe_fixes, self.no_unsafe_fixes)
.map(UnsafeFixes::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
output_format: resolve_output_format(
self.output_format,
resolve_bool_arg(self.show_source, self.no_show_source),
resolve_bool_arg(self.preview, self.no_preview).unwrap_or_default(),
),
show_fixes: resolve_bool_arg(self.show_fixes, self.no_show_fixes),
extension: self.extension,
};
let config_args =
ConfigArguments::from_cli_arguments(self.config, cli_overrides, self.isolated)?;
Ok((check_arguments, config_args))
}
}
impl FormatCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> (FormatArguments, CliOverrides) {
(
FormatArguments {
check: self.check,
diff: self.diff,
config: self.config,
files: self.files,
isolated: self.isolated,
no_cache: self.no_cache,
stdin_filename: self.stdin_filename,
},
CliOverrides {
line_length: self.line_length,
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
self.no_respect_gitignore,
),
exclude: self.exclude,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
target_version: self.target_version,
cache_dir: self.cache_dir,
extension: self.extension,
pub fn partition(self) -> anyhow::Result<(FormatArguments, ConfigArguments)> {
let format_arguments = FormatArguments {
check: self.check,
diff: self.diff,
files: self.files,
isolated: self.isolated,
no_cache: self.no_cache,
stdin_filename: self.stdin_filename,
range: self.range,
};
// Unsupported on the formatter CLI, but required on `Overrides`.
..CliOverrides::default()
},
)
let cli_overrides = ExplicitConfigOverrides {
line_length: self.line_length,
respect_gitignore: resolve_bool_arg(self.respect_gitignore, self.no_respect_gitignore),
exclude: self.exclude,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
target_version: self.target_version,
cache_dir: self.cache_dir,
extension: self.extension,
// Unsupported on the formatter CLI, but required on `Overrides`.
..ExplicitConfigOverrides::default()
};
let config_args =
ConfigArguments::from_cli_arguments(self.config, cli_overrides, self.isolated)?;
Ok((format_arguments, config_args))
}
}
@@ -600,6 +745,154 @@ fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
}
}
#[derive(Debug)]
enum TomlParseFailureKind {
SyntaxError,
UnknownOption,
}
impl std::fmt::Display for TomlParseFailureKind {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let display = match self {
Self::SyntaxError => "The supplied argument is not valid TOML",
Self::UnknownOption => {
"Could not parse the supplied argument as a `ruff.toml` configuration option"
}
};
write!(f, "{display}")
}
}
#[derive(Debug)]
struct TomlParseFailure {
kind: TomlParseFailureKind,
underlying_error: toml::de::Error,
}
impl std::fmt::Display for TomlParseFailure {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let TomlParseFailure {
kind,
underlying_error,
} = self;
let display = format!("{kind}:\n\n{underlying_error}");
write!(f, "{}", display.trim_end())
}
}
/// Enumeration to represent a single `--config` argument
/// passed via the CLI.
///
/// Using the `--config` flag, users may pass 0 or 1 paths
/// to configuration files and an arbitrary number of
/// "inline TOML" overrides for specific settings.
///
/// For example:
///
/// ```sh
/// ruff check --config "path/to/ruff.toml" --config "extend-select=['E501', 'F841']" --config "lint.per-file-ignores = {'some_file.py' = ['F841']}"
/// ```
#[derive(Clone, Debug)]
pub enum SingleConfigArgument {
FilePath(PathBuf),
SettingsOverride(Arc<Options>),
}
#[derive(Clone)]
pub struct ConfigArgumentParser;
impl ValueParserFactory for SingleConfigArgument {
type Parser = ConfigArgumentParser;
fn value_parser() -> Self::Parser {
ConfigArgumentParser
}
}
impl TypedValueParser for ConfigArgumentParser {
type Value = SingleConfigArgument;
fn parse_ref(
&self,
cmd: &clap::Command,
arg: Option<&clap::Arg>,
value: &std::ffi::OsStr,
) -> Result<Self::Value, clap::Error> {
let path_to_config_file = PathBuf::from(value);
if path_to_config_file.exists() {
return Ok(SingleConfigArgument::FilePath(path_to_config_file));
}
let value = value
.to_str()
.ok_or_else(|| clap::Error::new(clap::error::ErrorKind::InvalidUtf8))?;
let toml_parse_error = match toml::Table::from_str(value) {
Ok(table) => match table.try_into() {
Ok(option) => return Ok(SingleConfigArgument::SettingsOverride(Arc::new(option))),
Err(underlying_error) => TomlParseFailure {
kind: TomlParseFailureKind::UnknownOption,
underlying_error,
},
},
Err(underlying_error) => TomlParseFailure {
kind: TomlParseFailureKind::SyntaxError,
underlying_error,
},
};
let mut new_error = clap::Error::new(clap::error::ErrorKind::ValueValidation).with_cmd(cmd);
if let Some(arg) = arg {
new_error.insert(
clap::error::ContextKind::InvalidArg,
clap::error::ContextValue::String(arg.to_string()),
);
}
new_error.insert(
clap::error::ContextKind::InvalidValue,
clap::error::ContextValue::String(value.to_string()),
);
// small hack so that multiline tips
// have the same indent on the left-hand side:
let tip_indent = " ".repeat(" tip: ".len());
let mut tip = format!(
"\
A `--config` flag must either be a path to a `.toml` configuration file
{tip_indent}or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
{tip_indent}option"
);
// Here we do some heuristics to try to figure out whether
// the user was trying to pass in a path to a configuration file
// or some inline TOML.
// We want to display as helpful an error to the user as possible.
if std::path::Path::new(value)
.extension()
.map_or(false, |ext| ext.eq_ignore_ascii_case("toml"))
{
if !value.contains('=') {
tip.push_str(&format!(
"
It looks like you were trying to pass a path to a configuration file.
The path `{value}` does not exist"
));
}
} else if value.contains('=') {
tip.push_str(&format!("\n\n{toml_parse_error}"));
}
new_error.insert(
clap::error::ContextKind::Suggested,
clap::error::ContextValue::StyledStrs(vec![tip.into()]),
);
Err(new_error)
}
}
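Outside of clap, the decision made by `parse_ref` above boils down to: if the value names an existing file, treat it as a configuration file path; otherwise try to parse it as an inline TOML override. A minimal standalone sketch of that heuristic, assuming only the `toml` crate (the `ConfigArg` and `classify_config_arg` names are hypothetical, not Ruff's actual `SingleConfigArgument` parser):
```rust
use std::path::{Path, PathBuf};
use std::str::FromStr;

/// Hypothetical, simplified stand-in for `SingleConfigArgument`.
enum ConfigArg {
    FilePath(PathBuf),
    InlineToml(toml::Table),
}

/// A path if it exists on disk, otherwise an inline `<KEY> = <VALUE>` TOML override.
fn classify_config_arg(value: &str) -> Result<ConfigArg, toml::de::Error> {
    let path = Path::new(value);
    if path.exists() {
        return Ok(ConfigArg::FilePath(path.to_path_buf()));
    }
    toml::Table::from_str(value).map(ConfigArg::InlineToml)
}

fn main() {
    match classify_config_arg("lint.extend-select = ['E501', 'F841']") {
        Ok(ConfigArg::InlineToml(table)) => println!("inline override:\n{table}"),
        Ok(ConfigArg::FilePath(path)) => println!("configuration file: {}", path.display()),
        Err(err) => eprintln!("neither an existing path nor valid TOML: {err}"),
    }
}
```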
fn resolve_output_format(
output_format: Option<SerializationFormat>,
show_sources: Option<bool>,
@@ -642,7 +935,6 @@ fn resolve_output_format(
#[allow(clippy::struct_excessive_bools)]
pub struct CheckArguments {
pub add_noqa: bool,
pub config: Option<PathBuf>,
pub diff: bool,
pub ecosystem_ci: bool,
pub exit_non_zero_on_fix: bool,
@@ -666,45 +958,235 @@ pub struct FormatArguments {
pub check: bool,
pub no_cache: bool,
pub diff: bool,
pub config: Option<PathBuf>,
pub files: Vec<PathBuf>,
pub isolated: bool,
pub stdin_filename: Option<PathBuf>,
pub range: Option<FormatRange>,
}
/// CLI settings that function as configuration overrides.
/// A text range specified by line and column numbers.
#[derive(Copy, Clone, Debug)]
pub struct FormatRange {
start: LineColumn,
end: LineColumn,
}
impl FormatRange {
/// Converts the line:column range to a byte offset range specific for `source`.
///
/// Returns an empty range if the start range is past the end of `source`.
pub(super) fn to_text_range(self, source: &str, line_index: &LineIndex) -> TextRange {
let start_byte_offset = line_index.offset(self.start.line, self.start.column, source);
let end_byte_offset = line_index.offset(self.end.line, self.end.column, source);
TextRange::new(start_byte_offset, end_byte_offset)
}
}
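`to_text_range` relies on `LineIndex::offset` to turn each 1-based line/column pair into a byte offset. A rough, ASCII-only illustration of that mapping (the real `LineIndex` also handles Unicode columns and positions past the end of the source):
```rust
/// Std-only sketch: byte offset of a 1-based (line, column) position.
/// Assumes ASCII input and in-range positions, unlike the real `LineIndex`.
fn offset(source: &str, line: usize, column: usize) -> usize {
    let line_start: usize = source
        .split_inclusive('\n')
        .take(line - 1) // bytes of all preceding lines, including their newlines
        .map(str::len)
        .sum();
    line_start + (column - 1) // columns are 1-based
}

fn main() {
    let source = "def f():\n    pass\n";
    assert_eq!(offset(source, 1, 1), 0); // start of the file
    assert_eq!(offset(source, 2, 5), 13); // the `p` of `pass`
}
```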
impl FromStr for FormatRange {
type Err = FormatRangeParseError;
fn from_str(value: &str) -> Result<Self, Self::Err> {
let (start, end) = value.split_once('-').unwrap_or((value, ""));
let start = if start.is_empty() {
LineColumn::default()
} else {
start.parse().map_err(FormatRangeParseError::InvalidStart)?
};
let end = if end.is_empty() {
LineColumn {
line: OneIndexed::MAX,
column: OneIndexed::MAX,
}
} else {
end.parse().map_err(FormatRangeParseError::InvalidEnd)?
};
if start > end {
return Err(FormatRangeParseError::StartGreaterThanEnd(start, end));
}
Ok(FormatRange { start, end })
}
}
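To make the `FromStr` defaults concrete, here is a rough standalone sketch of how a `--range` value decomposes into start and end positions, using plain `usize` tuples in place of `OneIndexed` and glossing over the error handling above (`usize::MAX` stands in for "end of file"):
```rust
/// Sketch only: parse "<start_line>:<start_column>-<end_line>:<end_column>"
/// with the same defaults as above (missing column -> 1, missing start -> 1:1,
/// missing end -> end of file).
fn parse_range(value: &str) -> ((usize, usize), (usize, usize)) {
    let (start, end) = value.split_once('-').unwrap_or((value, ""));
    let parse_pos = |s: &str, default: (usize, usize)| {
        if s.is_empty() {
            return default;
        }
        let (line, column) = s.split_once(':').unwrap_or((s, "1"));
        (line.parse().unwrap_or(default.0), column.parse().unwrap_or(1))
    };
    (parse_pos(start, (1, 1)), parse_pos(end, (usize::MAX, usize::MAX)))
}

fn main() {
    assert_eq!(parse_range("1:2-3:4"), ((1, 2), (3, 4)));
    assert_eq!(parse_range("1-2"), ((1, 1), (2, 1))); // columns default to 1
    assert_eq!(parse_range("2"), ((2, 1), (usize::MAX, usize::MAX))); // line 2 to EOF
    assert_eq!(parse_range("-3"), ((1, 1), (3, 1))); // from the start of the file
}
```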
#[derive(Clone, Debug)]
pub enum FormatRangeParseError {
InvalidStart(LineColumnParseError),
InvalidEnd(LineColumnParseError),
StartGreaterThanEnd(LineColumn, LineColumn),
}
impl std::fmt::Display for FormatRangeParseError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let tip = " tip:".bold().green();
match self {
FormatRangeParseError::StartGreaterThanEnd(start, end) => {
write!(
f,
"the start position '{start_invalid}' is greater than the end position '{end_invalid}'.\n {tip} Try switching start and end: '{end}-{start}'",
start_invalid=start.to_string().bold().yellow(),
end_invalid=end.to_string().bold().yellow(),
start=start.to_string().green().bold(),
end=end.to_string().green().bold()
)
}
FormatRangeParseError::InvalidStart(inner) => inner.write(f, true),
FormatRangeParseError::InvalidEnd(inner) => inner.write(f, false),
}
}
}
impl std::error::Error for FormatRangeParseError {}
#[derive(Copy, Clone, Debug)]
pub struct LineColumn {
pub line: OneIndexed,
pub column: OneIndexed,
}
impl std::fmt::Display for LineColumn {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "{line}:{column}", line = self.line, column = self.column)
}
}
impl Default for LineColumn {
fn default() -> Self {
LineColumn {
line: OneIndexed::MIN,
column: OneIndexed::MIN,
}
}
}
impl PartialOrd for LineColumn {
#[inline]
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for LineColumn {
fn cmp(&self, other: &Self) -> Ordering {
self.line
.cmp(&other.line)
.then(self.column.cmp(&other.column))
}
}
impl PartialEq for LineColumn {
fn eq(&self, other: &Self) -> bool {
self.cmp(other) == Ordering::Equal
}
}
impl Eq for LineColumn {}
impl FromStr for LineColumn {
type Err = LineColumnParseError;
fn from_str(value: &str) -> Result<Self, Self::Err> {
let (line, column) = value.split_once(':').unwrap_or((value, "1"));
let line: usize = line.parse().map_err(LineColumnParseError::LineParseError)?;
let column: usize = column
.parse()
.map_err(LineColumnParseError::ColumnParseError)?;
match (OneIndexed::new(line), OneIndexed::new(column)) {
(Some(line), Some(column)) => Ok(LineColumn { line, column }),
(Some(line), None) => Err(LineColumnParseError::ZeroColumnIndex { line }),
(None, Some(column)) => Err(LineColumnParseError::ZeroLineIndex { column }),
(None, None) => Err(LineColumnParseError::ZeroLineAndColumnIndex),
}
}
}
#[derive(Clone, Debug)]
pub enum LineColumnParseError {
ZeroLineIndex { column: OneIndexed },
ZeroColumnIndex { line: OneIndexed },
ZeroLineAndColumnIndex,
LineParseError(std::num::ParseIntError),
ColumnParseError(std::num::ParseIntError),
}
impl LineColumnParseError {
fn write(&self, f: &mut std::fmt::Formatter, start_range: bool) -> std::fmt::Result {
let tip = "tip:".bold().green();
let range = if start_range { "start" } else { "end" };
match self {
LineColumnParseError::ColumnParseError(inner) => {
write!(f, "the {range}s column is not a valid number ({inner})'\n {tip} The format is 'line:column'.")
}
LineColumnParseError::LineParseError(inner) => {
write!(f, "the {range} line is not a valid number ({inner})\n {tip} The format is 'line:column'.")
}
LineColumnParseError::ZeroColumnIndex { line } => {
write!(
f,
"the {range} column is 0, but it should be 1 or greater.\n {tip} The column numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion=format!("{line}:1").green().bold()
)
}
LineColumnParseError::ZeroLineIndex { column } => {
write!(
f,
"the {range} line is 0, but it should be 1 or greater.\n {tip} The line numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion=format!("1:{column}").green().bold()
)
}
LineColumnParseError::ZeroLineAndColumnIndex => {
write!(
f,
"the {range} line and column are both 0, but they should be 1 or greater.\n {tip} The line and column numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion="1:1".to_string().green().bold()
)
}
}
}
}
/// Configuration overrides provided via dedicated CLI flags:
/// `--line-length`, `--respect-gitignore`, etc.
#[derive(Clone, Default)]
#[allow(clippy::struct_excessive_bools)]
pub struct CliOverrides {
pub dummy_variable_rgx: Option<Regex>,
pub exclude: Option<Vec<FilePattern>>,
pub extend_exclude: Option<Vec<FilePattern>>,
pub extend_fixable: Option<Vec<RuleSelector>>,
pub extend_ignore: Option<Vec<RuleSelector>>,
pub extend_select: Option<Vec<RuleSelector>>,
pub extend_unfixable: Option<Vec<RuleSelector>>,
pub fixable: Option<Vec<RuleSelector>>,
pub ignore: Option<Vec<RuleSelector>>,
pub line_length: Option<LineLength>,
pub per_file_ignores: Option<Vec<PatternPrefixPair>>,
pub extend_per_file_ignores: Option<Vec<PatternPrefixPair>>,
pub preview: Option<PreviewMode>,
pub respect_gitignore: Option<bool>,
pub select: Option<Vec<RuleSelector>>,
pub target_version: Option<PythonVersion>,
pub unfixable: Option<Vec<RuleSelector>>,
struct ExplicitConfigOverrides {
dummy_variable_rgx: Option<Regex>,
exclude: Option<Vec<FilePattern>>,
extend_exclude: Option<Vec<FilePattern>>,
extend_fixable: Option<Vec<RuleSelector>>,
extend_ignore: Option<Vec<RuleSelector>>,
extend_select: Option<Vec<RuleSelector>>,
extend_unfixable: Option<Vec<RuleSelector>>,
fixable: Option<Vec<RuleSelector>>,
ignore: Option<Vec<RuleSelector>>,
line_length: Option<LineLength>,
per_file_ignores: Option<Vec<PatternPrefixPair>>,
extend_per_file_ignores: Option<Vec<PatternPrefixPair>>,
preview: Option<PreviewMode>,
respect_gitignore: Option<bool>,
select: Option<Vec<RuleSelector>>,
target_version: Option<PythonVersion>,
unfixable: Option<Vec<RuleSelector>>,
// TODO(charlie): Captured in pyproject.toml as a default, but not part of `Settings`.
pub cache_dir: Option<PathBuf>,
pub fix: Option<bool>,
pub fix_only: Option<bool>,
pub unsafe_fixes: Option<UnsafeFixes>,
pub force_exclude: Option<bool>,
pub output_format: Option<SerializationFormat>,
pub show_fixes: Option<bool>,
pub extension: Option<Vec<ExtensionPair>>,
cache_dir: Option<PathBuf>,
fix: Option<bool>,
fix_only: Option<bool>,
unsafe_fixes: Option<UnsafeFixes>,
force_exclude: Option<bool>,
output_format: Option<SerializationFormat>,
show_fixes: Option<bool>,
extension: Option<Vec<ExtensionPair>>,
}
impl ConfigurationTransformer for CliOverrides {
impl ConfigurationTransformer for ExplicitConfigOverrides {
fn transform(&self, mut config: Configuration) -> Configuration {
if let Some(cache_dir) = &self.cache_dir {
config.cache_dir = Some(cache_dir.clone());


@@ -1,7 +1,7 @@
use std::fmt::Debug;
use std::fs::{self, File};
use std::hash::Hasher;
use std::io::{self, BufReader, BufWriter, Write};
use std::io::{self, BufReader, Write};
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;
@@ -15,6 +15,7 @@ use rayon::iter::ParallelIterator;
use rayon::iter::{IntoParallelIterator, ParallelBridge};
use rustc_hash::FxHashMap;
use serde::{Deserialize, Serialize};
use tempfile::NamedTempFile;
use ruff_cache::{CacheKey, CacheKeyHasher};
use ruff_diagnostics::{DiagnosticKind, Fix};
@@ -165,15 +166,29 @@ impl Cache {
return Ok(());
}
let file = File::create(&self.path)
.with_context(|| format!("Failed to create cache file '{}'", self.path.display()))?;
let writer = BufWriter::new(file);
bincode::serialize_into(writer, &self.package).with_context(|| {
// Write the cache to a temporary file first and then rename it for an "atomic" write.
// Protects against data loss if the process is killed during the write and races between different ruff
// processes, resulting in a corrupted cache file. https://github.com/astral-sh/ruff/issues/8147#issuecomment-1943345964
let mut temp_file =
NamedTempFile::new_in(self.path.parent().expect("Write path must have a parent"))
.context("Failed to create temporary file")?;
// Serialize to in-memory buffer because hyperfine benchmark showed that it's faster than
// using a `BufWriter` and our cache files are small enough that streaming isn't necessary.
let serialized =
bincode::serialize(&self.package).context("Failed to serialize cache data")?;
temp_file
.write_all(&serialized)
.context("Failed to write serialized cache to temporary file.")?;
temp_file.persist(&self.path).with_context(|| {
format!(
"Failed to serialise cache to file '{}'",
"Failed to rename temporary cache file to {}",
self.path.display()
)
})
})?;
Ok(())
}
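The pattern above (write to a sibling temporary file, then rename it over the target) is the standard way to get an effectively atomic write on most filesystems. A minimal sketch of the same idea, assuming the `tempfile` and `anyhow` crates (the `write_atomically` helper is hypothetical, not Ruff's actual code):
```rust
use std::io::Write;
use std::path::Path;

use anyhow::Context;
use tempfile::NamedTempFile;

/// Write `bytes` to `path` by writing a sibling temporary file and renaming it.
fn write_atomically(path: &Path, bytes: &[u8]) -> anyhow::Result<()> {
    // The temporary file must live on the same filesystem as the target,
    // so create it next to the destination rather than in the system temp dir.
    let parent = path.parent().context("write path must have a parent")?;
    let mut temp = NamedTempFile::new_in(parent).context("failed to create temporary file")?;
    temp.write_all(bytes).context("failed to write temporary file")?;
    // `persist` renames the temporary file over the target, so readers observe
    // either the old contents or the new contents, never a partial write.
    temp.persist(path).context("failed to rename temporary file")?;
    Ok(())
}

fn main() -> anyhow::Result<()> {
    write_atomically(Path::new("./cache.bin"), b"serialized cache contents")?;
    Ok(())
}
```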
/// Applies the pending changes without storing the cache to disk.
@@ -1050,6 +1065,7 @@ mod tests {
&self.settings.formatter,
PySourceType::Python,
FormatMode::Write,
None,
Some(cache),
)
}


@@ -12,17 +12,17 @@ use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::CliOverrides;
use crate::args::ConfigArguments;
/// Add `noqa` directives to a collection of files.
pub(crate) fn add_noqa(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
) -> Result<usize> {
// Collect all the files to check.
let start = Instant::now();
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let duration = start.elapsed();
debug!("Identified files to lint in: {:?}", duration);


@@ -24,7 +24,7 @@ use ruff_workspace::resolver::{
match_exclusion, python_files_in_path, PyprojectConfig, ResolvedFile,
};
use crate::args::CliOverrides;
use crate::args::ConfigArguments;
use crate::cache::{Cache, PackageCacheMap, PackageCaches};
use crate::diagnostics::Diagnostics;
use crate::panic::catch_unwind;
@@ -34,7 +34,7 @@ use crate::panic::catch_unwind;
pub(crate) fn check(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
cache: flags::Cache,
noqa: flags::Noqa,
fix_mode: flags::FixMode,
@@ -42,7 +42,7 @@ pub(crate) fn check(
) -> Result<Diagnostics> {
// Collect all the Python files to check.
let start = Instant::now();
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
debug!("Identified files to lint in: {:?}", start.elapsed());
if paths.is_empty() {
@@ -233,7 +233,7 @@ mod test {
use ruff_workspace::resolver::{PyprojectConfig, PyprojectDiscoveryStrategy};
use ruff_workspace::Settings;
use crate::args::CliOverrides;
use crate::args::ConfigArguments;
use super::check;
@@ -272,7 +272,7 @@ mod test {
// Notebooks are not included by default
&[tempdir.path().to_path_buf(), notebook],
&pyproject_config,
&CliOverrides::default(),
&ConfigArguments::default(),
flags::Cache::Disabled,
flags::Noqa::Disabled,
flags::FixMode::Generate,


@@ -6,7 +6,7 @@ use ruff_linter::packaging;
use ruff_linter::settings::flags;
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, PyprojectConfig, Resolver};
use crate::args::CliOverrides;
use crate::args::ConfigArguments;
use crate::diagnostics::{lint_stdin, Diagnostics};
use crate::stdin::{parrot_stdin, read_from_stdin};
@@ -14,7 +14,7 @@ use crate::stdin::{parrot_stdin, read_from_stdin};
pub(crate) fn check_stdin(
filename: Option<&Path>,
pyproject_config: &PyprojectConfig,
overrides: &CliOverrides,
overrides: &ConfigArguments,
noqa: flags::Noqa,
fix_mode: flags::FixMode,
) -> Result<Diagnostics> {


@@ -23,12 +23,13 @@ use ruff_linter::rules::flake8_quotes::settings::Quote;
use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_python_formatter::{format_module_source, FormatModuleError, QuoteStyle};
use ruff_python_formatter::{format_module_source, format_range, FormatModuleError, QuoteStyle};
use ruff_source_file::LineIndex;
use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_workspace::resolver::{match_exclusion, python_files_in_path, ResolvedFile, Resolver};
use ruff_workspace::FormatterSettings;
use crate::args::{CliOverrides, FormatArguments};
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::cache::{Cache, FileCacheKey, PackageCacheMap, PackageCaches};
use crate::panic::{catch_unwind, PanicError};
use crate::resolve::resolve;
@@ -59,24 +60,30 @@ impl FormatMode {
/// Format a set of files, and return the exit status.
pub(crate) fn format(
cli: FormatArguments,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
log_level: LogLevel,
) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
cli.config.as_deref(),
overrides,
config_arguments,
cli.stdin_filename.as_deref(),
)?;
let mode = FormatMode::from_cli(&cli);
let files = resolve_default_files(cli.files, false);
let (paths, resolver) = python_files_in_path(&files, &pyproject_config, overrides)?;
let (paths, resolver) = python_files_in_path(&files, &pyproject_config, config_arguments)?;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");
return Ok(ExitStatus::Success);
}
if cli.range.is_some() && paths.len() > 1 {
return Err(anyhow::anyhow!(
"The `--range` option is only supported when formatting a single file but the specified paths resolve to {} files.",
paths.len()
));
}
warn_incompatible_formatter_settings(&resolver);
// Discover the package root for each Python file.
@@ -139,7 +146,14 @@ pub(crate) fn format(
Some(
match catch_unwind(|| {
format_path(path, &settings.formatter, source_type, mode, cache)
format_path(
path,
&settings.formatter,
source_type,
mode,
cli.range,
cache,
)
}) {
Ok(inner) => inner.map(|result| FormatPathResult {
path: resolved_file.path().to_path_buf(),
@@ -226,6 +240,7 @@ pub(crate) fn format_path(
settings: &FormatterSettings,
source_type: PySourceType,
mode: FormatMode,
range: Option<FormatRange>,
cache: Option<&Cache>,
) -> Result<FormatResult, FormatCommandError> {
if let Some(cache) = cache {
@@ -250,8 +265,12 @@ pub(crate) fn format_path(
}
};
// Don't write back to the cache if formatting a range.
let cache = cache.filter(|_| range.is_none());
// Format the source.
let format_result = match format_source(&unformatted, source_type, Some(path), settings)? {
let format_result = match format_source(&unformatted, source_type, Some(path), settings, range)?
{
FormattedSource::Formatted(formatted) => match mode {
FormatMode::Write => {
let mut writer = File::create(path).map_err(|err| {
@@ -319,12 +338,31 @@ pub(crate) fn format_source(
source_type: PySourceType,
path: Option<&Path>,
settings: &FormatterSettings,
range: Option<FormatRange>,
) -> Result<FormattedSource, FormatCommandError> {
match &source_kind {
SourceKind::Python(unformatted) => {
let options = settings.to_format_options(source_type, unformatted);
let formatted = format_module_source(unformatted, options).map_err(|err| {
let formatted = if let Some(range) = range {
let line_index = LineIndex::from_source_text(unformatted);
let byte_range = range.to_text_range(unformatted, &line_index);
format_range(unformatted, byte_range, options).map(|formatted_range| {
let mut formatted = unformatted.to_string();
formatted.replace_range(
std::ops::Range::<usize>::from(formatted_range.source_range()),
formatted_range.as_code(),
);
formatted
})
} else {
// Using `Printed::into_code` requires adding `ruff_formatter` as a direct dependency, and I suspect that Rust can optimize the closure away regardless.
#[allow(clippy::redundant_closure_for_method_calls)]
format_module_source(unformatted, options).map(|formatted| formatted.into_code())
};
let formatted = formatted.map_err(|err| {
if let FormatModuleError::ParseError(err) = err {
DisplayParseError::from_source_kind(
err,
@@ -337,7 +375,6 @@ pub(crate) fn format_source(
}
})?;
let formatted = formatted.into_code();
if formatted.len() == unformatted.len() && formatted == *unformatted {
Ok(FormattedSource::Unchanged)
} else {
@@ -349,6 +386,12 @@ pub(crate) fn format_source(
return Ok(FormattedSource::Unchanged);
}
if range.is_some() {
return Err(FormatCommandError::RangeFormatNotebook(
path.map(Path::to_path_buf),
));
}
let options = settings.to_format_options(source_type, notebook.source_code());
let mut output: Option<String> = None;
@@ -589,6 +632,7 @@ pub(crate) enum FormatCommandError {
Format(Option<PathBuf>, FormatModuleError),
Write(Option<PathBuf>, SourceError),
Diff(Option<PathBuf>, io::Error),
RangeFormatNotebook(Option<PathBuf>),
}
impl FormatCommandError {
@@ -606,7 +650,8 @@ impl FormatCommandError {
| Self::Read(path, _)
| Self::Format(path, _)
| Self::Write(path, _)
| Self::Diff(path, _) => path.as_deref(),
| Self::Diff(path, _)
| Self::RangeFormatNotebook(path) => path.as_deref(),
}
}
}
@@ -628,9 +673,10 @@ impl Display for FormatCommandError {
} else {
write!(
f,
"{} {}",
"Encountered error:".bold(),
err.io_error()
"{header} {error}",
header = "Encountered error:".bold(),
error = err
.io_error()
.map_or_else(|| err.to_string(), std::string::ToString::to_string)
)
}
@@ -648,7 +694,7 @@ impl Display for FormatCommandError {
":".bold()
)
} else {
write!(f, "{}{} {err}", "Failed to read".bold(), ":".bold())
write!(f, "{header} {err}", header = "Failed to read:".bold())
}
}
Self::Write(path, err) => {
@@ -661,7 +707,7 @@ impl Display for FormatCommandError {
":".bold()
)
} else {
write!(f, "{}{} {err}", "Failed to write".bold(), ":".bold())
write!(f, "{header} {err}", header = "Failed to write:".bold())
}
}
Self::Format(path, err) => {
@@ -674,7 +720,7 @@ impl Display for FormatCommandError {
":".bold()
)
} else {
write!(f, "{}{} {err}", "Failed to format".bold(), ":".bold())
write!(f, "{header} {err}", header = "Failed to format:".bold())
}
}
Self::Diff(path, err) => {
@@ -689,9 +735,25 @@ impl Display for FormatCommandError {
} else {
write!(
f,
"{}{} {err}",
"Failed to generate diff".bold(),
":".bold()
"{header} {err}",
header = "Failed to generate diff:".bold(),
)
}
}
Self::RangeFormatNotebook(path) => {
if let Some(path) = path {
write!(
f,
"{header}{path}{colon} Range formatting isn't supported for notebooks.",
header = "Failed to format ".bold(),
path = fs::relativize_path(path).bold(),
colon = ":".bold()
)
} else {
write!(
f,
"{header} Range formatting isn't supported for notebooks",
header = "Failed to format:".bold()
)
}
}


@@ -9,7 +9,7 @@ use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, Resolver};
use ruff_workspace::FormatterSettings;
use crate::args::{CliOverrides, FormatArguments};
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::commands::format::{
format_source, warn_incompatible_formatter_settings, FormatCommandError, FormatMode,
FormatResult, FormattedSource,
@@ -19,11 +19,13 @@ use crate::stdin::{parrot_stdin, read_from_stdin};
use crate::ExitStatus;
/// Run the formatter over a single file, read from `stdin`.
pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> Result<ExitStatus> {
pub(crate) fn format_stdin(
cli: &FormatArguments,
config_arguments: &ConfigArguments,
) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
cli.config.as_deref(),
overrides,
config_arguments,
cli.stdin_filename.as_deref(),
)?;
@@ -34,7 +36,7 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
if resolver.force_exclude() {
if let Some(filename) = cli.stdin_filename.as_deref() {
if !python_file_at_path(filename, &mut resolver, overrides)? {
if !python_file_at_path(filename, &mut resolver, config_arguments)? {
if mode.is_write() {
parrot_stdin()?;
}
@@ -69,7 +71,7 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
};
// Format the file.
match format_source_code(path, settings, source_type, mode) {
match format_source_code(path, cli.range, settings, source_type, mode) {
Ok(result) => match mode {
FormatMode::Write => Ok(ExitStatus::Success),
FormatMode::Check | FormatMode::Diff => {
@@ -90,6 +92,7 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
/// Format source code read from `stdin`.
fn format_source_code(
path: Option<&Path>,
range: Option<FormatRange>,
settings: &FormatterSettings,
source_type: PySourceType,
mode: FormatMode,
@@ -107,7 +110,7 @@ fn format_source_code(
};
// Format the source.
let formatted = format_source(&source_kind, source_type, path, settings)?;
let formatted = format_source(&source_kind, source_type, path, settings, range)?;
match &formatted {
FormattedSource::Formatted(formatted) => match mode {

View File

@@ -7,17 +7,17 @@ use itertools::Itertools;
use ruff_linter::warn_user_once;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::CliOverrides;
use crate::args::ConfigArguments;
/// Show the list of files to be checked based on current settings.
pub(crate) fn show_files(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
writer: &mut impl Write,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, _resolver) = python_files_in_path(files, pyproject_config, overrides)?;
let (paths, _resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");

View File

@@ -6,17 +6,17 @@ use itertools::Itertools;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::CliOverrides;
use crate::args::ConfigArguments;
/// Print the user-facing configuration settings.
pub(crate) fn show_settings(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
writer: &mut impl Write,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
// Print the list of files.
let Some(path) = paths
@@ -31,9 +31,9 @@ pub(crate) fn show_settings(
let settings = resolver.resolve(&path);
writeln!(writer, "Resolved settings for: {path:?}")?;
writeln!(writer, "Resolved settings for: \"{}\"", path.display())?;
if let Some(settings_path) = pyproject_config.path.as_ref() {
writeln!(writer, "Settings path: {settings_path:?}")?;
writeln!(writer, "Settings path: \"{}\"", settings_path.display())?;
}
write!(writer, "{settings}")?;

View File

@@ -204,24 +204,23 @@ pub fn run(
}
fn format(args: FormatCommand, log_level: LogLevel) -> Result<ExitStatus> {
let (cli, overrides) = args.partition();
let (cli, config_arguments) = args.partition()?;
if is_stdin(&cli.files, cli.stdin_filename.as_deref()) {
commands::format_stdin::format_stdin(&cli, &overrides)
commands::format_stdin::format_stdin(&cli, &config_arguments)
} else {
commands::format::format(cli, &overrides, log_level)
commands::format::format(cli, &config_arguments, log_level)
}
}
pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let (cli, overrides) = args.partition();
let (cli, config_arguments) = args.partition()?;
// Construct the "default" settings. These are used when no `pyproject.toml`
// files are present, or files are injected from outside of the hierarchy.
let pyproject_config = resolve::resolve(
cli.isolated,
cli.config.as_deref(),
&overrides,
&config_arguments,
cli.stdin_filename.as_deref(),
)?;
@@ -239,11 +238,21 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let files = resolve_default_files(cli.files, is_stdin);
if cli.show_settings {
commands::show_settings::show_settings(&files, &pyproject_config, &overrides, &mut writer)?;
commands::show_settings::show_settings(
&files,
&pyproject_config,
&config_arguments,
&mut writer,
)?;
return Ok(ExitStatus::Success);
}
if cli.show_files {
commands::show_files::show_files(&files, &pyproject_config, &overrides, &mut writer)?;
commands::show_files::show_files(
&files,
&pyproject_config,
&config_arguments,
&mut writer,
)?;
return Ok(ExitStatus::Success);
}
@@ -302,7 +311,8 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
if !fix_mode.is_generate() {
warn_user!("--fix is incompatible with --add-noqa.");
}
let modifications = commands::add_noqa::add_noqa(&files, &pyproject_config, &overrides)?;
let modifications =
commands::add_noqa::add_noqa(&files, &pyproject_config, &config_arguments)?;
if modifications > 0 && log_level >= LogLevel::Default {
let s = if modifications == 1 { "" } else { "s" };
#[allow(clippy::print_stderr)]
@@ -321,7 +331,11 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
printer_flags,
);
let preview = overrides.preview.unwrap_or_default().is_enabled();
// the settings should already be combined with the CLI overrides at this point
// TODO(jane): let's make this `PreviewMode`
// TODO: this should reference the global preview mode once https://github.com/astral-sh/ruff/issues/8232
// is resolved.
let preview = pyproject_config.settings.linter.preview.is_enabled();
if cli.watch {
if output_format != SerializationFormat::default(preview) {
@@ -348,7 +362,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::check::check(
&files,
&pyproject_config,
&overrides,
&config_arguments,
cache.into(),
noqa.into(),
fix_mode,
@@ -370,8 +384,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
if matches!(change_kind, ChangeKind::Configuration) {
pyproject_config = resolve::resolve(
cli.isolated,
cli.config.as_deref(),
&overrides,
&config_arguments,
cli.stdin_filename.as_deref(),
)?;
}
@@ -381,7 +394,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::check::check(
&files,
&pyproject_config,
&overrides,
&config_arguments,
cache.into(),
noqa.into(),
fix_mode,
@@ -398,7 +411,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
commands::check_stdin::check_stdin(
cli.stdin_filename.map(fs::normalize_path).as_deref(),
&pyproject_config,
&overrides,
&config_arguments,
noqa.into(),
fix_mode,
)?
@@ -406,7 +419,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
commands::check::check(
&files,
&pyproject_config,
&overrides,
&config_arguments,
cache.into(),
noqa.into(),
fix_mode,

View File

@@ -11,19 +11,18 @@ use ruff_workspace::resolver::{
Relativity,
};
use crate::args::CliOverrides;
use crate::args::ConfigArguments;
/// Resolve the relevant settings strategy and defaults for the current
/// invocation.
pub fn resolve(
isolated: bool,
config: Option<&Path>,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
stdin_filename: Option<&Path>,
) -> Result<PyprojectConfig> {
// First priority: if we're running in isolated mode, use the default settings.
if isolated {
let config = overrides.transform(Configuration::default());
let config = config_arguments.transform(Configuration::default());
let settings = config.into_settings(&path_dedot::CWD)?;
debug!("Isolated mode, not reading any pyproject.toml");
return Ok(PyprojectConfig::new(
@@ -36,12 +35,13 @@ pub fn resolve(
// Second priority: the user specified a `pyproject.toml` file. Use that
// `pyproject.toml` for _all_ configuration, and resolve paths relative to the
// current working directory. (This matches ESLint's behavior.)
if let Some(pyproject) = config
if let Some(pyproject) = config_arguments
.config_file()
.map(|config| config.display().to_string())
.map(|config| shellexpand::full(&config).map(|config| PathBuf::from(config.as_ref())))
.transpose()?
{
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, config_arguments)?;
debug!(
"Using user-specified configuration file at: {}",
pyproject.display()
@@ -67,7 +67,7 @@ pub fn resolve(
"Using configuration file (via parent) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Parent, overrides)?;
let settings = resolve_root_settings(&pyproject, Relativity::Parent, config_arguments)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
@@ -84,7 +84,7 @@ pub fn resolve(
"Using configuration file (via cwd) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, config_arguments)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
@@ -97,7 +97,7 @@ pub fn resolve(
// "closest" `pyproject.toml` file for every Python file later on, so these act
// as the "default" settings.)
debug!("Using Ruff default settings");
let config = overrides.transform(Configuration::default());
let config = config_arguments.transform(Configuration::default());
let settings = config.into_settings(&path_dedot::CWD)?;
Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,

View File

@@ -90,6 +90,179 @@ fn format_warn_stdin_filename_with_files() {
"###);
}
#[test]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo.toml", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo.toml' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
It looks like you were trying to pass a path to a configuration file.
The path `foo.toml` does not exist
For more information, try '--help'.
"###);
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo = bar", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo = bar' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 7
|
1 | foo = bar
| ^
invalid string
expected `"`, `'`
For more information, try '--help'.
"###);
}
#[test]
fn too_many_config_files() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
let ruff2_dot_toml = tempdir.path().join("ruff2.toml");
fs::File::create(&ruff_dot_toml)?;
fs::File::create(&ruff2_dot_toml)?;
let expected_stderr = format!(
"\
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config={}` or `--config={}`.
For more information, try `--help`.
",
ruff_dot_toml.display(),
ruff2_dot_toml.display(),
);
let cmd = Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg(".")
.output()?;
let stderr = std::str::from_utf8(&cmd.stderr)?;
assert_eq!(stderr, expected_stderr);
Ok(())
}
#[test]
fn config_file_and_isolated() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
fs::File::create(&ruff_dot_toml)?;
let expected_stderr = format!(
"\
ruff failed
Cause: The argument `--config={}` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
",
ruff_dot_toml.display(),
);
let cmd = Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg(".")
.output()?;
let stderr = std::str::from_utf8(&cmd.stderr)?;
assert_eq!(stderr, expected_stderr);
Ok(())
}
#[test]
fn config_override_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "line-length = 100")?;
let fixture = r#"
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_toml)
// This overrides the long line length set in the config file
.args(["--config", "line-length=80"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
def foo():
print(
"looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string"
)
----- stderr -----
"###);
Ok(())
}
#[test]
fn config_doubly_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "line-length = 70")?;
let fixture = r#"
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_toml)
// This overrides the long line length set in the config file...
.args(["--config", "line-length=80"])
// ...but this overrides them both:
.args(["--line-length", "100"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
----- stderr -----
"###);
Ok(())
}
#[test]
fn format_options() -> Result<()> {
let tempdir = TempDir::new()?;
@@ -508,11 +681,9 @@ if __name__ == '__main__':
say_hy("dear Ruff contributor")
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
- 'ignore' -> 'lint.ignore'
"###);
Ok(())
}
@@ -551,11 +722,9 @@ if __name__ == '__main__':
say_hy("dear Ruff contributor")
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
- 'ignore' -> 'lint.ignore'
"###);
Ok(())
}
@@ -1548,3 +1717,322 @@ include = ["*.ipy"]
"###);
Ok(())
}
#[test]
fn range_formatting() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=2:8-2:14"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(
arg1,
arg2,
):
print("Shouldn't format this" )
----- stderr -----
"###);
}
#[test]
fn range_formatting_unicode() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=2:21-3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1="👋🏽" ): print("Format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(arg1="👋🏽" ):
print("Format this")
----- stderr -----
"###);
}
#[test]
fn range_formatting_multiple_files() -> std::io::Result<()> {
let tempdir = TempDir::new()?;
let file1 = tempdir.path().join("file1.py");
fs::write(
&file1,
r#"
def file1(arg1, arg2,):
print("Shouldn't format this" )
"#,
)?;
let file2 = tempdir.path().join("file2.py");
fs::write(
&file2,
r#"
def file2(arg1, arg2,):
print("Shouldn't format this" )
"#,
)?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--range=1:8-1:15"])
.arg(file1)
.arg(file2), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: The `--range` option is only supported when formatting a single file but the specified paths resolve to 2 files.
"###);
Ok(())
}
#[test]
fn range_formatting_out_of_bounds() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=100:40-200:1"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(arg1, arg2,):
print("Shouldn't format this" )
----- stderr -----
"###);
}
#[test]
fn range_start_larger_than_end() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=90-50"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '90-50' for '--range <RANGE>': the start position '90:1' is greater than the end position '50:1'.
tip: Try switching start and end: '50:1-90:1'
For more information, try '--help'.
"###);
}
#[test]
fn range_line_numbers_only() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=2-3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(
arg1,
arg2,
):
print("Shouldn't format this" )
----- stderr -----
"###);
}
#[test]
fn range_start_only() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(arg1, arg2,):
print("Should format this")
----- stderr -----
"###);
}
#[test]
fn range_end_only() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=-3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(
arg1,
arg2,
):
print("Should format this" )
----- stderr -----
"###);
}
#[test]
fn range_missing_line() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=1-:20"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '1-:20' for '--range <RANGE>': the end line is not a valid number (cannot parse integer from empty string)
tip: The format is 'line:column'.
For more information, try '--help'.
"###);
}
#[test]
fn zero_line_number() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=0:2"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '0:2' for '--range <RANGE>': the start line is 0, but it should be 1 or greater.
tip: The line numbers start at 1.
tip: Try 1:2 instead.
For more information, try '--help'.
"###);
}
#[test]
fn column_and_line_zero() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=0:0"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '0:0' for '--range <RANGE>': the start line and column are both 0, but they should be 1 or greater.
tip: The line and column numbers start at 1.
tip: Try 1:1 instead.
For more information, try '--help'.
"###);
}
#[test]
fn range_formatting_notebook() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--no-cache", "--stdin-filename", "main.ipynb", "--range=1-2"])
.arg("-")
.pass_stdin(r#"
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "ad6f36d9-4b7d-4562-8d00-f15a0f1fbb6d",
"metadata": {},
"outputs": [],
"source": [
"x=1"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: Failed to format main.ipynb: Range formatting isn't supported for notebooks.
"###);
}
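The format tests above all drive the new `--range` option for `ruff format`, whose value is a pair of one-based `line[:column]` positions joined by `-`, with either end optional. As a rough illustration only (not Ruff's actual parser), the following self-contained Rust sketch parses that shape; the `Position` struct and the `parse_range`/`parse_position` helpers are made-up names, and the defaults for omitted columns (column 1 for the start, end-of-line/end-of-file for the end) are assumptions inferred from the snapshots above.

```rust
// Minimal sketch of the `--range` grammar exercised above; not Ruff's actual parser.
// Accepted shapes: `line`, `line:column`, `start-end`, `-end`, with 1-based positions.

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
struct Position {
    line: u32,
    column: u32,
}

fn parse_position(text: &str, default: Position) -> Result<Position, String> {
    if text.is_empty() {
        // An omitted start means "beginning of file"; an omitted end means "end of file".
        return Ok(default);
    }
    let (line, column) = match text.split_once(':') {
        Some((line, column)) => (
            line.parse::<u32>().map_err(|err| format!("invalid line: {err}"))?,
            column.parse::<u32>().map_err(|err| format!("invalid column: {err}"))?,
        ),
        // Only a line number: keep the default column for that endpoint.
        None => (
            text.parse::<u32>().map_err(|err| format!("invalid line: {err}"))?,
            default.column,
        ),
    };
    if line == 0 || column == 0 {
        return Err("line and column numbers start at 1".to_string());
    }
    Ok(Position { line, column })
}

fn parse_range(value: &str) -> Result<(Position, Position), String> {
    let (start_text, end_text) = value.split_once('-').unwrap_or((value, ""));
    let start = parse_position(start_text, Position { line: 1, column: 1 })?;
    let end = parse_position(
        end_text,
        Position { line: u32::MAX, column: u32::MAX },
    )?;
    if start > end {
        return Err(format!("start {start:?} is greater than end {end:?}"));
    }
    Ok((start, end))
}

fn main() {
    for value in ["2:8-2:14", "2-3", "3", "-3", "90-50", "0:2", "1-:20"] {
        println!("{value:>10} -> {:?}", parse_range(value));
    }
}
```

The error cases exercised above (a zero line or column, a start position after the end, an empty line number as in `1-:20`) fall out of the same checks.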

View File

@@ -755,6 +755,35 @@ fn full_output_preview() {
"###);
}
#[test]
fn full_output_preview_config() -> Result<()> {
let tempdir = TempDir::new()?;
let pyproject_toml = tempdir.path().join("pyproject.toml");
fs::write(
&pyproject_toml,
r#"
[tool.ruff]
preview = true
"#,
)?;
let mut cmd = RuffCheck::default().config(&pyproject_toml).build();
assert_cmd_snapshot!(cmd.pass_stdin("l = 1"), @r###"
success: false
exit_code: 1
----- stdout -----
-:1:1: E741 Ambiguous variable name: `l`
|
1 | l = 1
| ^ E741
|
Found 1 error.
----- stderr -----
"###);
Ok(())
}
#[test]
fn full_output_format() {
let mut cmd = RuffCheck::default().output_format("full").build();

View File

@@ -2,6 +2,7 @@
#![cfg(not(target_family = "wasm"))]
use regex::escape;
use std::fs;
use std::process::Command;
use std::str;
@@ -13,6 +14,10 @@ use tempfile::TempDir;
const BIN_NAME: &str = "ruff";
const STDIN_BASE_OPTIONS: &[&str] = &["--no-cache", "--output-format", "concise"];
fn tempdir_filter(tempdir: &TempDir) -> String {
format!(r"{}\\?/?", escape(tempdir.path().to_str().unwrap()))
}
#[test]
fn top_level_options() -> Result<()> {
let tempdir = TempDir::new()?;
@@ -27,29 +32,32 @@ inline-quotes = "single"
"#,
)?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
success: false
exit_code: 1
----- stdout -----
test.py:1:5: Q000 [*] Double quotes found but single quotes preferred
test.py:1:5: B005 Using `.strip()` with multi-character strings is misleading
test.py:1:19: Q000 [*] Double quotes found but single quotes preferred
Found 3 errors.
[*] 2 fixable with the `--fix` option.
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
success: false
exit_code: 1
----- stdout -----
test.py:1:5: Q000 [*] Double quotes found but single quotes preferred
test.py:1:5: B005 Using `.strip()` with multi-character strings is misleading
test.py:1:19: Q000 [*] Double quotes found but single quotes preferred
Found 3 errors.
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
- 'flake8-quotes' -> 'lint.flake8-quotes'
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
- 'flake8-quotes' -> 'lint.flake8-quotes'
"###);
});
"###);
Ok(())
}
@@ -68,6 +76,9 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -85,6 +96,8 @@ inline-quotes = "single"
----- stderr -----
"###);
});
Ok(())
}
@@ -103,6 +116,9 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -119,11 +135,11 @@ inline-quotes = "single"
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
});
Ok(())
}
@@ -146,6 +162,9 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -162,11 +181,11 @@ inline-quotes = "single"
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'flake8-quotes' -> 'lint.flake8-quotes'
"###);
});
Ok(())
}
@@ -222,6 +241,9 @@ OTHER = "OTHER"
fs::write(out_dir.join("a.py"), r#"a = "a""#)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -241,11 +263,11 @@ OTHER = "OTHER"
[*] 3 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
});
Ok(())
}
@@ -266,6 +288,9 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -288,11 +313,11 @@ if __name__ == "__main__":
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
});
Ok(())
}
@@ -311,6 +336,9 @@ max-line-length = 100
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -330,12 +358,12 @@ _ = "---------------------------------------------------------------------------
Found 1 error.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'select' -> 'lint.select'
- 'pycodestyle' -> 'lint.pycodestyle'
"###);
});
Ok(())
}
@@ -353,6 +381,9 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -377,11 +408,11 @@ if __name__ == "__main__":
[*] 1 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
});
Ok(())
}
@@ -399,6 +430,9 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -423,11 +457,11 @@ if __name__ == "__main__":
[*] 1 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
});
Ok(())
}
@@ -456,6 +490,9 @@ ignore = ["D203", "D212"]
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(sub_dir)
.arg("check")
@@ -468,9 +505,346 @@ ignore = ["D203", "D212"]
----- stderr -----
warning: No Python files found under the given path(s)
"###);
});
Ok(())
}
#[test]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo.toml", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo.toml' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
It looks like you were trying to pass a path to a configuration file.
The path `foo.toml` does not exist
For more information, try '--help'.
"###);
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo = bar", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo = bar' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 7
|
1 | foo = bar
| ^
invalid string
expected `"`, `'`
For more information, try '--help'.
"###);
}
#[test]
fn too_many_config_files() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
let ruff2_dot_toml = tempdir.path().join("ruff2.toml");
fs::File::create(&ruff_dot_toml)?;
fs::File::create(&ruff2_dot_toml)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config=[TMP]/ruff.toml` or `--config=[TMP]/ruff2.toml`.
For more information, try `--help`.
"###);
});
Ok(())
}
#[test]
fn config_file_and_isolated() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
fs::File::create(&ruff_dot_toml)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: The argument `--config=[TMP]/ruff.toml` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
"###);
});
Ok(())
}
#[test]
fn config_override_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
line-length = 100
[lint]
select = ["I"]
[lint.isort]
combine-as-imports = true
"#,
)?;
let fixture = r#"
from foo import (
aaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbb as bbbbbbbbbbbbbbbb,
cccccccccccccccc,
ddddddddddd as ddddddddddddd,
eeeeeeeeeeeeeee,
ffffffffffff as ffffffffffffff,
ggggggggggggg,
hhhhhhh as hhhhhhhhhhh,
iiiiiiiiiiiiii,
jjjjjjjjjjjjj as jjjjjj,
)
x = "longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss"
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "line-length=90"])
.args(["--config", "lint.extend-select=['E501', 'F841']"])
.args(["--config", "lint.isort.combine-as-imports = false"])
.arg("-")
.pass_stdin(fixture), @r###"
success: false
exit_code: 1
----- stdout -----
-:2:1: I001 [*] Import block is un-sorted or un-formatted
-:15:91: E501 Line too long (97 > 90)
Found 2 errors.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
Ok(())
}
#[test]
fn valid_toml_but_nonexistent_option_provided_via_config_argument() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args([".", "--config", "extend-select=['F481']"]), // No such code as F481!
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F481']' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
Could not parse the supplied argument as a `ruff.toml` configuration option:
Unknown rule selector: `F481`
For more information, try '--help'.
"###);
}
#[test]
fn each_toml_option_requires_a_new_flag_1() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// commas can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'], line-length=90"]),
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F841'], line-length=90' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 23
|
1 | extend-select=['F841'], line-length=90
| ^
expected newline, `#`
For more information, try '--help'.
"###);
}
#[test]
fn each_toml_option_requires_a_new_flag_2() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// spaces *also* can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'] line-length=90"]),
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F841'] line-length=90' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 24
|
1 | extend-select=['F841'] line-length=90
| ^
expected newline, `#`
For more information, try '--help'.
"###);
}
#[test]
fn config_doubly_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
line-length = 100
[lint]
select=["E501"]
"#,
)?;
let fixture = "x = 'longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss'";
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// The --line-length flag takes priority over both the config file
// and the `--config="line-length=110"` flag,
// despite them both being specified after this flag on the command line:
.args(["--line-length", "90"])
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "line-length=110"])
.arg("-")
.pass_stdin(fixture), @r###"
success: false
exit_code: 1
----- stdout -----
-:1:91: E501 Line too long (97 > 90)
Found 1 error.
----- stderr -----
"###);
Ok(())
}
#[test]
fn complex_config_setting_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "lint.select = ['N801']")?;
let fixture = "class violates_n801: pass";
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "lint.per-file-ignores = {'generated.py' = ['N801']}"])
.args(["--stdin-filename", "generated.py"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
"###);
Ok(())
}
#[test]
fn deprecated_config_option_overridden_via_cli() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "select=['N801']", "-"])
.pass_stdin("class lowercase: ..."),
@r###"
success: false
exit_code: 1
----- stdout -----
-:1:7: N801 Class name `lowercase` should use CapWords convention
Found 1 error.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your `--config` CLI arguments:
- 'select' -> 'lint.select'
"###);
}
#[test]
fn extension() -> Result<()> {
let tempdir = TempDir::new()?;
@@ -524,6 +898,9 @@ include = ["*.ipy"]
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -540,5 +917,7 @@ include = ["*.ipy"]
----- stderr -----
"###);
});
Ok(())
}

View File

@@ -39,10 +39,8 @@ fn check_project_include_defaults() {
[BASEPATH]/include-test/subdirectory/c.py
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `nested-project/pyproject.toml`:
- 'select' -> 'lint.select'
"###);
});
}

View File

@@ -4,25 +4,29 @@ use std::process::Command;
const BIN_NAME: &str = "ruff";
#[cfg(not(target_os = "windows"))]
const TEST_FILTERS: &[(&str, &str)] = &[
("\"[^\\*\"]*/pyproject.toml", "\"[BASEPATH]/pyproject.toml"),
("\".*/crates", "\"[BASEPATH]/crates"),
("\".*/\\.ruff_cache", "\"[BASEPATH]/.ruff_cache"),
("\".*/ruff\"", "\"[BASEPATH]\""),
];
#[cfg(target_os = "windows")]
const TEST_FILTERS: &[(&str, &str)] = &[
(r#""[^\*"]*\\pyproject.toml"#, "\"[BASEPATH]/pyproject.toml"),
(r#"".*\\crates"#, "\"[BASEPATH]/crates"),
(r#"".*\\\.ruff_cache"#, "\"[BASEPATH]/.ruff_cache"),
(r#"".*\\ruff""#, "\"[BASEPATH]\""),
(r#"\\+(\w\w|\s|")"#, "/$1"),
];
#[test]
fn display_default_settings() {
insta::with_settings!({ filters => TEST_FILTERS.to_vec() }, {
// Navigate from the crate directory to the workspace root.
let base_path = Path::new(env!("CARGO_MANIFEST_DIR"))
.parent()
.unwrap()
.parent()
.unwrap();
let base_path = base_path.to_string_lossy();
// Escape the backslashes for the regex.
let base_path = regex::escape(&base_path);
#[cfg(not(target_os = "windows"))]
let test_filters = &[(base_path.as_ref(), "[BASEPATH]")];
#[cfg(target_os = "windows")]
let test_filters = &[
(base_path.as_ref(), "[BASEPATH]"),
(r#"\\+(\w\w|\s|\.|")"#, "/$1"),
];
insta::with_settings!({ filters => test_filters.to_vec() }, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-settings", "unformatted.py"]).current_dir(Path::new("./resources/test/fixtures")));
});

View File

@@ -25,6 +25,15 @@ import cycles. They also increase the cognitive load of reading the code.
If an import statement is used to check for the availability or existence
of a module, consider using `importlib.util.find_spec` instead.
If an import statement is used to re-export a symbol as part of a module's
public interface, consider using a "redundant" import alias, which
instructs Ruff (and other tools) to respect the re-export, and avoid
marking it as unused, as in:
```python
from module import member as member
```
## Example
```python
import numpy as np # unused import
@@ -51,11 +60,12 @@ else:
```
## Options
- `lint.pyflakes.extend-generics`
- `lint.ignore-init-module-imports`
## References
- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
- [Typing documentation: interface conventions](https://typing.readthedocs.io/en/latest/source/libraries.html#library-interface-public-and-private-symbols)
----- stderr -----

View File

@@ -205,7 +205,9 @@ linter.external = []
linter.ignore_init_module_imports = false
linter.logger_objects = []
linter.namespace_packages = []
linter.src = ["[BASEPATH]"]
linter.src = [
"[BASEPATH]",
]
linter.tab_size = 4
linter.line_length = 88
linter.task_tags = [

View File

@@ -13,6 +13,7 @@ license = { workspace = true }
[lib]
bench = false
doctest = false
[[bench]]
name = "linter"

View File

@@ -27,7 +27,7 @@ use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::EnvFilter;
use ruff::args::{CliOverrides, FormatArguments, FormatCommand, LogLevelArgs};
use ruff::args::{ConfigArguments, FormatArguments, FormatCommand, LogLevelArgs};
use ruff::resolve::resolve;
use ruff_formatter::{FormatError, LineWidth, PrintError};
use ruff_linter::logging::LogLevel;
@@ -38,24 +38,23 @@ use ruff_python_formatter::{
use ruff_python_parser::ParseError;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile, Resolver};
fn parse_cli(dirs: &[PathBuf]) -> anyhow::Result<(FormatArguments, CliOverrides)> {
fn parse_cli(dirs: &[PathBuf]) -> anyhow::Result<(FormatArguments, ConfigArguments)> {
let args_matches = FormatCommand::command()
.no_binary_name(true)
.get_matches_from(dirs);
let arguments: FormatCommand = FormatCommand::from_arg_matches(&args_matches)?;
let (cli, overrides) = arguments.partition();
Ok((cli, overrides))
let (cli, config_arguments) = arguments.partition()?;
Ok((cli, config_arguments))
}
/// Find the [`PyprojectConfig`] to use for formatting.
fn find_pyproject_config(
cli: &FormatArguments,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
) -> anyhow::Result<PyprojectConfig> {
let mut pyproject_config = resolve(
cli.isolated,
cli.config.as_deref(),
overrides,
config_arguments,
cli.stdin_filename.as_deref(),
)?;
// We don't want to format pyproject.toml
@@ -72,9 +71,9 @@ fn find_pyproject_config(
fn ruff_check_paths<'a>(
pyproject_config: &'a PyprojectConfig,
cli: &FormatArguments,
overrides: &CliOverrides,
config_arguments: &ConfigArguments,
) -> anyhow::Result<(Vec<Result<ResolvedFile, ignore::Error>>, Resolver<'a>)> {
let (paths, resolver) = python_files_in_path(&cli.files, pyproject_config, overrides)?;
let (paths, resolver) = python_files_in_path(&cli.files, pyproject_config, config_arguments)?;
Ok((paths, resolver))
}

View File

@@ -11,6 +11,7 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_text_size = { path = "../ruff_text_size" }

View File

@@ -308,11 +308,8 @@ impl std::fmt::Debug for Token {
/// assert_eq!(printed.as_code(), r#""Hello 'Ruff'""#);
/// assert_eq!(printed.sourcemap(), [
/// SourceMarker { source: TextSize::new(0), dest: TextSize::new(0) },
/// SourceMarker { source: TextSize::new(0), dest: TextSize::new(7) },
/// SourceMarker { source: TextSize::new(8), dest: TextSize::new(7) },
/// SourceMarker { source: TextSize::new(8), dest: TextSize::new(13) },
/// SourceMarker { source: TextSize::new(14), dest: TextSize::new(13) },
/// SourceMarker { source: TextSize::new(14), dest: TextSize::new(14) },
/// SourceMarker { source: TextSize::new(20), dest: TextSize::new(14) },
/// ]);
///
@@ -340,18 +337,18 @@ impl<Context> Format<Context> for SourcePosition {
}
}
/// Creates a text from a dynamic string with its optional start-position in the source document.
/// Creates a text from a dynamic string.
///
/// This is done by allocating a new string internally.
pub fn text(text: &str, position: Option<TextSize>) -> Text {
pub fn text(text: &str) -> Text {
debug_assert_no_newlines(text);
Text { text, position }
Text { text }
}
#[derive(Eq, PartialEq)]
pub struct Text<'a> {
text: &'a str,
position: Option<TextSize>,
}
impl<Context> Format<Context> for Text<'_>
@@ -359,10 +356,6 @@ where
Context: FormatContext,
{
fn fmt(&self, f: &mut Formatter<Context>) -> FormatResult<()> {
if let Some(position) = self.position {
source_position(position).fmt(f)?;
}
f.write_element(FormatElement::Text {
text: self.text.to_string().into_boxed_str(),
text_width: TextWidth::from_text(self.text, f.options().indent_width()),
@@ -2292,7 +2285,7 @@ impl<Context, T> std::fmt::Debug for FormatWith<Context, T> {
/// let mut join = f.join_with(&separator);
///
/// for item in &self.items {
/// join.entry(&format_with(|f| write!(f, [text(item, None)])));
/// join.entry(&format_with(|f| write!(f, [text(item)])));
/// }
/// join.finish()
/// })),
@@ -2377,7 +2370,7 @@ where
/// let mut count = 0;
///
/// let value = format_once(|f| {
/// write!(f, [text(&std::format!("Formatted {count}."), None)])
/// write!(f, [text(&std::format!("Formatted {count}."))])
/// });
///
/// format!(SimpleFormatContext::default(), [value]).expect("Formatting once works fine");

View File

@@ -346,10 +346,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
}
FormatElement::SourcePosition(position) => {
write!(
f,
[text(&std::format!("source_position({position:?})"), None)]
)?;
write!(f, [text(&std::format!("source_position({position:?})"))])?;
}
FormatElement::LineSuffixBoundary => {
@@ -360,7 +357,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
write!(f, [token("best_fitting(")])?;
if *mode != BestFittingMode::FirstLine {
write!(f, [text(&std::format!("mode: {mode:?}, "), None)])?;
write!(f, [text(&std::format!("mode: {mode:?}, "))])?;
}
write!(f, [token("[")])?;
@@ -392,17 +389,14 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
write!(
f,
[
text(&std::format!("<interned {index}>"), None),
text(&std::format!("<interned {index}>")),
space(),
&&**interned,
]
)?;
}
Some(reference) => {
write!(
f,
[text(&std::format!("<ref interned *{reference}>"), None)]
)?;
write!(f, [text(&std::format!("<ref interned *{reference}>"))])?;
}
}
}
@@ -421,7 +415,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("<END_TAG_WITHOUT_START<"),
text(&std::format!("{:?}", tag.kind()), None),
text(&std::format!("{:?}", tag.kind())),
token(">>"),
]
)?;
@@ -436,9 +430,9 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
token(")"),
soft_line_break_or_space(),
token("ERROR<START_END_TAG_MISMATCH<start: "),
text(&std::format!("{start_kind:?}"), None),
text(&std::format!("{start_kind:?}")),
token(", end: "),
text(&std::format!("{:?}", tag.kind()), None),
text(&std::format!("{:?}", tag.kind())),
token(">>")
]
)?;
@@ -470,7 +464,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("align("),
text(&count.to_string(), None),
text(&count.to_string()),
token(","),
space(),
]
@@ -482,7 +476,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("line_suffix("),
text(&std::format!("{reserved_width:?}"), None),
text(&std::format!("{reserved_width:?}")),
token(","),
space(),
]
@@ -499,11 +493,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
if let Some(group_id) = group.id() {
write!(
f,
[
text(&std::format!("\"{group_id:?}\""), None),
token(","),
space(),
]
[text(&std::format!("\"{group_id:?}\"")), token(","), space(),]
)?;
}
@@ -524,11 +514,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
if let Some(group_id) = id {
write!(
f,
[
text(&std::format!("\"{group_id:?}\""), None),
token(","),
space(),
]
[text(&std::format!("\"{group_id:?}\"")), token(","), space(),]
)?;
}
}
@@ -561,7 +547,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("indent_if_group_breaks("),
text(&std::format!("\"{id:?}\""), None),
text(&std::format!("\"{id:?}\"")),
token(","),
space(),
]
@@ -581,11 +567,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
if let Some(group_id) = condition.group_id {
write!(
f,
[
text(&std::format!("\"{group_id:?}\""), None),
token(","),
space(),
]
[text(&std::format!("\"{group_id:?}\"")), token(","), space()]
)?;
}
}
@@ -595,7 +577,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("label("),
text(&std::format!("\"{label_id:?}\""), None),
text(&std::format!("\"{label_id:?}\"")),
token(","),
space(),
]
@@ -664,7 +646,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
ContentArrayEnd,
token(")"),
soft_line_break_or_space(),
text(&std::format!("<START_WITHOUT_END<{top:?}>>"), None),
text(&std::format!("<START_WITHOUT_END<{top:?}>>")),
]
)?;
}
@@ -807,7 +789,7 @@ impl Format<IrFormatContext<'_>> for Condition {
f,
[
token("if_group_fits_on_line("),
text(&std::format!("\"{id:?}\""), None),
text(&std::format!("\"{id:?}\"")),
token(")")
]
),
@@ -816,7 +798,7 @@ impl Format<IrFormatContext<'_>> for Condition {
f,
[
token("if_group_breaks("),
text(&std::format!("\"{id:?}\""), None),
text(&std::format!("\"{id:?}\"")),
token(")")
]
),

View File

@@ -1,8 +1,9 @@
use crate::format_element::PrintMode;
use crate::{GroupId, TextSize};
use std::cell::Cell;
use std::num::NonZeroU8;
use crate::format_element::PrintMode;
use crate::{GroupId, TextSize};
/// A Tag marking the start and end of some content to which some special formatting should be applied.
///
/// Tags always come in pairs of a start and an end tag and the styling defined by this tag
@@ -99,6 +100,10 @@ pub enum Tag {
}
impl Tag {
pub const fn align(count: NonZeroU8) -> Tag {
Tag::StartAlign(Align(count))
}
/// Returns `true` if `self` is any start tag.
pub const fn is_start(&self) -> bool {
matches!(

View File

@@ -32,7 +32,7 @@ pub trait MemoizeFormat<Context> {
/// let value = self.value.get();
/// self.value.set(value + 1);
///
/// write!(f, [text(&std::format!("Formatted {value} times."), None)])
/// write!(f, [text(&std::format!("Formatted {value} times."))])
/// }
/// }
///
@@ -110,7 +110,7 @@ where
/// write!(f, [
/// token("Count:"),
/// space(),
/// text(&std::format!("{current}"), None),
/// text(&std::format!("{current}")),
/// hard_line_break()
/// ])?;
///

View File

@@ -41,7 +41,7 @@ use std::marker::PhantomData;
use std::num::{NonZeroU16, NonZeroU8, TryFromIntError};
use crate::format_element::document::Document;
use crate::printer::{Printer, PrinterOptions, SourceMapGeneration};
use crate::printer::{Printer, PrinterOptions};
pub use arguments::{Argument, Arguments};
pub use buffer::{
Buffer, BufferExtensions, BufferSnapshot, Inspect, RemoveSoftLinesBuffer, VecBuffer,
@@ -269,7 +269,6 @@ impl FormatOptions for SimpleFormatOptions {
line_width: self.line_width,
indent_style: self.indent_style,
indent_width: self.indent_width,
source_map_generation: SourceMapGeneration::Enabled,
..PrinterOptions::default()
}
}
@@ -433,28 +432,40 @@ impl Printed {
std::mem::take(&mut self.verbatim_ranges)
}
/// Slices the formatted code to the sub-slices that covers the passed `source_range`.
/// Slices the formatted code to the sub-slices that covers the passed `source_range` in `source`.
///
/// The implementation uses the source map generated during formatting to find the closest range
/// in the formatted document that covers `source_range` or more. The returned slice
/// matches the `source_range` exactly (except indent, see below) if the formatter emits [`FormatElement::SourcePosition`] for
/// the range's offsets.
///
/// ## Indentation
/// The indentation before `source_range.start` is replaced with the indentation returned by the formatter
/// to fix up incorrectly indented code.
///
/// Returns the entire document if the source map is empty.
///
/// # Panics
/// If `source_range` points to offsets that are not in the bounds of `source`.
#[must_use]
pub fn slice_range(self, source_range: TextRange) -> PrintedRange {
pub fn slice_range(self, source_range: TextRange, source: &str) -> PrintedRange {
let mut start_marker: Option<SourceMarker> = None;
let mut end_marker: Option<SourceMarker> = None;
// Note: The printer can generate multiple source map entries for the same source position.
// For example if you have:
// * token("a + b")
// * `source_position(276)`
// * `token("def")`
// * `token("foo")`
// * `source_position(284)`
// The printer uses the source position 276 for both the tokens `def` and `foo` because that's the only position it knows of.
// * `token(")")`
// * `source_position(276)`
// * `hard_line_break`
// The printer uses the source position 276 for both the tokens `)` and the `\n` because
// there were multiple `source_position` entries in the IR with the same offset.
// This can happen if multiple nodes start or end at the same position. A common example
// is an expression and its enclosing expression statement, which always end at the same offset.
//
// Warning: Source markers are often emitted sorted by their source position but it's not guaranteed.
// Warning: Source markers are often emitted sorted by their source position but it's not guaranteed
// and depends on the emitted `IR`.
// They are only guaranteed to be sorted in increasing order by their destination position.
for marker in self.sourcemap {
// Take the closest start marker, but skip over start_markers that have the same start.
@@ -471,17 +482,44 @@ impl Printed {
}
}
let start = start_marker.map(|marker| marker.dest).unwrap_or_default();
let end = end_marker.map_or_else(|| self.code.text_len(), |marker| marker.dest);
let code_range = TextRange::new(start, end);
let (source_start, formatted_start) = start_marker
.map(|marker| (marker.source, marker.dest))
.unwrap_or_default();
let (source_end, formatted_end) = end_marker
.map_or((source.text_len(), self.code.text_len()), |marker| {
(marker.source, marker.dest)
});
let source_range = TextRange::new(source_start, source_end);
let formatted_range = TextRange::new(formatted_start, formatted_end);
// Extend both ranges to include the indentation
let source_range = extend_range_to_include_indent(source_range, source);
let formatted_range = extend_range_to_include_indent(formatted_range, &self.code);
PrintedRange {
code: self.code[code_range].to_string(),
code: self.code[formatted_range].to_string(),
source_range,
}
}
}
/// Extends `range` backwards (by reducing `range.start`) to include any directly preceding whitespace (`\t` or ` `).
///
/// # Panics
/// If `range.start` is out of `source`'s bounds.
fn extend_range_to_include_indent(range: TextRange, source: &str) -> TextRange {
let whitespace_len: TextSize = source[..usize::from(range.start())]
.chars()
.rev()
.take_while(|c| matches!(c, ' ' | '\t'))
.map(TextLen::text_len)
.sum();
TextRange::new(range.start() - whitespace_len, range.end())
}
#[derive(Debug, Clone, Eq, PartialEq)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
@@ -537,7 +575,7 @@ pub type FormatResult<F> = Result<F, FormatError>;
/// impl Format<SimpleFormatContext> for Paragraph {
/// fn fmt(&self, f: &mut Formatter<SimpleFormatContext>) -> FormatResult<()> {
/// write!(f, [
/// text(&self.0, None),
/// text(&self.0),
/// hard_line_break(),
/// ])
/// }
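For a concrete picture of the indentation handling described in `slice_range`'s new doc comment: the `extend_range_to_include_indent` helper added in this diff walks backwards from the start of a range and pulls any directly preceding spaces or tabs into it, and it is applied to both the source range and the formatted range. Below is a minimal standalone sketch of that step using plain byte offsets instead of the `ruff_text_size` types (an intentional simplification; offsets are assumed to lie on UTF-8 boundaries).

```rust
// Standalone sketch of the indent-extension step; mirrors
// `extend_range_to_include_indent` above, but with plain byte offsets.
fn extend_range_to_include_indent(start: usize, end: usize, source: &str) -> (usize, usize) {
    // Sum the byte length of the spaces/tabs immediately before `start`.
    let whitespace_len: usize = source[..start]
        .chars()
        .rev()
        .take_while(|c| matches!(c, ' ' | '\t'))
        .map(char::len_utf8)
        .sum();
    (start - whitespace_len, end)
}

fn main() {
    let source = "def foo():\n    print('x')\n";
    // A range covering `print('x')` but not its leading indentation.
    let start = source.find("print").unwrap();
    let end = source.len();
    let (extended_start, _) = extend_range_to_include_indent(start, end, source);
    // The four spaces of indentation are now part of the range.
    assert_eq!(extended_start, start - 4);
    println!("range now starts at byte {extended_start}");
}
```

Because the same backward scan is applied on both sides, the returned `PrintedRange` can begin at the formatter's own indentation even when the requested range started just after it.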

View File

@@ -21,8 +21,7 @@ impl<'a> LineSuffixes<'a> {
/// Takes all the pending line suffixes.
pub(super) fn take_pending<'l>(
&'l mut self,
) -> impl Iterator<Item = LineSuffixEntry<'a>> + DoubleEndedIterator + 'l + ExactSizeIterator
{
) -> impl DoubleEndedIterator<Item = LineSuffixEntry<'a>> + 'l + ExactSizeIterator {
self.suffixes.drain(..)
}

View File

@@ -4,7 +4,7 @@ use drop_bomb::DebugDropBomb;
use unicode_width::UnicodeWidthChar;
pub use printer_options::*;
use ruff_text_size::{Ranged, TextLen, TextSize};
use ruff_text_size::{TextLen, TextSize};
use crate::format_element::document::Document;
use crate::format_element::tag::{Condition, GroupMode};
@@ -76,6 +76,9 @@ impl<'a> Printer<'a> {
}
}
// Push any pending marker
self.push_marker();
Ok(Printed::new(
self.state.buffer,
None,
@@ -97,42 +100,38 @@ impl<'a> Printer<'a> {
let args = stack.top();
match element {
FormatElement::Space => self.print_text(Text::Token(" "), None),
FormatElement::Token { text } => self.print_text(Text::Token(text), None),
FormatElement::Text { text, text_width } => self.print_text(
Text::Text {
text,
text_width: *text_width,
},
None,
),
FormatElement::Space => self.print_text(Text::Token(" ")),
FormatElement::Token { text } => self.print_text(Text::Token(text)),
FormatElement::Text { text, text_width } => self.print_text(Text::Text {
text,
text_width: *text_width,
}),
FormatElement::SourceCodeSlice { slice, text_width } => {
let text = slice.text(self.source_code);
self.print_text(
Text::Text {
text,
text_width: *text_width,
},
Some(slice.range()),
);
self.print_text(Text::Text {
text,
text_width: *text_width,
});
}
FormatElement::Line(line_mode) => {
if args.mode().is_flat()
&& matches!(line_mode, LineMode::Soft | LineMode::SoftOrSpace)
{
if line_mode == &LineMode::SoftOrSpace {
self.print_text(Text::Token(" "), None);
self.print_text(Text::Token(" "));
}
} else if self.state.line_suffixes.has_pending() {
self.flush_line_suffixes(queue, stack, Some(element));
} else {
// Only print a newline if the current line isn't already empty
if self.state.line_width > 0 {
self.push_marker();
self.print_char('\n');
}
// Print a second line break if this is an empty line
if line_mode == &LineMode::Empty {
self.push_marker();
self.print_char('\n');
}
@@ -145,14 +144,11 @@ impl<'a> Printer<'a> {
}
FormatElement::SourcePosition(position) => {
self.state.source_position = *position;
// The printer defers printing indents until the next text
// is printed. Pushing the marker now would mean that the
// mapped range includes the indent range, which we don't want.
// Only add a marker if we're not in an indented context, e.g. at the end of the file.
if self.state.pending_indent.is_empty() {
self.push_marker();
}
// Queue the source map position and emit it when printing the next character
self.state.pending_source_position = Some(*position);
}
FormatElement::LineSuffixBoundary => {
@@ -444,7 +440,7 @@ impl<'a> Printer<'a> {
Ok(print_mode)
}
fn print_text(&mut self, text: Text, source_range: Option<TextRange>) {
fn print_text(&mut self, text: Text) {
if !self.state.pending_indent.is_empty() {
let (indent_char, repeat_count) = match self.options.indent_style() {
IndentStyle::Tab => ('\t', 1),
@@ -467,19 +463,6 @@ impl<'a> Printer<'a> {
}
}
// Insert source map markers before and after the token
//
// If the token has source position information the start marker
// will use the start position of the original token, and the end
// marker will use that position + the text length of the token
//
// If the token has no source position (was created by the formatter)
// both the start and end marker will use the last known position
// in the input source (from state.source_position)
if let Some(range) = source_range {
self.state.source_position = range.start();
}
self.push_marker();
match text {
@@ -502,21 +485,15 @@ impl<'a> Printer<'a> {
}
}
}
if let Some(range) = source_range {
self.state.source_position = range.end();
}
self.push_marker();
}
fn push_marker(&mut self) {
if self.options.source_map_generation.is_disabled() {
let Some(source_position) = self.state.pending_source_position.take() else {
return;
}
};
let marker = SourceMarker {
source: self.state.source_position,
source: source_position,
dest: self.state.buffer.text_len(),
};
@@ -897,7 +874,7 @@ enum FillPairLayout {
struct PrinterState<'a> {
buffer: String,
source_markers: Vec<SourceMarker>,
source_position: TextSize,
pending_source_position: Option<TextSize>,
pending_indent: Indention,
measured_group_fits: bool,
line_width: u32,
@@ -1752,7 +1729,7 @@ a",
let result = format_with_options(
&format_args![
token("function main() {"),
block_indent(&text("let x = `This is a multiline\nstring`;", None)),
block_indent(&text("let x = `This is a multiline\nstring`;")),
token("}"),
hard_line_break()
],
@@ -1769,7 +1746,7 @@ a",
fn it_breaks_a_group_if_a_string_contains_a_newline() {
let result = format(&FormatArrayElements {
items: vec![
&text("`This is a string spanning\ntwo lines`", None),
&text("`This is a string spanning\ntwo lines`"),
&token("\"b\""),
],
});
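With this change, `FormatElement::SourcePosition` only queues an offset, and `push_marker` emits a marker together with the next printed text, after any pending indentation has been flushed. A simplified Python sketch of that flow (illustrative names, not the real printer API):

```python
class PrinterSketch:
    """Illustrative sketch of deferred source positions (not the real printer API)."""

    def __init__(self) -> None:
        self.buffer = ""
        self.markers: list[tuple[int, int]] = []  # (source offset, destination offset)
        self.pending_indent = ""
        self.pending_source_position: int | None = None

    def source_position(self, offset: int) -> None:
        # Queue the offset; the marker is only emitted with the next printed text.
        self.pending_source_position = offset

    def push_marker(self) -> None:
        if self.pending_source_position is None:
            return
        self.markers.append((self.pending_source_position, len(self.buffer)))
        self.pending_source_position = None

    def print_text(self, text: str) -> None:
        # Flush indentation first so the marker's destination never points into
        # the indent characters.
        self.buffer += self.pending_indent
        self.pending_indent = ""
        self.push_marker()
        self.buffer += text
```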

View File

@@ -14,10 +14,6 @@ pub struct PrinterOptions {
/// The type of line ending to apply to the printed input
pub line_ending: LineEnding,
/// Whether the printer should build a source map that allows mapping positions in the source document
/// to positions in the formatted document.
pub source_map_generation: SourceMapGeneration,
}
impl<'a, O> From<&'a O> for PrinterOptions

View File

@@ -11,6 +11,7 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_macros = { path = "../ruff_macros" }

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.1.15"
version = "0.2.1"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -0,0 +1,22 @@
from django.utils.safestring import mark_safe
def some_func():
return mark_safe('<script>alert("evil!")</script>')
@mark_safe
def some_func():
return '<script>alert("evil!")</script>'
from django.utils.html import mark_safe
def some_func():
return mark_safe('<script>alert("evil!")</script>')
@mark_safe
def some_func():
return '<script>alert("evil!")</script>'

View File

@@ -13,3 +13,11 @@ s = f"{set([f(x) for x in 'ab'])}"
s = f"{ set([x for x in 'ab']) | set([x for x in 'ab']) }"
s = f"{set([x for x in 'ab']) | set([x for x in 'ab'])}"
s = set( # comment
[x for x in range(3)]
)
s = set([ # comment
x for x in range(3)
])

View File

@@ -20,3 +20,10 @@ f"{dict(x='y') | dict(y='z')}"
f"{ dict(x='y') | dict(y='z') }"
f"a {dict(x='y') | dict(y='z')} b"
f"a { dict(x='y') | dict(y='z') } b"
dict(
# comment
)
tuple( # comment
)

View File

@@ -8,3 +8,11 @@ t4 = tuple([
t5 = tuple(
(1, 2)
)
tuple( # comment
[1, 2]
)
tuple([ # comment
1, 2
])

View File

@@ -2,3 +2,12 @@ l1 = list([1, 2])
l2 = list((1, 2))
l3 = list([])
l4 = list(())
list( # comment
[1, 2]
)
list([ # comment
1, 2
])

View File

@@ -207,3 +207,23 @@ class Repro:
def stub(self) -> str:
"""Docstring"""
...
class Repro(Protocol[int]):
def func(self) -> str:
"""Docstring"""
...
def impl(self) -> str:
"""Docstring"""
return self.func()
class Repro[int](Protocol):
def func(self) -> str:
"""Docstring"""
...
def impl(self) -> str:
"""Docstring"""
return self.func()

View File

@@ -11,13 +11,25 @@ class _UnusedTypedDict2(typing.TypedDict):
class _UsedTypedDict(TypedDict):
foo: bytes
foo: bytes
class _CustomClass(_UsedTypedDict):
bar: list[int]
_UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
_UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
def uses_UsedTypedDict3(arg: _UsedTypedDict3) -> None: ...
# In `.py` files, we don't flag unused definitions in class scopes (unlike in `.pyi`
# files).
class _CustomClass3:
class _UnusedTypeDict4(TypedDict):
pass
def method(self) -> None:
_CustomClass3._UnusedTypeDict4()

View File

@@ -35,3 +35,13 @@ _UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
_UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
def uses_UsedTypedDict3(arg: _UsedTypedDict3) -> None: ...
# In `.pyi` files, we flag unused definitions in class scopes as well as in the global
# scope (unlike in `.py` files).
class _CustomClass3:
class _UnusedTypeDict4(TypedDict):
pass
def method(self) -> None:
_CustomClass3._UnusedTypeDict4()

View File

@@ -198,3 +198,43 @@ else:
def sb(self):
if self._sb is not None: return self._sb
else: self._sb = '\033[01;%dm'; self._sa = '\033[0;0m';
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
c = 3
return z
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
# comment
c = 3
return z
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
# comment
c = 3
return z
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
# comment
c = 3
return z

View File

@@ -1,18 +1,27 @@
import trio
async def foo():
async def func():
with trio.fail_after():
...
async def foo():
async def func():
with trio.fail_at():
await ...
async def foo():
async def func():
with trio.move_on_after():
...
async def foo():
async def func():
with trio.move_at():
await ...
async def func():
with trio.move_at():
async with trio.open_nursery() as nursery:
...

View File

@@ -19,8 +19,11 @@ numpy.random.seed()
numpy.random.get_state()
numpy.random.set_state()
numpy.random.rand()
numpy.random.ranf()
numpy.random.sample()
numpy.random.randn()
numpy.random.randint()
numpy.random.random()
numpy.random.random_integers()
numpy.random.random_sample()
numpy.random.choice()
@@ -35,7 +38,6 @@ numpy.random.exponential()
numpy.random.f()
numpy.random.gamma()
numpy.random.geometric()
numpy.random.get_state()
numpy.random.gumbel()
numpy.random.hypergeometric()
numpy.random.laplace()

View File

@@ -36,35 +36,47 @@ for i in list( # Comment
): # PERF101
pass
for i in list(foo_dict): # Ok
for i in list(foo_dict): # OK
pass
for i in list(1): # Ok
for i in list(1): # OK
pass
for i in list(foo_int): # Ok
for i in list(foo_int): # OK
pass
import itertools
for i in itertools.product(foo_int): # Ok
for i in itertools.product(foo_int): # OK
pass
for i in list(foo_list): # Ok
for i in list(foo_list): # OK
foo_list.append(i + 1)
for i in list(foo_list): # PERF101
# Make sure we match the correct list
other_list.append(i + 1)
for i in list(foo_tuple): # Ok
for i in list(foo_tuple): # OK
foo_tuple.append(i + 1)
for i in list(foo_set): # Ok
for i in list(foo_set): # OK
foo_set.append(i + 1)
x, y, nested_tuple = (1, 2, (3, 4, 5))
for i in list(nested_tuple): # PERF101
pass
for i in list(foo_list): # OK
if True:
foo_list.append(i + 1)
for i in list(foo_list): # OK
if True:
foo_list[i] = i + 1
for i in list(foo_list): # OK
if True:
del foo_list[i + 1]

View File

@@ -0,0 +1,848 @@
"""Fixtures for the errors E301, E302, E303, E304, E305 and E306.
Since these errors are about new lines, each test starts with either "No error" or "# E30X".
Each test's end is signaled by a "# end" line.
There should be no E30X error outside of a test's bound.
"""
# No error
class Class:
pass
# end
# No error
class Class:
"""Docstring"""
def __init__(self) -> None:
pass
# end
# No error
def func():
pass
# end
# No error
# comment
class Class:
pass
# end
# No error
# comment
def func():
pass
# end
# no error
def foo():
pass
def bar():
pass
class Foo(object):
pass
class Bar(object):
pass
# end
# No error
class Class(object):
def func1():
pass
def func2():
pass
# end
# No error
class Class(object):
def func1():
pass
# comment
def func2():
pass
# end
# No error
class Class:
def func1():
pass
# comment
def func2():
pass
# This is a
# ... multi-line comment
def func3():
pass
# This is a
# ... multi-line comment
@decorator
class Class:
def func1():
pass
# comment
def func2():
pass
@property
def func3():
pass
# end
# No error
try:
from nonexistent import Bar
except ImportError:
class Bar(object):
"""This is a Bar replacement"""
# end
# No error
def with_feature(f):
"""Some decorator"""
wrapper = f
if has_this_feature(f):
def wrapper(*args):
call_feature(args[0])
return f(*args)
return wrapper
# end
# No error
try:
next
except NameError:
def next(iterator, default):
for item in iterator:
return item
return default
# end
# No error
def fn():
pass
class Foo():
"""Class Foo"""
def fn():
pass
# end
# No error
# comment
def c():
pass
# comment
def d():
pass
# This is a
# ... multi-line comment
# And this one is
# ... a second paragraph
# ... which spans on 3 lines
# Function `e` is below
# NOTE: Hey this is a testcase
def e():
pass
def fn():
print()
# comment
print()
print()
# Comment 1
# Comment 2
# Comment 3
def fn2():
pass
# end
# no error
if __name__ == '__main__':
foo()
# end
# no error
defaults = {}
defaults.update({})
# end
# no error
def foo(x):
classification = x
definitely = not classification
# end
# no error
def bar(): pass
def baz(): pass
# end
# no error
def foo():
def bar(): pass
def baz(): pass
# end
# no error
from typing import overload
from typing import Union
# end
# no error
@overload
def f(x: int) -> int: ...
@overload
def f(x: str) -> str: ...
# end
# no error
def f(x: Union[int, str]) -> Union[int, str]:
return x
# end
# no error
from typing import Protocol
class C(Protocol):
@property
def f(self) -> int: ...
@property
def g(self) -> str: ...
# end
# no error
def f(
a,
):
pass
# end
# no error
if True:
class Class:
"""Docstring"""
def function(self):
...
# end
# no error
if True:
def function(self):
...
# end
# no error
@decorator
# comment
@decorator
def function():
pass
# end
# no error
class Class:
def method(self):
if True:
def function():
pass
# end
# no error
@decorator
async def function(data: None) -> None:
...
# end
# no error
class Class:
def method():
"""docstring"""
# comment
def function():
pass
# end
# no error
try:
if True:
# comment
class Class:
pass
except:
pass
# end
# no error
def f():
def f():
pass
# end
# no error
class MyClass:
# comment
def method(self) -> None:
pass
# end
# no error
def function1():
# Comment
def function2():
pass
# end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
if (
cond1
and cond2
):
pass
#end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
class Test:
async
def a(self): pass
# end
# no error
class Test:
def a():
pass
# wrongly indented comment
def b():
pass
# end
# no error
def test():
pass
# Wrongly indented comment
pass
# end
# E301
class Class(object):
def func1():
pass
def func2():
pass
# end
# E301
class Class:
def fn1():
pass
# comment
def fn2():
pass
# end
# E302
"""Main module."""
def fn():
pass
# end
# E302
import sys
def get_sys_path():
return sys.path
# end
# E302
def a():
pass
def b():
pass
# end
# E302
def a():
pass
# comment
def b():
pass
# end
# E302
def a():
pass
async def b():
pass
# end
# E302
async def x():
pass
async def x(y: int = 1):
pass
# end
# E302
def bar():
pass
def baz(): pass
# end
# E302
def bar(): pass
def baz():
pass
# end
# E302
def f():
pass
# comment
@decorator
def g():
pass
# end
# E302
class Test:
pass
def method1():
return 1
def method2():
return 22
# end
# E303
def fn():
_ = None
# arbitrary comment
def inner(): # E306 not expected (pycodestyle detects E306)
pass
# end
# E303
def fn():
_ = None
# arbitrary comment
def inner(): # E306 not expected (pycodestyle detects E306)
pass
# end
# E303
print()
print()
# end
# E303:5:1
print()
# comment
print()
# end
# E303:5:5 E303:8:5
def a():
print()
# comment
# another comment
print()
# end
# E303
#!python
"""This class docstring comes on line 5.
It gives error E303: too many blank lines (3)
"""
# end
# E303
class Class:
def a(self):
pass
def b(self):
pass
# end
# E303
if True:
a = 1
a = 2
# end
# E303
class Test:
# comment
# another comment
def test(self): pass
# end
# E303
class Test:
def a(self):
pass
# wrongly indented comment
def b(self):
pass
# end
# E303
def fn():
pass
pass
# end
# E304
@decorator
def function():
pass
# end
# E304
@decorator
# comment E304 not expected
def function():
pass
# end
# E304
@decorator
# comment E304 not expected
# second comment E304 not expected
def function():
pass
# end
# E305:7:1
def fn():
print()
# comment
# another comment
fn()
# end
# E305
class Class():
pass
# comment
# another comment
a = 1
# end
# E305:8:1
def fn():
print()
# comment
# another comment
try:
fn()
except Exception:
pass
# end
# E305:5:1
def a():
print()
# Two spaces before comments, too.
if a():
a()
# end
#: E305:8:1
# Example from https://github.com/PyCQA/pycodestyle/issues/400
import stuff
def main():
blah, blah
if __name__ == '__main__':
main()
# end
# E306:3:5
def a():
x = 1
def b():
pass
# end
#: E306:3:5
async def a():
x = 1
def b():
pass
# end
#: E306:3:5 E306:5:9
def a():
x = 2
def b():
x = 1
def c():
pass
# end
# E306:3:5 E306:6:5
def a():
x = 1
class C:
pass
x = 2
def b():
pass
# end
# E306
def foo():
def bar():
pass
def baz(): pass
# end
# E306:3:5
def foo():
def bar(): pass
def baz():
pass
# end
# E306
def a():
x = 2
@decorator
def b():
pass
# end
# E306
def a():
x = 2
@decorator
async def b():
pass
# end
# E306
def a():
x = 2
async def b():
pass
# end

View File

@@ -1,2 +1,15 @@
'''trailing whitespace
inside a multiline string'''
f'''trailing whitespace
inside a multiline f-string'''
# Trailing whitespace after `{`
f'abc {
1 + 2
}'
# Trailing whitespace after `2`
f'abc {
1 + 2
}'

View File

@@ -562,3 +562,46 @@ def titlecase_sub_section_header():
Returns:
"""
def test_method_should_be_correctly_capitalized(parameters: list[str], other_parameters: dict[str, str]): # noqa: D213
"""Test parameters and attributes sections are capitalized correctly.
Parameters
----------
parameters:
A list of string parameters
other_parameters:
A dictionary of string attributes
Other Parameters
----------
other_parameters:
A dictionary of string attributes
parameters:
A list of string parameters
"""
def test_lowercase_sub_section_header_should_be_valid(parameters: list[str], value: int): # noqa: D213
"""Test that lower case subsection header is valid even if it has the same name as section kind.
Parameters:
----------
parameters:
A list of string parameters
value:
Some value
"""
def test_lowercase_sub_section_header_different_kind(returns: int):
"""Test that lower case subsection header is valid even if it is of a different kind.
Parameters
------------------
returns:
some value
"""

View File

@@ -4,7 +4,12 @@
1 in (
1, 2, 3
)
# OK
fruits = ["cherry", "grapes"]
"cherry" in fruits
_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in ("a", "b")}
# OK
fruits in [[1, 2, 3], [4, 5, 6]]
fruits in [1, 2, 3]
1 in [[1, 2, 3], [4, 5, 6]]
_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in (["a", "b"], ["c", "d"])}

View File

@@ -35,6 +35,15 @@ if argc != 0: # correct
if argc != 1: # correct
pass
if argc != -1.0: # correct
pass
if argc != 0.0: # correct
pass
if argc != 1.0: # correct
pass
if argc != 2: # [magic-value-comparison]
pass
@@ -44,6 +53,12 @@ if argc != -2: # [magic-value-comparison]
if argc != +2: # [magic-value-comparison]
pass
if argc != -2.0: # [magic-value-comparison]
pass
if argc != +2.0: # [magic-value-comparison]
pass
if __name__ == "__main__": # correct
pass

View File

@@ -30,6 +30,11 @@ class Thing:
def do_thing(self, item):
return object.__getattribute__(self, item) # PLC2801
def use_descriptor(self, item):
item.__get__(self, type(self)) # OK
item.__set__(self, 1) # OK
item.__delete__(self) # OK
blah = lambda: {"a": 1}.__delitem__("a") # OK

View File

@@ -215,3 +215,13 @@ if sys.version_info[:2] > (3,13):
if sys.version_info[:3] > (3,13):
print("py3")
if sys.version_info > (3,0):
f"this is\
allowed too"
f"""the indentation on
this line is significant"""
"this is\
allowed too"

View File

@@ -46,3 +46,8 @@ x: typing.TypeAlias = list[T]
# OK
x: TypeAlias
x: int = 1
# Ensure that "T" appears only once in the type parameters for the modernized
# type alias.
T = typing.TypeVar["T"]
Decorator: TypeAlias = typing.Callable[[T], T]

View File

@@ -0,0 +1,75 @@
import codecs
import io
from pathlib import Path
# Errors
with open("FURB129.py") as f:
for _line in f.readlines():
pass
a = [line.lower() for line in f.readlines()]
b = {line.upper() for line in f.readlines()}
c = {line.lower(): line.upper() for line in f.readlines()}
with Path("FURB129.py").open() as f:
for _line in f.readlines():
pass
for _line in open("FURB129.py").readlines():
pass
for _line in Path("FURB129.py").open().readlines():
pass
def func():
f = Path("FURB129.py").open()
for _line in f.readlines():
pass
f.close()
def func(f: io.BytesIO):
for _line in f.readlines():
pass
def func():
with (open("FURB129.py") as f, foo as bar):
for _line in f.readlines():
pass
for _line in bar.readlines():
pass
# False positives
def func(f):
for _line in f.readlines():
pass
def func(f: codecs.StreamReader):
for _line in f.readlines():
pass
def func():
class A:
def readlines(self) -> list[str]:
return ["a", "b", "c"]
return A()
for _line in func().readlines():
pass
# OK
for _line in ["a", "b", "c"]:
pass
with open("FURB129.py") as f:
for _line in f:
pass
for _line in f.readlines(10):
pass
for _not_line in f.readline():
pass

View File

@@ -162,3 +162,26 @@ async def f(x: bool):
T = asyncio.create_task(asyncio.sleep(1))
else:
T = None
# Error
def f():
loop = asyncio.new_event_loop()
loop.create_task(main()) # Error
# Error
def f():
loop = asyncio.get_event_loop()
loop.create_task(main()) # Error
# OK
def f():
global task
loop = asyncio.new_event_loop()
task = loop.create_task(main()) # Error
# OK
def f():
global task
loop = asyncio.get_event_loop()
task = loop.create_task(main()) # Error

View File

@@ -250,6 +250,23 @@ __all__ = (
,
)
__all__ = ( # comment about the opening paren
# multiline strange comment 0a
# multiline strange comment 0b
"foo" # inline comment about foo
# multiline strange comment 1a
# multiline strange comment 1b
, # comment about the comma??
# comment about bar part a
# comment about bar part b
"bar" # inline comment about bar
# strange multiline comment comment 2a
# strange multiline comment 2b
,
# strange multiline comment 3a
# strange multiline comment 3b
) # comment about the closing paren
###################################
# These should all not get flagged:
###################################

View File

@@ -188,6 +188,10 @@ class BezierBuilder4:
,
)
__slots__ = {"foo", "bar",
"baz", "bingo"
}
###################################
# These should all not get flagged:
###################################

View File

@@ -0,0 +1,70 @@
val = 2
"always ignore this: {val}"
print("but don't ignore this: {val}") # RUF027
def simple_cases():
a = 4
b = "{a}" # RUF027
c = "{a} {b} f'{val}' " # RUF027
def escaped_string():
a = 4
b = "escaped string: {{ brackets surround me }}" # RUF027
def raw_string():
a = 4
b = r"raw string with formatting: {a}" # RUF027
c = r"raw string with \backslashes\ and \"escaped quotes\": {a}" # RUF027
def print_name(name: str):
a = 4
print("Hello, {name}!") # RUF027
print("The test value we're using today is {a}") # RUF027
def nested_funcs():
a = 4
print(do_nothing(do_nothing("{a}"))) # RUF027
def tripled_quoted():
a = 4
c = a
single_line = """ {a} """ # RUF027
# RUF027
multi_line = a = """b { # comment
c} d
"""
def single_quoted_multi_line():
a = 4
# RUF027
b = " {\
a} \
"
def implicit_concat():
a = 4
b = "{a}" "+" "{b}" r" \\ " # RUF027 for the first part only
print(f"{a}" "{a}" f"{b}") # RUF027
def escaped_chars():
a = 4
b = "\"not escaped:\" '{a}' \"escaped:\": '{{c}}'" # RUF027
def method_calls():
value = {}
value.method = print_name
first = "Wendy"
last = "Appleseed"
value.method("{first} {last}") # RUF027

View File

@@ -0,0 +1,36 @@
def do_nothing(a):
return a
def alternative_formatter(src, **kwargs):
src.format(**kwargs)
def format2(src, *args):
pass
# These should not cause an RUF027 message
def negative_cases():
a = 4
positive = False
"""{a}"""
"don't format: {a}"
c = """ {b} """
d = "bad variable: {invalid}"
e = "incorrect syntax: {}"
f = "uses a builtin: {max}"
json = "{ positive: false }"
json2 = "{ 'positive': false }"
json3 = "{ 'positive': 'false' }"
alternative_formatter("{a}", a=5)
formatted = "{a}".fmt(a=7)
print(do_nothing("{a}".format(a=3)))
print(do_nothing(alternative_formatter("{a}", a=5)))
print(format(do_nothing("{a}"), a=5))
print("{a}".to_upper())
print(do_nothing("{a}").format(a="Test"))
print(do_nothing("{a}").format2(a))
print(("{a}" "{c}").format(a=1, c=2))
print("{a}".attribute.chaining.call(a=2))
print("{a} {c}".format(a))

View File

@@ -2,11 +2,14 @@ use ruff_python_ast::Comprehension;
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::rules::flake8_simplify;
use crate::rules::{flake8_simplify, refurb};
/// Run lint rules over a [`Comprehension`] syntax nodes.
pub(crate) fn comprehension(comprehension: &Comprehension, checker: &mut Checker) {
if checker.enabled(Rule::InDictKeys) {
flake8_simplify::rules::key_in_dict_comprehension(checker, comprehension);
}
if checker.enabled(Rule::ReadlinesInFor) {
refurb::rules::readlines_in_comprehension(checker, comprehension);
}
}

View File

@@ -281,17 +281,21 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
}
if checker.enabled(Rule::UnusedPrivateTypeVar) {
flake8_pyi::rules::unused_private_type_var(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateProtocol) {
flake8_pyi::rules::unused_private_protocol(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeAlias) {
flake8_pyi::rules::unused_private_type_alias(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypedDict) {
flake8_pyi::rules::unused_private_typed_dict(checker, scope, &mut diagnostics);
if checker.source_type.is_stub()
|| matches!(scope.kind, ScopeKind::Module | ScopeKind::Function(_))
{
if checker.enabled(Rule::UnusedPrivateTypeVar) {
flake8_pyi::rules::unused_private_type_var(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateProtocol) {
flake8_pyi::rules::unused_private_protocol(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeAlias) {
flake8_pyi::rules::unused_private_type_alias(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypedDict) {
flake8_pyi::rules::unused_private_typed_dict(checker, scope, &mut diagnostics);
}
}
if checker.enabled(Rule::AsyncioDanglingTask) {

View File

@@ -1,13 +1,15 @@
use ruff_python_ast::str::raw_contents_range;
use ruff_text_size::{Ranged, TextRange};
use ruff_python_semantic::{BindingKind, ContextualizedDefinition, Export};
use ruff_python_semantic::{
BindingKind, ContextualizedDefinition, Definition, Export, Member, MemberKind,
};
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::docstrings::Docstring;
use crate::fs::relativize_path;
use crate::rules::{flake8_annotations, flake8_pyi, pydocstyle};
use crate::rules::{flake8_annotations, flake8_pyi, pydocstyle, pylint};
use crate::{docstrings, warn_user};
/// Run lint rules over all [`Definition`] nodes in the [`SemanticModel`].
@@ -31,6 +33,7 @@ pub(crate) fn definitions(checker: &mut Checker) {
]);
let enforce_stubs = checker.source_type.is_stub() && checker.enabled(Rule::DocstringInStub);
let enforce_stubs_and_runtime = checker.enabled(Rule::IterMethodReturnIterable);
let enforce_dunder_method = checker.enabled(Rule::BadDunderMethodName);
let enforce_docstrings = checker.any_enabled(&[
Rule::BlankLineAfterLastSection,
Rule::BlankLineAfterSummary,
@@ -80,7 +83,12 @@ pub(crate) fn definitions(checker: &mut Checker) {
Rule::UndocumentedPublicPackage,
]);
if !enforce_annotations && !enforce_docstrings && !enforce_stubs && !enforce_stubs_and_runtime {
if !enforce_annotations
&& !enforce_docstrings
&& !enforce_stubs
&& !enforce_stubs_and_runtime
&& !enforce_dunder_method
{
return;
}
@@ -147,6 +155,19 @@ pub(crate) fn definitions(checker: &mut Checker) {
}
}
// pylint
if enforce_dunder_method {
if checker.enabled(Rule::BadDunderMethodName) {
if let Definition::Member(Member {
kind: MemberKind::Method(method),
..
}) = definition
{
pylint::rules::bad_dunder_method_name(checker, method);
}
}
}
// pydocstyle
if enforce_docstrings {
if pydocstyle::helpers::should_ignore_definition(

View File

@@ -319,7 +319,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
numpy::rules::numpy_2_0_deprecation(checker, expr);
}
if checker.enabled(Rule::DeprecatedMockImport) {
pyupgrade::rules::deprecated_mock_attribute(checker, expr);
pyupgrade::rules::deprecated_mock_attribute(checker, attribute);
}
if checker.enabled(Rule::SixPY3) {
flake8_2020::rules::name_or_attribute(checker, expr);
@@ -336,7 +336,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UndocumentedWarn) {
flake8_logging::rules::undocumented_warn(checker, expr);
}
pandas_vet::rules::attr(checker, attribute);
if checker.enabled(Rule::PandasUseOfDotValues) {
pandas_vet::rules::attr(checker, attribute);
}
}
Expr::Call(
call @ ast::ExprCall {
@@ -636,14 +638,10 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
flake8_bandit::rules::tarfile_unsafe_members(checker, call);
}
if checker.enabled(Rule::UnnecessaryGeneratorList) {
flake8_comprehensions::rules::unnecessary_generator_list(
checker, expr, func, args, keywords,
);
flake8_comprehensions::rules::unnecessary_generator_list(checker, call);
}
if checker.enabled(Rule::UnnecessaryGeneratorSet) {
flake8_comprehensions::rules::unnecessary_generator_set(
checker, expr, func, args, keywords,
);
flake8_comprehensions::rules::unnecessary_generator_set(checker, call);
}
if checker.enabled(Rule::UnnecessaryGeneratorDict) {
flake8_comprehensions::rules::unnecessary_generator_dict(
@@ -651,9 +649,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
);
}
if checker.enabled(Rule::UnnecessaryListComprehensionSet) {
flake8_comprehensions::rules::unnecessary_list_comprehension_set(
checker, expr, func, args, keywords,
);
flake8_comprehensions::rules::unnecessary_list_comprehension_set(checker, call);
}
if checker.enabled(Rule::UnnecessaryListComprehensionDict) {
flake8_comprehensions::rules::unnecessary_list_comprehension_dict(
@@ -661,9 +657,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
);
}
if checker.enabled(Rule::UnnecessaryLiteralSet) {
flake8_comprehensions::rules::unnecessary_literal_set(
checker, expr, func, args, keywords,
);
flake8_comprehensions::rules::unnecessary_literal_set(checker, call);
}
if checker.enabled(Rule::UnnecessaryLiteralDict) {
flake8_comprehensions::rules::unnecessary_literal_dict(
@@ -673,27 +667,18 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryCollectionCall) {
flake8_comprehensions::rules::unnecessary_collection_call(
checker,
expr,
func,
args,
keywords,
call,
&checker.settings.flake8_comprehensions,
);
}
if checker.enabled(Rule::UnnecessaryLiteralWithinTupleCall) {
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(
checker, expr, func, args, keywords,
);
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(checker, call);
}
if checker.enabled(Rule::UnnecessaryLiteralWithinListCall) {
flake8_comprehensions::rules::unnecessary_literal_within_list_call(
checker, expr, func, args, keywords,
);
flake8_comprehensions::rules::unnecessary_literal_within_list_call(checker, call);
}
if checker.enabled(Rule::UnnecessaryLiteralWithinDictCall) {
flake8_comprehensions::rules::unnecessary_literal_within_dict_call(
checker, expr, func, args, keywords,
);
flake8_comprehensions::rules::unnecessary_literal_within_dict_call(checker, call);
}
if checker.enabled(Rule::UnnecessaryListCall) {
flake8_comprehensions::rules::unnecessary_list_call(checker, expr, func, args);
@@ -976,7 +961,6 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UnsortedDunderAll) {
ruff::rules::sort_dunder_all_extend_call(checker, call);
}
if checker.enabled(Rule::DefaultFactoryKwarg) {
ruff::rules::default_factory_kwarg(checker, call);
}
@@ -1041,6 +1025,16 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
pyupgrade::rules::unicode_kind_prefix(checker, string_literal);
}
}
if checker.enabled(Rule::MissingFStringSyntax) {
for string_literal in value.literals() {
ruff::rules::missing_fstring_syntax(
&mut checker.diagnostics,
string_literal,
checker.locator,
&checker.semantic,
);
}
}
}
Expr::BinOp(ast::ExprBinOp {
left,
@@ -1312,12 +1306,22 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
refurb::rules::math_constant(checker, number_literal);
}
}
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => {
Expr::StringLiteral(ast::ExprStringLiteral { value, range: _ }) => {
if checker.enabled(Rule::UnicodeKindPrefix) {
for string_part in value {
pyupgrade::rules::unicode_kind_prefix(checker, string_part);
}
}
if checker.enabled(Rule::MissingFStringSyntax) {
for string_literal in value.as_slice() {
ruff::rules::missing_fstring_syntax(
&mut checker.diagnostics,
string_literal,
checker.locator,
&checker.semantic,
);
}
}
}
Expr::IfExp(
if_exp @ ast::ExprIfExp {

View File

@@ -247,6 +247,11 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::HardcodedPasswordDefault) {
flake8_bandit::rules::hardcoded_password_default(checker, parameters);
}
if checker.enabled(Rule::SuspiciousMarkSafeUsage) {
for decorator in decorator_list {
flake8_bandit::rules::suspicious_function_decorator(checker, decorator);
}
}
if checker.enabled(Rule::PropertyWithParameters) {
pylint::rules::property_with_parameters(checker, stmt, decorator_list, parameters);
}
@@ -513,9 +518,6 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::SingleStringSlots) {
pylint::rules::single_string_slots(checker, class_def);
}
if checker.enabled(Rule::BadDunderMethodName) {
pylint::rules::bad_dunder_method_name(checker, body);
}
if checker.enabled(Rule::MetaClassABCMeta) {
refurb::rules::metaclass_abcmeta(checker, class_def);
}
@@ -1315,6 +1317,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryDictIndexLookup) {
pylint::rules::unnecessary_dict_index_lookup(checker, for_stmt);
}
if checker.enabled(Rule::ReadlinesInFor) {
refurb::rules::readlines_in_for(checker, for_stmt);
}
if !is_async {
if checker.enabled(Rule::ReimplementedBuiltin) {
flake8_simplify::rules::convert_for_loop_to_any_all(checker, stmt);

View File

@@ -31,8 +31,8 @@ use std::path::Path;
use itertools::Itertools;
use log::debug;
use ruff_python_ast::{
self as ast, Arguments, Comprehension, ElifElseClause, ExceptHandler, Expr, ExprContext,
Keyword, MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt, Suite, UnaryOp,
self as ast, Comprehension, ElifElseClause, ExceptHandler, Expr, ExprContext, Keyword,
MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt, Suite, UnaryOp,
};
use ruff_text_size::{Ranged, TextRange, TextSize};
@@ -40,7 +40,7 @@ use ruff_diagnostics::{Diagnostic, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::all::{extract_all_names, DunderAllFlags};
use ruff_python_ast::helpers::{
collect_import_from_member, extract_handled_exceptions, to_module_path,
collect_import_from_member, extract_handled_exceptions, is_docstring_stmt, to_module_path,
};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::str::trailing_quote;
@@ -71,6 +71,38 @@ mod analyze;
mod annotation;
mod deferred;
/// State representing whether a docstring is expected or not for the next statement.
#[derive(Default, Debug, Copy, Clone, PartialEq)]
enum DocstringState {
/// A docstring is expected for the next statement, although one is not necessarily present.
///
/// For example, in the following code:
///
/// ```python
/// class Foo:
/// pass
///
///
/// def bar(x, y):
/// """Docstring."""
/// return x + y
/// ```
///
/// For `Foo`, a docstring is expected while the checker visits the class body,
/// but none is present. For the `bar` function, the docstring is both expected
/// and present.
#[default]
Expected,
Other,
}
impl DocstringState {
/// Returns `true` if the next statement is expected to be a docstring.
const fn is_expected(self) -> bool {
matches!(self, DocstringState::Expected)
}
}
pub(crate) struct Checker<'a> {
/// The [`Path`] to the file under analysis.
path: &'a Path,
@@ -114,6 +146,8 @@ pub(crate) struct Checker<'a> {
pub(crate) flake8_bugbear_seen: Vec<TextRange>,
/// The end offset of the last visited statement.
last_stmt_end: TextSize,
/// A state describing if a docstring is expected or not.
docstring_state: DocstringState,
}
impl<'a> Checker<'a> {
@@ -153,6 +187,7 @@ impl<'a> Checker<'a> {
cell_offsets,
notebook_index,
last_stmt_end: TextSize::default(),
docstring_state: DocstringState::default(),
}
}
}
@@ -197,7 +232,7 @@ impl<'a> Checker<'a> {
let trailing_quote = trailing_quote(self.locator.slice(string_range))?;
// Invert the quote character, if it's a single quote.
match *trailing_quote {
match trailing_quote {
"'" => Some(Quote::Double),
"\"" => Some(Quote::Single),
_ => None,
@@ -305,19 +340,16 @@ where
self.semantic.flags -= SemanticModelFlags::IMPORT_BOUNDARY;
}
// Track whether we've seen docstrings, non-imports, etc.
// Track whether we've seen module docstrings, non-imports, etc.
match stmt {
Stmt::Expr(ast::StmtExpr { value, .. })
if !self
.semantic
.flags
.intersects(SemanticModelFlags::MODULE_DOCSTRING)
if !self.semantic.seen_module_docstring_boundary()
&& value.is_string_literal_expr() =>
{
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
}
Stmt::ImportFrom(ast::StmtImportFrom { module, names, .. }) => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
// Allow __future__ imports until we see a non-__future__ import.
if let Some("__future__") = module.as_deref() {
@@ -332,11 +364,11 @@ where
}
}
Stmt::Import(_) => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::FUTURES_BOUNDARY;
}
_ => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::FUTURES_BOUNDARY;
if !(self.semantic.seen_import_boundary()
|| helpers::is_assignment_to_a_dunder(stmt)
@@ -349,17 +381,20 @@ where
}
}
// Track each top-level import, to guide import insertions.
if matches!(stmt, Stmt::Import(_) | Stmt::ImportFrom(_)) {
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
}
// Store the flags prior to any further descent, so that we can restore them after visiting
// the node.
let flags_snapshot = self.semantic.flags;
// Update the semantic model if it is in a docstring. This should be done after the
// flags snapshot to ensure that it gets reset once the statement is analyzed.
if self.docstring_state.is_expected() {
if is_docstring_stmt(stmt) {
self.semantic.flags |= SemanticModelFlags::DOCSTRING;
}
// Reset the state irrespective of whether the statement is a docstring or not.
self.docstring_state = DocstringState::Other;
}
// Step 1: Binding
match stmt {
Stmt::AugAssign(ast::StmtAugAssign {
@@ -371,14 +406,22 @@ where
self.handle_node_load(target);
}
Stmt::Import(ast::StmtImport { names, range: _ }) => {
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
for alias in names {
if alias.name.contains('.') && alias.asname.is_none() {
// Given `import foo.bar`, `name` would be "foo", and `qualified_name` would be
// "foo.bar".
let name = alias.name.split('.').next().unwrap();
// Given `import foo.bar`, `module` would be "foo", and `call_path` would be
// `["foo", "bar"]`.
let module = alias.name.split('.').next().unwrap();
// Mark the top-level module as "seen" by the semantic model.
self.semantic.add_module(module);
if alias.asname.is_none() && alias.name.contains('.') {
let call_path: Box<[&str]> = alias.name.split('.').collect();
self.add_binding(
name,
module,
alias.identifier(),
BindingKind::SubmoduleImport(SubmoduleImport { call_path }),
BindingFlags::EXTERNAL,
@@ -413,8 +456,20 @@ where
level,
range: _,
}) => {
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
let module = module.as_deref();
let level = *level;
// Mark the top-level module as "seen" by the semantic model.
if level.map_or(true, |level| level == 0) {
if let Some(module) = module.and_then(|module| module.split('.').next()) {
self.semantic.add_module(module);
}
}
for alias in names {
if let Some("__future__") = module {
let name = alias.asname.as_ref().unwrap_or(&alias.name);
@@ -641,6 +696,8 @@ where
self.semantic.set_globals(globals);
}
// Set the docstring state before visiting the class body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
}
Stmt::TypeAlias(ast::StmtTypeAlias {
@@ -976,12 +1033,7 @@ where
}
Expr::Call(ast::ExprCall {
func,
arguments:
Arguments {
args,
keywords,
range: _,
},
arguments,
range: _,
}) => {
self.visit_expr(func);
@@ -1024,7 +1076,7 @@ where
});
match callable {
Some(typing::Callable::Bool) => {
let mut args = args.iter();
let mut args = arguments.args.iter();
if let Some(arg) = args.next() {
self.visit_boolean_test(arg);
}
@@ -1033,7 +1085,7 @@ where
}
}
Some(typing::Callable::Cast) => {
let mut args = args.iter();
let mut args = arguments.args.iter();
if let Some(arg) = args.next() {
self.visit_type_definition(arg);
}
@@ -1042,7 +1094,7 @@ where
}
}
Some(typing::Callable::NewType) => {
let mut args = args.iter();
let mut args = arguments.args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
@@ -1051,21 +1103,21 @@ where
}
}
Some(typing::Callable::TypeVar) => {
let mut args = args.iter();
let mut args = arguments.args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
for arg in args {
self.visit_type_definition(arg);
}
for keyword in keywords {
for keyword in arguments.keywords.iter() {
let Keyword {
arg,
value,
range: _,
} = keyword;
if let Some(id) = arg {
if id == "bound" {
if id.as_str() == "bound" {
self.visit_type_definition(value);
} else {
self.visit_non_type_definition(value);
@@ -1075,7 +1127,7 @@ where
}
Some(typing::Callable::NamedTuple) => {
// Ex) NamedTuple("a", [("a", int)])
let mut args = args.iter();
let mut args = arguments.args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
@@ -1104,7 +1156,7 @@ where
}
}
for keyword in keywords {
for keyword in arguments.keywords.iter() {
let Keyword { arg, value, .. } = keyword;
match (arg.as_ref(), value) {
// Ex) NamedTuple("a", **{"a": int})
@@ -1131,7 +1183,7 @@ where
}
Some(typing::Callable::TypedDict) => {
// Ex) TypedDict("a", {"a": int})
let mut args = args.iter();
let mut args = arguments.args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
@@ -1154,13 +1206,13 @@ where
}
// Ex) TypedDict("a", a=int)
for keyword in keywords {
for keyword in arguments.keywords.iter() {
let Keyword { value, .. } = keyword;
self.visit_type_definition(value);
}
}
Some(typing::Callable::MypyExtension) => {
let mut args = args.iter();
let mut args = arguments.args.iter();
if let Some(arg) = args.next() {
// Ex) DefaultNamedArg(bool | None, name="some_prop_name")
self.visit_type_definition(arg);
@@ -1168,13 +1220,13 @@ where
for arg in args {
self.visit_non_type_definition(arg);
}
for keyword in keywords {
for keyword in arguments.keywords.iter() {
let Keyword { value, .. } = keyword;
self.visit_non_type_definition(value);
}
} else {
// Ex) DefaultNamedArg(type="bool", name="some_prop_name")
for keyword in keywords {
for keyword in arguments.keywords.iter() {
let Keyword {
value,
arg,
@@ -1192,10 +1244,10 @@ where
// If we're in a type definition, we need to treat the arguments to any
// other callables as non-type definitions (i.e., we don't want to treat
// any strings as deferred type definitions).
for arg in args {
for arg in arguments.args.iter() {
self.visit_non_type_definition(arg);
}
for keyword in keywords {
for keyword in arguments.keywords.iter() {
let Keyword { value, .. } = keyword;
self.visit_non_type_definition(value);
}
@@ -1280,6 +1332,16 @@ where
self.semantic.flags |= SemanticModelFlags::F_STRING;
visitor::walk_expr(self, expr);
}
Expr::NamedExpr(ast::ExprNamedExpr {
target,
value,
range: _,
}) => {
self.visit_expr(value);
self.semantic.flags |= SemanticModelFlags::NAMED_EXPRESSION_ASSIGNMENT;
self.visit_expr(target);
}
_ => visitor::walk_expr(self, expr),
}
@@ -1496,6 +1558,8 @@ impl<'a> Checker<'a> {
unreachable!("Generator expression must contain at least one generator");
};
let flags = self.semantic.flags;
// Generators are compiled as nested functions. (This may change with PEP 709.)
// As such, the `iter` of the first generator is evaluated in the outer scope, while all
// subsequent nodes are evaluated in the inner scope.
@@ -1525,14 +1589,22 @@ impl<'a> Checker<'a> {
// `x` is local to `foo`, and the `T` in `y=T` skips the class scope when resolving.
self.visit_expr(&generator.iter);
self.semantic.push_scope(ScopeKind::Generator);
self.semantic.flags = flags | SemanticModelFlags::COMPREHENSION_ASSIGNMENT;
self.visit_expr(&generator.target);
self.semantic.flags = flags;
for expr in &generator.ifs {
self.visit_boolean_test(expr);
}
for generator in iterator {
self.visit_expr(&generator.iter);
self.semantic.flags = flags | SemanticModelFlags::COMPREHENSION_ASSIGNMENT;
self.visit_expr(&generator.target);
self.semantic.flags = flags;
for expr in &generator.ifs {
self.visit_boolean_test(expr);
}
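The comment above relies on standard Python comprehension scoping: only the outermost `iter` expression is evaluated in the enclosing scope, which is easiest to observe in a class body. A small self-contained illustration (not part of the diff):

```python
class Config:
    values = [1, 2, 3]

    # OK: `values` is the first (outermost) iterable, so it is evaluated in the
    # class scope before the implicit comprehension scope is entered.
    doubled = [v * 2 for v in values]

    # NameError if uncommented: the element expression runs in the comprehension's
    # own (function-like) scope, which skips the enclosing class scope.
    # pairs = [(v, values) for v in values]


assert Config.doubled == [2, 4, 6]
```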
@@ -1731,11 +1803,21 @@ impl<'a> Checker<'a> {
return;
}
// A binding within a `for` must be a loop variable, as in:
// ```python
// for x in range(10):
// ...
// ```
if parent.is_for_stmt() {
self.add_binding(id, expr.range(), BindingKind::LoopVar, flags);
return;
}
// A binding within a `with` must be an item, as in:
// ```python
// with open("file.txt") as fp:
// ...
// ```
if parent.is_with_stmt() {
self.add_binding(id, expr.range(), BindingKind::WithItemVar, flags);
return;
@@ -1791,17 +1873,26 @@ impl<'a> Checker<'a> {
}
// If the expression is the left-hand side of a walrus operator, then it's a named
// expression assignment.
if self
.semantic
.current_expressions()
.filter_map(Expr::as_named_expr_expr)
.any(|parent| parent.target.as_ref() == expr)
{
// expression assignment, as in:
// ```python
// if (x := 10) > 5:
// ...
// ```
if self.semantic.in_named_expression_assignment() {
self.add_binding(id, expr.range(), BindingKind::NamedExprAssignment, flags);
return;
}
// If the expression is part of a comprehension target, then it's a comprehension variable
// assignment, as in:
// ```python
// [x for x in range(10)]
// ```
if self.semantic.in_comprehension_assignment() {
self.add_binding(id, expr.range(), BindingKind::ComprehensionVar, flags);
return;
}
self.add_binding(id, expr.range(), BindingKind::Assignment, flags);
}
@@ -1917,6 +2008,8 @@ impl<'a> Checker<'a> {
};
self.visit_parameters(parameters);
// Set the docstring state before visiting the function body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
}
}

View File

@@ -1,3 +1,4 @@
use crate::line_width::IndentWidth;
use ruff_diagnostics::Diagnostic;
use ruff_python_codegen::Stylist;
use ruff_python_parser::lexer::LexResult;
@@ -15,11 +16,11 @@ use crate::rules::pycodestyle::rules::logical_lines::{
use crate::settings::LinterSettings;
/// Return the amount of indentation, expanding tabs to the next multiple of the settings' tab size.
fn expand_indent(line: &str, settings: &LinterSettings) -> usize {
pub(crate) fn expand_indent(line: &str, indent_width: IndentWidth) -> usize {
let line = line.trim_end_matches(['\n', '\r']);
let mut indent = 0;
let tab_size = settings.tab_size.as_usize();
let tab_size = indent_width.as_usize();
for c in line.bytes() {
match c {
b'\t' => indent = (indent / tab_size) * tab_size + tab_size,
@@ -85,7 +86,7 @@ pub(crate) fn check_logical_lines(
TextRange::new(locator.line_start(first_token.start()), first_token.start())
};
let indent_level = expand_indent(locator.slice(range), settings);
let indent_level = expand_indent(locator.slice(range), settings.tab_size);
let indent_size = 4;
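A rough Python equivalent of `expand_indent`, assuming spaces count as one column, any other character ends the indentation, and tabs advance to the next multiple of the tab size, as in the arm shown in the hunk above:

```python
def expand_indent(line: str, tab_size: int = 8) -> int:
    """Return the indentation width of `line`, expanding tabs to the next multiple of `tab_size`."""
    line = line.rstrip("\r\n")
    indent = 0
    for char in line:
        if char == "\t":
            indent = indent // tab_size * tab_size + tab_size
        elif char == " ":
            indent += 1
        else:
            break
    return indent


# A tab expands to column 8, then four spaces bring the indent to 12.
assert expand_indent("\t    x = 1") == 12
```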

View File

@@ -4,6 +4,7 @@ use std::path::Path;
use ruff_notebook::CellOffsets;
use ruff_python_ast::PySourceType;
use ruff_python_codegen::Stylist;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
@@ -14,6 +15,7 @@ use ruff_source_file::Locator;
use crate::directives::TodoComment;
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{AsRule, Rule};
use crate::rules::pycodestyle::rules::BlankLinesChecker;
use crate::rules::ruff::rules::Context;
use crate::rules::{
eradicate, flake8_commas, flake8_executable, flake8_fixme, flake8_implicit_str_concat,
@@ -21,17 +23,37 @@ use crate::rules::{
};
use crate::settings::LinterSettings;
#[allow(clippy::too_many_arguments)]
pub(crate) fn check_tokens(
tokens: &[LexResult],
path: &Path,
locator: &Locator,
indexer: &Indexer,
stylist: &Stylist,
settings: &LinterSettings,
source_type: PySourceType,
cell_offsets: Option<&CellOffsets>,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
if settings.rules.any_enabled(&[
Rule::BlankLineBetweenMethods,
Rule::BlankLinesTopLevel,
Rule::TooManyBlankLines,
Rule::BlankLineAfterDecorator,
Rule::BlankLinesAfterFunctionOrClass,
Rule::BlankLinesBeforeNestedDefinition,
]) {
let mut blank_lines_checker = BlankLinesChecker::default();
blank_lines_checker.check_lines(
tokens,
locator,
stylist,
settings.tab_size,
&mut diagnostics,
);
}
if settings.rules.enabled(Rule::BlanketNOQA) {
pygrep_hooks::rules::blanket_noqa(&mut diagnostics, indexer, locator);
}
@@ -95,7 +117,7 @@ pub(crate) fn check_tokens(
}
if settings.rules.enabled(Rule::TabIndentation) {
pycodestyle::rules::tab_indentation(&mut diagnostics, tokens, locator, indexer);
pycodestyle::rules::tab_indentation(&mut diagnostics, locator, indexer);
}
if settings.rules.any_enabled(&[

View File

@@ -137,6 +137,12 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pycodestyle, "E274") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabBeforeKeyword),
#[allow(deprecated)]
(Pycodestyle, "E275") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAfterKeyword),
(Pycodestyle, "E301") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLineBetweenMethods),
(Pycodestyle, "E302") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLinesTopLevel),
(Pycodestyle, "E303") => (RuleGroup::Preview, rules::pycodestyle::rules::TooManyBlankLines),
(Pycodestyle, "E304") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLineAfterDecorator),
(Pycodestyle, "E305") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLinesAfterFunctionOrClass),
(Pycodestyle, "E306") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLinesBeforeNestedDefinition),
(Pycodestyle, "E401") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleImportsOnOneLine),
(Pycodestyle, "E402") => (RuleGroup::Stable, rules::pycodestyle::rules::ModuleImportNotAtTopOfFile),
(Pycodestyle, "E501") => (RuleGroup::Stable, rules::pycodestyle::rules::LineTooLong),
@@ -937,6 +943,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "024") => (RuleGroup::Preview, rules::ruff::rules::MutableFromkeysValue),
(Ruff, "025") => (RuleGroup::Preview, rules::ruff::rules::UnnecessaryDictComprehensionForIterable),
(Ruff, "026") => (RuleGroup::Preview, rules::ruff::rules::DefaultFactoryKwarg),
(Ruff, "027") => (RuleGroup::Preview, rules::ruff::rules::MissingFStringSyntax),
(Ruff, "100") => (RuleGroup::Stable, rules::ruff::rules::UnusedNOQA),
(Ruff, "200") => (RuleGroup::Stable, rules::ruff::rules::InvalidPyprojectToml),
#[cfg(feature = "test-rules")]
@@ -1018,6 +1025,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
#[allow(deprecated)]
(Refurb, "113") => (RuleGroup::Nursery, rules::refurb::rules::RepeatedAppend),
(Refurb, "118") => (RuleGroup::Preview, rules::refurb::rules::ReimplementedOperator),
(Refurb, "129") => (RuleGroup::Preview, rules::refurb::rules::ReadlinesInFor),
#[allow(deprecated)]
(Refurb, "131") => (RuleGroup::Nursery, rules::refurb::rules::DeleteFullSlice),
#[allow(deprecated)]

View File

@@ -5,7 +5,7 @@ use ruff_python_ast::docstrings::{leading_space, leading_words};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use strum_macros::EnumIter;
use ruff_source_file::{Line, UniversalNewlineIterator, UniversalNewlines};
use ruff_source_file::{Line, NewlineWithTrailingNewline, UniversalNewlines};
use crate::docstrings::styles::SectionStyle;
use crate::docstrings::{Docstring, DocstringBody};
@@ -130,6 +130,34 @@ impl SectionKind {
Self::Yields => "Yields",
}
}
/// Returns `true` if a section can contain subsections, as in:
/// ```python
/// Yields
/// ------
/// int
/// Description of the anonymous integer return value.
/// ```
///
/// For NumPy, see: <https://numpydoc.readthedocs.io/en/latest/format.html>
///
/// For Google, see: <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>
pub(crate) fn has_subsections(self) -> bool {
matches!(
self,
Self::Args
| Self::Arguments
| Self::OtherArgs
| Self::OtherParameters
| Self::OtherParams
| Self::Parameters
| Self::Raises
| Self::Returns
| Self::SeeAlso
| Self::Warns
| Self::Yields
)
}
}
pub(crate) struct SectionContexts<'a> {
@@ -356,13 +384,16 @@ impl<'a> SectionContext<'a> {
pub(crate) fn previous_line(&self) -> Option<&'a str> {
let previous =
&self.docstring_body.as_str()[TextRange::up_to(self.range_relative().start())];
previous.universal_newlines().last().map(|l| l.as_str())
previous
.universal_newlines()
.last()
.map(|line| line.as_str())
}
/// Returns the lines belonging to this section after the summary line.
pub(crate) fn following_lines(&self) -> UniversalNewlineIterator<'a> {
pub(crate) fn following_lines(&self) -> NewlineWithTrailingNewline<'a> {
let lines = self.following_lines_str();
UniversalNewlineIterator::with_offset(lines, self.offset() + self.data.summary_full_end)
NewlineWithTrailingNewline::with_offset(lines, self.offset() + self.data.summary_full_end)
}
fn following_lines_str(&self) -> &'a str {
@@ -459,13 +490,54 @@ fn is_docstring_section(
// args: The arguments to the function.
// """
// ```
// Or `parameters` in:
// ```python
// def func(parameters: tuple[int]):
// """Toggle the gizmo.
//
// Parameters:
// -----
// parameters:
// The arguments to the function.
// """
// ```
// However, if the header is an _exact_ match (like `Returns:`, as opposed to `returns:`), then
// continue to treat it as a section header.
if let Some(previous_section) = previous_section {
if previous_section.indent_size < indent_size {
if section_kind.has_subsections() {
if let Some(previous_section) = previous_section {
let verbatim = &line[TextRange::at(indent_size, section_name_size)];
if section_kind.as_str() != verbatim {
return false;
// If the section is more deeply indented, assume it's a subsection, as in:
// ```python
// def func(args: tuple[int]):
// """Toggle the gizmo.
//
// Args:
// args: The arguments to the function.
// """
// ```
if previous_section.indent_size < indent_size {
if section_kind.as_str() != verbatim {
return false;
}
}
// If the section isn't underlined, and isn't title-cased, assume it's a subsection,
// as in:
// ```python
// def func(parameters: tuple[int]):
// """Toggle the gizmo.
//
// Parameters:
// -----
// parameters:
// The arguments to the function.
// """
// ```
if !next_line_is_underline && verbatim.chars().next().is_some_and(char::is_lowercase) {
if section_kind.as_str() != verbatim {
return false;
}
}
}
}

View File

@@ -8,6 +8,7 @@ use ruff_python_ast::{self as ast, Arguments, ExceptHandler, Stmt};
use ruff_python_ast::{AnyNodeRef, ArgOrKeyword};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_trivia::textwrap::dedent_to;
use ruff_python_trivia::{
has_leading_content, is_python_whitespace, CommentRanges, PythonWhitespace, SimpleTokenKind,
SimpleTokenizer,
@@ -15,7 +16,9 @@ use ruff_python_trivia::{
use ruff_source_file::{Locator, NewlineWithTrailingNewline, UniversalNewlines};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use crate::cst::matchers::{match_function_def, match_indented_block, match_statement};
use crate::fix::codemods;
use crate::fix::codemods::CodegenStylist;
use crate::line_width::{IndentWidth, LineLength, LineWidthBuilder};
/// Return the `Fix` to use when deleting a `Stmt`.
@@ -166,6 +169,50 @@ pub(crate) fn add_argument(
}
}
/// Safely adjust the indentation of the indented block at [`TextRange`].
///
/// The [`TextRange`] is assumed to represent an entire indented block, including the leading
/// indentation of that block. For example, to dedent the body here:
/// ```python
/// if True:
/// print("Hello, world!")
/// ```
///
/// The range would be the entirety of ` print("Hello, world!")`.
pub(crate) fn adjust_indentation(
range: TextRange,
indentation: &str,
locator: &Locator,
indexer: &Indexer,
stylist: &Stylist,
) -> Result<String> {
// If the range includes a multi-line string, use LibCST to ensure that we don't adjust the
// whitespace _within_ the string.
if indexer.multiline_ranges().intersects(range) || indexer.fstring_ranges().intersects(range) {
let contents = locator.slice(range);
let module_text = format!("def f():{}{contents}", stylist.line_ending().as_str());
let mut tree = match_statement(&module_text)?;
let embedding = match_function_def(&mut tree)?;
let indented_block = match_indented_block(&mut embedding.body)?;
indented_block.indent = Some(indentation);
let module_text = indented_block.codegen_stylist(stylist);
let module_text = module_text
.strip_prefix(stylist.line_ending().as_str())
.unwrap()
.to_string();
Ok(module_text)
} else {
// Otherwise, we can do a simple adjustment ourselves.
let contents = locator.slice(range);
Ok(dedent_to(contents, indentation))
}
}
/// Determine if a vector contains only one, specific element.
fn is_only<T: PartialEq>(vec: &[T], value: &T) -> bool {
vec.len() == 1 && vec[0] == *value

View File

@@ -109,6 +109,7 @@ pub fn check_path(
path,
locator,
indexer,
stylist,
settings,
source_type,
source_kind.as_ipy_notebook().map(Notebook::cell_offsets),

View File

@@ -8,6 +8,7 @@ use fern;
use log::Level;
use once_cell::sync::Lazy;
use ruff_python_parser::{ParseError, ParseErrorType};
use rustc_hash::FxHashSet;
use ruff_source_file::{LineIndex, OneIndexed, SourceCode, SourceLocation};
@@ -15,7 +16,7 @@ use crate::fs;
use crate::source_kind::SourceKind;
use ruff_notebook::Notebook;
pub static WARNINGS: Lazy<Mutex<Vec<&'static str>>> = Lazy::new(Mutex::default);
pub static IDENTIFIERS: Lazy<Mutex<Vec<&'static str>>> = Lazy::new(Mutex::default);
/// Warn a user once, with uniqueness determined by the given ID.
#[macro_export]
@@ -24,7 +25,7 @@ macro_rules! warn_user_once_by_id {
use colored::Colorize;
use log::warn;
if let Ok(mut states) = $crate::logging::WARNINGS.lock() {
if let Ok(mut states) = $crate::logging::IDENTIFIERS.lock() {
if !states.contains(&$id) {
let message = format!("{}", format_args!($($arg)*));
warn!("{}", message.bold());
@@ -34,6 +35,26 @@ macro_rules! warn_user_once_by_id {
};
}
pub static MESSAGES: Lazy<Mutex<FxHashSet<String>>> = Lazy::new(Mutex::default);
/// Warn a user once, if warnings are enabled, with uniqueness determined by the content of the
/// message.
#[macro_export]
macro_rules! warn_user_once_by_message {
($($arg:tt)*) => {
use colored::Colorize;
use log::warn;
if let Ok(mut states) = $crate::logging::MESSAGES.lock() {
let message = format!("{}", format_args!($($arg)*));
if !states.contains(&message) {
warn!("{}", message.bold());
states.insert(message);
}
}
};
}
/// Warn a user once, with uniqueness determined by the calling location itself.
#[macro_export]
macro_rules! warn_user_once {

View File

@@ -264,6 +264,11 @@ impl Rule {
| Rule::BadQuotesMultilineString
| Rule::BlanketNOQA
| Rule::BlanketTypeIgnore
| Rule::BlankLineAfterDecorator
| Rule::BlankLineBetweenMethods
| Rule::BlankLinesAfterFunctionOrClass
| Rule::BlankLinesBeforeNestedDefinition
| Rule::BlankLinesTopLevel
| Rule::CommentedOutCode
| Rule::EmptyComment
| Rule::ExtraneousParentheses
@@ -296,6 +301,7 @@ impl Rule {
| Rule::ShebangNotFirstLine
| Rule::SingleLineImplicitStringConcatenation
| Rule::TabIndentation
| Rule::TooManyBlankLines
| Rule::TrailingCommaOnBareTuple
| Rule::TypeCommentInStub
| Rule::UselessSemicolon

View File

@@ -248,6 +248,7 @@ impl Renamer {
| BindingKind::Assignment
| BindingKind::BoundException
| BindingKind::LoopVar
| BindingKind::ComprehensionVar
| BindingKind::WithItemVar
| BindingKind::Global
| BindingKind::Nonlocal(_)

View File

@@ -102,7 +102,7 @@ static REDIRECTS: Lazy<HashMap<&'static str, &'static str>> = Lazy::new(|| {
("TCH006", "TCH010"),
("TRY200", "B904"),
("PGH001", "S307"),
("PHG002", "G010"),
("PGH002", "G010"),
// Test redirect by exact code
#[cfg(feature = "test-rules")]
("RUF940", "RUF950"),

View File

@@ -321,6 +321,16 @@ mod schema {
true
}
})
.filter(|_rule| {
// Filter out all test-only rules
#[cfg(feature = "test-rules")]
#[allow(clippy::used_underscore_binding)]
if _rule.starts_with("RUF9") {
return false;
}
true
})
.sorted()
.map(Value::String)
.collect(),

View File

@@ -1,14 +1,16 @@
/// See: [eradicate.py](https://github.com/myint/eradicate/blob/98f199940979c94447a461d50d27862b118b282d/eradicate.py)
use aho_corasick::AhoCorasick;
use itertools::Itertools;
use once_cell::sync::Lazy;
use regex::{Regex, RegexSet};
use ruff_python_parser::parse_suite;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::TextSize;
static CODE_INDICATORS: Lazy<AhoCorasick> = Lazy::new(|| {
AhoCorasick::new([
"(", ")", "[", "]", "{", "}", ":", "=", "%", "print", "return", "break", "continue",
"import",
"(", ")", "[", "]", "{", "}", ":", "=", "%", "return", "break", "continue", "import",
])
.unwrap()
});
@@ -44,6 +46,14 @@ pub(crate) fn comment_contains_code(line: &str, task_tags: &[String]) -> bool {
return false;
}
// Fast path: if the comment contains consecutive identifiers, we know it won't parse.
let tokenizer = SimpleTokenizer::starts_at(TextSize::default(), line).skip_trivia();
if tokenizer.tuple_windows().any(|(first, second)| {
first.kind == SimpleTokenKind::Name && second.kind == SimpleTokenKind::Name
}) {
return false;
}
// Ignore task tag comments (e.g., "# TODO(tom): Refactor").
if line
.split(&[' ', ':', '('])
@@ -123,9 +133,10 @@ mod tests {
#[test]
fn comment_contains_code_with_print() {
assert!(comment_contains_code("#print", &[]));
assert!(comment_contains_code("#print(1)", &[]));
assert!(!comment_contains_code("#print", &[]));
assert!(!comment_contains_code("#print 1", &[]));
assert!(!comment_contains_code("#to print", &[]));
}
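With `print` dropped from `CODE_INDICATORS` and the new consecutive-identifier fast path, the updated tests suggest commented-out-code detection now classifies comments roughly like this (the annotations are mine):

```python
# print(1)    <- still treated as code: has call syntax and parses
# print       <- no longer treated as code: a bare word is not an indicator
# to print    <- not code: two consecutive identifiers can never parse
```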

View File

@@ -1,7 +1,7 @@
use ruff_python_ast::Expr;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Expr;
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -47,6 +47,10 @@ impl Violation for SixPY3 {
/// YTT202
pub(crate) fn name_or_attribute(checker: &mut Checker, expr: &Expr) {
if !checker.semantic().seen_module(Modules::SIX) {
return;
}
if checker
.semantic()
.resolve_call_path(expr)
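The early return above skips call-path resolution entirely when `six` was never imported; modules that do import it are checked as before. For reference, the pattern YTT202 targets looks like this (the advice in the comment is my recollection of the rule, not taken from this diff):

```python
import six

if six.PY3:  # YTT202: prefer `not six.PY2` so the check survives a hypothetical Python 4
    ...
```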

View File

@@ -3,6 +3,7 @@ use ruff_python_ast::ExprCall;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::CallPath;
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -42,17 +43,19 @@ impl Violation for BlockingOsCallInAsyncFunction {
/// ASYNC102
pub(crate) fn blocking_os_call(checker: &mut Checker, call: &ExprCall) {
if checker.semantic().in_async_context() {
if checker
.semantic()
.resolve_call_path(call.func.as_ref())
.as_ref()
.is_some_and(is_unsafe_os_method)
{
checker.diagnostics.push(Diagnostic::new(
BlockingOsCallInAsyncFunction,
call.func.range(),
));
if checker.semantic().seen_module(Modules::OS) {
if checker.semantic().in_async_context() {
if checker
.semantic()
.resolve_call_path(call.func.as_ref())
.as_ref()
.is_some_and(is_unsafe_os_method)
{
checker.diagnostics.push(Diagnostic::new(
BlockingOsCallInAsyncFunction,
call.func.range(),
));
}
}
}
}
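The only change here is the outer `seen_module(Modules::OS)` guard: resolution is skipped when `os` was never imported, and code that already triggered ASYNC102 still does. A sketch of that code, assuming `os.popen` is on the unsafe-method list (the list itself is not shown in this diff):

```python
import os


async def run_ls() -> None:
    os.popen("ls")  # ASYNC102: blocking `os` call inside an async function
```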

View File

@@ -118,8 +118,7 @@ fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
let binding = semantic.binding(binding_id);
let Some(Expr::Call(call)) = analyze::typing::find_binding_value(&name.id, binding, semantic)
else {
let Some(Expr::Call(call)) = analyze::typing::find_binding_value(binding, semantic) else {
return false;
};

View File

@@ -46,6 +46,7 @@ mod tests {
#[test_case(Rule::SubprocessWithoutShellEqualsTrue, Path::new("S603.py"))]
#[test_case(Rule::SuspiciousPickleUsage, Path::new("S301.py"))]
#[test_case(Rule::SuspiciousEvalUsage, Path::new("S307.py"))]
#[test_case(Rule::SuspiciousMarkSafeUsage, Path::new("S308.py"))]
#[test_case(Rule::SuspiciousURLOpenUsage, Path::new("S310.py"))]
#[test_case(Rule::SuspiciousTelnetUsage, Path::new("S312.py"))]
#[test_case(Rule::SuspiciousTelnetlibImport, Path::new("S401.py"))]

View File

@@ -4,7 +4,7 @@ use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::CallPath;
use ruff_python_ast::{self as ast, Expr, Operator};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::{Modules, SemanticModel};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -60,6 +60,10 @@ enum Reason {
/// S103
pub(crate) fn bad_file_permissions(checker: &mut Checker, call: &ast::ExprCall) {
if !checker.semantic().seen_module(Modules::OS) {
return;
}
if checker
.semantic()
.resolve_call_path(&call.func)
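As with the other bandit rules in this change set, S103 now bails out early unless `os` has been imported. The kind of call it continues to flag, as I understand the rule (the permissiveness threshold is not shown in this hunk):

```python
import os
import stat

os.chmod("id_rsa", 0o777)                        # S103: world-readable/writable mask
os.chmod("id_rsa", stat.S_IRUSR | stat.S_IWUSR)  # owner-only permissions, not flagged
```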

View File

@@ -1,6 +1,7 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr};
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -35,6 +36,10 @@ impl Violation for DjangoRawSql {
/// S611
pub(crate) fn django_raw_sql(checker: &mut Checker, call: &ast::ExprCall) {
if !checker.semantic().seen_module(Modules::DJANGO) {
return;
}
if checker
.semantic()
.resolve_call_path(&call.func)
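Same pattern for S611: the call path is only resolved once `django` has been seen. The construct it targets, assuming the checked path is `django.db.models.expressions.RawSQL` (the tail of the function is cut off in this hunk):

```python
from django.db.models.expressions import RawSQL

user_input = "alice'; DROP TABLE users; --"
query = RawSQL("SELECT id FROM users WHERE name = '%s'" % user_input, [])  # S611
```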

View File

@@ -40,7 +40,9 @@ impl Violation for HardcodedBindAllInterfaces {
pub(crate) fn hardcoded_bind_all_interfaces(checker: &mut Checker, string: StringLike) {
let is_bind_all_interface = match string {
StringLike::StringLiteral(ast::ExprStringLiteral { value, .. }) => value == "0.0.0.0",
StringLike::FStringLiteral(ast::FStringLiteralElement { value, .. }) => value == "0.0.0.0",
StringLike::FStringLiteral(ast::FStringLiteralElement { value, .. }) => {
&**value == "0.0.0.0"
}
StringLike::BytesLiteral(_) => return,
};
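The fix here is to compare the dereferenced f-string literal element (`&**value`) against the string slice rather than the element struct itself; behavior is otherwise unchanged. If I read the match arms right, both forms below are caught, since the literal element of the second is exactly `"0.0.0.0"`:

```python
HOST = "0.0.0.0"    # S104: hardcoded bind to all interfaces
HOST = f"0.0.0.0"   # also matched, via the f-string's literal element
HOST = "127.0.0.1"  # not flagged
```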

View File

@@ -1,6 +1,7 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast};
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -35,6 +36,10 @@ impl Violation for LoggingConfigInsecureListen {
/// S612
pub(crate) fn logging_config_insecure_listen(checker: &mut Checker, call: &ast::ExprCall) {
if !checker.semantic().seen_module(Modules::LOGGING) {
return;
}
if checker
.semantic()
.resolve_call_path(&call.func)
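S612 gets the same `seen_module` early return, keyed on `logging`. The call shape it exists for (my example; whether a `verify=` argument silences it is not visible in this hunk):

```python
import logging.config

thread = logging.config.listen(9999)  # S612: listen() evaluates configs received over a socket
```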

View File

@@ -3,7 +3,7 @@
//! See: <https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html>
use ruff_diagnostics::{Diagnostic, DiagnosticKind, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, ExprCall};
use ruff_python_ast::{self as ast, Decorator, Expr, ExprCall};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -848,7 +848,7 @@ pub(crate) fn suspicious_function_call(checker: &mut Checker, call: &ExprCall) {
// Eval
["" | "builtins", "eval"] => Some(SuspiciousEvalUsage.into()),
// MarkSafe
["django", "utils", "safestring", "mark_safe"] => Some(SuspiciousMarkSafeUsage.into()),
["django", "utils", "safestring" | "html", "mark_safe"] => Some(SuspiciousMarkSafeUsage.into()),
// URLOpen (`urlopen`, `urlretrieve`, `Request`)
["urllib", "request", "urlopen" | "urlretrieve" | "Request"] |
["six", "moves", "urllib", "request", "urlopen" | "urlretrieve" | "Request"] => {
@@ -901,3 +901,27 @@ pub(crate) fn suspicious_function_call(checker: &mut Checker, call: &ExprCall) {
checker.diagnostics.push(diagnostic);
}
}
/// S308
pub(crate) fn suspicious_function_decorator(checker: &mut Checker, decorator: &Decorator) {
let Some(diagnostic_kind) = checker
.semantic()
.resolve_call_path(&decorator.expression)
.and_then(|call_path| {
match call_path.as_slice() {
// MarkSafe
["django", "utils", "safestring" | "html", "mark_safe"] => {
Some(SuspiciousMarkSafeUsage.into())
}
_ => None,
}
})
else {
return;
};
let diagnostic = Diagnostic::new::<DiagnosticKind>(diagnostic_kind, decorator.range());
if checker.enabled(diagnostic.kind.rule()) {
checker.diagnostics.push(diagnostic);
}
}
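Two behavioral changes land in this file: the `mark_safe` call path now also matches `django.utils.html.mark_safe`, and the new `suspicious_function_decorator` hook reports S308 when `mark_safe` is applied as a decorator (the snapshot further down shows both forms). In Python terms:

```python
from django.utils.safestring import mark_safe


@mark_safe  # S308: decorator usage is now reported
def render_banner():
    return '<script>alert("evil!")</script>'


html = mark_safe('<b>looks trusted, is it?</b>')  # S308: call form, as before
```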

View File

@@ -3,6 +3,7 @@ use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast};
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;
/// ## What it does
@@ -48,6 +49,10 @@ impl Violation for TarfileUnsafeMembers {
/// S202
pub(crate) fn tarfile_unsafe_members(checker: &mut Checker, call: &ast::ExprCall) {
if !checker.semantic().seen_module(Modules::TARFILE) {
return;
}
if !call
.func
.as_attribute_expr()
@@ -65,10 +70,6 @@ pub(crate) fn tarfile_unsafe_members(checker: &mut Checker, call: &ast::ExprCall
return;
}
if !checker.semantic().seen(&["tarfile"]) {
return;
}
checker
.diagnostics
.push(Diagnostic::new(TarfileUnsafeMembers, call.func.range()));
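The module check moves from a late `seen(&["tarfile"])` test to an early `seen_module(Modules::TARFILE)` return, so the rule's behavior is unchanged. The call it flags:

```python
import tarfile

with tarfile.open("release.tar.gz") as archive:
    archive.extractall()  # S202: members are extracted without validation
```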

View File

@@ -19,7 +19,7 @@ use crate::checkers::ast::Checker;
/// from cryptography.hazmat.primitives.asymmetric import dsa, ec
///
/// dsa.generate_private_key(key_size=512)
/// ec.generate_private_key(curve=ec.SECT163K1)
/// ec.generate_private_key(curve=ec.SECT163K1())
/// ```
///
/// Use instead:
@@ -27,7 +27,7 @@ use crate::checkers::ast::Checker;
/// from cryptography.hazmat.primitives.asymmetric import dsa, ec
///
/// dsa.generate_private_key(key_size=4096)
/// ec.generate_private_key(curve=ec.SECP384R1)
/// ec.generate_private_key(curve=ec.SECP384R1())
/// ```
///
/// ## References

View File

@@ -0,0 +1,34 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S308.py:5:12: S308 Use of `mark_safe` may expose cross-site scripting vulnerabilities
|
4 | def some_func():
5 | return mark_safe('<script>alert("evil!")</script>')
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S308
|
S308.py:8:1: S308 Use of `mark_safe` may expose cross-site scripting vulnerabilities
|
8 | @mark_safe
| ^^^^^^^^^^ S308
9 | def some_func():
10 | return '<script>alert("evil!")</script>'
|
S308.py:17:12: S308 Use of `mark_safe` may expose cross-site scripting vulnerabilities
|
16 | def some_func():
17 | return mark_safe('<script>alert("evil!")</script>')
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S308
|
S308.py:20:1: S308 Use of `mark_safe` may expose cross-site scripting vulnerabilities
|
20 | @mark_safe
| ^^^^^^^^^^ S308
21 | def some_func():
22 | return '<script>alert("evil!")</script>'
|

View File

@@ -191,6 +191,11 @@ fn match_annotation_to_complex_bool(annotation: &Expr, semantic: &SemanticModel)
}
// Ex) `typing.Union[bool, int]`
Expr::Subscript(ast::ExprSubscript { value, slice, .. }) => {
// If the typing modules were never imported, we'll never match below.
if !semantic.seen_typing() {
return false;
}
let call_path = semantic.resolve_call_path(value);
if call_path
.as_ref()
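The new `seen_typing()` check short-circuits the subscript branch when nothing typing-related was ever imported. The comment's own example of a match is `typing.Union[bool, int]`; in source form (parameter names are mine):

```python
from typing import Union


def set_visibility(value: Union[bool, int]) -> None:  # annotation matched as a "complex bool"
    ...


def set_count(value: int) -> None:  # plain int, not matched by this branch
    ...
```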

View File

@@ -59,11 +59,11 @@ fn assertion_error(msg: Option<&Expr>) -> Stmt {
})),
arguments: Arguments {
args: if let Some(msg) = msg {
vec![msg.clone()]
Box::from([msg.clone()])
} else {
vec![]
Box::from([])
},
keywords: vec![],
keywords: Box::from([]),
range: TextRange::default(),
},
range: TextRange::default(),

View File

@@ -91,7 +91,7 @@ pub(crate) fn assert_raises_exception(checker: &mut Checker, items: &[WithItem])
return;
}
let [arg] = arguments.args.as_slice() else {
let [arg] = &*arguments.args else {
return;
};
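Swapping `arguments.args.as_slice()` for `&*arguments.args` presumably follows the `Arguments` fields switching to boxed slices, as in the previous file; the rule still requires exactly one positional argument. If I'm reading that check right, this is the shape B017 is after:

```python
import pytest


def explode() -> None:
    raise ValueError("boom")


def test_explode() -> None:
    with pytest.raises(Exception):  # B017: asserting a bare `Exception` is too broad
        explode()
```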

Some files were not shown because too many files have changed in this diff.