Compare commits


70 Commits

Author SHA1 Message Date
Charlie Marsh
f460f9c5c0 Bump version to v0.1.6 (#8744) 2023-11-17 13:29:19 -05:00
Tuomas Siipola
2faac1e7a8 [refurb] Implement math-constant (FURB152) (#8727)
## Summary

Implements
[FURB152](https://github.com/dosisod/refurb/blob/master/docs/checks.md#furb152-use-math-constant)
that checks for literals that approximate constants in the `math` module,
for example:

```python
A = 3.141592 * r ** 2
```

Use instead:
```python
A = math.pi * r ** 2
```

Related to #1348.
2023-11-17 17:37:44 +00:00
Charlie Marsh
b7dbb9062c Remove incorrect deprecation label for stdout and stderr (#8743)
Closes https://github.com/astral-sh/ruff/issues/8738.
2023-11-17 12:34:02 -05:00
Charlie Marsh
66794bc9fe Remove erroneous bad-dunder-name reference (#8742)
Closes #8731.
2023-11-17 17:26:29 +00:00
konsti
dca430f4d2 Fix instability with await fluent style (#8676)
Fix an instability where await was followed by a breaking fluent style
expression:

```python
test_data = await (
    Stream.from_async(async_data)
    .flat_map_async()
    .map()
    .filter_async(is_valid_data)
    .to_list()
)
```

Note that this is technically a minor style change (see the ecosystem check).
2023-11-17 12:24:19 -05:00
Adil Zouitine
841e6c889e Add River in "Who's Using Ruff?" section (#8740)
## Summary

I added [`River`](https://github.com/online-ml/river) to the "Who's Using
Ruff?" section. River is an online machine learning library with 4.5k
stars and 460k downloads; we have been using
[`Ruff`](3758eb1e0a/pyproject.toml (L31))
as our linter for a few months.
2023-11-17 08:13:50 -06:00
Zanie Blue
bd99175fea Update D208 to preserve indentation offsets when fixing overindented lines (#8699)
Closes https://github.com/astral-sh/ruff/issues/8695

We track the smallest indentation offset seen across the overindented
lines, then reduce each line's indentation by only that amount, preserving
the relative indentation between lines. This rule's behavior now matches
our formatter, which is nice.

We may want to gate this with preview.
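
For illustration, a hedged sketch (not taken from the PR's fixtures) of the kind of over-indented docstring this affects; the fix now reduces every over-indented line by the smallest excess rather than flattening them all to the docstring's indentation:

```python
def example():
    """Summary line.

            Over-indented block:
                - item one
                - item two
    """


# Sketch of the fixed result: each line is reduced by the smallest excess
# (8 spaces here), so the bullets stay indented relative to their heading.
def example_fixed():
    """Summary line.

    Over-indented block:
        - item one
        - item two
    """
```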
2023-11-16 22:11:07 -06:00
Ofek Lev
4c86b155f2 Fix typo (#8735) 2023-11-16 22:10:09 -06:00
Vince Chan
e2109c1353 Improve debug printing for resolving origin of config settings (#8729)
## Summary

When running ruff in verbose mode with `-v`, the first debug logs show
where the config settings are taken from. For example:
```
❯ ruff check ./some_file.py -v
[2023-11-17][00:16:25][ruff_cli::resolve][DEBUG] Using pyproject.toml (parent) at /Users/vince/demo/ruff.toml
```

This threw me off for a second because I knew I had no Python project
there, and therefore no `pyproject.toml` file. Then I realised it was
actually reading a `ruff.toml` file (obvious once you read the whole
message, I suppose) and that "pyproject.toml" is a hardcoded string in
the debug log.

I think it would be nice to tweak the wording slightly so it is clear
that the settings don't necessarily have to come from a
`pyproject.toml` file.
2023-11-17 01:10:36 +00:00
Charlie Marsh
1fcccf82fc Avoid syntax error via importing trio.lowlevel (#8730)
We ended up with a syntax error here via `from trio import
lowlevel.checkpoint`. The new solution avoids that error, but does miss
cases like:

```py
from trio.lowlevel import Timer
```

Where it could insert `from trio.lowlevel import Timer, checkpoint`.
Instead, it'll add `from trio import lowlevel`.

See:
https://github.com/astral-sh/ruff/issues/8402#issuecomment-1810838129
2023-11-17 01:07:59 +00:00
konsti
14e65afdc6 Update to Rust 1.74 and use new clippy lints table (#8722)
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.

The update itself introduced a new clippy lint about superfluous hashes
in raw strings; those hashes were removed.

I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to use `unsafe_code = "warn"` instead of
`"forbid"`, because the latter now actually bans unsafe code and rejects
our local `#[allow(unsafe_code)]` overrides:

```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
  --> crates/ruff_source_file/src/newlines.rs:62:17
   |
62 |         #[allow(unsafe_code)]
   |                 ^^^^^^^^^^^ overruled by previous forbid
   |
   = note: `forbid` lint level was set on command line
```

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-11-16 18:12:46 -05:00
Charlie Marsh
6d5d079a18 Avoid missing namespace violations in scripts with shebangs (#8710)
## Summary

I think it's reasonable to avoid raising `INP001` for scripts, and
shebangs are one sufficient way to detect scripts.

Closes https://github.com/astral-sh/ruff/issues/8690.
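
For illustration, a hedged sketch of a file that would previously trigger `INP001` in a package-less directory but is now treated as a script because of its shebang:

```python
#!/usr/bin/env python3
"""A standalone script in a directory without an `__init__.py`.

The shebang marks it as a script, so `INP001` (implicit namespace package)
is no longer raised for it.
"""

print("hello from a script")
```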
2023-11-16 17:21:33 -05:00
Zanie Blue
d1e88dc984 Update UP032 to unescape curly braces in literal parts of converted strings (#8697)
Closes #8694
2023-11-16 13:59:54 -06:00
konsti
dda31b6996 List all ipython builtins (#8719)
I checked for the IPython-specific builtins on Python 3.11 using
```python
import json
from subprocess import check_output

builtins_python = json.loads(check_output(["python3", "-c", "import json; print(json.dumps(dir(__builtins__)))"]))
builtins_ipython = json.loads(check_output(["ipython3", "-c", "import json; print(json.dumps(dir(__builtins__)))"]))
print(sorted(set(builtins_ipython) - set(builtins_python)))
```
and updated the relevant constant and match. The list changes from

`display`

to

`__IPYTHON__`, `display`, `get_ipython`.

Followup to #8707
2023-11-16 19:06:25 +01:00
Charlie Marsh
b6a7787318 Remove pyproject.toml from fixtures directory (#8726)
## Summary

This exists to power a test, but it ends up affecting the behavior of
all files in the directory. Namely, it means that these files _aren't_
excluded when you format or lint them directly, since in that case, Ruff
will fall back to looking at the `pyproject.toml` in
`crates/ruff_linter/resources/test/fixtures`, which _doesn't_ exclude
these files, unlike our top-level `pyproject.toml`.
2023-11-16 13:04:52 -05:00
Jonas Haag
5fa961f670 Improve N803 example (#8714) 2023-11-16 12:50:31 -05:00
Charlie Marsh
2424188bb2 Trim trailing empty strings when converting to f-strings (#8712)
## Summary

When converting from a `.format` call to an f-string, we can trim any
trailing empty tokens.

Closes https://github.com/astral-sh/ruff/issues/8683.
2023-11-15 23:14:49 -05:00
Charlie Marsh
a59172528c Add fix for future-required-type-annotation (#8711)
## Summary

We already support inserting imports for `I002` -- this PR just adds the
same fix for `FA102`, which is explicitly about `from __future__ import
annotations`.

Closes https://github.com/astral-sh/ruff/issues/8682.
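
For illustration, a hedged sketch of the fixed output (assuming a target version that doesn't support PEP 604 unions at runtime): the new fix inserts the `__future__` import at the top of the module.

```python
from __future__ import annotations  # inserted by the new FA102 fix


def find(values: list[int]) -> int | None:
    # PEP 585/604 syntax in annotations is what triggers FA102 on older targets
    return values[0] if values else None
```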
2023-11-16 03:08:02 +00:00
Charlie Marsh
cd29761b9c Run unicode prefix rule over tokens (#8709)
## Summary

It seems like the range of an `ExprStringLiteral` can be somewhat
unreliable when the string is part of an implicit concatenation with an
f-string. Using the tokens themselves is more reliable.

Closes #8680.
Closes https://github.com/astral-sh/ruff/issues/7784.
2023-11-16 02:30:42 +00:00
Charlie Marsh
4ac78d5725 Treat display as a builtin in IPython (#8707)
## Summary

`display` is a special-cased builtin in IPython. This PR adds it to the
builtin namespace when analyzing IPython notebooks.

Closes https://github.com/astral-sh/ruff/issues/8702.
2023-11-16 01:58:44 +00:00
Alan Du
2083352ae3 Add autofix for PIE800 (#8668)
## Summary

This adds an autofix for PIE800 (unnecessary spread) -- whenever we see
a `**{...}` inside another dictionary literal, just delete the `**{` and
`}` to inline the key-value pairs. So `{"a": "b", **{"c": "d"}}` becomes
just `{"a": "b", "c": "d"}`.

I have enabled this just for preview mode.

## Test Plan

Updated the preview snapshot test.
2023-11-15 18:11:04 +00:00
Tuomas Siipola
0e2ece5217 Implement FURB136 (#8664)
## Summary

Implements
[FURB136](https://github.com/dosisod/refurb/blob/master/docs/checks.md#furb136-use-min-max)
that checks for `if` expressions that can be replaced with `min()` or
`max()` calls. See issue #1348 for more information.

This implementation diverges from Refurb's original implementation by
retaining the order of equal values. For example, Refurb suggests that
the following expressions:

```python
highest_score1 = score1 if score1 > score2 else score2
highest_score2 = score1 if score1 >= score2 else score2
```

should be rewritten as:

```python
highest_score1 = max(score1, score2)
highest_score2 = max(score1, score2)
```

whereas this implementation provides more correct alternatives:

```python
highest_score1 = max(score2, score1)
highest_score2 = max(score1, score2)
```

## Test Plan

Unit test checks all eight possibilities.
2023-11-15 18:10:13 +00:00
konsti
a783b14e7d Add --skip-magic-trailing-comma to formatter dev comment (#8689)
Testing compatibility with the future stable Black style, I realized
the `ruff_python_formatter` dev main was lacking the
`--skip-magic-trailing-comma` option. This does not affect `ruff
format`.

Usage:
```shell
cargo run --bin ruff_python_formatter -p ruff_python_formatter -- --skip-magic-trailing-comma --emit stdout scratch.py
```
2023-11-15 09:23:46 +00:00
Jelmer Vernooij
9d76e4e0b9 isort: Support disabling sections with `no-sections = true` (#8657)
## Summary

This adds a ``no-sections`` option for isort in the linter, similar to
the ``no_sections`` option that exists in upstream isort
(https://pycqa.github.io/isort/docs/configuration/options.html#no-sections)

This option puts all imports except for ``__future__`` into the same
section, and is mostly used by monorepos.
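
For illustration, a hedged sketch of the resulting layout with ``no-sections = true`` (the module names are placeholders, not from this PR): everything other than ``__future__`` imports ends up in one alphabetically sorted section.

```python
from __future__ import annotations

import my_project.utils  # first-party placeholder
import numpy             # third-party placeholder
import os
import sys
```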

I've taken a bit of a leap in assuming that ruff wants to support the
exact same option; more than happy to refactor if you'd prefer a
different way of setting this up.

Fixes #8653

## Test Plan

I've added a test and have run it on a large Python codebase that uses
isort with --no-sections. The option is disabled by default.
2023-11-14 21:45:51 +00:00
bluthej
561277925f [isort] Simplify code structure for ordering imports (#8685)
While fixing #8661 I noticed that the code structure for sorting imports
could be simplified.

## Summary

- Move the logic for `force_sort_within_sections` from `isort/mod.rs` to
`isort/ordering.rs` => now there is just one line in `isort/mod.rs`:
`let imports = order_imports(import_block, settings);` which yields the
sorted imports
- Change the function signature of `order_imports` to directly return a
`Vec<EitherImport<'a>>` => no need for `OrderedImportBlock`

I think this is a bit of an improvement because the code is simpler and
there should be a bit of a speedup when setting
`force-sort-within-sections` to true. Indeed, when it's set to true
we're now directly ordering all the imports, whereas before we would
first order the straight imports, then the from imports, combine them
and finally sort the combination a second time (this is probably not
noticeable in practice though).

## Test Plan

No tests added, this is a simple refactor.
2023-11-14 16:43:46 -05:00
doolio
1074324c52 Add starlette as a user of ruff (#8672)
Mentioned [here on
discord](https://discord.com/channels/1039017663004942429/1039017663512449056/1173726569852837958).
Used their GitHub URL, as that seems to be the most common URL type,
though not for Litestar.
2023-11-14 08:38:12 -06:00
Dhruv Manilawala
4099b9610f F-strings doesn't contain bytes literal for PLW0129 (#8675)
For the `PLW0129` rule, the f-string case shouldn't match against bytes
literals, as f-strings cannot contain them. F-strings are made up of
either string literals or formatted expressions.
2023-11-14 18:56:18 +05:30
Charlie Marsh
f7d249ae06 Remove repeated and erroneous scoped settings headers in docs (#8670)
Closes https://github.com/astral-sh/ruff/issues/8505.
2023-11-14 05:44:30 +00:00
Charlie Marsh
bf2cc3f520 Add autotyping-like return type inference for annotation rules (#8643)
## Summary

This PR adds (unsafe) fixes to the flake8-annotations rules that enforce
missing return types, offering to automatically insert type annotations
for functions with literal return values. The logic is smart enough to
generate simplified unions (e.g., `float` instead of `int | float`) and
deal with implicit returns (`return` without a value).

Closes https://github.com/astral-sh/ruff/issues/1640 (though we could
open a separate issue for inferring parameter types).

Closes https://github.com/astral-sh/ruff/issues/8213.
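
For illustration, a hedged sketch (not from the test suite) of the inference: literal returns drive the annotation, unions are simplified, and implicit returns contribute `None`.

```python
def ratio(flag):
    if flag:
        return 1      # int literal
    return 0.5        # float literal
# The new fix would offer `def ratio(flag) -> float:` (int | float simplifies to float).


def greet(name):
    if not name:
        return        # implicit `return None`
    return f"Hello, {name}"
# The new fix would offer `def greet(name) -> str | None:`.
```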

## Test Plan

`cargo test`
2023-11-13 23:34:15 -05:00
bluthej
23c819b4b3 Fix ordering for force-sort-within-sections (#8665)
Fixes #8661 

## Summary

Imports like `from x import y` don't have an "asname" for the module, so
they were placed before imports like `import x as w` since `None` <
`Some(s)` for any string `s`.
The fix is to first sort by `first_alias`, since it's `None` for `import
x as w`, and then by `asname`.
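
For illustration, a hedged sketch of the pair from the bug; per the sort keys described above, the aliased straight import now sorts ahead of the `from` import for the same module:

```python
import collections as c
from collections import OrderedDict
```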

## Test Plan

I included the example from the issue to avoid future regressions.
2023-11-13 18:27:56 -05:00
Adrian
16060670b8 Add new rule to check for useless quote escapes (#8630)
When using the autofixer for `Q000`, it does not remove the backslashes
from quotes that no longer need escaping.

This new rule checks for such backslashes (regardless of whether they
come from the autofixer or not) and can remove them.
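
For illustration, a hedged sketch of the kind of escape the new rule removes:

```python
# The single quote doesn't need escaping inside a double-quoted string,
# so the backslash is flagged and can be removed.
before = "It\'s fine"
after = "It's fine"
assert before == after  # the escape was purely cosmetic
```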

fixes #8617
2023-11-13 21:59:37 +00:00
Charlie Marsh
534fc34f11 Extend unnecessary-pass (PIE790) to include ellipses in preview (#8641)
## Summary

This PR extends `unnecessary-pass` (`PIE790`) to flag unnecessary
ellipsis expressions in addition to `pass` statements. A `pass` is
equivalent to a standalone `...`, so it feels correct to me that a
single rule should cover both cases.
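
For illustration, a hedged sketch of a newly flagged case: an ellipsis that, like a redundant `pass`, follows a body that already has content.

```python
class Placeholder:
    """This docstring already forms the class body."""

    ...  # now flagged by PIE790 in preview, just like a trailing `pass` would be
```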

When we look to v0.2.0, we should also consider deprecating `PYI013`,
which flags ellipses only for classes.

Closes https://github.com/astral-sh/ruff/issues/8602.
2023-11-13 19:28:16 +00:00
Charlie Marsh
df9ade7fd9 Use AST transformer for relocate (#8660) 2023-11-13 13:24:27 -05:00
Charlie Marsh
345e1401cf Treat class C: ... and class C(): ... equivalently (#8659)
## Summary

These should be seen as identical from the `ComparableAst` perspective.
2023-11-13 18:03:04 +00:00
Charlie Marsh
a8e0d4ab4f Fix lingering generated reference for MkDocs (#8658)
Missed this in #8652.
2023-11-13 18:00:01 +00:00
Alan Du
6f23bdb78f Generalize PIE807 to handle dict literals (#8608)
## Summary

PIE807 will rewrite `lambda: []` to `list` -- AFAICT though, the same
rationale also applies to dicts, so I've modified the code to also
rewrite `lambda: {}` to `dict`.

Two things I'm not sure about:
* Should this go to a new rule? This no longer actually matches the
behavior of flake8-pie, and while I think thematically it makes sense to
be part of the same rule, we could make it a standalone rule (but if so,
where should I put it and what error code should I use)?
* If we want a single rule, are there backwards-compatibility concerns
with the rule name change (from `reimplemented_list_builtin` to
`reimplemented_container_builtin`)?

## Test Plan

Added snapshot tests of the functionality.
2023-11-13 17:55:17 +00:00
Charlie Marsh
d574fcd1ac Compare formatted and unformatted ASTs during formatter tests (#8624)
## Summary

This PR implements validation in the formatter tests to ensure that we
don't modify the AST during formatting. Black has similar logic.

In implementing this, I learned that Black actually _does_ modify the
AST, and their test infrastructure normalizes the AST to wipe away those
differences. Specifically, Black changes the indentation of docstrings,
which _does_ modify the AST; and it also inserts parentheses in `del`
statements, which changes the AST too.

Ruff also does both these things, so we _also_ implement the same
normalization using a new visitor that allows for modifying the AST.

Closes https://github.com/astral-sh/ruff/issues/8184.

## Test Plan

`cargo test`
2023-11-13 17:43:27 +00:00
Charlie Marsh
3592f44ade Allow whitespace around colon in slices for whitespace-before-punctuation (E203) (#8654)
## Summary

This PR makes `whitespace-before-punctuation` (`E203`) compatible with
the formatter by relaxing the rule a bit, as compared to the pycodestyle
implementation. It's also more consistent with PEP 8, which says:

> However, in a slice the colon acts like a binary operator, and should
have equal amounts on either side (treating it as the operator with the
lowest priority).
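
For illustration, a hedged sketch of the now-accepted style (matching the formatter's output for complex slice bounds):

```python
ham = list(range(10))
lower, upper, offset = 2, 8, 1

piece = ham[lower + offset : upper + offset]  # no longer flagged by E203
```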

Closes https://github.com/astral-sh/ruff/issues/7259.
Closes https://github.com/astral-sh/ruff/issues/8642.

## Test Plan

`cargo test`
2023-11-13 12:16:13 -05:00
Andrew Gallant
8984072df2 ruff_python_formatter: copy and inline shared traits (#8656)
It seems as though using `include!(...)` to avoid the source code copy
breaks rust-analyzer. Namely, it treats the included file as unlinked,
and so any part of analysis (e.g., goto-definition) that needs that file
to reason about the code ends up failing.

2023-11-13 12:16:04 -05:00
Charlie Marsh
6a6de53722 Omit Insiders-only plugin when building docs on CI (#8652) 2023-11-13 10:24:58 -05:00
dependabot[bot]
5ba852a878 Bump annotate-snippets from 0.9.1 to 0.9.2 (#8646) 2023-11-13 14:55:15 +00:00
dependabot[bot]
c4fc2b8584 Bump pyproject-toml from 0.8.0 to 0.8.1 (#8648) 2023-11-13 14:53:47 +00:00
dependabot[bot]
1c5f2288ba Bump fs-err from 2.9.0 to 2.10.0 (#8649) 2023-11-13 09:38:44 -05:00
dependabot[bot]
62f1830898 Bump quick-junit from 0.3.3 to 0.3.5 (#8645) 2023-11-13 09:38:31 -05:00
dependabot[bot]
abca0a86ea Bump smallvec from 1.11.1 to 1.11.2 (#8647) 2023-11-13 09:38:11 -05:00
Charlie Marsh
213d315373 Avoid recommending Self usages in metaclasses (#8639)
PEP 673 forbids the use of `typing(_extensions).Self` in metaclasses, so
we want to avoid flagging `PYI034` on metaclasses. This is based on an
analogous change in `flake8-pyi`:
https://github.com/PyCQA/flake8-pyi/pull/436.

Closes https://github.com/astral-sh/ruff/issues/8353.
2023-11-12 19:47:48 -05:00
Charlie Marsh
7fd95e15d9 Document conventions in the FAQ (#8638)
Enumerates all rules defined in each convention in the FAQ. These lists
mirror
[pydocstyle](https://www.pydocstyle.org/en/latest/error_codes.html#default-conventions).

Closes https://github.com/astral-sh/ruff/issues/8573.
2023-11-12 22:56:39 +00:00
Charlie Marsh
02946e7b0c Redirect from rule codes to rule pages in docs (#8636)
## Summary

This adds redirects from, e.g., `https://docs.astral.sh/ruff/rules/F401`
to `https://docs.astral.sh/ruff/rules/unused-import`, which are
generated automatically when creating the documentation. Though we want
to move towards human-readable names eventually, I think this is a nice
and user-friendly change (and doesn't require any fancy infrastructure,
since the redirects are handled via a plugin and added client-side).

Closes #4710.
2023-11-12 17:47:10 -05:00
Charlie Marsh
cbd9157bbf Use function range for no-self-use (#8637)
Previously, this rule used the range of the `self` annotation, but it's
a lot more natural to use the range of the function name (since it also
means the `# noqa` is associated with the method rather than its first
argument).

Closes https://github.com/astral-sh/ruff/issues/8635.
2023-11-12 16:37:52 -05:00
Charlie Marsh
70f491d31e Omit unrolled augmented assignments in PIE794 (#8634)
Closes https://github.com/astral-sh/ruff/issues/8497.
2023-11-12 20:40:33 +00:00
Jonathan Plasse
776eb8724f Fix FBT001 false negative with unions and optional (#7501)
## Summary

- Close #7487

In the spirit of `flake8-boolean-trap`, any positional argument that can
accept a boolean should raise `FBT001`. The rule now covers all
annotations that accept booleans (e.g. `Optional[bool]`,
`Union[int, bool]`).
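
For illustration, a hedged sketch of the newly covered annotations (the last one uses Python 3.10+ syntax):

```python
from typing import Optional, Union


def set_flag(value: Optional[bool]): ...      # now raises FBT001
def configure(level: Union[int, bool]): ...   # now raises FBT001
def toggle(state: bool | None): ...           # now raises FBT001
```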

## Test Plan

Add a fixture, with an annotation using `|`, `Optional`, and `Union`,
and containing a boolean.
2023-11-12 15:09:23 -05:00
Charlie Wilson
5f78580775 Remove unecessary commentary in PD901 message (#8625)
## Summary

Removes unnecessary commentary from the PD901 message. This does make it
different from pandas-vet, but it improves consistency with the rest of
the messages.

Current Message:

> `df` is a bad variable name. Be kinder to your future self.


New Message

> `df` is a bad variable name.


## Test Plan

The relevant snapshot has been updated with the new message.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-11-12 17:20:05 +00:00
Bodo Graumann
4d301f6dcc Improve docs for RUF001, RUF002 and RUF003 (#8628)
I got an error from RUF001 and wanted to override it. How to do that was
not quite obvious. In the process, I have tried to improve the
documentation for the rule and its siblings.
2023-11-12 17:19:25 +00:00
Charlie Marsh
96b265ccec Implement autofix for multiple-spaces-after-operator and multiple-spaces-before-operator (#8623) 2023-11-11 23:46:16 +00:00
Charlie Marsh
e0a0ddcf7d Implement autofix for multiple-spaces-after-keyword and multiple-spaces-before-keyword (#8622)
Closes https://github.com/astral-sh/ruff/issues/8312.
2023-11-11 23:41:12 +00:00
Charlie Marsh
9724dfd939 Implement autofix for unnecessary-lambda (PLW0108) (#8621)
Closes https://github.com/astral-sh/ruff/issues/8618.
2023-11-11 18:34:02 -05:00
Steven DeMartini
d7144d6d8e Fix docs typo for ruff format preview configuration (#8611)
## Summary

The ruff configuration section is called "format", rather than
"preview". Using the configuration as it was written in the docs gives
an error:

```
$ ruff format --check .
ruff failed
  Cause: TOML parse error at line 143, column 1
    |
143 | [tool.ruff.preview]
    | ^^^^^^^^^^^^^^^^^^^
invalid type: map, expected a boolean
```

## Test Plan

Tested running `ruff format` with the following in my `pyproject.toml`:

```toml
[tool.ruff.format]
preview = true
```

and it worked properly (using preview rules for formatting).
2023-11-11 03:07:32 +00:00
Jesse Serrao
39728a1198 Add check for is comparison with mutable initialisers to rule F632 (#8607)
## Summary

Adds an extra check to F632 for any `is` comparisons against mutable
initializers.
Implements #8589.

Example:
```Python
named_var = {}
if named_var is {}:  # F632 (fix)
    pass
```
The `if` condition will always evaluate to `False` because `is` checks
identity, and an object can never share the identity of a hard-coded
list/set/dict initializer.

## Test Plan

Multiple test cases were added to ensure that the rule works, that it
doesn't flag false positives, and that the fix works correctly.
2023-11-11 00:29:23 +00:00
Shantanu
8207d6df82 Fix unnecessary parentheses in UP007 fix (#8610)
Fixes #8609
2023-11-10 19:15:09 -05:00
Jake Park
c8edac9d2b [pylint] Implement redefined-argument-from-local (R1704) (#8159)
## Summary

It implements Pylint rule R1704: redefined-argument-from-local

Problematic code:
```python
def show(host_id=10.11):
    # +1: [redefined-argument-from-local]
    for host_id, host in [[12.13, "Venus"], [14.15, "Mars"]]:
        print(host_id, host)
```

Correct code:
```python
def show(host_id=10.11):
    for inner_host_id, host in [[12.13, "Venus"], [14.15, "Mars"]]:
        print(host_id, inner_host_id, host)
```

References:
[Pylint
documentation](https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/redefined-argument-from-local.html)
[Related Issue](https://github.com/astral-sh/ruff/issues/970)

## Test Plan

`cargo test`
2023-11-10 14:13:07 -05:00
Alan Du
5a1a8bebca Allow overriding pydocstyle convention rules (#8586)
## Summary

This fixes #2606 by moving where we apply the convention ignores --
instead of applying them at the very end, we now track which rules have
been specifically enabled (via `Specificity::Rule`). If they have been,
then we do *not* apply the docstring overrides at the end.

## Test Plan

Added unit tests to `ruff_workspace` and an integration test to
`ruff_cli`
2023-11-10 18:47:37 +00:00
Dhruv Manilawala
3e00ddce38 Preserve trailing semicolon for Notebooks (#8590)
## Summary

This PR updates the formatter to preserve trailing semicolons for Jupyter
Notebooks.

The motivation behind the change is that semicolons in notebooks are
typically used to hide the output, for example when plotting. This is
highlighted in the linked issue.

The conditions under which the trailing semicolon is preserved are:
1. It must be the last top-level statement in the module.
2. For a statement, it must be an assignment, annotated assignment, or
augmented assignment whose target is a single identifier; multiple
assignments and tuple unpacking aren't considered.
3. For an expression, any expression qualifies.
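
For illustration, a hedged sketch of a final notebook cell statement that now keeps its semicolon (which suppresses the cell's displayed output):

```python
import math

radius = 2
area = math.pi * radius ** 2;  # preserved: last top-level statement, single target
```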

## Test Plan

Add a new integration test in `ruff_cli`. The test notebook essentially
documents which trailing semicolons are preserved.

fixes: #8254
2023-11-10 21:53:35 +05:30
Andrew Gallant
a7dbe9d670 refine pyupgrade's TimeoutErrorAlias lint (UP041) to remove false positives (#8587)
Previously, this lint had its alias detection logic a little
backwards. That is, for Python 3.11+, it would *only* detect
asyncio.TimeoutError as an alias, but it should have also detected
socket.timeout as an alias. And in Python <3.11, it would falsely
detect asyncio.TimeoutError as an alias where it should have only
detected socket.timeout as an alias.

We fix it so that both asyncio.TimeoutError and socket.timeout are
detected as aliases in Python 3.11+, and only socket.timeout is
detected as an alias in Python 3.10.
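
For illustration, a hedged sketch of the corrected behavior:

```python
import socket

try:
    raise socket.timeout()
except socket.timeout:  # now flagged on Python 3.10+ targets; `TimeoutError` is suggested
    pass
```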

Fixes #8565

## Test Plan

I tested this by updating the existing snapshot test, which had
erroneously asserted that socket.timeout should not be replaced with
TimeoutError in Python 3.11+. I also added a new regression test that
targets Python 3.10 and ensures that the suggestion to replace
asyncio.TimeoutError with TimeoutError does not occur.
2023-11-10 10:15:33 -05:00
Charlie Marsh
036b6bc0bd Document context manager breaking deviation vs. Black (#8597)
Closes https://github.com/astral-sh/ruff/issues/8180.
Closes https://github.com/astral-sh/ruff/issues/8580.
Closes https://github.com/astral-sh/ruff/issues/7441.
2023-11-10 04:32:29 +00:00
Dhruv Manilawala
d5606b7705 Consider the new f-string tokens for flake8-commas (#8582)
## Summary

This fixes the bug where the `flake8-commas` rules weren't taking the
new f-string tokens into account.

## Test Plan

Add new test cases around f-strings for all of `flake8-commas`'s rules.

fixes: #8556
2023-11-10 09:49:14 +05:30
Charlie Marsh
7968e190dd Write unchanged, excluded files to stdout when read via stdin (#8596)
## Summary

When you run Ruff via stdin, and pass `format` or `check --fix`, we
typically write the changed or unchanged contents to stdout. It turns
out we forgot to do this when the file is _excluded_, so if you run
`ruff format /path/to/excluded/file.py`, we don't write _anything_ to
`stdout`. This led to a bug in the LSP whereby we deleted file contents
for third-party files.

The right thing to do here is write back the unchanged contents, as it
should always be safe to write the output of stdout back to a file.
2023-11-09 23:15:01 -05:00
Charlie Marsh
346a828db2 Add a BindingKind for WithItem variables (#8594) 2023-11-09 22:44:49 -05:00
Charlie Marsh
0ac124acef Make unpacked assignment a flag rather than a BindingKind (#8595)
## Summary

An assignment can be _both_ (e.g.) a loop variable _and_ assigned via
unpacking. In other words, unpacking is a quality of an assignment, not
a _kind_.
2023-11-09 21:41:30 -05:00
Adrian
4ebd0bd31e Support local and dynamic class- and static-method decorators (#8592)
## Summary

This brings ruff's behavior in line with what `pep8-naming` already does
and thus closes #8397.

I had initially implemented this to look at the last segment of a dotted
path only when the entry in the `*-decorators` setting started with a
`.`, but in the end I thought it better to remain consistent with
`pep8-naming` and match against the last segment of the decorator name
in any case.

If you prefer to diverge from this in favor of less ambiguity in the
configuration let me know and I'll change it so you would need to put
e.g. `.expression` in the `classmethod-decorators` list.
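
For illustration, a hedged sketch (the decorator and setting value are hypothetical): with `classmethod-decorators = ["expression"]`, the last segment of the dotted decorator name now matches.

```python
from types import SimpleNamespace

hybrid = SimpleNamespace(expression=classmethod)  # stand-in for e.g. a hybrid-property helper


class Model:
    @hybrid.expression
    def default_value(cls):  # `cls` as the first argument is now accepted
        return 0
```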

## Test Plan

Tested against the file in the issue linked below, plus the new test case
added in this PR.
2023-11-10 02:04:25 +00:00
Zanie Blue
565ddebb15 Improve detection of TYPE_CHECKING blocks imported from typing_extensions or _typeshed (#8429)
~Improves detection of types imported from `typing_extensions`. Removes
the hard-coded list of supported types in `typing_extensions`; instead
assuming all types could be imported from `typing`, `_typeshed`, or
`typing_extensions`.~

~The typing extensions package appears to re-export types even if they
do not need modification.~


Adds detection of `if typing_extensions.TYPE_CHECKING` blocks. Avoids
inserting a new `if TYPE_CHECKING` block and `from typing import
TYPE_CHECKING` if `typing_extensions.TYPE_CHECKING` is used (closes
https://github.com/astral-sh/ruff/issues/8427)
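
For illustration, a hedged sketch (assuming `typing_extensions` is installed): this block is now recognized, so fixes that move imports reuse it instead of inserting a separate `from typing import TYPE_CHECKING` block.

```python
import typing_extensions

if typing_extensions.TYPE_CHECKING:
    from collections.abc import Sequence


def first(items: "Sequence[int]") -> int:
    return items[0]
```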

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-11-09 12:21:03 -06:00
339 changed files with 12071 additions and 2525 deletions

View File

@@ -1,37 +1,3 @@
[alias]
dev = "run --package ruff_dev --bin ruff_dev"
benchmark = "bench -p ruff_benchmark --bench linter --bench formatter --"
[target.'cfg(all())']
rustflags = [
# CLIPPY LINT SETTINGS
# This is a workaround to configure lints for the entire workspace, pending the ability to configure this via TOML.
# See: `https://github.com/rust-lang/cargo/issues/5034`
# `https://github.com/EmbarkStudios/rust-ecosystem/issues/22#issuecomment-947011395`
"-Dunsafe_code",
"-Wclippy::pedantic",
# Allowed pedantic lints
"-Wclippy::char_lit_as_u8",
"-Aclippy::collapsible_else_if",
"-Aclippy::collapsible_if",
"-Aclippy::implicit_hasher",
"-Aclippy::match_same_arms",
"-Aclippy::missing_errors_doc",
"-Aclippy::missing_panics_doc",
"-Aclippy::module_name_repetitions",
"-Aclippy::must_use_candidate",
"-Aclippy::similar_names",
"-Aclippy::too_many_lines",
# Disallowed restriction lints
"-Wclippy::print_stdout",
"-Wclippy::print_stderr",
"-Wclippy::dbg_macro",
"-Wclippy::empty_drop",
"-Wclippy::empty_structs_with_brackets",
"-Wclippy::exit",
"-Wclippy::get_unwrap",
"-Wclippy::rc_buffer",
"-Wclippy::rc_mutex",
"-Wclippy::rest_pat_in_fully_bound_structs",
"-Wunreachable_pub"
]

View File

@@ -392,7 +392,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: mkdocs build --strict -f mkdocs.generated.yml
run: mkdocs build --strict -f mkdocs.public.yml
check-formatter-instability-and-black-similarity:
name: "formatter instabilities and black similarity"

View File

@@ -44,7 +44,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: mkdocs build --strict -f mkdocs.generated.yml
run: mkdocs build --strict -f mkdocs.public.yml
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.3.2

View File

@@ -1,5 +1,78 @@
# Changelog
## 0.1.6
### Preview features
- \[`flake8-boolean-trap`\] Extend `boolean-type-hint-positional-argument` (`FBT001`) to include booleans in unions ([#7501](https://github.com/astral-sh/ruff/pull/7501))
- \[`flake8-pie`\] Extend `reimplemented-list-builtin` (`PIE807`) to `dict` reimplementations ([#8608](https://github.com/astral-sh/ruff/pull/8608))
- \[`flake8-pie`\] Extend `unnecessary-pass` (`PIE790`) to include ellipses (`...`) ([#8641](https://github.com/astral-sh/ruff/pull/8641))
- \[`flake8-pie`\] Implement fix for `unnecessary-spread` (`PIE800`) ([#8668](https://github.com/astral-sh/ruff/pull/8668))
- \[`flake8-quotes`\] Implement `unnecessary-escaped-quote` (`Q004`) ([#8630](https://github.com/astral-sh/ruff/pull/8630))
- \[`pycodestyle`\] Implement fix for `multiple-spaces-after-keyword` (`E271`) and `multiple-spaces-before-keyword` (`E272`) ([#8622](https://github.com/astral-sh/ruff/pull/8622))
- \[`pycodestyle`\] Implement fix for `multiple-spaces-after-operator` (`E222`) and `multiple-spaces-before-operator` (`E221`) ([#8623](https://github.com/astral-sh/ruff/pull/8623))
- \[`pyflakes`\] Extend `is-literal` (`F632`) to include comparisons against mutable initializers ([#8607](https://github.com/astral-sh/ruff/pull/8607))
- \[`pylint`\] Implement `redefined-argument-from-local` (`PLR1704`) ([#8159](https://github.com/astral-sh/ruff/pull/8159))
- \[`pylint`\] Implement fix for `unnecessary-lambda` (`PLW0108`) ([#8621](https://github.com/astral-sh/ruff/pull/8621))
- \[`refurb`\] Implement `if-expr-min-max` (`FURB136`) ([#8664](https://github.com/astral-sh/ruff/pull/8664))
- \[`refurb`\] Implement `math-constant` (`FURB152`) ([#8727](https://github.com/astral-sh/ruff/pull/8727))
### Rule changes
- \[`flake8-annotations`\] Add autotyping-like return type inference for annotation rules ([#8643](https://github.com/astral-sh/ruff/pull/8643))
- \[`flake8-future-annotations`\] Implement fix for `future-required-type-annotation` (`FA102`) ([#8711](https://github.com/astral-sh/ruff/pull/8711))
- \[`flake8-implicit-namespace-package`\] Avoid missing namespace violations in scripts with shebangs ([#8710](https://github.com/astral-sh/ruff/pull/8710))
- \[`pydocstyle`\] Update `over-indentation` (`D208`) to preserve indentation offsets when fixing overindented lines ([#8699](https://github.com/astral-sh/ruff/pull/8699))
- \[`pyupgrade`\] Refine `timeout-error-alias` (`UP041`) to remove false positives ([#8587](https://github.com/astral-sh/ruff/pull/8587))
### Formatter
- Fix instability in `await` formatting with fluent style ([#8676](https://github.com/astral-sh/ruff/pull/8676))
- Compare formatted and unformatted ASTs during formatter tests ([#8624](https://github.com/astral-sh/ruff/pull/8624))
- Preserve trailing semicolon for Notebooks ([#8590](https://github.com/astral-sh/ruff/pull/8590))
### CLI
- Improve debug printing for resolving origin of config settings ([#8729](https://github.com/astral-sh/ruff/pull/8729))
- Write unchanged, excluded files to stdout when read via stdin ([#8596](https://github.com/astral-sh/ruff/pull/8596))
### Configuration
- \[`isort`\] Support disabling sections with `no-sections = true` ([#8657](https://github.com/astral-sh/ruff/pull/8657))
- \[`pep8-naming`\] Support local and dynamic class- and static-method decorators ([#8592](https://github.com/astral-sh/ruff/pull/8592))
- \[`pydocstyle`\] Allow overriding pydocstyle convention rules ([#8586](https://github.com/astral-sh/ruff/pull/8586))
### Bug fixes
- Avoid syntax error via importing `trio.lowlevel` ([#8730](https://github.com/astral-sh/ruff/pull/8730))
- Omit unrolled augmented assignments in `PIE794` ([#8634](https://github.com/astral-sh/ruff/pull/8634))
- Slice source code instead of generating it for `EM` fixes ([#7746](https://github.com/astral-sh/ruff/pull/7746))
- Allow whitespace around colon in slices for `whitespace-before-punctuation` (`E203`) ([#8654](https://github.com/astral-sh/ruff/pull/8654))
- Use function range for `no-self-use` ([#8637](https://github.com/astral-sh/ruff/pull/8637))
- F-strings doesn't contain bytes literal for `PLW0129` ([#8675](https://github.com/astral-sh/ruff/pull/8675))
- Improve detection of `TYPE_CHECKING` blocks imported from `typing_extensions` or `_typeshed` ([#8429](https://github.com/astral-sh/ruff/pull/8429))
- Treat display as a builtin in IPython ([#8707](https://github.com/astral-sh/ruff/pull/8707))
- Avoid `FURB113` autofix if comments are present ([#8494](https://github.com/astral-sh/ruff/pull/8494))
- Consider the new f-string tokens for `flake8-commas` ([#8582](https://github.com/astral-sh/ruff/pull/8582))
- Remove erroneous bad-dunder-name reference ([#8742](https://github.com/astral-sh/ruff/pull/8742))
- Avoid recommending Self usages in metaclasses ([#8639](https://github.com/astral-sh/ruff/pull/8639))
- Detect runtime-evaluated base classes defined in the current file ([#8572](https://github.com/astral-sh/ruff/pull/8572))
- Avoid inserting trailing commas within f-strings ([#8574](https://github.com/astral-sh/ruff/pull/8574))
- Remove incorrect deprecation label for stdout and stderr ([#8743](https://github.com/astral-sh/ruff/pull/8743))
- Fix unnecessary parentheses in UP007 fix ([#8610](https://github.com/astral-sh/ruff/pull/8610))
- Remove repeated and erroneous scoped settings headers in docs ([#8670](https://github.com/astral-sh/ruff/pull/8670))
- Trim trailing empty strings when converting to f-strings ([#8712](https://github.com/astral-sh/ruff/pull/8712))
- Fix ordering for `force-sort-within-sections` ([#8665](https://github.com/astral-sh/ruff/pull/8665))
- Run unicode prefix rule over tokens ([#8709](https://github.com/astral-sh/ruff/pull/8709))
- Update UP032 to unescape curly braces in literal parts of converted strings ([#8697](https://github.com/astral-sh/ruff/pull/8697))
- List all ipython builtins ([#8719](https://github.com/astral-sh/ruff/pull/8719))
### Documentation
- Document conventions in the FAQ ([#8638](https://github.com/astral-sh/ruff/pull/8638))
- Redirect from rule codes to rule pages in docs ([#8636](https://github.com/astral-sh/ruff/pull/8636))
- Fix permalink to convention setting ([#8575](https://github.com/astral-sh/ruff/pull/8575))
## 0.1.5
### Preview features

98
Cargo.lock generated
View File

@@ -64,9 +64,9 @@ checksum = "c7021ce4924a3f25f802b2cccd1af585e39ea1a363a1aa2e72afe54b67a3a7a7"
[[package]]
name = "annotate-snippets"
version = "0.9.1"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3b9d411ecbaf79885c6df4d75fff75858d5995ff25385657a28af47e82f9c36"
checksum = "ccaf7e9dfbb6ab22c82e473cd1a8a7bd313c19a5b7e40970f3d89ef5a5c9e81e"
dependencies = [
"unicode-width",
"yansi-term",
@@ -278,9 +278,7 @@ checksum = "7f2c685bad3eb3d45a01354cedb7d5faa66194d1d58ba6e267a8de788f79db38"
dependencies = [
"android-tzdata",
"iana-time-zone",
"js-sys",
"num-traits",
"wasm-bindgen",
"windows-targets 0.48.5",
]
@@ -810,7 +808,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.1.5"
version = "0.1.6"
dependencies = [
"anyhow",
"clap",
@@ -829,7 +827,7 @@ dependencies = [
"serde_json",
"strum",
"strum_macros",
"toml",
"toml 0.7.8",
]
[[package]]
@@ -859,9 +857,12 @@ dependencies = [
[[package]]
name = "fs-err"
version = "2.9.0"
version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0845fa252299212f0389d64ba26f34fa32cfe41588355f21ed507c59a0f64541"
checksum = "fb5fd9bcbe8b1087cbd395b51498c01bc997cef73e778a80b77a811af5e2d29f"
dependencies = [
"autocfg",
]
[[package]]
name = "fsevent-sys"
@@ -927,9 +928,9 @@ checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888"
[[package]]
name = "hashbrown"
version = "0.14.0"
version = "0.14.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c6201b9ff9fd90a5a3bac2e56a830d0caa509576f0e503818ee82c181b3437a"
checksum = "f93e7192158dbcda357bdec5fb5788eebf8bbac027f3f33e719d29135ae84156"
[[package]]
name = "heck"
@@ -1033,12 +1034,12 @@ dependencies = [
[[package]]
name = "indexmap"
version = "2.0.0"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d5477fe2230a79769d8dc68e0eabf5437907c0457a5614a9e8dddb67f65eb65d"
checksum = "d530e1a18b1cb4c484e6e34556a0d948706958449fca0cab753d649f2bce3d1f"
dependencies = [
"equivalent",
"hashbrown 0.14.0",
"hashbrown 0.14.2",
"serde",
]
@@ -1801,36 +1802,37 @@ dependencies = [
[[package]]
name = "pyproject-toml"
version = "0.8.0"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0774c13ff0b8b7ebb4791c050c497aefcfe3f6a222c0829c7017161ed38391ff"
checksum = "46d4a5e69187f23a29f8aa0ea57491d104ba541bc55f76552c2a74962aa20e04"
dependencies = [
"indexmap",
"pep440_rs",
"pep508_rs",
"serde",
"toml",
"toml 0.8.2",
]
[[package]]
name = "quick-junit"
version = "0.3.3"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6bf780b59d590c25f8c59b44c124166a2a93587868b619fb8f5b47fb15e9ed6d"
checksum = "1b9599bffc2cd7511355996e0cfd979266b2cfa3f3ff5247d07a3a6e1ded6158"
dependencies = [
"chrono",
"indexmap",
"nextest-workspace-hack",
"quick-xml",
"strip-ansi-escapes",
"thiserror",
"uuid",
]
[[package]]
name = "quick-xml"
version = "0.29.0"
version = "0.31.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81b9228215d82c7b61490fec1de287136b5de6f5700f6e58ea9ad61a7964ca51"
checksum = "1004a344b30a54e2ee58d66a71b32d2db2feb0a31f9a2d302bf0536f15de2a33"
dependencies = [
"memchr",
]
@@ -2060,9 +2062,9 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.1.5"
version = "0.1.6"
dependencies = [
"annotate-snippets 0.9.1",
"annotate-snippets 0.9.2",
"anyhow",
"argfile",
"assert_cmd",
@@ -2152,7 +2154,7 @@ dependencies = [
"strum",
"strum_macros",
"tempfile",
"toml",
"toml 0.7.8",
"tracing",
"tracing-indicatif",
"tracing-subscriber",
@@ -2196,10 +2198,10 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.1.5"
version = "0.1.6"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.1",
"annotate-snippets 0.9.2",
"anyhow",
"bitflags 2.4.1",
"chrono",
@@ -2252,7 +2254,7 @@ dependencies = [
"tempfile",
"test-case",
"thiserror",
"toml",
"toml 0.7.8",
"typed-arena",
"unicode-width",
"unicode_names2",
@@ -2447,7 +2449,7 @@ dependencies = [
[[package]]
name = "ruff_shrinking"
version = "0.1.5"
version = "0.1.6"
dependencies = [
"anyhow",
"clap",
@@ -2538,7 +2540,7 @@ dependencies = [
"shellexpand",
"strum",
"tempfile",
"toml",
"toml 0.7.8",
]
[[package]]
@@ -2802,9 +2804,9 @@ checksum = "38b58827f4464d87d377d175e90bf58eb00fd8716ff0a62f80356b5e61555d0d"
[[package]]
name = "smallvec"
version = "1.11.1"
version = "1.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "942b4a808e05215192e39f4ab80813e599068285906cc91aa64f923db842bd5a"
checksum = "4dccd0940a2dcdf68d092b8cbab7dc0ad8fa938bf95787e1b916b0e3d0e8e970"
[[package]]
name = "spin"
@@ -2831,6 +2833,15 @@ dependencies = [
"precomputed-hash",
]
[[package]]
name = "strip-ansi-escapes"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55ff8ef943b384c414f54aefa961dd2bd853add74ec75e7ac74cf91dba62bcfa"
dependencies = [
"vte",
]
[[package]]
name = "strsim"
version = "0.10.0"
@@ -3086,7 +3097,19 @@ dependencies = [
"serde",
"serde_spanned",
"toml_datetime",
"toml_edit",
"toml_edit 0.19.15",
]
[[package]]
name = "toml"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "185d8ab0dfbb35cf1399a6344d8484209c088f75f8f68230da55d48d95d43e3d"
dependencies = [
"serde",
"serde_spanned",
"toml_datetime",
"toml_edit 0.20.2",
]
[[package]]
@@ -3111,6 +3134,19 @@ dependencies = [
"winnow",
]
[[package]]
name = "toml_edit"
version = "0.20.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "396e4d48bbb2b7554c944bde63101b5ae446cff6ec4a24227428f15eb72ef338"
dependencies = [
"indexmap",
"serde",
"serde_spanned",
"toml_datetime",
"winnow",
]
[[package]]
name = "tracing"
version = "0.1.40"

View File

@@ -38,7 +38,7 @@ serde = { version = "1.0.190", features = ["derive"] }
serde_json = { version = "1.0.108" }
shellexpand = { version = "3.0.0" }
similar = { version = "2.3.0", features = ["inline"] }
smallvec = { version = "1.11.1" }
smallvec = { version = "1.11.2" }
static_assertions = "1.1.0"
strum = { version = "0.25.0", features = ["strum_macros"] }
strum_macros = { version = "0.25.3" }
@@ -55,6 +55,38 @@ unicode-width = { version = "0.1.11" }
uuid = { version = "1.5.0", features = ["v4", "fast-rng", "macro-diagnostics", "js"] }
wsl = { version = "0.1.0" }
[workspace.lints.rust]
unsafe_code = "warn"
unreachable_pub = "warn"
[workspace.lints.clippy]
pedantic = { level = "warn", priority = -2 }
# Allowed pedantic lints
char_lit_as_u8 = "allow"
collapsible_else_if = "allow"
collapsible_if = "allow"
implicit_hasher = "allow"
match_same_arms = "allow"
missing_errors_doc = "allow"
missing_panics_doc = "allow"
module_name_repetitions = "allow"
must_use_candidate = "allow"
similar_names = "allow"
too_many_lines = "allow"
# To allow `#[allow(clippy::all)]` in `crates/ruff_python_parser/src/python.rs`.
needless_raw_string_hashes = "allow"
# Disallowed restriction lints
print_stdout = "warn"
print_stderr = "warn"
dbg_macro = "warn"
empty_drop = "warn"
empty_structs_with_brackets = "warn"
exit = "warn"
get_unwrap = "warn"
rc_buffer = "warn"
rc_mutex = "warn"
rest_pat_in_fully_bound_structs = "warn"
[profile.release]
lto = "fat"
codegen-units = 1

View File

@@ -54,7 +54,7 @@ Ruff is extremely actively developed and used in major open-source projects like
- [Pandas](https://github.com/pandas-dev/pandas)
- [SciPy](https://github.com/scipy/scipy)
...and many more.
...and [many more](#whos-using-ruff).
Ruff is backed by [Astral](https://astral.sh). Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff),
or the original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
@@ -150,7 +150,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.1.5
rev: v0.1.6
hooks:
# Run the linter.
- id: ruff
@@ -377,8 +377,8 @@ Ruff is used by a number of major open-source projects and companies, including:
- Anthropic ([Python SDK](https://github.com/anthropics/anthropic-sdk-python))
- [Apache Airflow](https://github.com/apache/airflow)
- AstraZeneca ([Magnus](https://github.com/AstraZeneca/magnus-core))
- Benchling ([Refac](https://github.com/benchling/refac))
- [Babel](https://github.com/python-babel/babel)
- Benchling ([Refac](https://github.com/benchling/refac))
- [Bokeh](https://github.com/bokeh/bokeh)
- [Cryptography (PyCA)](https://github.com/pyca/cryptography)
- [DVC](https://github.com/iterative/dvc)
@@ -389,15 +389,16 @@ Ruff is used by a number of major open-source projects and companies, including:
- [Gradio](https://github.com/gradio-app/gradio)
- [Great Expectations](https://github.com/great-expectations/great_expectations)
- [HTTPX](https://github.com/encode/httpx)
- [Hatch](https://github.com/pypa/hatch)
- [Home Assistant](https://github.com/home-assistant/core)
- Hugging Face ([Transformers](https://github.com/huggingface/transformers),
[Datasets](https://github.com/huggingface/datasets),
[Diffusers](https://github.com/huggingface/diffusers))
- [Hatch](https://github.com/pypa/hatch)
- [Home Assistant](https://github.com/home-assistant/core)
- ING Bank ([popmon](https://github.com/ing-bank/popmon), [probatus](https://github.com/ing-bank/probatus))
- [Ibis](https://github.com/ibis-project/ibis)
- [Jupyter](https://github.com/jupyter-server/jupyter_server)
- [LangChain](https://github.com/hwchase17/langchain)
- [Litestar](https://litestar.dev/)
- [LlamaIndex](https://github.com/jerryjliu/llama_index)
- Matrix ([Synapse](https://github.com/matrix-org/synapse))
- [MegaLinter](https://github.com/oxsecurity/megalinter)
@@ -422,20 +423,21 @@ Ruff is used by a number of major open-source projects and companies, including:
- [PostHog](https://github.com/PostHog/posthog)
- Prefect ([Python SDK](https://github.com/PrefectHQ/prefect), [Marvin](https://github.com/PrefectHQ/marvin))
- [PyInstaller](https://github.com/pyinstaller/pyinstaller)
- [PyMC-Marketing](https://github.com/pymc-labs/pymc-marketing)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Pydantic](https://github.com/pydantic/pydantic)
- [Pylint](https://github.com/PyCQA/pylint)
- [PyMC-Marketing](https://github.com/pymc-labs/pymc-marketing)
- [Reflex](https://github.com/reflex-dev/reflex)
- [River](https://github.com/online-ml/river)
- [Rippling](https://rippling.com)
- [Robyn](https://github.com/sansyrox/robyn)
- Scale AI ([Launch SDK](https://github.com/scaleapi/launch-python-client))
- Snowflake ([SnowCLI](https://github.com/Snowflake-Labs/snowcli))
- [Saleor](https://github.com/saleor/saleor)
- Scale AI ([Launch SDK](https://github.com/scaleapi/launch-python-client))
- [SciPy](https://github.com/scipy/scipy)
- Snowflake ([SnowCLI](https://github.com/Snowflake-Labs/snowcli))
- [Sphinx](https://github.com/sphinx-doc/sphinx)
- [Stable Baselines3](https://github.com/DLR-RM/stable-baselines3)
- [Litestar](https://litestar.dev/)
- [Starlette](https://github.com/encode/starlette)
- [The Algorithms](https://github.com/TheAlgorithms/Python)
- [Vega-Altair](https://github.com/altair-viz/altair)
- WordPress ([Openverse](https://github.com/WordPress/openverse))

View File

@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.1.5"
version = "0.1.6"
description = """
Convert Flake8 configuration files to Ruff configuration files.
"""
@@ -34,3 +34,6 @@ toml = { workspace = true }
[dev-dependencies]
pretty_assertions = "1.3.0"
[lints]
workspace = true

View File

@@ -46,6 +46,9 @@ ruff_python_formatter = { path = "../ruff_python_formatter" }
ruff_python_index = { path = "../ruff_python_index" }
ruff_python_parser = { path = "../ruff_python_parser" }
[lints]
workspace = true
[features]
codspeed = ["codspeed-criterion-compat"]

View File

@@ -20,3 +20,6 @@ seahash = "4.1.0"
[dev-dependencies]
ruff_macros = { path = "../ruff_macros" }
[lints]
workspace = true

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.1.5"
version = "0.1.6"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -28,7 +28,7 @@ ruff_python_trivia = { path = "../ruff_python_trivia" }
ruff_workspace = { path = "../ruff_workspace" }
ruff_text_size = { path = "../ruff_text_size" }
annotate-snippets = { version = "0.9.1", features = ["color"] }
annotate-snippets = { version = "0.9.2", features = ["color"] }
anyhow = { workspace = true }
argfile = { version = "0.1.6" }
bincode = { version = "1.3.3" }
@@ -76,3 +76,6 @@ mimalloc = "0.1.39"
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64")))'.dependencies]
tikv-jemallocator = "0.5.0"
[lints]
workspace = true

View File

@@ -0,0 +1,413 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "4f8ce941-1492-4d4e-8ab5-70d733fe891a",
"metadata": {},
"outputs": [],
"source": [
"%config ZMQInteractiveShell.ast_node_interactivity=\"last_expr_or_assign\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "721ec705-0c65-4bfb-9809-7ed8bc534186",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment statement without a semicolon\n",
"x = 1"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "de50e495-17e5-41cc-94bd-565757555d7e",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement with a semicolon\n",
"x = 1;\n",
"x = 1;"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "39e31201-23da-44eb-8684-41bba3663991",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6b73d3dd-c73a-4697-9e97-e109a6c1fbab",
"metadata": {},
"outputs": [],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1;\n",
"x += 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2a3e5b86-aa5b-46ba-b9c6-0386d876f58c",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment without a semicolon\n",
"x = y = 1"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "07f89e51-9357-4cfb-8fc5-76fb75e35949",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment with a semicolon\n",
"x = y = 1;\n",
"x = y = 1;"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c22b539d-473e-48f8-a236-625e58c47a00",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking without a semicolon\n",
"x, y = 1, 2"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "12c87940-a0d5-403b-a81c-7507eb06dc7e",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking with a semicolon (irrelevant)\n",
"x, y = 1, 2;\n",
"x, y = 1, 2; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "5a768c76-6bc4-470c-b37e-8cc14bc6caf4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "21bfda82-1a9a-4ba1-9078-74ac480804b5",
"metadata": {},
"outputs": [],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1;\n",
"x: int = 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "09929999-ff29-4d10-ad2b-e665af15812d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment expression without a semicolon\n",
"(x := 1)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "32a83217-1bad-4f61-855e-ffcdb119c763",
"metadata": {},
"outputs": [],
"source": [
"# Assignment expression with a semicolon\n",
"(x := 1);\n",
"(x := 1); # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "61b81865-277e-4964-b03e-eb78f1f318eb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"# Expression without a semicolon\n",
"x"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "974c29be-67e1-4000-95fa-6ca118a63bad",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"# Expression with a semicolon\n",
"x;"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "cfeb1757-46d6-4f13-969f-a283b6d0304f",
"metadata": {},
"outputs": [],
"source": [
"class Point:\n",
" def __init__(self, x, y):\n",
" self.x = x\n",
" self.y = y\n",
"\n",
"\n",
"p = Point(0, 0);"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "2ee7f1a5-ccfe-4004-bfa4-ef834a58da97",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement where the left is an attribute access doesn't\n",
"# print the value.\n",
"p.x = 1;"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "3e49370a-048b-474d-aa0a-3d1d4a73ad37",
"metadata": {},
"outputs": [],
"source": [
"data = {}\n",
"\n",
"# Neither does the subscript node\n",
"data[\"foo\"] = 1;"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d594bdd3-eaa9-41ef-8cda-cf01bc273b2d",
"metadata": {},
"outputs": [],
"source": [
"if (x := 1):\n",
" # It should be the top level statement\n",
" x"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e532f0cf-80c7-42b7-8226-6002fcf74fb6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
") # comment"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "473c5d62-871b-46ed-8a34-27095243f462",
"metadata": {},
"outputs": [],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
"); # comment"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "8c3c2361-f49f-45fe-bbe3-7e27410a8a86",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello world!'"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"Hello world!\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "23dbe9b5-3f68-4890-ab2d-ab0dbfd0712a",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"Hello world!\"\"\"; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "3ce33108-d95d-4c70-83d1-0d4fd36a2951",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'x = 1'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"f\"x = {x}\""
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "654a4a67-de43-4684-824a-9451c67db48f",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"f\"x = {x}\";\n",
"f\"x = {x}\"; # comment\n",
"# comment"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (ruff-playground)",
"language": "python",
"name": "ruff-playground"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -202,12 +202,12 @@ fn lint_path(
match result {
Ok(inner) => inner,
Err(error) => {
let message = r#"This indicates a bug in Ruff. If you could open an issue at:
let message = r"This indicates a bug in Ruff. If you could open an issue at:
https://github.com/astral-sh/ruff/issues/new?title=%5BLinter%20panic%5D
...with the relevant file contents, the `pyproject.toml` settings, and the following stack trace, we'd be very appreciative!
"#;
";
error!(
"{}{}{} {message}\n{error}",

View File

@@ -8,7 +8,7 @@ use ruff_workspace::resolver::{match_exclusion, python_file_at_path, PyprojectCo
use crate::args::CliOverrides;
use crate::diagnostics::{lint_stdin, Diagnostics};
use crate::stdin::read_from_stdin;
use crate::stdin::{parrot_stdin, read_from_stdin};
/// Run the linter over a single file, read from `stdin`.
pub(crate) fn check_stdin(
@@ -21,6 +21,9 @@ pub(crate) fn check_stdin(
if pyproject_config.settings.file_resolver.force_exclude {
if let Some(filename) = filename {
if !python_file_at_path(filename, pyproject_config, overrides)? {
if fix_mode.is_apply() {
parrot_stdin()?;
}
return Ok(Diagnostics::default());
}
@@ -29,14 +32,17 @@ pub(crate) fn check_stdin(
.file_name()
.is_some_and(|name| match_exclusion(filename, name, &lint_settings.exclude))
{
if fix_mode.is_apply() {
parrot_stdin()?;
}
return Ok(Diagnostics::default());
}
}
}
let stdin = read_from_stdin()?;
let package_root = filename.and_then(Path::parent).and_then(|path| {
packaging::detect_package_root(path, &pyproject_config.settings.linter.namespace_packages)
});
let stdin = read_from_stdin()?;
let mut diagnostics = lint_stdin(
filename,
package_root,

View File

@@ -660,12 +660,12 @@ impl Display for FormatCommandError {
}
}
Self::Panic(path, err) => {
let message = r#"This indicates a bug in Ruff. If you could open an issue at:
let message = r"This indicates a bug in Ruff. If you could open an issue at:
https://github.com/astral-sh/ruff/issues/new?title=%5BFormatter%20panic%5D
...with the relevant file contents, the `pyproject.toml` settings, and the following stack trace, we'd be very appreciative!
"#;
";
if let Some(path) = path {
write!(
f,

View File

@@ -15,7 +15,7 @@ use crate::commands::format::{
FormatResult, FormattedSource,
};
use crate::resolve::resolve;
use crate::stdin::read_from_stdin;
use crate::stdin::{parrot_stdin, read_from_stdin};
use crate::ExitStatus;
/// Run the formatter over a single file, read from `stdin`.
@@ -34,6 +34,9 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
if pyproject_config.settings.file_resolver.force_exclude {
if let Some(filename) = cli.stdin_filename.as_deref() {
if !python_file_at_path(filename, &pyproject_config, overrides)? {
if mode.is_write() {
parrot_stdin()?;
}
return Ok(ExitStatus::Success);
}
@@ -42,6 +45,9 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
.file_name()
.is_some_and(|name| match_exclusion(filename, name, &format_settings.exclude))
{
if mode.is_write() {
parrot_stdin()?;
}
return Ok(ExitStatus::Success);
}
}
@@ -50,6 +56,9 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> R
let path = cli.stdin_filename.as_deref();
let SourceType::Python(source_type) = path.map(SourceType::from).unwrap_or_default() else {
if mode.is_write() {
parrot_stdin()?;
}
return Ok(ExitStatus::Success);
};

View File

@@ -63,7 +63,7 @@ fn format_rule_text(rule: Rule) -> String {
if rule.is_preview() || rule.is_nursery() {
output.push_str(
r#"This rule is in preview and is not stable. The `--preview` flag is required for use."#,
r"This rule is in preview and is not stable. The `--preview` flag is required for use.",
);
output.push('\n');
output.push('\n');

View File

@@ -43,7 +43,7 @@ pub fn resolve(
{
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
debug!(
"Using user specified pyproject.toml at {}",
"Using user-specified configuration file at: {}",
pyproject.display()
);
return Ok(PyprojectConfig::new(
@@ -63,7 +63,10 @@ pub fn resolve(
.as_ref()
.unwrap_or(&path_dedot::CWD.as_path()),
)? {
debug!("Using pyproject.toml (parent) at {}", pyproject.display());
debug!(
"Using configuration file (via parent) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Parent, overrides)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
@@ -77,7 +80,10 @@ pub fn resolve(
// end up the "closest" `pyproject.toml` file for every Python file later on, so
// these act as the "default" settings.)
if let Some(pyproject) = pyproject::find_user_settings_toml() {
debug!("Using pyproject.toml (cwd) at {}", pyproject.display());
debug!(
"Using configuration file (via cwd) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,

View File

@@ -1,5 +1,5 @@
use std::io;
use std::io::Read;
use std::io::{Read, Write};
/// Read a string from `stdin`.
pub(crate) fn read_from_stdin() -> Result<String, io::Error> {
@@ -7,3 +7,11 @@ pub(crate) fn read_from_stdin() -> Result<String, io::Error> {
io::stdin().lock().read_to_string(&mut buffer)?;
Ok(buffer)
}
/// Read bytes from `stdin` and write them to `stdout`.
pub(crate) fn parrot_stdin() -> Result<(), io::Error> {
let mut buffer = String::new();
io::stdin().lock().read_to_string(&mut buffer)?;
io::stdout().write_all(buffer.as_bytes())?;
Ok(())
}
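Note: a minimal, self-contained sketch of the stdin round-trip added above. As the surrounding hunks show, the helper is called from the exclusion branches when a write/fix mode is active, so an excluded file passed over stdin is echoed back unchanged rather than swallowed. The function and variable names below are illustrative stand-ins, not the crate's API.

```rust
use std::io::{self, Read, Write};

// Illustrative stand-in for the `parrot_stdin` helper shown above:
// read all of stdin into a string, then write it back to stdout unchanged.
fn echo_stdin() -> io::Result<()> {
    let mut buffer = String::new();
    io::stdin().lock().read_to_string(&mut buffer)?;
    io::stdout().write_all(buffer.as_bytes())?;
    Ok(())
}

fn main() -> io::Result<()> {
    echo_stdin()
}
```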

View File

@@ -320,6 +320,11 @@ if __name__ == '__main__':
exit_code: 0
----- stdout -----
from test import say_hy
if __name__ == '__main__':
say_hy("dear Ruff contributor")
----- stderr -----
"###);
Ok(())
@@ -390,9 +395,9 @@ fn deprecated_options() -> Result<()> {
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
r"
tab-size = 2
"#,
",
)?;
insta::with_settings!({filters => vec![
@@ -402,10 +407,10 @@ tab-size = 2
.args(["format", "--config"])
.arg(&ruff_toml)
.arg("-")
.pass_stdin(r#"
.pass_stdin(r"
if True:
pass
"#), @r###"
"), @r###"
success: true
exit_code: 0
----- stdout -----
@@ -438,9 +443,9 @@ format = "json"
.args(["check", "--select", "F401", "--no-cache", "--config"])
.arg(&ruff_toml)
.arg("-")
.pass_stdin(r#"
.pass_stdin(r"
import os
"#), @r###"
"), @r###"
success: false
exit_code: 2
----- stdout -----
@@ -813,3 +818,432 @@ fn test_diff_stdin_formatted() {
----- stderr -----
"###);
}
#[test]
fn test_notebook_trailing_semicolon() {
let fixtures = Path::new("resources").join("test").join("fixtures");
let unformatted = fs::read(fixtures.join("trailing_semicolon.ipynb")).unwrap();
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.ipynb"])
.arg("-")
.pass_stdin(unformatted), @r###"
success: true
exit_code: 0
----- stdout -----
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "4f8ce941-1492-4d4e-8ab5-70d733fe891a",
"metadata": {},
"outputs": [],
"source": [
"%config ZMQInteractiveShell.ast_node_interactivity=\"last_expr_or_assign\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "721ec705-0c65-4bfb-9809-7ed8bc534186",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment statement without a semicolon\n",
"x = 1"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "de50e495-17e5-41cc-94bd-565757555d7e",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement with a semicolon\n",
"x = 1\n",
"x = 1;"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "39e31201-23da-44eb-8684-41bba3663991",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6b73d3dd-c73a-4697-9e97-e109a6c1fbab",
"metadata": {},
"outputs": [],
"source": [
"# Augmented assignment without a semicolon\n",
"x += 1\n",
"x += 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2a3e5b86-aa5b-46ba-b9c6-0386d876f58c",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment without a semicolon\n",
"x = y = 1"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "07f89e51-9357-4cfb-8fc5-76fb75e35949",
"metadata": {},
"outputs": [],
"source": [
"# Multiple assignment with a semicolon\n",
"x = y = 1\n",
"x = y = 1"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c22b539d-473e-48f8-a236-625e58c47a00",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking without a semicolon\n",
"x, y = 1, 2"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "12c87940-a0d5-403b-a81c-7507eb06dc7e",
"metadata": {},
"outputs": [],
"source": [
"# Tuple unpacking with a semicolon (irrelevant)\n",
"x, y = 1, 2\n",
"x, y = 1, 2 # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "5a768c76-6bc4-470c-b37e-8cc14bc6caf4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "21bfda82-1a9a-4ba1-9078-74ac480804b5",
"metadata": {},
"outputs": [],
"source": [
"# Annotated assignment statement without a semicolon\n",
"x: int = 1\n",
"x: int = 1; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "09929999-ff29-4d10-ad2b-e665af15812d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Assignment expression without a semicolon\n",
"(x := 1)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "32a83217-1bad-4f61-855e-ffcdb119c763",
"metadata": {},
"outputs": [],
"source": [
"# Assignment expression with a semicolon\n",
"(x := 1)\n",
"(x := 1); # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "61b81865-277e-4964-b03e-eb78f1f318eb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"# Expression without a semicolon\n",
"x"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "974c29be-67e1-4000-95fa-6ca118a63bad",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"# Expression with a semicolon\n",
"x;"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "cfeb1757-46d6-4f13-969f-a283b6d0304f",
"metadata": {},
"outputs": [],
"source": [
"class Point:\n",
" def __init__(self, x, y):\n",
" self.x = x\n",
" self.y = y\n",
"\n",
"\n",
"p = Point(0, 0);"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "2ee7f1a5-ccfe-4004-bfa4-ef834a58da97",
"metadata": {},
"outputs": [],
"source": [
"# Assignment statement where the left is an attribute access doesn't\n",
"# print the value.\n",
"p.x = 1"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "3e49370a-048b-474d-aa0a-3d1d4a73ad37",
"metadata": {},
"outputs": [],
"source": [
"data = {}\n",
"\n",
"# Neither does the subscript node\n",
"data[\"foo\"] = 1"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d594bdd3-eaa9-41ef-8cda-cf01bc273b2d",
"metadata": {},
"outputs": [],
"source": [
"if x := 1:\n",
" # It should be the top level statement\n",
" x"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e532f0cf-80c7-42b7-8226-6002fcf74fb6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
") # comment"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "473c5d62-871b-46ed-8a34-27095243f462",
"metadata": {},
"outputs": [],
"source": [
"# Parentheses with comments\n",
"(\n",
" x := 1 # comment\n",
"); # comment"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "8c3c2361-f49f-45fe-bbe3-7e27410a8a86",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello world!'"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"Hello world!\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "23dbe9b5-3f68-4890-ab2d-ab0dbfd0712a",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"Hello world!\"\"\"; # comment\n",
"# comment"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "3ce33108-d95d-4c70-83d1-0d4fd36a2951",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'x = 1'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = 1\n",
"f\"x = {x}\""
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "654a4a67-de43-4684-824a-9451c67db48f",
"metadata": {},
"outputs": [],
"source": [
"x = 1\n",
"f\"x = {x}\"\n",
"f\"x = {x}\"; # comment\n",
"# comment"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (ruff-playground)",
"language": "python",
"name": "ruff-playground"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
----- stderr -----
"###);
}

View File

@@ -1564,3 +1564,62 @@ extend-safe-fixes = ["UP03"]
Ok(())
}
#[test]
fn check_docstring_conventions_overrides() -> Result<()> {
// But if we explicitly select it, we override the convention
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.pydocstyle]
convention = "numpy"
"#,
)?;
let stdin = r#"
def log(x, base) -> float:
"""Calculate natural log of a value
Parameters
----------
x :
Hello
"""
return math.log(x)
"#;
// If we only select the prefix, then everything passes
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "-", "--config"])
.arg(&ruff_toml)
.args(["--output-format", "text", "--no-cache", "--select", "D41"])
.pass_stdin(stdin),
@r###"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
"###
);
// But if we select the exact code, we get an error
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "-", "--config"])
.arg(&ruff_toml)
.args(["--output-format", "text", "--no-cache", "--select", "D417"])
.pass_stdin(stdin),
@r###"
success: false
exit_code: 1
----- stdout -----
-:2:5: D417 Missing argument description in the docstring for `log`: `base`
Found 1 error.
----- stderr -----
"###
);
Ok(())
}

View File

@@ -48,9 +48,12 @@ tracing-indicatif = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
imara-diff = "0.1.5"
[dev-dependencies]
indoc = "2.0.4"
[features]
# Turn off rayon for profiling
singlethreaded = []
[dev-dependencies]
indoc = "2.0.4"
[lints]
workspace = true

View File

@@ -49,7 +49,7 @@ pub(crate) fn main(args: &Args) -> Result<()> {
if rule.is_preview() || rule.is_nursery() {
output.push_str(
r#"This rule is unstable and in [preview](../preview.md). The `--preview` flag is required for use."#,
r"This rule is unstable and in [preview](../preview.md). The `--preview` flag is required for use.",
);
output.push('\n');
output.push('\n');

View File

@@ -129,12 +129,12 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parent_set:
output.push_str("**Example usage**:\n\n");
output.push_str(&format_tab(
"pyproject.toml",
&format_header(parent_set, ConfigurationFile::PyprojectToml),
&format_header(field.scope, parent_set, ConfigurationFile::PyprojectToml),
field.example,
));
output.push_str(&format_tab(
"ruff.toml",
&format_header(parent_set, ConfigurationFile::RuffToml),
&format_header(field.scope, parent_set, ConfigurationFile::RuffToml),
field.example,
));
output.push('\n');
@@ -149,23 +149,53 @@ fn format_tab(tab_name: &str, header: &str, content: &str) -> String {
)
}
fn format_header(parent_set: &Set, configuration: ConfigurationFile) -> String {
let fmt = if let Some(set_name) = parent_set.name() {
if set_name == "format" {
String::from(".format")
} else {
format!(".lint.{set_name}")
}
} else {
String::new()
};
/// Format the TOML header for the example usage for a given option.
///
/// For example: `[tool.ruff.format]` or `[tool.ruff.lint.isort]`.
fn format_header(
scope: Option<&str>,
parent_set: &Set,
configuration: ConfigurationFile,
) -> String {
match configuration {
ConfigurationFile::PyprojectToml => format!("[tool.ruff{fmt}]"),
ConfigurationFile::PyprojectToml => {
let mut header = if let Some(set_name) = parent_set.name() {
if set_name == "format" {
String::from("tool.ruff.format")
} else {
format!("tool.ruff.lint.{set_name}")
}
} else {
"tool.ruff".to_string()
};
if let Some(scope) = scope {
if !header.is_empty() {
header.push('.');
}
header.push_str(scope);
}
format!("[{header}]")
}
ConfigurationFile::RuffToml => {
if fmt.is_empty() {
let mut header = if let Some(set_name) = parent_set.name() {
if set_name == "format" {
String::from("format")
} else {
format!("lint.{set_name}")
}
} else {
String::new()
};
if let Some(scope) = scope {
if !header.is_empty() {
header.push('.');
}
header.push_str(scope);
}
if header.is_empty() {
String::new()
} else {
format!("[{}]", fmt.strip_prefix('.').unwrap())
format!("[{header}]")
}
}
}
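Note: to make the rewritten header logic above concrete, here is a simplified, free-standing sketch. The real function takes the crate's `Set` and `ConfigurationFile` types; the boolean flag, function name, and example inputs below are assumptions made purely for illustration.

```rust
/// Sketch: build the TOML table header for an option's example, e.g.
/// `[tool.ruff.lint.isort]` for pyproject.toml or `[lint.isort]` for ruff.toml.
fn example_header(set_name: Option<&str>, scope: Option<&str>, pyproject: bool) -> String {
    let mut header = match set_name {
        Some("format") if pyproject => "tool.ruff.format".to_string(),
        Some("format") => "format".to_string(),
        Some(name) if pyproject => format!("tool.ruff.lint.{name}"),
        Some(name) => format!("lint.{name}"),
        None if pyproject => "tool.ruff".to_string(),
        None => String::new(),
    };
    // Append the option's own scope segment, if any.
    if let Some(scope) = scope {
        if !header.is_empty() {
            header.push('.');
        }
        header.push_str(scope);
    }
    // ruff.toml options at the top level get no header at all.
    if header.is_empty() {
        String::new()
    } else {
        format!("[{header}]")
    }
}

fn main() {
    assert_eq!(example_header(Some("isort"), None, true), "[tool.ruff.lint.isort]");
    assert_eq!(example_header(Some("format"), None, false), "[format]");
    assert_eq!(example_header(None, None, false), "");
}
```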

View File

@@ -29,3 +29,6 @@ insta = { workspace = true }
[features]
serde = ["dep:serde", "ruff_text_size/serde"]
schemars = ["dep:schemars", "ruff_text_size/schemars"]
[lints]
workspace = true

View File

@@ -1711,14 +1711,14 @@ mod tests {
));
assert_eq!(
r#"a
"a
b
c
d
d
c
b
a"#,
a",
formatted.as_code()
);
}
@@ -2047,10 +2047,10 @@ two lines`,
assert_eq!(
printed.as_code(),
r#"Group with id-2
"Group with id-2
Group with id-1 does not fit on the line because it exceeds the line width of 80 characters by
Group 2 fits
Group 1 breaks"#
Group 1 breaks"
);
}

View File

@@ -17,3 +17,6 @@ ruff_macros = { path = "../ruff_macros" }
[dev-dependencies]
static_assertions = "1.1.0"
[lints]
workspace = true

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.1.5"
version = "0.1.6"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -30,7 +30,7 @@ ruff_source_file = { path = "../ruff_source_file", features = ["serde"] }
ruff_text_size = { path = "../ruff_text_size" }
aho-corasick = { version = "1.1.2" }
annotate-snippets = { version = "0.9.1", features = ["color"] }
annotate-snippets = { version = "0.9.2", features = ["color"] }
anyhow = { workspace = true }
bitflags = { workspace = true }
chrono = { workspace = true }
@@ -53,8 +53,8 @@ path-absolutize = { workspace = true, features = [
] }
pathdiff = { version = "0.2.1" }
pep440_rs = { version = "0.3.12", features = ["serde"] }
pyproject-toml = { version = "0.8.0" }
quick-junit = { version = "0.3.2" }
pyproject-toml = { version = "0.8.1" }
quick-junit = { version = "0.3.5" }
regex = { workspace = true }
result-like = { version = "0.4.6" }
rustc-hash = { workspace = true }
@@ -86,3 +86,6 @@ default = []
schemars = ["dep:schemars"]
# Enables the UnreachableCode rule
unreachable-code = []
[lints]
workspace = true

View File

@@ -1,3 +0,0 @@
# fixtures
Fixture files used for snapshot testing.

View File

@@ -0,0 +1,32 @@
def func():
return 1
def func():
return 1.5
def func(x: int):
if x > 0:
return 1
else:
return 1.5
def func():
return True
def func(x: int):
if x > 0:
return None
else:
return
def func(x: int):
return 1 or 2.5 if x > 0 else 1.5 or "str"
def func(x: int):
return 1 + 2.5 if x > 0 else 1.5 or "str"

View File

@@ -91,3 +91,18 @@ class Registry:
def foo(self) -> None:
object.__setattr__(self, "flag", True)
from typing import Optional, Union
def func(x: Union[list, Optional[int | str | float | bool]]):
pass
def func(x: bool | str):
pass
def func(x: int | str):
pass

View File

@@ -639,3 +639,18 @@ foo = namedtuple(
:20
],
)
# F-strings
kwargs.pop("remove", f"this {trailing_comma}",)
raise Exception(
"first", extra=f"Add trailing comma here ->"
)
assert False, f"<- This is not a trailing comma"
f"""This is a test. {
"Another sentence."
if True else
"Don't add a trailing comma here ->"
}"""

View File

@@ -148,3 +148,32 @@ for i in range(10):
for i in range(10):
pass # comment
pass
def foo():
print("foo")
...
def foo():
"""A docstring."""
print("foo")
...
for i in range(10):
...
...
for i in range(10):
...
...
for i in range(10):
... # comment
...
for i in range(10):
...
pass

View File

@@ -38,3 +38,15 @@ class User:
foo: bool = BooleanField()
# ...
bar = StringField() # PIE794
class Person:
name = "Foo"
name = name + " Bar"
name = "Bar" # PIE794
class Person:
name: str = "Foo"
name: str = name + " Bar"
name: str = "Bar" # PIE794

View File

@@ -1,9 +1,21 @@
{"foo": 1, **{"bar": 1}} # PIE800
{**{"bar": 10}, "a": "b"} # PIE800
foo({**foo, **{"bar": True}}) # PIE800
{**foo, **{"bar": 10}} # PIE800
{ # PIE800
"a": "b",
# Preserve
**{
# all
"bar": 10, # the
# comments
},
}
{**foo, **buzz, **{bar: 10}} # PIE800
{**foo, "bar": True } # OK

View File

@@ -1,27 +1,37 @@
@dataclass
class Foo:
foo: List[str] = field(default_factory=lambda: []) # PIE807
bar: Dict[str, int] = field(default_factory=lambda: {}) # PIE807
class FooTable(BaseTable):
bar = fields.ListField(default=lambda: []) # PIE807
foo = fields.ListField(default=lambda: []) # PIE807
bar = fields.ListField(default=lambda: {}) # PIE807
class FooTable(BaseTable):
bar = fields.ListField(lambda: []) # PIE807
foo = fields.ListField(lambda: []) # PIE807
bar = fields.ListField(default=lambda: {}) # PIE807
@dataclass
class Foo:
foo: List[str] = field(default_factory=list)
bar: Dict[str, int] = field(default_factory=dict)
class FooTable(BaseTable):
bar = fields.ListField(list)
foo = fields.ListField(list)
bar = fields.ListField(dict)
lambda *args, **kwargs: []
lambda *args, **kwargs: {}
lambda *args: []
lambda *args: {}
lambda **kwargs: []
lambda **kwargs: {}
lambda: {**unwrap}

View File

@@ -3,9 +3,11 @@
import abc
import builtins
import collections.abc
import enum
import typing
from abc import abstractmethod
from abc import ABCMeta, abstractmethod
from collections.abc import AsyncIterable, AsyncIterator, Iterable, Iterator
from enum import EnumMeta
from typing import Any, overload
import typing_extensions
@@ -199,6 +201,31 @@ class AsyncIteratorReturningAsyncIterable:
... # Y045 "__aiter__" methods should return an AsyncIterator, not an AsyncIterable
class MetaclassInWhichSelfCannotBeUsed(type):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed) -> MetaclassInWhichSelfCannotBeUsed: ...
class MetaclassInWhichSelfCannotBeUsed2(EnumMeta):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed2: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed2) -> MetaclassInWhichSelfCannotBeUsed2: ...
class MetaclassInWhichSelfCannotBeUsed3(enum.EnumType):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed3: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed3) -> MetaclassInWhichSelfCannotBeUsed3: ...
class MetaclassInWhichSelfCannotBeUsed4(ABCMeta):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed4: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed4) -> MetaclassInWhichSelfCannotBeUsed4: ...
class Abstract(Iterator[str]):
@abstractmethod
def __iter__(self) -> Iterator[str]:

View File

@@ -3,9 +3,11 @@
import abc
import builtins
import collections.abc
import enum
import typing
from abc import abstractmethod
from abc import ABCMeta, abstractmethod
from collections.abc import AsyncIterable, AsyncIterator, Iterable, Iterator
from enum import EnumMeta
from typing import Any, overload
import typing_extensions
@@ -152,6 +154,30 @@ class AsyncIteratorReturningAsyncIterable:
str
]: ... # Y045 "__aiter__" methods should return an AsyncIterator, not an AsyncIterable
class MetaclassInWhichSelfCannotBeUsed(type):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed) -> MetaclassInWhichSelfCannotBeUsed: ...
class MetaclassInWhichSelfCannotBeUsed2(EnumMeta):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed2: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed2: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed2) -> MetaclassInWhichSelfCannotBeUsed2: ...
class MetaclassInWhichSelfCannotBeUsed3(enum.EnumType):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed3: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed3: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed3) -> MetaclassInWhichSelfCannotBeUsed3: ...
class MetaclassInWhichSelfCannotBeUsed4(ABCMeta):
def __new__(cls) -> MetaclassInWhichSelfCannotBeUsed4: ...
def __enter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
async def __aenter__(self) -> MetaclassInWhichSelfCannotBeUsed4: ...
def __isub__(self, other: MetaclassInWhichSelfCannotBeUsed4) -> MetaclassInWhichSelfCannotBeUsed4: ...
class Abstract(Iterator[str]):
@abstractmethod
def __iter__(self) -> Iterator[str]: ...

View File

@@ -0,0 +1,45 @@
this_should_raise_Q004 = 'This is a \"string\"'
this_should_raise_Q004 = 'This is \\ a \\\"string\"'
this_is_fine = '"This" is a \"string\"'
this_is_fine = "This is a 'string'"
this_is_fine = "\"This\" is a 'string'"
this_is_fine = r'This is a \"string\"'
this_is_fine = R'This is a \"string\"'
this_should_raise_Q004 = (
'This is a'
'\"string\"'
)
# Same as above, but with f-strings
f'This is a \"string\"' # Q004
f'This is \\ a \\\"string\"' # Q004
f'"This" is a \"string\"'
f"This is a 'string'"
f"\"This\" is a 'string'"
fr'This is a \"string\"'
fR'This is a \"string\"'
this_should_raise_Q004 = (
f'This is a'
f'\"string\"' # Q004
)
# Nested f-strings (Python 3.12+)
#
# The first one is interesting because the fix for it is valid pre 3.12:
#
# f"'foo' {'nested'}"
#
# but as the actual string itself is invalid pre 3.12, we don't catch it.
f'\"foo\" {'nested'}' # Q004
f'\"foo\" {f'nested'}' # Q004
f'\"foo\" {f'\"nested\"'} \"\"' # Q004
f'normal {f'nested'} normal'
f'\"normal\" {f'nested'} normal' # Q004
f'\"normal\" {f'nested'} "double quotes"'
f'\"normal\" {f'\"nested\" {'other'} normal'} "double quotes"' # Q004
f'\"normal\" {f'\"nested\" {'other'} "double quotes"'} normal' # Q004
# Make sure we do not unescape quotes
this_is_fine = 'This is an \\"escaped\\" quote'
this_should_raise_Q004 = 'This is an \\\"escaped\\\" quote with an extra backslash'

View File

@@ -0,0 +1,43 @@
this_should_raise_Q004 = "This is a \'string\'"
this_should_raise_Q004 = "'This' is a \'string\'"
this_is_fine = 'This is a "string"'
this_is_fine = '\'This\' is a "string"'
this_is_fine = r"This is a \'string\'"
this_is_fine = R"This is a \'string\'"
this_should_raise_Q004 = (
"This is a"
"\'string\'"
)
# Same as above, but with f-strings
f"This is a \'string\'" # Q004
f"'This' is a \'string\'" # Q004
f'This is a "string"'
f'\'This\' is a "string"'
fr"This is a \'string\'"
fR"This is a \'string\'"
this_should_raise_Q004 = (
f"This is a"
f"\'string\'" # Q004
)
# Nested f-strings (Python 3.12+)
#
# The first one is interesting because the fix for it is valid pre 3.12:
#
# f'"foo" {"nested"}'
#
# but as the actual string itself is invalid pre 3.12, we don't catch it.
f"\'foo\' {"foo"}" # Q004
f"\'foo\' {f"foo"}" # Q004
f"\'foo\' {f"\'foo\'"} \'\'" # Q004
f"normal {f"nested"} normal"
f"\'normal\' {f"nested"} normal" # Q004
f"\'normal\' {f"nested"} 'single quotes'"
f"\'normal\' {f"\'nested\' {"other"} normal"} 'single quotes'" # Q004
f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
# Make sure we do not unescape quotes
this_is_fine = "This is an \\'escaped\\' quote"
this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"

View File

@@ -1,8 +1,7 @@
import trio
from trio import sleep
async def func():
import trio
from trio import sleep
await trio.sleep(0) # TRIO115
await trio.sleep(1) # OK
await trio.sleep(0, 1) # OK
@@ -21,8 +20,11 @@ async def func():
trio.sleep(bar)
trio.sleep(0) # TRIO115
def func():
trio.run(trio.sleep(0)) # TRIO115
from trio import Event, sleep
def func():
sleep(0) # TRIO115

View File

@@ -172,3 +172,14 @@ def f():
from module import Member
x: Member = 1
def f():
from typing_extensions import TYPE_CHECKING
from pandas import y
if TYPE_CHECKING:
_type = x
elif True:
_type = y

View File

@@ -0,0 +1,10 @@
from __future__ import annotations
from typing_extensions import TYPE_CHECKING
if TYPE_CHECKING:
from pandas import DataFrame
def example() -> DataFrame:
pass

View File

@@ -0,0 +1,10 @@
from __future__ import annotations
from typing_extensions import TYPE_CHECKING
if TYPE_CHECKING:
from pandas import DataFrame
def example() -> DataFrame:
x = DataFrame()

View File

@@ -37,3 +37,9 @@ if False:
if 0:
x: List
from typing_extensions import TYPE_CHECKING
if TYPE_CHECKING:
pass # TCH005

View File

@@ -0,0 +1,9 @@
from __future__ import annotations
from typing_extensions import Self
def func():
from pandas import DataFrame
df: DataFrame

View File

@@ -0,0 +1,9 @@
from __future__ import annotations
import typing_extensions
def func():
from pandas import DataFrame
df: DataFrame

View File

@@ -0,0 +1,5 @@
import encodings
from datetime import timezone as tz
from datetime import timedelta
import datetime as dt
import datetime

View File

@@ -0,0 +1,8 @@
from __future__ import annotations
import django.settings
import os
import pytz
import sys
from . import local
from library import foo

View File

@@ -63,3 +63,33 @@ class PosOnlyClass:
def bad_method_pos_only(this, blah, /, self, something: str):
pass
class ModelClass:
@hybrid_property
def bad(cls):
pass
@bad.expression
def bad(self):
pass
@bad.wtf
def bad(cls):
pass
@hybrid_property
def good(self):
pass
@good.expression
def good(cls):
pass
@good.wtf
def good(self):
pass
@foobar.thisisstatic
def badstatic(foo):
pass

View File

@@ -89,3 +89,49 @@ x = [ #
f"{ {'a': 1} }"
f"{[ { {'a': 1} } ]}"
f"normal { {f"{ { [1, 2] } }" } } normal"
#: Okay
ham[lower + offset : upper + offset]
#: Okay
ham[(lower + offset) : upper + offset]
#: E203:1:19
ham{lower + offset : upper + offset}
#: E203:1:19
ham[lower + offset : upper + offset]
#: Okay
release_lines = history_file_lines[history_file_lines.index('## Unreleased') + 1: -1]
#: Okay
release_lines = history_file_lines[history_file_lines.index('## Unreleased') + 1 : -1]
#: Okay
ham[1:9], ham[1:9:3], ham[:9:3], ham[1::3], ham[1:9:]
ham[lower:upper], ham[lower:upper:], ham[lower::step]
ham[lower+offset : upper+offset]
ham[: upper_fn(x) : step_fn(x)], ham[:: step_fn(x)]
ham[lower + offset : upper + offset]
#: E201:1:5
ham[ : upper]
#: Okay
ham[lower + offset :: upper + offset]
#: Okay
ham[(lower + offset) :: upper + offset]
#: Okay
ham[lower + offset::upper + offset]
#: E203:1:21
ham[lower + offset : : upper + offset]
#: E203:1:20
ham[lower + offset: :upper + offset]
#: E203:1:20
ham[{lower + offset : upper + offset} : upper + offset]

View File

@@ -16,6 +16,48 @@ if False == None: # E711, E712 (fix)
if None == False: # E711, E712 (fix)
pass
named_var = []
if [] is []: # F632 (fix)
pass
if named_var is []: # F632 (fix)
pass
if [] is named_var: # F632 (fix)
pass
if named_var is [1]: # F632 (fix)
pass
if [1] is named_var: # F632 (fix)
pass
if named_var is [i for i in [1]]: # F632 (fix)
pass
named_var = {}
if {} is {}: # F632 (fix)
pass
if named_var is {}: # F632 (fix)
pass
if {} is named_var: # F632 (fix)
pass
if named_var is {1}: # F632 (fix)
pass
if {1} is named_var: # F632 (fix)
pass
if named_var is {i for i in [1]}: # F632 (fix)
pass
named_var = {1: 1}
if {1: 1} is {1: 1}: # F632 (fix)
pass
if named_var is {1: 1}: # F632 (fix)
pass
if {1: 1} is named_var: # F632 (fix)
pass
if named_var is {1: 1}: # F632 (fix)
pass
if {1: 1} is named_var: # F632 (fix)
pass
if named_var is {i: 1 for i in [1]}: # F632 (fix)
pass
###
# Non-errors
###
@@ -33,3 +75,45 @@ if False is None:
pass
if None is False:
pass
named_var = []
if [] == []:
pass
if named_var == []:
pass
if [] == named_var:
pass
if named_var == [1]:
pass
if [1] == named_var:
pass
if named_var == [i for i in [1]]:
pass
named_var = {}
if {} == {}:
pass
if named_var == {}:
pass
if {} == named_var:
pass
if named_var == {1}:
pass
if {1} == named_var:
pass
if named_var == {i for i in [1]}:
pass
named_var = {1: 1}
if {1: 1} == {1: 1}:
pass
if named_var == {1: 1}:
pass
if {1: 1} == named_var:
pass
if named_var == {1: 1}:
pass
if {1: 1} == named_var:
pass
if named_var == {i: 1 for i in [1]}:
pass

View File

@@ -663,3 +663,55 @@ class CommentAfterDocstring:
def newline_after_closing_quote(self):
"We enforce a newline after the closing quote for a multi-line docstring \
but continuations shouldn't be considered multi-line"
def retain_extra_whitespace():
"""Summary.
This is overindented
And so is this, but it we should preserve the extra space on this line relative
to the one before
"""
def retain_extra_whitespace_multiple():
"""Summary.
This is overindented
And so is this, but it we should preserve the extra space on this line relative
to the one before
This is also overindented
And so is this, but it we should preserve the extra space on this line relative
to the one before
"""
def retain_extra_whitespace_deeper():
"""Summary.
This is overindented
And so is this, but it we should preserve the extra space on this line relative
to the one before
And the relative indent here should be preserved too
"""
def retain_extra_whitespace_followed_by_same_offset():
"""Summary.
This is overindented
And so is this, but it we should preserve the extra space on this line relative
This is overindented
This is overindented
"""
def retain_extra_whitespace_not_overindented():
"""Summary.
This is not overindented
This is overindented, but since one line is not overindented this should not raise
And so is this, but it we should preserve the extra space on this line relative
"""

View File

@@ -0,0 +1,4 @@
"""Test for IPython-only builtins."""
x = 1
display(x)

View File

@@ -0,0 +1,74 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "af8fee97-f9aa-47c2-b34c-b109a5f083d6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"1"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "acd5eb1e-6991-42b8-806f-20d17d7e571f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display(1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8f3f599-030e-48b3-bcd3-31d97f725c68",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,79 @@
# No Errors
def func(a):
for b in range(1):
...
def func(a):
try:
...
except ValueError:
...
except KeyError:
...
if True:
def func(a):
...
else:
for a in range(1):
print(a)
# Errors
def func(a):
for a in range(1):
...
def func(i):
for i in range(10):
print(i)
def func(e):
try:
...
except Exception as e:
print(e)
def func(f):
with open('', ) as f:
print(f)
def func(a, b):
with context() as (a, b, c):
print(a, b, c)
def func(a, b):
with context() as [a, b, c]:
print(a, b, c)
def func(a):
with open('foo.py', ) as f, open('bar.py') as a:
...
def func(a):
def bar(b):
for a in range(1):
print(a)
def func(a):
def bar(b):
for b in range(1):
print(b)
def func(a=1):
def bar(b=2):
for a in range(1):
print(a)
for b in range(1):
print(b)

View File

@@ -1,10 +0,0 @@
[tool.ruff]
line-length = 88
extend-exclude = [
"excluded_file.py",
"migrations",
"with_excluded_file/other_excluded_file.py",
]
[tool.ruff.lint]
per-file-ignores = { "__init__.py" = ["F401"] }

View File

@@ -114,3 +114,8 @@ class ServiceRefOrValue:
class Collection(Protocol[*_B0]):
def __iter__(self) -> Iterator[Union[*_B0]]:
...
# Regression test for: https://github.com/astral-sh/ruff/issues/8609
def f(x: Union[int, str, bytes]) -> None:
...

View File

@@ -25,3 +25,6 @@ u = u
def hello():
return"Hello"
f"foo"u"bar"
f"foo" u"bar"

View File

@@ -207,3 +207,39 @@ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
# The fixed string will exceed the line length, but it's still smaller than the
# existing line length, so it's fine.
"<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
# When fixing, trim the trailing empty string.
raise ValueError("Conflicting configuration dicts: {!r} {!r}"
"".format(new_dict, d))
# When fixing, trim the trailing empty string.
raise ValueError("Conflicting configuration dicts: {!r} {!r}"
.format(new_dict, d))
raise ValueError(
"Conflicting configuration dicts: {!r} {!r}"
"".format(new_dict, d)
)
raise ValueError(
"Conflicting configuration dicts: {!r} {!r}"
"".format(new_dict, d)
)
# The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
(
"{}"
"{{}}"
).format(a)
("{}" "{{}}").format(a)
# Both strings will be converted to an f-string and the curly braces in the second should be left escaped
(
"{}"
"{{{}}}"
).format(a, b)
("{}" "{{{}}}").format(a, b)

View File

@@ -0,0 +1,25 @@
x = 1
y = 2
x if x > y else y # FURB136
x if x >= y else y # FURB136
x if x < y else y # FURB136
x if x <= y else y # FURB136
y if x > y else x # FURB136
y if x >= y else x # FURB136
y if x < y else x # FURB136
y if x <= y else x # FURB136
x + y if x > y else y # OK
x if (
x
> y
) else y # FURB136

View File

@@ -0,0 +1,7 @@
r = 3.1 # OK
A = 3.14 * r ** 2 # FURB152
C = 6.28 * r # FURB152
e = 2.71 # FURB152

View File

@@ -18,8 +18,8 @@ pub(crate) fn deferred_lambdas(checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryLambda) {
pylint::rules::unnecessary_lambda(checker, lambda);
}
if checker.enabled(Rule::ReimplementedListBuiltin) {
flake8_pie::rules::reimplemented_list_builtin(checker, lambda);
if checker.enabled(Rule::ReimplementedContainerBuiltin) {
flake8_pie::rules::reimplemented_container_builtin(checker, lambda);
}
}
}

View File

@@ -12,6 +12,7 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
if !checker.any_enabled(&[
Rule::GlobalVariableNotAssigned,
Rule::ImportShadowedByLoopVar,
Rule::RedefinedArgumentFromLocal,
Rule::RedefinedWhileUnused,
Rule::RuntimeImportInTypeCheckingBlock,
Rule::TypingOnlyFirstPartyImport,
@@ -89,6 +90,32 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
}
if checker.enabled(Rule::RedefinedArgumentFromLocal) {
for (name, binding_id) in scope.bindings() {
for shadow in checker.semantic.shadowed_bindings(scope_id, binding_id) {
let binding = &checker.semantic.bindings[shadow.binding_id()];
if !matches!(
binding.kind,
BindingKind::LoopVar
| BindingKind::BoundException
| BindingKind::WithItemVar
) {
continue;
}
let shadowed = &checker.semantic.bindings[shadow.shadowed_id()];
if !shadowed.kind.is_argument() {
continue;
}
checker.diagnostics.push(Diagnostic::new(
pylint::rules::RedefinedArgumentFromLocal {
name: name.to_string(),
},
binding.range(),
));
}
}
}
if checker.enabled(Rule::ImportShadowedByLoopVar) {
for (name, binding_id) in scope.bindings() {
for shadow in checker.semantic.shadowed_bindings(scope_id, binding_id) {

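Note: a heavily simplified sketch of the shadow check added in the hunk above: for each binding that shadows another, flag it when the new binding is a loop variable, bound exception, or `with`-item name and the shadowed binding is a function argument. The enum and struct here are stand-ins for the semantic model's types, not the real API.

```rust
// Stand-ins for the semantic model's binding kinds and shadow records.
#[derive(PartialEq)]
enum BindingKind {
    Argument,
    LoopVar,
    BoundException,
    WithItemVar,
}

struct Shadow<'a> {
    name: &'a str,
    binding: BindingKind,  // the new (shadowing) binding
    shadowed: BindingKind, // the binding it redefines
}

// Report every loop/except/with binding that rebinds a function argument.
fn redefined_argument_from_local(shadows: &[Shadow]) -> Vec<String> {
    shadows
        .iter()
        .filter(|s| {
            matches!(
                s.binding,
                BindingKind::LoopVar | BindingKind::BoundException | BindingKind::WithItemVar
            ) && s.shadowed == BindingKind::Argument
        })
        .map(|s| s.name.to_string())
        .collect()
}

fn main() {
    let shadows = [Shadow {
        name: "i",
        binding: BindingKind::LoopVar,
        shadowed: BindingKind::Argument,
    }];
    assert_eq!(redefined_argument_from_local(&shadows), ["i".to_string()]);
}
```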
View File

@@ -936,13 +936,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
flake8_trio::rules::zero_sleep_call(checker, call);
}
}
Expr::Dict(
dict @ ast::ExprDict {
keys,
values,
range: _,
},
) => {
Expr::Dict(dict) => {
if checker.any_enabled(&[
Rule::MultiValueRepeatedKeyLiteral,
Rule::MultiValueRepeatedKeyVariable,
@@ -950,7 +944,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
pyflakes::rules::repeated_keys(checker, dict);
}
if checker.enabled(Rule::UnnecessarySpread) {
flake8_pie::rules::unnecessary_spread(checker, keys, values);
flake8_pie::rules::unnecessary_spread(checker, dict);
}
}
Expr::Set(ast::ExprSet { elts, range: _ }) => {
@@ -1251,10 +1245,13 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
refurb::rules::single_item_membership_test(checker, expr, left, ops, comparators);
}
}
Expr::NumberLiteral(_) => {
Expr::NumberLiteral(number_literal @ ast::ExprNumberLiteral { .. }) => {
if checker.source_type.is_stub() && checker.enabled(Rule::NumericLiteralTooLong) {
flake8_pyi::rules::numeric_literal_too_long(checker, expr);
}
if checker.enabled(Rule::MathConstant) {
refurb::rules::math_constant(checker, number_literal);
}
}
Expr::BytesLiteral(_) => {
if checker.source_type.is_stub() && checker.enabled(Rule::StringOrBytesTooLong) {
@@ -1272,21 +1269,20 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::HardcodedTempFile) {
flake8_bandit::rules::hardcoded_tmp_directory(checker, string);
}
if checker.enabled(Rule::UnicodeKindPrefix) {
pyupgrade::rules::unicode_kind_prefix(checker, string);
}
if checker.source_type.is_stub() {
if checker.enabled(Rule::StringOrBytesTooLong) {
flake8_pyi::rules::string_or_bytes_too_long(checker, expr);
}
}
}
Expr::IfExp(ast::ExprIfExp {
test,
body,
orelse,
range: _,
}) => {
Expr::IfExp(
if_exp @ ast::ExprIfExp {
test,
body,
orelse,
range: _,
},
) => {
if checker.enabled(Rule::IfElseBlockInsteadOfDictGet) {
flake8_simplify::rules::if_exp_instead_of_dict_get(
checker, expr, test, body, orelse,
@@ -1301,6 +1297,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::IfExprWithTwistedArms) {
flake8_simplify::rules::twisted_arms_in_ifexpr(checker, expr, test, body, orelse);
}
if checker.enabled(Rule::IfExprMinMax) {
refurb::rules::if_expr_min_max(checker, if_exp);
}
}
Expr::ListComp(
comp @ ast::ExprListComp {

View File

@@ -6,7 +6,7 @@ use crate::rules::flake8_pie;
/// Run lint rules over a suite of [`Stmt`] syntax nodes.
pub(crate) fn suite(suite: &[Stmt], checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryPass) {
flake8_pie::rules::no_unnecessary_pass(checker, suite);
if checker.enabled(Rule::UnnecessaryPlaceholder) {
flake8_pie::rules::unnecessary_placeholder(checker, suite);
}
}

View File

@@ -54,7 +54,7 @@ use ruff_python_semantic::{
ModuleKind, NodeId, ScopeId, ScopeKind, SemanticModel, SemanticModelFlags, Snapshot,
StarImport, SubmoduleImport,
};
use ruff_python_stdlib::builtins::{BUILTINS, MAGIC_GLOBALS};
use ruff_python_stdlib::builtins::{IPYTHON_BUILTINS, MAGIC_GLOBALS, PYTHON_BUILTINS};
use ruff_source_file::Locator;
use crate::checkers::ast::deferred::Deferred;
@@ -1592,9 +1592,16 @@ impl<'a> Checker<'a> {
}
fn bind_builtins(&mut self) {
for builtin in BUILTINS
for builtin in PYTHON_BUILTINS
.iter()
.chain(MAGIC_GLOBALS.iter())
.chain(
self.source_type
.is_ipynb()
.then_some(IPYTHON_BUILTINS)
.into_iter()
.flatten(),
)
.copied()
.chain(self.settings.builtins.iter().map(String::as_str))
{
@@ -1615,38 +1622,28 @@ impl<'a> Checker<'a> {
fn handle_node_store(&mut self, id: &'a str, expr: &Expr) {
let parent = self.semantic.current_statement();
let mut flags = BindingFlags::empty();
if helpers::is_unpacking_assignment(parent, expr) {
flags.insert(BindingFlags::UNPACKED_ASSIGNMENT);
}
// Match the left-hand side of an annotated assignment, like `x` in `x: int`.
if matches!(
parent,
Stmt::AnnAssign(ast::StmtAnnAssign { value: None, .. })
) && !self.semantic.in_annotation()
{
self.add_binding(
id,
expr.range(),
BindingKind::Annotation,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::Annotation, flags);
return;
}
if parent.is_for_stmt() {
self.add_binding(
id,
expr.range(),
BindingKind::LoopVar,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::LoopVar, flags);
return;
}
if helpers::is_unpacking_assignment(parent, expr) {
self.add_binding(
id,
expr.range(),
BindingKind::UnpackedAssignment,
BindingFlags::empty(),
);
if parent.is_with_stmt() {
self.add_binding(id, expr.range(), BindingKind::WithItemVar, flags);
return;
}
@@ -1681,7 +1678,6 @@ impl<'a> Checker<'a> {
let (all_names, all_flags) =
extract_all_names(parent, |name| self.semantic.is_builtin(name));
let mut flags = BindingFlags::empty();
if all_flags.intersects(DunderAllFlags::INVALID_OBJECT) {
flags |= BindingFlags::INVALID_ALL_OBJECT;
}
@@ -1705,21 +1701,11 @@ impl<'a> Checker<'a> {
.current_expressions()
.any(Expr::is_named_expr_expr)
{
self.add_binding(
id,
expr.range(),
BindingKind::NamedExprAssignment,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::NamedExprAssignment, flags);
return;
}
self.add_binding(
id,
expr.range(),
BindingKind::Assignment,
BindingFlags::empty(),
);
self.add_binding(id, expr.range(), BindingKind::Assignment, flags);
}
fn handle_node_delete(&mut self, expr: &'a Expr) {

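Note: the `bind_builtins` hunk above conditionally appends notebook-only builtins via `then_some(...).into_iter().flatten()`. A small self-contained sketch of that idiom follows; the builtin lists here are illustrative subsets, not the real tables.

```rust
// `bool::then_some(...)` yields an Option, and `.into_iter().flatten()`
// turns it into zero or one slices' worth of extra items in the chain.
fn builtins_for(is_notebook: bool) -> Vec<&'static str> {
    const PYTHON_BUILTINS: &[&str] = &["print", "len"]; // illustrative subset
    const IPYTHON_BUILTINS: &[&str] = &["display", "get_ipython"]; // illustrative subset

    PYTHON_BUILTINS
        .iter()
        .chain(
            is_notebook
                .then_some(IPYTHON_BUILTINS)
                .into_iter()
                .flatten(),
        )
        .copied()
        .collect()
}

fn main() {
    assert!(!builtins_for(false).contains(&"display"));
    assert!(builtins_for(true).contains(&"display"));
}
```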
View File

@@ -1,6 +1,8 @@
use std::path::Path;
use ruff_diagnostics::Diagnostic;
use ruff_python_index::Indexer;
use ruff_source_file::Locator;
use crate::registry::Rule;
use crate::rules::flake8_no_pep420::rules::implicit_namespace_package;
@@ -10,15 +12,22 @@ use crate::settings::LinterSettings;
pub(crate) fn check_file_path(
path: &Path,
package: Option<&Path>,
locator: &Locator,
indexer: &Indexer,
settings: &LinterSettings,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
// flake8-no-pep420
if settings.rules.enabled(Rule::ImplicitNamespacePackage) {
if let Some(diagnostic) =
implicit_namespace_package(path, package, &settings.project_root, &settings.src)
{
if let Some(diagnostic) = implicit_namespace_package(
path,
package,
locator,
indexer,
&settings.project_root,
&settings.src,
) {
diagnostics.push(diagnostic);
}
}

View File

@@ -91,6 +91,10 @@ pub(crate) fn check_tokens(
pycodestyle::rules::tab_indentation(&mut diagnostics, tokens, locator, indexer);
}
if settings.rules.enabled(Rule::UnicodeKindPrefix) {
pyupgrade::rules::unicode_kind_prefix(&mut diagnostics, tokens);
}
if settings.rules.any_enabled(&[
Rule::InvalidCharacterBackspace,
Rule::InvalidCharacterSub,
@@ -115,6 +119,10 @@ pub(crate) fn check_tokens(
flake8_quotes::rules::avoidable_escaped_quote(&mut diagnostics, tokens, locator, settings);
}
if settings.rules.enabled(Rule::UnnecessaryEscapedQuote) {
flake8_quotes::rules::unnecessary_escaped_quote(&mut diagnostics, tokens, locator);
}
if settings.rules.any_enabled(&[
Rule::BadQuotesInlineString,
Rule::BadQuotesMultilineString,
@@ -141,7 +149,7 @@ pub(crate) fn check_tokens(
Rule::TrailingCommaOnBareTuple,
Rule::ProhibitedTrailingComma,
]) {
flake8_commas::rules::trailing_commas(&mut diagnostics, tokens, locator);
flake8_commas::rules::trailing_commas(&mut diagnostics, tokens, locator, indexer);
}
if settings.rules.enabled(Rule::ExtraneousParentheses) {
@@ -159,7 +167,7 @@ pub(crate) fn check_tokens(
Rule::ShebangNotFirstLine,
Rule::ShebangMissingPython,
]) {
flake8_executable::rules::from_tokens(tokens, path, locator, &mut diagnostics);
flake8_executable::rules::from_tokens(&mut diagnostics, path, locator, indexer);
}
if settings.rules.any_enabled(&[

View File

@@ -252,6 +252,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R0915") => (RuleGroup::Stable, rules::pylint::rules::TooManyStatements),
(Pylint, "R0916") => (RuleGroup::Preview, rules::pylint::rules::TooManyBooleanExpressions),
(Pylint, "R1701") => (RuleGroup::Stable, rules::pylint::rules::RepeatedIsinstanceCalls),
(Pylint, "R1704") => (RuleGroup::Preview, rules::pylint::rules::RedefinedArgumentFromLocal),
(Pylint, "R1711") => (RuleGroup::Stable, rules::pylint::rules::UselessReturn),
(Pylint, "R1714") => (RuleGroup::Stable, rules::pylint::rules::RepeatedEqualityComparison),
(Pylint, "R1706") => (RuleGroup::Preview, rules::pylint::rules::AndOrTernary),
@@ -402,6 +403,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Quotes, "001") => (RuleGroup::Stable, rules::flake8_quotes::rules::BadQuotesMultilineString),
(Flake8Quotes, "002") => (RuleGroup::Stable, rules::flake8_quotes::rules::BadQuotesDocstring),
(Flake8Quotes, "003") => (RuleGroup::Stable, rules::flake8_quotes::rules::AvoidableEscapedQuote),
(Flake8Quotes, "004") => (RuleGroup::Preview, rules::flake8_quotes::rules::UnnecessaryEscapedQuote),
// flake8-annotations
(Flake8Annotations, "001") => (RuleGroup::Stable, rules::flake8_annotations::rules::MissingTypeFunctionArgument),
@@ -767,12 +769,12 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8PytestStyle, "027") => (RuleGroup::Stable, rules::flake8_pytest_style::rules::PytestUnittestRaisesAssertion),
// flake8-pie
(Flake8Pie, "790") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryPass),
(Flake8Pie, "790") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryPlaceholder),
(Flake8Pie, "794") => (RuleGroup::Stable, rules::flake8_pie::rules::DuplicateClassFieldDefinition),
(Flake8Pie, "796") => (RuleGroup::Stable, rules::flake8_pie::rules::NonUniqueEnums),
(Flake8Pie, "800") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessarySpread),
(Flake8Pie, "804") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryDictKwargs),
(Flake8Pie, "807") => (RuleGroup::Stable, rules::flake8_pie::rules::ReimplementedListBuiltin),
(Flake8Pie, "807") => (RuleGroup::Stable, rules::flake8_pie::rules::ReimplementedContainerBuiltin),
(Flake8Pie, "808") => (RuleGroup::Stable, rules::flake8_pie::rules::UnnecessaryRangeStart),
(Flake8Pie, "810") => (RuleGroup::Stable, rules::flake8_pie::rules::MultipleStartsEndsWith),
@@ -945,9 +947,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Refurb, "131") => (RuleGroup::Nursery, rules::refurb::rules::DeleteFullSlice),
#[allow(deprecated)]
(Refurb, "132") => (RuleGroup::Nursery, rules::refurb::rules::CheckAndRemoveFromSet),
(Refurb, "136") => (RuleGroup::Preview, rules::refurb::rules::IfExprMinMax),
(Refurb, "140") => (RuleGroup::Preview, rules::refurb::rules::ReimplementedStarmap),
(Refurb, "145") => (RuleGroup::Preview, rules::refurb::rules::SliceCopy),
(Refurb, "148") => (RuleGroup::Preview, rules::refurb::rules::UnnecessaryEnumerate),
(Refurb, "152") => (RuleGroup::Preview, rules::refurb::rules::MathConstant),
(Refurb, "168") => (RuleGroup::Preview, rules::refurb::rules::IsinstanceTypeNone),
(Refurb, "169") => (RuleGroup::Preview, rules::refurb::rules::TypeNoneComparison),
(Refurb, "171") => (RuleGroup::Preview, rules::refurb::rules::SingleItemMembershipTest),

View File

@@ -171,7 +171,7 @@ mod tests {
#[test]
fn empty_file() {
let locator = Locator::new(r#""#);
let locator = Locator::new(r"");
let diagnostics = create_diagnostics([]);
let FixResult {
code,
@@ -225,10 +225,10 @@ print("hello world")
#[test]
fn apply_one_replacement() {
let locator = Locator::new(
r#"
r"
class A(object):
...
"#
"
.trim(),
);
let diagnostics = create_diagnostics([Edit::replacement(
@@ -243,10 +243,10 @@ class A(object):
} = apply_fixes(diagnostics.iter(), &locator);
assert_eq!(
code,
r#"
r"
class A(Bar):
...
"#
"
.trim(),
);
assert_eq!(fixes.values().sum::<usize>(), 1);
@@ -262,10 +262,10 @@ class A(Bar):
#[test]
fn apply_one_removal() {
let locator = Locator::new(
r#"
r"
class A(object):
...
"#
"
.trim(),
);
let diagnostics = create_diagnostics([Edit::deletion(TextSize::new(7), TextSize::new(15))]);
@@ -276,10 +276,10 @@ class A(object):
} = apply_fixes(diagnostics.iter(), &locator);
assert_eq!(
code,
r#"
r"
class A:
...
"#
"
.trim()
);
assert_eq!(fixes.values().sum::<usize>(), 1);
@@ -295,10 +295,10 @@ class A:
#[test]
fn apply_two_removals() {
let locator = Locator::new(
r#"
r"
class A(object, object, object):
...
"#
"
.trim(),
);
let diagnostics = create_diagnostics([
@@ -313,10 +313,10 @@ class A(object, object, object):
assert_eq!(
code,
r#"
r"
class A(object):
...
"#
"
.trim()
);
assert_eq!(fixes.values().sum::<usize>(), 2);
@@ -334,10 +334,10 @@ class A(object):
#[test]
fn ignore_overlapping_fixes() {
let locator = Locator::new(
r#"
r"
class A(object):
...
"#
"
.trim(),
);
let diagnostics = create_diagnostics([
@@ -351,10 +351,10 @@ class A(object):
} = apply_fixes(diagnostics.iter(), &locator);
assert_eq!(
code,
r#"
r"
class A:
...
"#
"
.trim(),
);
assert_eq!(fixes.values().sum::<usize>(), 1);

View File

@@ -373,18 +373,18 @@ mod tests {
Insertion::own_line("", TextSize::from(40), "\n")
);
let contents = r#"
let contents = r"
x = 1
"#
"
.trim_start();
assert_eq!(
insert(contents)?,
Insertion::own_line("", TextSize::from(0), "\n")
);
let contents = r#"
let contents = r"
#!/usr/bin/env python3
"#
"
.trim_start();
assert_eq!(
insert(contents)?,
@@ -457,10 +457,10 @@ x = 1
Insertion::inline("", TextSize::from(9), "; ")
);
let contents = r#"
let contents = r"
if True:
pass
"#
"
.trim_start();
assert_eq!(
insert(contents, TextSize::from(0)),

View File

@@ -7,7 +7,7 @@ use std::error::Error;
use anyhow::Result;
use libcst_native::{ImportAlias, Name, NameOrAttribute};
use ruff_python_ast::{self as ast, PySourceType, Stmt, Suite};
use ruff_python_ast::{self as ast, PySourceType, Stmt};
use ruff_text_size::{Ranged, TextSize};
use ruff_diagnostics::Edit;
@@ -26,7 +26,7 @@ mod insertion;
pub(crate) struct Importer<'a> {
/// The Python AST to which we are adding imports.
python_ast: &'a Suite,
python_ast: &'a [Stmt],
/// The [`Locator`] for the Python AST.
locator: &'a Locator<'a>,
/// The [`Stylist`] for the Python AST.
@@ -39,7 +39,7 @@ pub(crate) struct Importer<'a> {
impl<'a> Importer<'a> {
pub(crate) fn new(
python_ast: &'a Suite,
python_ast: &'a [Stmt],
locator: &'a Locator<'a>,
stylist: &'a Stylist<'a>,
) -> Self {
@@ -132,11 +132,7 @@ impl<'a> Importer<'a> {
)?;
// Import the `TYPE_CHECKING` symbol from the typing module.
let (type_checking_edit, type_checking) = self.get_or_import_symbol(
&ImportRequest::import_from("typing", "TYPE_CHECKING"),
at,
semantic,
)?;
let (type_checking_edit, type_checking) = self.get_or_import_type_checking(at, semantic)?;
// Add the import to a `TYPE_CHECKING` block.
let add_import_edit = if let Some(block) = self.preceding_type_checking_block(at) {
@@ -161,6 +157,30 @@ impl<'a> Importer<'a> {
})
}
/// Generate an [`Edit`] to reference `typing.TYPE_CHECKING`. Returns the [`Edit`] necessary to
/// make the symbol available in the current scope along with the bound name of the symbol.
fn get_or_import_type_checking(
&self,
at: TextSize,
semantic: &SemanticModel,
) -> Result<(Edit, String), ResolutionError> {
for module in semantic.typing_modules() {
if let Some((edit, name)) = self.get_symbol(
&ImportRequest::import_from(module, "TYPE_CHECKING"),
at,
semantic,
)? {
return Ok((edit, name));
}
}
self.import_symbol(
&ImportRequest::import_from("typing", "TYPE_CHECKING"),
at,
semantic,
)
}
/// Generate an [`Edit`] to reference the given symbol. Returns the [`Edit`] necessary to make
/// the symbol available in the current scope along with the bound name of the symbol.
///

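Note: the `get_or_import_type_checking` helper above tries each recognized typing module for an existing `TYPE_CHECKING` symbol before falling back to a fresh `from typing import TYPE_CHECKING`. A rough, dependency-free sketch of that lookup order is shown below; the tuple-slice "semantic model" and the returned edit string are placeholders, not the real `Edit`/`SemanticModel` API.

```rust
// Return (import edit to apply, bound name to reference).
fn resolve_type_checking(
    existing: &[(&str, &str)], // (module, bound name) pairs already imported
    typing_modules: &[&str],   // e.g. ["typing", "typing_extensions"]
) -> (String, String) {
    // Prefer reusing a binding from any recognized typing module.
    for module in typing_modules {
        if let Some((_, name)) = existing.iter().find(|(m, _)| m == module) {
            // Reuse the existing binding: no new import edit needed in this sketch.
            return (String::new(), (*name).to_string());
        }
    }
    // Otherwise, synthesize a new `from typing import TYPE_CHECKING`.
    (
        "from typing import TYPE_CHECKING".to_string(),
        "TYPE_CHECKING".to_string(),
    )
}

fn main() {
    let existing = [("typing_extensions", "TYPE_CHECKING")];
    let (edit, name) = resolve_type_checking(&existing, &["typing", "typing_extensions"]);
    assert!(edit.is_empty());
    assert_eq!(name, "TYPE_CHECKING");
}
```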
View File

@@ -117,7 +117,7 @@ pub fn check_path(
.iter_enabled()
.any(|rule_code| rule_code.lint_source().is_filesystem())
{
diagnostics.extend(check_file_path(path, package, settings));
diagnostics.extend(check_file_path(path, package, locator, indexer, settings));
}
// Run the logical line-based rules.
@@ -639,7 +639,7 @@ mod tests {
use crate::registry::Rule;
use crate::source_kind::SourceKind;
use crate::test::{test_contents, test_notebook_path, TestedNotebook};
use crate::test::{assert_notebook_path, test_contents, TestedNotebook};
use crate::{assert_messages, settings};
/// Construct a path to a Jupyter notebook in the `resources/test/fixtures/jupyter` directory.
@@ -655,7 +655,7 @@ mod tests {
messages,
source_notebook,
..
} = test_notebook_path(
} = assert_notebook_path(
&actual,
expected,
&settings::LinterSettings::for_rule(Rule::UnsortedImports),
@@ -672,7 +672,7 @@ mod tests {
messages,
source_notebook,
..
} = test_notebook_path(
} = assert_notebook_path(
&actual,
expected,
&settings::LinterSettings::for_rule(Rule::UnusedImport),
@@ -689,7 +689,7 @@ mod tests {
messages,
source_notebook,
..
} = test_notebook_path(
} = assert_notebook_path(
&actual,
expected,
&settings::LinterSettings::for_rule(Rule::UnusedVariable),
@@ -706,7 +706,7 @@ mod tests {
let TestedNotebook {
linted_notebook: fixed_notebook,
..
} = test_notebook_path(
} = assert_notebook_path(
actual_path,
&expected_path,
&settings::LinterSettings::for_rule(Rule::UnusedImport),

View File

@@ -199,7 +199,7 @@ def fibonacci(n):
TextSize::from(99),
)));
let file_2 = r#"if a == 1: pass"#;
let file_2 = r"if a == 1: pass";
let undefined_name = Diagnostic::new(
DiagnosticKind {

View File

@@ -297,6 +297,7 @@ impl Rule {
| Rule::TabIndentation
| Rule::TrailingCommaOnBareTuple
| Rule::TypeCommentInStub
| Rule::UnicodeKindPrefix
| Rule::UselessSemicolon
| Rule::UTF8EncodingDeclaration => LintSource::Tokens,
Rule::IOError => LintSource::Io,

View File

@@ -245,10 +245,10 @@ impl Renamer {
| BindingKind::Argument
| BindingKind::TypeParam
| BindingKind::NamedExprAssignment
| BindingKind::UnpackedAssignment
| BindingKind::Assignment
| BindingKind::BoundException
| BindingKind::LoopVar
| BindingKind::WithItemVar
| BindingKind::Global
| BindingKind::Nonlocal(_)
| BindingKind::ClassDefinition(_)

View File

@@ -1,5 +1,14 @@
use itertools::Itertools;
use ruff_python_ast::helpers::{pep_604_union, ReturnStatementVisitor};
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{self as ast, Expr, ExprContext};
use ruff_python_semantic::analyze::type_inference::{NumberLike, PythonType, ResolvedPythonType};
use ruff_python_semantic::analyze::visibility;
use ruff_python_semantic::{Definition, SemanticModel};
use ruff_text_size::TextRange;
use crate::settings::types::PythonVersion;
/// Return the name of the function, if it's overloaded.
pub(crate) fn overloaded_name(definition: &Definition, semantic: &SemanticModel) -> Option<String> {
@@ -27,3 +36,81 @@ pub(crate) fn is_overload_impl(
function.name.as_str() == overloaded_name
}
}
/// Given a function, guess its return type.
pub(crate) fn auto_return_type(
function: &ast::StmtFunctionDef,
target_version: PythonVersion,
) -> Option<Expr> {
// Collect all the `return` statements.
let returns = {
let mut visitor = ReturnStatementVisitor::default();
visitor.visit_body(&function.body);
if visitor.is_generator {
return None;
}
visitor.returns
};
// Determine the return type of the first `return` statement.
let (return_statement, returns) = returns.split_first()?;
let mut return_type = return_statement.value.as_deref().map_or(
ResolvedPythonType::Atom(PythonType::None),
ResolvedPythonType::from,
);
// Merge the return types of the remaining `return` statements.
for return_statement in returns {
return_type = return_type.union(return_statement.value.as_deref().map_or(
ResolvedPythonType::Atom(PythonType::None),
ResolvedPythonType::from,
));
}
match return_type {
ResolvedPythonType::Atom(python_type) => type_expr(python_type),
ResolvedPythonType::Union(python_types) if target_version >= PythonVersion::Py310 => {
// Aggregate all the individual types (e.g., `int`, `float`).
let names = python_types
.iter()
.sorted_unstable()
.filter_map(|python_type| type_expr(*python_type))
.collect::<Vec<_>>();
// Wrap in a bitwise union (e.g., `int | float`).
Some(pep_604_union(&names))
}
ResolvedPythonType::Union(_) => None,
ResolvedPythonType::Unknown => None,
ResolvedPythonType::TypeError => None,
}
}
/// Given a [`PythonType`], return an [`Expr`] that resolves to that type.
fn type_expr(python_type: PythonType) -> Option<Expr> {
fn name(name: &str) -> Expr {
Expr::Name(ast::ExprName {
id: name.into(),
range: TextRange::default(),
ctx: ExprContext::Load,
})
}
match python_type {
PythonType::String => Some(name("str")),
PythonType::Bytes => Some(name("bytes")),
PythonType::Number(number) => match number {
NumberLike::Integer => Some(name("int")),
NumberLike::Float => Some(name("float")),
NumberLike::Complex => Some(name("complex")),
NumberLike::Bool => Some(name("bool")),
},
PythonType::None => Some(name("None")),
PythonType::Ellipsis => None,
PythonType::Dict => None,
PythonType::List => None,
PythonType::Set => None,
PythonType::Tuple => None,
PythonType::Generator => None,
}
}
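A minimal sketch of the inference this helper performs, on hypothetical functions (not taken from the fixtures): return types are merged across all `return` statements, unions are only emitted as PEP 604 expressions on Python 3.10+ targets, and generators are skipped entirely.

```python
def ratio(x: int):
    # both branches return numbers; the merged type is inferred as `float`
    if x > 0:
        return 1
    return 1.5


def label(x: int):
    # mixed `str`/`float` returns merge to `str | float` on Python 3.10+ targets;
    # below that, the union cannot be expressed and no type is suggested
    return "high" if x > 10 else 0.5


def stream():
    # generators are excluded from inference (`is_generator` short-circuits)
    yield 1
```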

View File

@@ -110,6 +110,24 @@ mod tests {
Ok(())
}
#[test]
fn auto_return_type() -> Result<()> {
let diagnostics = test_path(
Path::new("flake8_annotations/auto_return_type.py"),
&LinterSettings {
..LinterSettings::for_rules(vec![
Rule::MissingReturnTypeUndocumentedPublicFunction,
Rule::MissingReturnTypePrivateFunction,
Rule::MissingReturnTypeSpecialMethod,
Rule::MissingReturnTypeStaticMethod,
Rule::MissingReturnTypeClassMethod,
])
},
)?;
assert_messages!(diagnostics);
Ok(())
}
#[test]
fn suppress_none_returning() -> Result<()> {
let diagnostics = test_path(

View File

@@ -1,8 +1,8 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::ReturnStatementVisitor;
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{self as ast, Expr, ParameterWithDefault, Stmt};
use ruff_python_parser::typing::parse_type_annotation;
use ruff_python_semantic::analyze::visibility;
@@ -12,6 +12,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::registry::Rule;
use crate::rules::flake8_annotations::helpers::auto_return_type;
use crate::rules::ruff::typing::type_hint_resolves_to_any;
/// ## What it does
@@ -41,7 +42,7 @@ pub struct MissingTypeFunctionArgument {
impl Violation for MissingTypeFunctionArgument {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeFunctionArgument { name } = self;
let Self { name } = self;
format!("Missing type annotation for function argument `{name}`")
}
}
@@ -73,7 +74,7 @@ pub struct MissingTypeArgs {
impl Violation for MissingTypeArgs {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeArgs { name } = self;
let Self { name } = self;
format!("Missing type annotation for `*{name}`")
}
}
@@ -105,7 +106,7 @@ pub struct MissingTypeKwargs {
impl Violation for MissingTypeKwargs {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeKwargs { name } = self;
let Self { name } = self;
format!("Missing type annotation for `**{name}`")
}
}
@@ -142,7 +143,7 @@ pub struct MissingTypeSelf {
impl Violation for MissingTypeSelf {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeSelf { name } = self;
let Self { name } = self;
format!("Missing type annotation for `{name}` in method")
}
}
@@ -181,7 +182,7 @@ pub struct MissingTypeCls {
impl Violation for MissingTypeCls {
#[derive_message_formats]
fn message(&self) -> String {
let MissingTypeCls { name } = self;
let Self { name } = self;
format!("Missing type annotation for `{name}` in classmethod")
}
}
@@ -208,14 +209,26 @@ impl Violation for MissingTypeCls {
#[violation]
pub struct MissingReturnTypeUndocumentedPublicFunction {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypeUndocumentedPublicFunction {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeUndocumentedPublicFunction { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for public function `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -240,14 +253,26 @@ impl Violation for MissingReturnTypeUndocumentedPublicFunction {
#[violation]
pub struct MissingReturnTypePrivateFunction {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypePrivateFunction {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypePrivateFunction { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for private function `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -285,17 +310,25 @@ impl Violation for MissingReturnTypePrivateFunction {
#[violation]
pub struct MissingReturnTypeSpecialMethod {
name: String,
annotation: Option<String>,
}
impl AlwaysFixableViolation for MissingReturnTypeSpecialMethod {
impl Violation for MissingReturnTypeSpecialMethod {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeSpecialMethod { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for special method `{name}`")
}
fn fix_title(&self) -> String {
"Add `None` return type".to_string()
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
@@ -325,14 +358,26 @@ impl AlwaysFixableViolation for MissingReturnTypeSpecialMethod {
#[violation]
pub struct MissingReturnTypeStaticMethod {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypeStaticMethod {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeStaticMethod { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for staticmethod `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -361,14 +406,26 @@ impl Violation for MissingReturnTypeStaticMethod {
#[violation]
pub struct MissingReturnTypeClassMethod {
name: String,
annotation: Option<String>,
}
impl Violation for MissingReturnTypeClassMethod {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let MissingReturnTypeClassMethod { name } = self;
let Self { name, .. } = self;
format!("Missing return type annotation for classmethod `{name}`")
}
fn fix_title(&self) -> Option<String> {
let Self { annotation, .. } = self;
if let Some(annotation) = annotation {
Some(format!("Add return type annotation: `{annotation}`"))
} else {
Some(format!("Add return type annotation"))
}
}
}
/// ## What it does
@@ -421,7 +478,7 @@ pub struct AnyType {
impl Violation for AnyType {
#[derive_message_formats]
fn message(&self) -> String {
let AnyType { name } = self;
let Self { name } = self;
format!("Dynamically typed expressions (typing.Any) are disallowed in `{name}`")
}
}
@@ -673,21 +730,41 @@ pub(crate) fn definition(
) {
if is_method && visibility::is_classmethod(decorator_list, checker.semantic()) {
if checker.enabled(Rule::MissingReturnTypeClassMethod) {
diagnostics.push(Diagnostic::new(
let return_type = auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypeClassMethod {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
} else if is_method && visibility::is_staticmethod(decorator_list, checker.semantic()) {
if checker.enabled(Rule::MissingReturnTypeStaticMethod) {
diagnostics.push(Diagnostic::new(
let return_type = auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypeStaticMethod {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
} else if is_method && visibility::is_init(name) {
// Allow omission of return annotation in `__init__` functions, as long as at
@@ -697,6 +774,7 @@ pub(crate) fn definition(
let mut diagnostic = Diagnostic::new(
MissingReturnTypeSpecialMethod {
name: name.to_string(),
annotation: Some("None".to_string()),
},
function.identifier(),
);
@@ -709,13 +787,15 @@ pub(crate) fn definition(
}
} else if is_method && visibility::is_magic(name) {
if checker.enabled(Rule::MissingReturnTypeSpecialMethod) {
let return_type = simple_magic_return_type(name);
let mut diagnostic = Diagnostic::new(
MissingReturnTypeSpecialMethod {
name: name.to_string(),
annotation: return_type.map(ToString::to_string),
},
function.identifier(),
);
if let Some(return_type) = simple_magic_return_type(name) {
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
@@ -727,22 +807,44 @@ pub(crate) fn definition(
match visibility {
visibility::Visibility::Public => {
if checker.enabled(Rule::MissingReturnTypeUndocumentedPublicFunction) {
diagnostics.push(Diagnostic::new(
let return_type =
auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypeUndocumentedPublicFunction {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
}
visibility::Visibility::Private => {
if checker.enabled(Rule::MissingReturnTypePrivateFunction) {
diagnostics.push(Diagnostic::new(
let return_type =
auto_return_type(function, checker.settings.target_version)
.map(|return_type| checker.generator().expr(&return_type));
let mut diagnostic = Diagnostic::new(
MissingReturnTypePrivateFunction {
name: name.to_string(),
annotation: return_type.clone(),
},
function.identifier(),
));
);
if let Some(return_type) = return_type {
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
format!(" -> {return_type}"),
function.parameters.range().end(),
)));
}
diagnostics.push(diagnostic);
}
}
}
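Taken together with the helper above, each of these ANN20x rules now carries `FixAvailability::Sometimes`: when a return type can be inferred, an unsafe fix appends ` -> <type>` immediately after the parameter list; otherwise the diagnostic is reported without a fix. A hypothetical sketch (not from the fixtures):

```python
class Widget:
    @staticmethod
    def default_size():
        # ANN205: return type inferred as `int`; the unsafe fix rewrites the
        # signature to `def default_size() -> int:`
        return 4

    @classmethod
    def create(cls):
        # ANN206: the call's result type cannot be inferred, so the diagnostic
        # is emitted without an attached fix
        return cls()
```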

View File

@@ -8,5 +8,6 @@ allow_overload.py:29:9: ANN201 Missing return type annotation for public functio
| ^^^ ANN201
30 | return i
|
= help: Add return type annotation

View File

@@ -0,0 +1,127 @@
---
source: crates/ruff_linter/src/rules/flake8_annotations/mod.rs
---
auto_return_type.py:1:5: ANN201 [*] Missing return type annotation for public function `func`
|
1 | def func():
| ^^^^ ANN201
2 | return 1
|
= help: Add return type annotation: `int`
Unsafe fix
1 |-def func():
1 |+def func() -> int:
2 2 | return 1
3 3 |
4 4 |
auto_return_type.py:5:5: ANN201 [*] Missing return type annotation for public function `func`
|
5 | def func():
| ^^^^ ANN201
6 | return 1.5
|
= help: Add return type annotation: `float`
Unsafe fix
2 2 | return 1
3 3 |
4 4 |
5 |-def func():
5 |+def func() -> float:
6 6 | return 1.5
7 7 |
8 8 |
auto_return_type.py:9:5: ANN201 [*] Missing return type annotation for public function `func`
|
9 | def func(x: int):
| ^^^^ ANN201
10 | if x > 0:
11 | return 1
|
= help: Add return type annotation: `float`
Unsafe fix
6 6 | return 1.5
7 7 |
8 8 |
9 |-def func(x: int):
9 |+def func(x: int) -> float:
10 10 | if x > 0:
11 11 | return 1
12 12 | else:
auto_return_type.py:16:5: ANN201 [*] Missing return type annotation for public function `func`
|
16 | def func():
| ^^^^ ANN201
17 | return True
|
= help: Add return type annotation: `bool`
Unsafe fix
13 13 | return 1.5
14 14 |
15 15 |
16 |-def func():
16 |+def func() -> bool:
17 17 | return True
18 18 |
19 19 |
auto_return_type.py:20:5: ANN201 [*] Missing return type annotation for public function `func`
|
20 | def func(x: int):
| ^^^^ ANN201
21 | if x > 0:
22 | return None
|
= help: Add return type annotation: `None`
Unsafe fix
17 17 | return True
18 18 |
19 19 |
20 |-def func(x: int):
20 |+def func(x: int) -> None:
21 21 | if x > 0:
22 22 | return None
23 23 | else:
auto_return_type.py:27:5: ANN201 [*] Missing return type annotation for public function `func`
|
27 | def func(x: int):
| ^^^^ ANN201
28 | return 1 or 2.5 if x > 0 else 1.5 or "str"
|
= help: Add return type annotation: `str | float`
Unsafe fix
24 24 | return
25 25 |
26 26 |
27 |-def func(x: int):
27 |+def func(x: int) -> str | float:
28 28 | return 1 or 2.5 if x > 0 else 1.5 or "str"
29 29 |
30 30 |
auto_return_type.py:31:5: ANN201 [*] Missing return type annotation for public function `func`
|
31 | def func(x: int):
| ^^^^ ANN201
32 | return 1 + 2.5 if x > 0 else 1.5 or "str"
|
= help: Add return type annotation: `str | float`
Unsafe fix
28 28 | return 1 or 2.5 if x > 0 else 1.5 or "str"
29 29 |
30 30 |
31 |-def func(x: int):
31 |+def func(x: int) -> str | float:
32 32 | return 1 + 2.5 if x > 0 else 1.5 or "str"

View File

@@ -8,6 +8,7 @@ annotation_presence.py:5:5: ANN201 Missing return type annotation for public fun
| ^^^ ANN201
6 | pass
|
= help: Add return type annotation
annotation_presence.py:5:9: ANN001 Missing type annotation for function argument `a`
|
@@ -32,6 +33,7 @@ annotation_presence.py:10:5: ANN201 Missing return type annotation for public fu
| ^^^ ANN201
11 | pass
|
= help: Add return type annotation
annotation_presence.py:10:17: ANN001 Missing type annotation for function argument `b`
|
@@ -56,6 +58,7 @@ annotation_presence.py:20:5: ANN201 Missing return type annotation for public fu
| ^^^ ANN201
21 | pass
|
= help: Add return type annotation
annotation_presence.py:25:5: ANN201 Missing return type annotation for public function `foo`
|
@@ -64,6 +67,7 @@ annotation_presence.py:25:5: ANN201 Missing return type annotation for public fu
| ^^^ ANN201
26 | pass
|
= help: Add return type annotation
annotation_presence.py:45:12: ANN401 Dynamically typed expressions (typing.Any) are disallowed in `a`
|
@@ -250,7 +254,7 @@ annotation_presence.py:159:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
160 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
156 156 |
@@ -270,7 +274,7 @@ annotation_presence.py:165:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
166 | print(f"{self.attr=}")
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
162 162 |

View File

@@ -7,6 +7,7 @@ ignore_fully_untyped.py:24:5: ANN201 Missing return type annotation for public f
| ^^^^^^^^^^^^^^^^^^^^^^^ ANN201
25 | pass
|
= help: Add return type annotation
ignore_fully_untyped.py:24:37: ANN001 Missing type annotation for function argument `b`
|
@@ -28,6 +29,7 @@ ignore_fully_untyped.py:32:5: ANN201 Missing return type annotation for public f
| ^^^^^^^^^^^^^^^^^^^^^^^ ANN201
33 | pass
|
= help: Add return type annotation
ignore_fully_untyped.py:43:9: ANN201 Missing return type annotation for public function `error_typed_self`
|
@@ -37,5 +39,6 @@ ignore_fully_untyped.py:43:9: ANN201 Missing return type annotation for public f
| ^^^^^^^^^^^^^^^^ ANN201
44 | pass
|
= help: Add return type annotation

View File

@@ -9,7 +9,7 @@ mypy_init_return.py:5:9: ANN204 [*] Missing return type annotation for special m
| ^^^^^^^^ ANN204
6 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
2 2 |
@@ -29,7 +29,7 @@ mypy_init_return.py:11:9: ANN204 [*] Missing return type annotation for special
| ^^^^^^^^ ANN204
12 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
8 8 |
@@ -48,6 +48,7 @@ mypy_init_return.py:40:5: ANN202 Missing return type annotation for private func
| ^^^^^^^^ ANN202
41 | ...
|
= help: Add return type annotation
mypy_init_return.py:47:9: ANN204 [*] Missing return type annotation for special method `__init__`
|
@@ -57,7 +58,7 @@ mypy_init_return.py:47:9: ANN204 [*] Missing return type annotation for special
| ^^^^^^^^ ANN204
48 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
44 44 | # Error used to be ok for a moment since the mere presence

View File

@@ -8,7 +8,7 @@ simple_magic_methods.py:2:9: ANN204 [*] Missing return type annotation for speci
| ^^^^^^^ ANN204
3 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `str`
Unsafe fix
1 1 | class Foo:
@@ -26,7 +26,7 @@ simple_magic_methods.py:5:9: ANN204 [*] Missing return type annotation for speci
| ^^^^^^^^ ANN204
6 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `str`
Unsafe fix
2 2 | def __str__(self):
@@ -46,7 +46,7 @@ simple_magic_methods.py:8:9: ANN204 [*] Missing return type annotation for speci
| ^^^^^^^ ANN204
9 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
5 5 | def __repr__(self):
@@ -66,7 +66,7 @@ simple_magic_methods.py:11:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^^^^^^ ANN204
12 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
8 8 | def __len__(self):
@@ -86,7 +86,7 @@ simple_magic_methods.py:14:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
15 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
11 11 | def __length_hint__(self):
@@ -106,7 +106,7 @@ simple_magic_methods.py:17:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^ ANN204
18 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `None`
Unsafe fix
14 14 | def __init__(self):
@@ -126,7 +126,7 @@ simple_magic_methods.py:20:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^ ANN204
21 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `bool`
Unsafe fix
17 17 | def __del__(self):
@@ -146,7 +146,7 @@ simple_magic_methods.py:23:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^ ANN204
24 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `bytes`
Unsafe fix
20 20 | def __bool__(self):
@@ -166,7 +166,7 @@ simple_magic_methods.py:26:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^ ANN204
27 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `str`
Unsafe fix
23 23 | def __bytes__(self):
@@ -186,7 +186,7 @@ simple_magic_methods.py:29:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^^^ ANN204
30 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `bool`
Unsafe fix
26 26 | def __format__(self, format_spec):
@@ -206,7 +206,7 @@ simple_magic_methods.py:32:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^^^ ANN204
33 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `complex`
Unsafe fix
29 29 | def __contains__(self, item):
@@ -226,7 +226,7 @@ simple_magic_methods.py:35:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^ ANN204
36 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
32 32 | def __complex__(self):
@@ -246,7 +246,7 @@ simple_magic_methods.py:38:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^ ANN204
39 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `float`
Unsafe fix
35 35 | def __int__(self):
@@ -266,7 +266,7 @@ simple_magic_methods.py:41:9: ANN204 [*] Missing return type annotation for spec
| ^^^^^^^^^ ANN204
42 | ...
|
= help: Add `None` return type
= help: Add return type annotation: `int`
Unsafe fix
38 38 | def __float__(self):

View File

@@ -1,15 +1,26 @@
---
source: crates/ruff_linter/src/rules/flake8_annotations/mod.rs
---
suppress_none_returning.py:45:5: ANN201 Missing return type annotation for public function `foo`
suppress_none_returning.py:45:5: ANN201 [*] Missing return type annotation for public function `foo`
|
44 | # Error
45 | def foo():
| ^^^ ANN201
46 | return True
|
= help: Add return type annotation: `bool`
suppress_none_returning.py:50:5: ANN201 Missing return type annotation for public function `foo`
Unsafe fix
42 42 |
43 43 |
44 44 | # Error
45 |-def foo():
45 |+def foo() -> bool:
46 46 | return True
47 47 |
48 48 |
suppress_none_returning.py:50:5: ANN201 [*] Missing return type annotation for public function `foo`
|
49 | # Error
50 | def foo():
@@ -17,6 +28,17 @@ suppress_none_returning.py:50:5: ANN201 Missing return type annotation for publi
51 | a = 2 + 2
52 | if a == 4:
|
= help: Add return type annotation: `bool | None`
Unsafe fix
47 47 |
48 48 |
49 49 | # Error
50 |-def foo():
50 |+def foo() -> bool | None:
51 51 | a = 2 + 2
52 52 | if a == 4:
53 53 | return True
suppress_none_returning.py:59:9: ANN001 Missing type annotation for function argument `a`
|

View File

@@ -10,6 +10,7 @@ mod tests {
use test_case::test_case;
use crate::registry::Rule;
use crate::settings::types::PreviewMode;
use crate::test::test_path;
use crate::{assert_messages, settings};
@@ -25,4 +26,22 @@ mod tests {
assert_messages!(snapshot, diagnostics);
Ok(())
}
#[test_case(Rule::BooleanTypeHintPositionalArgument, Path::new("FBT.py"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",
rule_code.noqa_code(),
path.to_string_lossy()
);
let diagnostics = test_path(
Path::new("flake8_boolean_trap").join(path).as_path(),
&settings::LinterSettings {
preview: PreviewMode::Enabled,
..settings::LinterSettings::for_rule(rule_code)
},
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
}

View File

@@ -4,6 +4,7 @@ use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::collect_call_path;
use ruff_python_semantic::SemanticModel;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -26,6 +27,9 @@ use crate::rules::flake8_boolean_trap::helpers::is_allowed_func_def;
/// keyword-only argument, to force callers to be explicit when providing
/// the argument.
///
/// In [preview], this rule will also flag annotations that include boolean
/// variants, like `bool | int`.
///
/// ## Example
/// ```python
/// from math import ceil, floor
@@ -86,6 +90,8 @@ use crate::rules::flake8_boolean_trap::helpers::is_allowed_func_def;
/// ## References
/// - [Python documentation: Calls](https://docs.python.org/3/reference/expressions.html#calls)
/// - [_How to Avoid “The Boolean Trap”_ by Adam Johnson](https://adamj.eu/tech/2021/07/10/python-type-hints-how-to-avoid-the-boolean-trap/)
///
/// [preview]: https://docs.astral.sh/ruff/preview/
#[violation]
pub struct BooleanTypeHintPositionalArgument;
@@ -96,6 +102,7 @@ impl Violation for BooleanTypeHintPositionalArgument {
}
}
/// FBT001
pub(crate) fn boolean_type_hint_positional_argument(
checker: &mut Checker,
name: &str,
@@ -122,15 +129,17 @@ pub(crate) fn boolean_type_hint_positional_argument(
let Some(annotation) = parameter.annotation.as_ref() else {
continue;
};
// check for both bool (python class) and 'bool' (string annotation)
let hint = match annotation.as_ref() {
Expr::Name(name) => &name.id == "bool",
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value == "bool",
_ => false,
};
if !hint || !checker.semantic().is_builtin("bool") {
continue;
if checker.settings.preview.is_enabled() {
if !match_annotation_to_complex_bool(annotation, checker.semantic()) {
continue;
}
} else {
if !match_annotation_to_literal_bool(annotation) {
continue;
}
}
if !checker.semantic().is_builtin("bool") {
return;
}
checker.diagnostics.push(Diagnostic::new(
BooleanTypeHintPositionalArgument,
@@ -138,3 +147,52 @@ pub(crate) fn boolean_type_hint_positional_argument(
));
}
}
/// Returns `true` if the annotation is a boolean type hint (e.g., `bool`).
fn match_annotation_to_literal_bool(annotation: &Expr) -> bool {
match annotation {
// Ex) `True`
Expr::Name(name) => &name.id == "bool",
// Ex) `"True"`
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value == "bool",
_ => false,
}
}
/// Returns `true` if the annotation is a boolean type hint (e.g., `bool`), or a type hint that
/// includes boolean as a variant (e.g., `bool | int`).
fn match_annotation_to_complex_bool(annotation: &Expr, semantic: &SemanticModel) -> bool {
match annotation {
// Ex) `bool`
Expr::Name(name) => &name.id == "bool",
// Ex) `"bool"`
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value == "bool",
// Ex) `bool | int`
Expr::BinOp(ast::ExprBinOp {
left,
op: ast::Operator::BitOr,
right,
..
}) => {
match_annotation_to_complex_bool(left, semantic)
|| match_annotation_to_complex_bool(right, semantic)
}
// Ex) `typing.Union[bool, int]`
Expr::Subscript(ast::ExprSubscript { value, slice, .. }) => {
if semantic.match_typing_expr(value, "Union") {
if let Expr::Tuple(ast::ExprTuple { elts, .. }) = slice.as_ref() {
elts.iter()
.any(|elt| match_annotation_to_complex_bool(elt, semantic))
} else {
// Union with a single type is an invalid type annotation
false
}
} else if semantic.match_typing_expr(value, "Optional") {
match_annotation_to_complex_bool(slice, semantic)
} else {
false
}
}
_ => false,
}
}
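A hypothetical sketch of the preview behavior described above: plain `bool` (and the string form `"bool"`) is flagged as before, while hints that merely include a boolean variant are flagged only when preview is enabled. The `__future__` import just keeps the snippet runnable on older interpreters; the rule inspects the AST either way.

```python
from __future__ import annotations

from typing import Optional, Union


def set_mode(strict: bool):
    # FBT001 under both stable and preview behavior
    ...


def set_flag(value: bool | str):
    # flagged only under preview: `bool` is one variant of a PEP 604 union
    ...


def set_option(value: Union[int, Optional[bool]]):
    # flagged only under preview: the boolean hides inside `Union`/`Optional`
    ...
```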

View File

@@ -0,0 +1,106 @@
---
source: crates/ruff_linter/src/rules/flake8_boolean_trap/mod.rs
---
FBT.py:4:5: FBT001 Boolean-typed positional argument in function definition
|
2 | posonly_nohint,
3 | posonly_nonboolhint: int,
4 | posonly_boolhint: bool,
| ^^^^^^^^^^^^^^^^ FBT001
5 | posonly_boolstrhint: "bool",
6 | /,
|
FBT.py:5:5: FBT001 Boolean-typed positional argument in function definition
|
3 | posonly_nonboolhint: int,
4 | posonly_boolhint: bool,
5 | posonly_boolstrhint: "bool",
| ^^^^^^^^^^^^^^^^^^^ FBT001
6 | /,
7 | offset,
|
FBT.py:10:5: FBT001 Boolean-typed positional argument in function definition
|
8 | posorkw_nonvalued_nohint,
9 | posorkw_nonvalued_nonboolhint: int,
10 | posorkw_nonvalued_boolhint: bool,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
11 | posorkw_nonvalued_boolstrhint: "bool",
12 | posorkw_boolvalued_nohint=True,
|
FBT.py:11:5: FBT001 Boolean-typed positional argument in function definition
|
9 | posorkw_nonvalued_nonboolhint: int,
10 | posorkw_nonvalued_boolhint: bool,
11 | posorkw_nonvalued_boolstrhint: "bool",
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
12 | posorkw_boolvalued_nohint=True,
13 | posorkw_boolvalued_nonboolhint: int = True,
|
FBT.py:14:5: FBT001 Boolean-typed positional argument in function definition
|
12 | posorkw_boolvalued_nohint=True,
13 | posorkw_boolvalued_nonboolhint: int = True,
14 | posorkw_boolvalued_boolhint: bool = True,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
15 | posorkw_boolvalued_boolstrhint: "bool" = True,
16 | posorkw_nonboolvalued_nohint=1,
|
FBT.py:15:5: FBT001 Boolean-typed positional argument in function definition
|
13 | posorkw_boolvalued_nonboolhint: int = True,
14 | posorkw_boolvalued_boolhint: bool = True,
15 | posorkw_boolvalued_boolstrhint: "bool" = True,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
16 | posorkw_nonboolvalued_nohint=1,
17 | posorkw_nonboolvalued_nonboolhint: int = 2,
|
FBT.py:18:5: FBT001 Boolean-typed positional argument in function definition
|
16 | posorkw_nonboolvalued_nohint=1,
17 | posorkw_nonboolvalued_nonboolhint: int = 2,
18 | posorkw_nonboolvalued_boolhint: bool = 3,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
19 | posorkw_nonboolvalued_boolstrhint: "bool" = 4,
20 | *,
|
FBT.py:19:5: FBT001 Boolean-typed positional argument in function definition
|
17 | posorkw_nonboolvalued_nonboolhint: int = 2,
18 | posorkw_nonboolvalued_boolhint: bool = 3,
19 | posorkw_nonboolvalued_boolstrhint: "bool" = 4,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FBT001
20 | *,
21 | kwonly_nonvalued_nohint,
|
FBT.py:89:19: FBT001 Boolean-typed positional argument in function definition
|
88 | # FBT001: Boolean positional arg in function definition
89 | def foo(self, value: bool) -> None:
| ^^^^^ FBT001
90 | pass
|
FBT.py:99:10: FBT001 Boolean-typed positional argument in function definition
|
99 | def func(x: Union[list, Optional[int | str | float | bool]]):
| ^ FBT001
100 | pass
|
FBT.py:103:10: FBT001 Boolean-typed positional argument in function definition
|
103 | def func(x: bool | str):
| ^ FBT001
104 | pass
|

View File

@@ -1,5 +1,14 @@
use ruff_python_stdlib::builtins::is_builtin;
use ruff_python_ast::PySourceType;
use ruff_python_stdlib::builtins::{is_ipython_builtin, is_python_builtin};
pub(super) fn shadows_builtin(name: &str, ignorelist: &[String]) -> bool {
is_builtin(name) && ignorelist.iter().all(|ignore| ignore != name)
pub(super) fn shadows_builtin(
name: &str,
ignorelist: &[String],
source_type: PySourceType,
) -> bool {
if is_python_builtin(name) || source_type.is_ipynb() && is_ipython_builtin(name) {
ignorelist.iter().all(|ignore| ignore != name)
} else {
false
}
}
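A sketch of what the extra `source_type` parameter enables, assuming `display` and `get_ipython` are among the IPython builtins the stdlib tables recognize: in `.ipynb` sources these names are now treated like builtins, while plain `.py` files are unaffected.

```python
def render(display):
    # A002 in a notebook cell: the parameter shadows an IPython builtin;
    # no diagnostic in an ordinary Python file
    return display


get_ipython = None  # A001 in a notebook cell, for the same reason
```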

View File

@@ -67,6 +67,7 @@ pub(crate) fn builtin_argument_shadowing(checker: &mut Checker, parameter: &Para
if shadows_builtin(
parameter.name.as_str(),
&checker.settings.flake8_builtins.builtins_ignorelist,
checker.source_type,
) {
checker.diagnostics.push(Diagnostic::new(
BuiltinArgumentShadowing {

View File

@@ -74,7 +74,11 @@ pub(crate) fn builtin_attribute_shadowing(
name: &str,
range: TextRange,
) {
if shadows_builtin(name, &checker.settings.flake8_builtins.builtins_ignorelist) {
if shadows_builtin(
name,
&checker.settings.flake8_builtins.builtins_ignorelist,
checker.source_type,
) {
// Ignore shadowing within `TypedDict` definitions, since these are only accessible through
// subscripting and not through attribute access.
if class_def
@@ -102,7 +106,11 @@ pub(crate) fn builtin_method_shadowing(
decorator_list: &[Decorator],
range: TextRange,
) {
if shadows_builtin(name, &checker.settings.flake8_builtins.builtins_ignorelist) {
if shadows_builtin(
name,
&checker.settings.flake8_builtins.builtins_ignorelist,
checker.source_type,
) {
// Ignore some standard-library methods. Ideally, we'd ignore all overridden methods, since
// those should be flagged on the superclass, but that's more difficult.
if is_standard_library_override(name, class_def, checker.semantic()) {

View File

@@ -60,7 +60,11 @@ impl Violation for BuiltinVariableShadowing {
/// A001
pub(crate) fn builtin_variable_shadowing(checker: &mut Checker, name: &str, range: TextRange) {
if shadows_builtin(name, &checker.settings.flake8_builtins.builtins_ignorelist) {
if shadows_builtin(
name,
&checker.settings.flake8_builtins.builtins_ignorelist,
checker.source_type,
) {
checker.diagnostics.push(Diagnostic::new(
BuiltinVariableShadowing {
name: name.to_string(),

View File

@@ -3,6 +3,7 @@ use itertools::Itertools;
use ruff_diagnostics::{AlwaysFixableViolation, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_parser::lexer::{LexResult, Spanned};
use ruff_python_parser::Tok;
use ruff_source_file::Locator;
@@ -29,22 +30,33 @@ enum TokenType {
/// Simplified token specialized for the task.
#[derive(Copy, Clone)]
struct Token<'tok> {
type_: TokenType,
// Underlying token.
spanned: Option<&'tok Spanned>,
struct Token {
r#type: TokenType,
range: TextRange,
}
impl<'tok> Token<'tok> {
const fn irrelevant() -> Token<'static> {
Token {
type_: TokenType::Irrelevant,
spanned: None,
}
impl Ranged for Token {
fn range(&self) -> TextRange {
self.range
}
}
impl Token {
fn new(r#type: TokenType, range: TextRange) -> Self {
Self { r#type, range }
}
const fn from_spanned(spanned: &'tok Spanned) -> Token<'tok> {
let type_ = match &spanned.0 {
fn irrelevant() -> Token {
Token {
r#type: TokenType::Irrelevant,
range: TextRange::default(),
}
}
}
impl From<&Spanned> for Token {
fn from(spanned: &Spanned) -> Self {
let r#type = match &spanned.0 {
Tok::NonLogicalNewline => TokenType::NonLogicalNewline,
Tok::Newline => TokenType::Newline,
Tok::For => TokenType::For,
@@ -63,8 +75,8 @@ impl<'tok> Token<'tok> {
_ => TokenType::Irrelevant,
};
Self {
spanned: Some(spanned),
type_,
range: spanned.1,
r#type,
}
}
}
@@ -92,14 +104,14 @@ enum ContextType {
/// Comma context - described a comma-delimited "situation".
#[derive(Copy, Clone)]
struct Context {
type_: ContextType,
r#type: ContextType,
num_commas: u32,
}
impl Context {
const fn new(type_: ContextType) -> Self {
const fn new(r#type: ContextType) -> Self {
Self {
type_,
r#type,
num_commas: 0,
}
}
@@ -222,21 +234,49 @@ pub(crate) fn trailing_commas(
diagnostics: &mut Vec<Diagnostic>,
tokens: &[LexResult],
locator: &Locator,
indexer: &Indexer,
) {
let mut fstrings = 0u32;
let tokens = tokens
.iter()
.flatten()
// Completely ignore comments -- they just interfere with the logic.
.filter(|&r| !matches!(r, (Tok::Comment(_), _)))
.map(Token::from_spanned);
.filter_map(|spanned @ (tok, tok_range)| match tok {
// Completely ignore comments -- they just interfere with the logic.
Tok::Comment(_) => None,
// F-strings are handled as `String` token type with the complete range
// of the outermost f-string. This means that the expression inside the
// f-string is not checked for trailing commas.
Tok::FStringStart => {
fstrings = fstrings.saturating_add(1);
None
}
Tok::FStringEnd => {
fstrings = fstrings.saturating_sub(1);
if fstrings == 0 {
indexer
.fstring_ranges()
.outermost(tok_range.start())
.map(|range| Token::new(TokenType::String, range))
} else {
None
}
}
_ => {
if fstrings == 0 {
Some(Token::from(spanned))
} else {
None
}
}
});
let tokens = [Token::irrelevant(), Token::irrelevant()]
.into_iter()
.chain(tokens);
// Collapse consecutive newlines to the first one -- trailing commas are
// added before the first newline.
let tokens = tokens.coalesce(|previous, current| {
if previous.type_ == TokenType::NonLogicalNewline
&& current.type_ == TokenType::NonLogicalNewline
if previous.r#type == TokenType::NonLogicalNewline
&& current.r#type == TokenType::NonLogicalNewline
{
Ok(previous)
} else {
@@ -249,8 +289,8 @@ pub(crate) fn trailing_commas(
for (prev_prev, prev, token) in tokens.tuple_windows() {
// Update the comma context stack.
match token.type_ {
TokenType::OpeningBracket => match (prev.type_, prev_prev.type_) {
match token.r#type {
TokenType::OpeningBracket => match (prev.r#type, prev_prev.r#type) {
(TokenType::Named, TokenType::Def) => {
stack.push(Context::new(ContextType::FunctionParameters));
}
@@ -261,7 +301,7 @@ pub(crate) fn trailing_commas(
stack.push(Context::new(ContextType::Tuple));
}
},
TokenType::OpeningSquareBracket => match prev.type_ {
TokenType::OpeningSquareBracket => match prev.r#type {
TokenType::ClosingBracket | TokenType::Named | TokenType::String => {
stack.push(Context::new(ContextType::Subscript));
}
@@ -288,8 +328,8 @@ pub(crate) fn trailing_commas(
let context = &stack[stack.len() - 1];
// Is it allowed to have a trailing comma before this token?
let comma_allowed = token.type_ == TokenType::ClosingBracket
&& match context.type_ {
let comma_allowed = token.r#type == TokenType::ClosingBracket
&& match context.r#type {
ContextType::No => false,
ContextType::FunctionParameters => true,
ContextType::CallArguments => true,
@@ -304,22 +344,21 @@ pub(crate) fn trailing_commas(
};
// Is prev a prohibited trailing comma?
let comma_prohibited = prev.type_ == TokenType::Comma && {
let comma_prohibited = prev.r#type == TokenType::Comma && {
// Is `(1,)` or `x[1,]`?
let is_singleton_tuplish =
matches!(context.type_, ContextType::Subscript | ContextType::Tuple)
matches!(context.r#type, ContextType::Subscript | ContextType::Tuple)
&& context.num_commas <= 1;
// There was no non-logical newline, so prohibit (except in `(1,)` or `x[1,]`).
if comma_allowed && !is_singleton_tuplish {
true
// Lambdas not handled by comma_allowed so handle it specially.
} else {
context.type_ == ContextType::LambdaParameters && token.type_ == TokenType::Colon
context.r#type == ContextType::LambdaParameters && token.r#type == TokenType::Colon
}
};
if comma_prohibited {
let comma = prev.spanned.unwrap();
let mut diagnostic = Diagnostic::new(ProhibitedTrailingComma, comma.1);
let mut diagnostic = Diagnostic::new(ProhibitedTrailingComma, prev.range());
diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(diagnostic.range())));
diagnostics.push(diagnostic);
}
@@ -327,10 +366,9 @@ pub(crate) fn trailing_commas(
// Is prev a prohibited trailing comma on a bare tuple?
// Approximation: any comma followed by a statement-ending newline.
let bare_comma_prohibited =
prev.type_ == TokenType::Comma && token.type_ == TokenType::Newline;
prev.r#type == TokenType::Comma && token.r#type == TokenType::Newline;
if bare_comma_prohibited {
let comma = prev.spanned.unwrap();
diagnostics.push(Diagnostic::new(TrailingCommaOnBareTuple, comma.1));
diagnostics.push(Diagnostic::new(TrailingCommaOnBareTuple, prev.range()));
}
// Comma is required if:
@@ -339,40 +377,37 @@ pub(crate) fn trailing_commas(
// - Not already present,
// - Not on an empty (), {}, [].
let comma_required = comma_allowed
&& prev.type_ == TokenType::NonLogicalNewline
&& prev.r#type == TokenType::NonLogicalNewline
&& !matches!(
prev_prev.type_,
prev_prev.r#type,
TokenType::Comma
| TokenType::OpeningBracket
| TokenType::OpeningSquareBracket
| TokenType::OpeningCurlyBracket
);
if comma_required {
let missing_comma = prev_prev.spanned.unwrap();
let mut diagnostic = Diagnostic::new(
MissingTrailingComma,
TextRange::empty(missing_comma.1.end()),
);
let mut diagnostic =
Diagnostic::new(MissingTrailingComma, TextRange::empty(prev_prev.end()));
// Create a replacement that includes the final bracket (or other token),
// rather than just inserting a comma at the end. This prevents the UP034 fix
// removing any brackets in the same linter pass - doing both at the same time could
// lead to a syntax error.
let contents = locator.slice(missing_comma.1);
let contents = locator.slice(prev_prev.range());
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
format!("{contents},"),
missing_comma.1,
prev_prev.range(),
)));
diagnostics.push(diagnostic);
}
// Pop the current context if the current token ended it.
// The top context is never popped (if unbalanced closing brackets).
let pop_context = match context.type_ {
let pop_context = match context.r#type {
// Lambda terminated by `:`.
ContextType::LambdaParameters => token.type_ == TokenType::Colon,
ContextType::LambdaParameters => token.r#type == TokenType::Colon,
// All others terminated by a closing bracket.
// flake8-commas doesn't verify that it matches the opening...
_ => token.type_ == TokenType::ClosingBracket,
_ => token.r#type == TokenType::ClosingBracket,
};
if pop_context && stack.len() > 1 {
stack.pop();
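With the whole f-string folded into a single `String` token, the comma rules apply around an f-string but, as the comment above notes, ignore everything inside its replacement fields. A small illustration (the first call mirrors the fixture in the snapshot below):

```python
kwargs = {}
trailing_comma = "x"

kwargs.pop("remove", f"this {trailing_comma}",)  # COM819 on the outer call's trailing comma

label = f"{min(1, 2,)}"  # the trailing comma inside the replacement field is not checked
```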

View File

@@ -939,4 +939,43 @@ COM81.py:632:42: COM812 [*] Trailing comma missing
634 634 |
635 635 | foo = namedtuple(
COM81.py:644:46: COM819 [*] Trailing comma prohibited
|
643 | # F-strings
644 | kwargs.pop("remove", f"this {trailing_comma}",)
| ^ COM819
645 |
646 | raise Exception(
|
= help: Remove trailing comma
Safe fix
641 641 | )
642 642 |
643 643 | # F-strings
644 |-kwargs.pop("remove", f"this {trailing_comma}",)
644 |+kwargs.pop("remove", f"this {trailing_comma}")
645 645 |
646 646 | raise Exception(
647 647 | "first", extra=f"Add trailing comma here ->"
COM81.py:647:49: COM812 [*] Trailing comma missing
|
646 | raise Exception(
647 | "first", extra=f"Add trailing comma here ->"
| COM812
648 | )
|
= help: Add trailing comma
Safe fix
644 644 | kwargs.pop("remove", f"this {trailing_comma}",)
645 645 |
646 646 | raise Exception(
647 |- "first", extra=f"Add trailing comma here ->"
647 |+ "first", extra=f"Add trailing comma here ->",
648 648 | )
649 649 |
650 650 | assert False, f"<- This is not a trailing comma"

View File

@@ -12,11 +12,11 @@ mod tests {
#[test]
fn notice() {
let diagnostics = test_snippet(
r#"
r"
# Copyright 2023
import os
"#
"
.trim(),
&settings::LinterSettings::for_rules(vec![Rule::MissingCopyrightNotice]),
);
@@ -26,11 +26,11 @@ import os
#[test]
fn notice_with_c() {
let diagnostics = test_snippet(
r#"
r"
# Copyright (C) 2023
import os
"#
"
.trim(),
&settings::LinterSettings::for_rules(vec![Rule::MissingCopyrightNotice]),
);
@@ -40,11 +40,11 @@ import os
#[test]
fn notice_with_caps() {
let diagnostics = test_snippet(
r#"
r"
# COPYRIGHT (C) 2023
import os
"#
"
.trim(),
&settings::LinterSettings::for_rules(vec![Rule::MissingCopyrightNotice]),
);
@@ -54,11 +54,11 @@ import os
#[test]
fn notice_with_range() {
let diagnostics = test_snippet(
r#"
r"
# Copyright (C) 2021-2023
import os
"#
"
.trim(),
&settings::LinterSettings::for_rules(vec![Rule::MissingCopyrightNotice]),
);
@@ -68,11 +68,11 @@ import os
#[test]
fn valid_author() {
let diagnostics = test_snippet(
r#"
r"
# Copyright (C) 2023 Ruff
import os
"#
"
.trim(),
&settings::LinterSettings {
flake8_copyright: super::settings::Settings {
@@ -88,11 +88,11 @@ import os
#[test]
fn invalid_author() {
let diagnostics = test_snippet(
r#"
r"
# Copyright (C) 2023 Some Author
import os
"#
"
.trim(),
&settings::LinterSettings {
flake8_copyright: super::settings::Settings {
@@ -108,9 +108,9 @@ import os
#[test]
fn small_file() {
let diagnostics = test_snippet(
r#"
r"
import os
"#
"
.trim(),
&settings::LinterSettings {
flake8_copyright: super::settings::Settings {
@@ -126,7 +126,7 @@ import os
#[test]
fn late_notice() {
let diagnostics = test_snippet(
r#"
r"
# Content Content Content Content Content Content Content Content Content Content
# Content Content Content Content Content Content Content Content Content Content
# Content Content Content Content Content Content Content Content Content Content
@@ -149,7 +149,7 @@ import os
# Content Content Content Content Content Content Content Content Content Content
# Copyright 2023
"#
"
.trim(),
&settings::LinterSettings::for_rules(vec![Rule::MissingCopyrightNotice]),
);
@@ -159,8 +159,8 @@ import os
#[test]
fn char_boundary() {
let diagnostics = test_snippet(
r#"কককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককক
"#
r"কককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককককক
"
.trim(),
&settings::LinterSettings::for_rules(vec![Rule::MissingCopyrightNotice]),
);

View File

@@ -1,9 +1,7 @@
use std::path::Path;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_diagnostics::Diagnostic;
use ruff_python_index::Indexer;
use ruff_source_file::Locator;
pub(crate) use shebang_leading_whitespace::*;
pub(crate) use shebang_missing_executable_file::*;
@@ -20,32 +18,31 @@ mod shebang_not_executable;
mod shebang_not_first_line;
pub(crate) fn from_tokens(
tokens: &[LexResult],
diagnostics: &mut Vec<Diagnostic>,
path: &Path,
locator: &Locator,
diagnostics: &mut Vec<Diagnostic>,
indexer: &Indexer,
) {
let mut has_any_shebang = false;
for (tok, range) in tokens.iter().flatten() {
if let Tok::Comment(comment) = tok {
if let Some(shebang) = ShebangDirective::try_extract(comment) {
has_any_shebang = true;
for range in indexer.comment_ranges() {
let comment = locator.slice(*range);
if let Some(shebang) = ShebangDirective::try_extract(comment) {
has_any_shebang = true;
if let Some(diagnostic) = shebang_missing_python(*range, &shebang) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = shebang_missing_python(*range, &shebang) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = shebang_not_executable(path, *range) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = shebang_not_executable(path, *range) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = shebang_leading_whitespace(*range, locator) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = shebang_leading_whitespace(*range, locator) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = shebang_not_first_line(*range, locator) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = shebang_not_first_line(*range, locator) {
diagnostics.push(diagnostic);
}
}
}

View File

@@ -1,11 +1,13 @@
use ruff_python_ast::Expr;
use std::fmt;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_text_size::Ranged;
use ruff_python_ast::imports::{AnyImport, ImportFrom};
use ruff_python_ast::Expr;
use ruff_text_size::{Ranged, TextSize};
use crate::checkers::ast::Checker;
use crate::importer::Importer;
/// ## What it does
/// Checks for uses of PEP 585- and PEP 604-style type annotations in Python
@@ -42,6 +44,10 @@ use crate::checkers::ast::Checker;
/// ...
/// ```
///
/// ## Fix safety
/// This rule's fix is marked as unsafe, as adding `from __future__ import annotations`
/// may change the semantics of the program.
///
/// ## Options
/// - `target-version`
#[violation]
@@ -66,18 +72,28 @@ impl fmt::Display for Reason {
}
}
impl Violation for FutureRequiredTypeAnnotation {
impl AlwaysFixableViolation for FutureRequiredTypeAnnotation {
#[derive_message_formats]
fn message(&self) -> String {
let FutureRequiredTypeAnnotation { reason } = self;
format!("Missing `from __future__ import annotations`, but uses {reason}")
}
fn fix_title(&self) -> String {
format!("Add `from __future__ import annotations`")
}
}
/// FA102
pub(crate) fn future_required_type_annotation(checker: &mut Checker, expr: &Expr, reason: Reason) {
checker.diagnostics.push(Diagnostic::new(
FutureRequiredTypeAnnotation { reason },
expr.range(),
));
let mut diagnostic = Diagnostic::new(FutureRequiredTypeAnnotation { reason }, expr.range());
if let Some(python_ast) = checker.semantic().definitions.python_ast() {
let required_import =
AnyImport::ImportFrom(ImportFrom::member("__future__", "annotations"));
diagnostic.set_fix(Fix::unsafe_edit(
Importer::new(python_ast, checker.locator(), checker.stylist())
.add_import(&required_import, TextSize::default()),
));
}
checker.diagnostics.push(diagnostic);
}
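A hypothetical before/after for the new FA102 fix, assuming a configured `target-version` below the syntax's minimum (the snippet itself runs on Python 3.10+): the fix prepends the future import, and is marked unsafe because that import changes how annotations are evaluated for the whole module.

```python
# Before: FA102 on the PEP 585/604 annotations
def first(items: list[int]) -> int | None:
    return items[0] if items else None


# After the fix, the module begins with:
#     from __future__ import annotations
```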

Some files were not shown because too many files have changed in this diff.