Compare commits

...

65 Commits

Author SHA1 Message Date
Charlie Marsh
931d41bff1 Revert "Bump version to 0.0.222"
This reverts commit 852aab5758.
2023-01-13 23:56:29 -05:00
Charlie Marsh
852aab5758 Bump version to 0.0.222 2023-01-13 23:50:08 -05:00
Charlie Marsh
27fe4873f2 Fix placement of update feature flag 2023-01-13 23:46:32 -05:00
Charlie Marsh
ee6c81d02a Bump version to 0.0.221 2023-01-13 23:33:15 -05:00
Charlie Marsh
59542344e2 Avoid unnecessary allocations for module names (#1863) 2023-01-13 23:26:34 -05:00
Martin Fischer
7b1ce72f86 Actually fix wasm-pack build command (#1862)
I initially attempted to run `wasm-pack build -p ruff` which gave the
error message:

Error: crate directory is missing a `Cargo.toml` file; is `-p` the wrong
directory?

I interpreted that as wasm-pack looking for the "ruff" directory because
I had specified `-p ruff`; however, the actual wasm-pack build usage is:

    wasm-pack build [FLAGS] [OPTIONS] <path> <cargo-build-options>

And I was missing the `<path>` argument. So this wasn't a bug in
wasm-pack at all, just a confusing error message. And the symlink
hack I introduced in the previous commit didn't actually work ... I had
only accidentally omitted the `-p` when testing (which ended up with `ruff`
being the `<path>` argument) ... CLIs are fun.
2023-01-13 23:20:20 -05:00
Martin Fischer
156e09536e Add workaround for wasm-pack bug to fix the playground CI (#1861)
Fixes #1860.
2023-01-13 22:46:51 -05:00
Charlie Marsh
22341c4ae4 Move repology down 2023-01-13 22:29:26 -05:00
Colin Delahunty
e4993bd7e2 Added ALE (#1857)
Fixes the sub-issue I brought up in #1829.
2023-01-13 21:39:51 -05:00
Martin Fischer
82aff5f9ec Split off ruff_cli crate from ruff library
This lets you test the ruff linters or use the ruff library
without having to compile the ~100 additional dependencies
that are needed by the CLI.

Because we set the following in the [workspace] section of Cargo.toml:

   default-members = [".", "ruff_cli"]

`cargo run` still runs the CLI and `cargo test` still tests
the code in src/ as well as the code in the new ruff_cli crate.
(But you can now also run `cargo test -p ruff` to only test the linters.)
2023-01-13 21:37:54 -05:00
Charlie Marsh
403a004e03 Refactor import-tracking to leverage existing AST bindings (#1856)
This PR refactors our import-tracking logic to leverage our existing
logic for tracking bindings. It's a significant simplification and a
significant improvement (as we can now track reassignments), and it
closes out a bunch of subtle bugs.

Though the AST tracks all bindings (e.g., when parsing `import os as
foo`, we bind the name `foo` to a `BindingKind::Importation` that points
to the `os` module), when I went to implement import tracking (e.g., to
ensure that if the user references `List`, it's actually `typing.List`),
I added a parallel system specifically for this use-case.

That was a mistake, for a few reasons:

1. It didn't track reassignments, so if you had `from typing import
List`, but `List` was later overridden, we'd still consider any
reference to `List` to be `typing.List`.
2. It required a bunch of extra logic, including complex logic to try and
optimize the lookups, since it's such a hot codepath.
3. There were a few bugs in the implementation that were just hard to
correct under the existing abstractions (e.g., if you did `from typing
import Optional as Foo`, then we'd treat any reference to `Foo` _or_
`Optional` as `typing.Optional`, even though, in that case, `Optional`
was really unbound).

The new implementation goes through our existing binding tracking: when
we get a reference, we find the appropriate binding given the current
scope stack, and normalize it back to its original target.

Closes #1690.
Closes #1790.
2023-01-13 20:39:54 -05:00
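For illustration, a hypothetical snippet (not from the PR) covering the binding cases the message above describes; the names are placeholders:

```py
from typing import List  # binds `List` to `typing.List`

x: List[int] = []  # resolves to `typing.List` via the binding

List = list  # reassignment: `List` no longer refers to `typing.List`
y: List[int] = []  # should not be treated as `typing.List` anymore

from typing import Optional as Foo  # binds `Foo`; `Optional` itself stays unbound

z: Foo[int] = None  # refers to `typing.Optional`
# A bare reference to `Optional` here is unbound and should not resolve
# to `typing.Optional`.
```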
Charlie Marsh
0b92849996 Improve spacing preservation for C405 fixes (#1855)
We now preserve the spacing of the more common form:

```py
set((
    1,
))
```

Rather than the less common form:

```py
set(
    (1,)
)
```
2023-01-13 13:11:08 -05:00
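A rough before/after sketch of the C405 fix this affects (assuming the fix rewrites the `set(...)` call to a set literal while keeping the layout above):

```py
# Before the fix (C405: unnecessary literal passed to set()):
s = set((
    1,
))

# After the fix, with the original spacing preserved:
s = {
    1,
}
```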
Charlie Marsh
12440ede9c Remove non-magic trailing comma from tuple (#1854)
Closes #1821.
2023-01-13 12:56:42 -05:00
max0x53
fc3f722df5 Implement PLR0133 (ComparisonOfConstants) (#1841)
This PR adds [Pylint
`R0133`](https://pylint.pycqa.org/en/latest/user_guide/messages/refactor/comparison-of-constants.html)

Feel free to suggest changes and additions; I have tried to maintain
parity with the Pylint implementation
[`comparison_checker.py`](https://github.com/PyCQA/pylint/blob/main/pylint/checkers/base/comparison_checker.py#L247)

See #970
2023-01-13 12:14:35 -05:00
Maksudul Haque
84ef7a0171 [isort] Add classes Config Option (#1849)
ref https://github.com/charliermarsh/ruff/issues/1819
2023-01-13 12:13:01 -05:00
Nicola Soranzo
66b1d09362 Clarify that some flake8-bugbear opinionated rules are already implemented (#1847)
E.g. B904 and B905.
2023-01-13 11:49:05 -05:00
Maksudul Haque
3ae01db226 [flake8-bugbear] Fix False Positives for B024 & B027 (#1851)
closes https://github.com/charliermarsh/ruff/issues/1848
2023-01-13 11:46:17 -05:00
Charlie Marsh
048e5774e8 Use absolute paths for --stdin-filename matching (#1843)
Non-basename glob matches (e.g., for `--per-file-ignores`) assume that
the path has been converted to an absolute path. (We do this for
filenames as part of the directory traversal.) For filenames passed via
stdin, though, we're missing this conversion. So `--per-file-ignores`
that rely on the _basename_ worked as expected, but directory paths did
not.

Closes #1840.
2023-01-12 21:01:05 -05:00
max0x53
b47e8e6770 Implement PLR2004 (MagicValueComparison) (#1828)
This PR adds [Pylint
`R2004`](https://pylint.pycqa.org/en/latest/user_guide/messages/refactor/magic-value-comparison.html#magic-value-comparison-r2004)

Feel free to suggest changes and additions; I have tried to maintain
parity with the Pylint implementation
[`magic_value.py`](https://github.com/PyCQA/pylint/blob/main/pylint/extensions/magic_value.py)

See #970
2023-01-12 19:44:18 -05:00
Jan Katins
ef17c82998 Document the way extend-ignore/select are applied (#1839)
Closes: https://github.com/charliermarsh/ruff/issues/1838
2023-01-12 19:44:03 -05:00
Charlie Marsh
9aeb5df5fe Bump version to 0.0.220 2023-01-12 17:57:04 -05:00
Charlie Marsh
7ffba7b552 Use absolute paths for GitHub and Gitlab annotations (#1837)
Note that the _annotation path_ is absolute, while the path encoded in
the message remains relative.

![Screen Shot 2023-01-12 at 5 54 11 PM](https://user-images.githubusercontent.com/1309177/212198531-63f15445-0f6a-471c-a64c-18ad2b6df0c7.png)

Closes #1835.
2023-01-12 17:54:34 -05:00
Charlie Marsh
06473bb1b5 Support for-else loops in SIM110 and SIM111 (#1834)
This PR adds support for `SIM110` and `SIM111` simplifications of the
form:

```py
def f():
    # SIM110
    for x in iterable:
        if check(x):
            return True
    else:
        return False
```
2023-01-12 17:04:58 -05:00
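As a sketch of what the autofix produces for the loop above (stub definitions added so the snippet runs on its own):

```py
iterable = [0, 1, 2, 3]


def check(x):
    return x > 2


def f():
    # SIM110 fix: the for/else loop collapses into a single `any(...)` call.
    return any(check(x) for x in iterable)


def g():
    # SIM111 analogue: a loop that returns False on a match becomes `all(...)`.
    return all(not check(x) for x in iterable)


assert f() is True
assert g() is False
```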
Ash Berlin-Taylor
bf5c048502 Airflow is now using ruff (#1833)
😀
2023-01-12 16:50:01 -05:00
Charlie Marsh
eaed08ae79 Skip SIM110/SIM111 fixes that create long lines 2023-01-12 16:21:54 -05:00
Charlie Marsh
e0fdc4c5e8 Avoid SIM110/SIM111 errors with else statements (#1832)
Closes #1831.
2023-01-12 16:17:27 -05:00
Charlie Marsh
590bec57f4 Fix typo in relative-imports-order option name 2023-01-12 15:57:58 -05:00
Charlie Marsh
3110d342c7 Implement isort's reverse_relative setting (#1826)
This PR implements `reverse-relative` from isort, but renames it to
`relative-imports-order`, with the accepted values `closest-to-furthest`
and `furthest-to-closest` (the latter being the default).

Closes #1813.
2023-01-12 15:48:40 -05:00
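A small sketch of the two orderings (placeholder modules inside a package, shown for layout only):

```py
# furthest-to-closest (the default, matching isort's `reverse-relative = false`):
from ... import grandparent
from .. import parent
from . import sibling

# closest-to-furthest (equivalent to isort's `reverse-relative = true`):
from . import sibling
from .. import parent
from ... import grandparent
```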
nefrob
39aae28eb4 📝 Update readme example for adding isort required imports (#1824)
Fixes the example to use the Ruff option name instead of the isort name.
2023-01-12 13:18:06 -05:00
Charlie Marsh
dcccfe2591 Avoid parsing pyproject.toml files when settings are fixed (#1827)
Apart from being wasteful, this can also cause problems (see the linked
issue).

Resolves #1812.
2023-01-12 13:15:44 -05:00
Martin Fischer
38f5e8f423 Decouple linter module from cache module 2023-01-12 13:09:59 -05:00
Martin Fischer
74f14182ea Decouple resolver module from cli::Overrides 2023-01-12 13:09:59 -05:00
Charlie Marsh
bbc1e7804e Don't trigger SIM401 for complex default values (#1825)
Resolves #1809.
2023-01-12 12:51:23 -05:00
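My reading of why (not stated in the commit): rewriting to `dict.get` evaluates the default eagerly, which could change behavior when the default is a call or otherwise complex. A sketch with placeholder names:

```py
a_dict = {"k": 1}
key = "k"


def foo():
    print("side effect")  # in the original code this only runs when `key` is missing
    return "computed default"


# Left alone by SIM401: rewriting to `a_dict.get(key, foo())` would call
# `foo()` even when `key` is present.
if key in a_dict:
    var = a_dict[key]
else:
    var = foo()
```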
messense
c6320b29e4 Implement autofix for flake8-quotes (#1810)
Resolves #1789
2023-01-12 12:42:28 -05:00
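A small before/after sketch of the quote fixes (assuming the default preference for double quotes):

```py
# Before: Q000, single quotes where double quotes are preferred.
name = 'world'
# After the autofix:
name = "world"

# Before: Q003, outer quotes force escaping of the inner quotes.
greeting = "He said \"hi\""
# After the autofix, the outer quotes are switched to avoid the escapes:
greeting = 'He said "hi"'
```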
Maksudul Haque
1a90408e8c [flake8-bandit] Add Rule for S701 (jinja2 autoescape false) (#1815)
ref: https://github.com/charliermarsh/ruff/issues/1646

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-12 11:59:20 -05:00
Jeroen Van Goey
07134c50c8 Add usage of ruff in pandas to README (#1811)
pandas now uses ruff for linting; see
https://github.com/pandas-dev/pandas/pull/50160
2023-01-12 10:55:21 -05:00
Charlie Marsh
b36d4a15b0 Modify visibility and shuffle around some modules (#1807) 2023-01-11 23:57:05 -05:00
Charlie Marsh
d8162ce79d Bump version to 0.0.219 2023-01-11 23:46:01 -05:00
Charlie Marsh
e11ef54bda Improve globset documentation and help message (#1808)
Closes #1545.
2023-01-11 23:41:56 -05:00
messense
9a07b0623e Move top level ruff into python folder (#1806)
https://maturin.rs/project_layout.html#mixed-rustpython-project

Resolves #1805
2023-01-11 23:12:55 -05:00
Charlie Marsh
f450e2e79d Implement doc line length enforcement (#1804)
This PR implements `W505` (`DocLineTooLong`), which is similar to `E501`
(`LineTooLong`) but confined to doc lines.

I based the "doc line" definition on pycodestyle, which defines a doc
line as a standalone comment or string statement. Our definition is a
bit more liberal, since we consider any string statement a doc line
(even if it's part of a multi-line statement) -- but that seems fine to
me.

Note that, unusually, this rule requires custom extraction from both the
token stream (to find standalone comments) and the AST (to find string
statements).

Closes #1784.
2023-01-11 22:32:14 -05:00
Colin Delahunty
329946f162 Avoid erroneous Q002 error message for single-quote docstrings (#1777)
Fixes #1775. Before implementing your solution I thought of a slightly
simpler one. However, it will let this function pass:
```
def double_inside_single(a):
    'Double inside "single "'
```
If we want the function to pass, my implementation works. But if we do
not, then I can go with the approach you suggested (I left how I would
begin to handle it commented out). The bottom of the flake8-quotes
documentation seems to suggest that this should pass:
https://pypi.org/project/flake8-quotes/

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 20:01:54 -05:00
Charlie Marsh
588399e415 Fix Clippy error 2023-01-11 19:59:00 -05:00
Chammika Mannakkara
4523885268 flake8_simplify : SIM401 (#1778)
Ref #998 

- Implements SIM401 with fix
- Added tests

Notes: 
- Only recognizes simple `ExprKind::Name` variables in expression patterns
for now.
- Bug fix relative to the reference implementation: we check that all three
components (dict key, target variable, dict name) are equal; `flake8_simplify`
only tests the first two (and only the first in the second pattern).
2023-01-11 19:51:37 -05:00
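A minimal sketch of the pattern and its fix, reflecting the three-way equality check noted above (placeholder names):

```py
a_dict = {"k": 1}
key = "k"

# Flagged (SIM401, pattern 1): the dict, the key, and the assignment target
# all match between the `if` body and the `else` body.
if key in a_dict:
    var = a_dict[key]
else:
    var = "default"

# The suggested rewrite:
var = a_dict.get(key, "default")
```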
Maksudul Haque
de81b0cd38 [flake8-simplify] Add Rule for SIM115 (Use context handler for opening files) (#1782)
ref: https://github.com/charliermarsh/ruff/issues/998

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 19:28:05 -05:00
Charlie Marsh
4fce296e3f Skip SIM108 violations for complex if-statements (#1802)
We now skip SIM108 violations if the resulting statement would exceed
the user-specified line length, or if the `if` statement contains comments.

Closes #1719.

Closes #1766.
2023-01-11 19:21:30 -05:00
Charlie Marsh
9d48d7bbd1 Skip unused argument checks for magic methods (#1801)
We still check `__init__`, `__call__`, and `__new__`.

Closes #1796.
2023-01-11 19:02:20 -05:00
Charlie Marsh
c56f263618 Avoid flagging builtins for OSError rewrites (#1800)
Related to (but does not fix) #1790.
2023-01-11 18:49:25 -05:00
Grzegorz Bokota
fb2382fbc3 Update readme to reflect #1763 (#1780)
When checking the changes in the 0.0.218 release, I noticed that autofixing
of PT004 and PT005 was disabled, but this change was not reflected in the
README. So I created this small PR to do that.

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 18:37:41 -05:00
Charlie Marsh
c92a5a8704 Avoid rewriting flake8-comprehensions expressions for builtin overrides (#1799)
Closes #1788.
2023-01-11 18:33:55 -05:00
Charlie Marsh
d7cf3147b7 Refactor flake8-comprehensions rules to take fewer arguments (#1797) 2023-01-11 18:21:18 -05:00
Charlie Marsh
bf4d35c705 Convert flake8-comprehensions checks to Checker style (#1795) 2023-01-11 18:11:20 -05:00
Charlie Marsh
4e97e9c7cf Improve PIE794 autofix behavior (#1794)
We now: (1) trigger PIE794 for objects without bases (not sure why this
was omitted before); and (2) remove the entire line, rather than leaving
behind trailing whitespace.

Resolves #1787.
2023-01-11 18:01:29 -05:00
Charlie Marsh
a3fcc3b28d Disable update check by default (#1786)
This has received enough criticism that I'm comfortable making it
opt-in.
2023-01-11 13:47:40 -05:00
Charlie Marsh
cfbd068dd5 Bump version to 0.0.218 2023-01-10 21:28:23 -05:00
Charlie Marsh
8aed23fe0a Avoid B023 false-positives for some common builtins (#1776)
This is based on the upstream work in
https://github.com/PyCQA/flake8-bugbear/pull/303 and
https://github.com/PyCQA/flake8-bugbear/pull/305/files.

Resolves #1686.
2023-01-10 21:23:48 -05:00
Colin Delahunty
c016c41c71 Pyupgrade: Format specifiers (#1594)
A part of #827. Posting this for visibility. Still has some work left to
be done.

Things that still need to be done before this is ready:

- [x] Does not work when the item is being assigned to a variable
- [x] Does not work if being used in a function call
- [x] Fix incorrectly removed calls in the function
- [x] Has not been tested with pyupgrade negative test cases

Tests from pyupgrade can be seen here:
https://github.com/asottile/pyupgrade/blob/main/tests/features/format_literals_test.py

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-10 20:21:04 -05:00
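For illustration, the kind of rewrite this targets (UP030 in the README table below), assuming the fix replaces explicit positional indices with implicit ones:

```py
# Before: explicit positional indices in str.format (flagged).
msg = "{0} {1}".format("hello", "world")

# After the fix: implicit positional references.
msg = "{} {}".format("hello", "world")

# Left alone (see the negative-case fixture further down): a repeated index
# still needs to be explicit.
msg2 = "{0} {0}".format("hi")
```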
Charlie Marsh
f1a5e53f06 Enable isort-style required-imports enforcement (#1762)
In isort, this is called `add-imports`, but I prefer the declarative
name.

The idea is that by adding the following to your `pyproject.toml`, you
can ensure that the import is included in all files:

```toml
[tool.ruff.isort]
required-imports = ["from __future__ import annotations"]
```

I mostly reverse-engineered isort's logic for making decisions, though I
made some slight tweaks that I think are preferable. A few comments:

- Like isort, we don't enforce this on empty files (like empty
`__init__.py`).
- Like isort, we require that the import is at the top-level.
- isort will skip any docstrings, and any comments on the first three
lines (I think, based on testing). Ruff places the import after the last
docstring or comment in the file preamble (that is: after the last
docstring or comment that comes before the _first_ non-docstring and
non-comment).

Resolves #1700.
2023-01-10 18:12:57 -05:00
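A rough sketch of the placement rule described above (hypothetical file contents; the `__future__` import is the one being required):

```py
#!/usr/bin/env python3
"""Module docstring."""
# A preamble comment that still precedes any code.
from __future__ import annotations  # inserted after the docstring/comment preamble

import os

print(os.getcwd())
```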
Charlie Marsh
1e94e0221f Disable doctests (#1772)
We don't have any doctests, but `cargo test --all` spends more than half
the time on doctests? A little confusing, but this brings the test time
from > 4s to < 2s on my machine.
2023-01-10 15:10:16 -05:00
Martin Fischer
543865c96b Generate RuleCode::origin() via macro (#1770) 2023-01-10 13:20:43 -05:00
Maksudul Haque
b8e3f0bc13 [flake8-bandit] Add Rule for S508 (snmp insecure version) & S509 (snmp weak cryptography) (#1771)
ref: https://github.com/charliermarsh/ruff/issues/1646

Co-authored-by: messense <messense@icloud.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-10 13:13:54 -05:00
Charlie Marsh
643cedb200 Move CONTRIBUTING.md to top-level (#1768) 2023-01-10 07:38:12 -05:00
Charlie Marsh
91620c378a Disable release builds on CI (#1761) 2023-01-10 07:33:03 -05:00
Harutaka Kawamura
b732135795 Do not autofix PT004 and PT005 (#1763)
As @edgarrmondragon commented in
https://github.com/charliermarsh/ruff/pull/1740#issuecomment-1376230550,
just renaming the fixture doesn't work.
2023-01-10 07:24:16 -05:00
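A short hypothetical sketch of why a rename-only fix breaks: pytest injects fixtures by parameter name, so renaming only the definition leaves the usages dangling.

```py
import pytest


@pytest.fixture()
def patch_thing():  # PT004: returns nothing, so a leading underscore is suggested
    pass


def test_something(patch_thing):
    # pytest resolves this parameter by name; renaming only the fixture
    # definition above would leave it pointing at a fixture that no longer exists.
    assert True
```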
messense
9384a081f9 Implement flake8-simplify SIM112 (#1764)
Ref #998
2023-01-10 07:24:01 -05:00
342 changed files with 8379 additions and 4446 deletions

View File

@@ -27,8 +27,8 @@ jobs:
toolchain: nightly-2022-11-01
override: true
- uses: Swatinem/rust-cache@v1
- run: cargo build --all --release
- run: ./target/release/ruff_dev generate-all
- run: cargo build --all
- run: ./target/debug/ruff_dev generate-all
- run: git diff --quiet README.md || echo "::error file=README.md::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --quiet ruff.schema.json || echo "::error file=ruff.schema.json::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --exit-code -- README.md ruff.schema.json
@@ -60,7 +60,7 @@ jobs:
target: wasm32-unknown-unknown
- uses: Swatinem/rust-cache@v1
- run: cargo clippy --workspace --all-targets --all-features -- -D warnings -W clippy::pedantic
- run: cargo clippy --workspace --target wasm32-unknown-unknown --all-features -- -D warnings -W clippy::pedantic
- run: cargo clippy -p ruff --target wasm32-unknown-unknown --all-features -- -D warnings -W clippy::pedantic
cargo-test:
name: "cargo test"
@@ -79,7 +79,7 @@ jobs:
run: |
cargo insta test --all --delete-unreferenced-snapshots
git diff --exit-code
- run: cargo test --package ruff --test black_compatibility_test -- --ignored
- run: cargo test --package ruff_cli --test black_compatibility_test -- --ignored
# TODO(charlie): Re-enable the `wasm-pack` tests.
# See: https://github.com/charliermarsh/ruff/issues/1425

View File

@@ -31,7 +31,7 @@ jobs:
- uses: jetli/wasm-pack-action@v0.4.0
- uses: jetli/wasm-bindgen-action@v0.2.0
- name: "Run wasm-pack"
run: wasm-pack build --target web --out-dir playground/src/pkg
run: wasm-pack build --target web --out-dir playground/src/pkg . -- -p ruff
- name: "Install Node dependencies"
run: npm ci
working-directory: playground

View File

@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.217
rev: v0.0.221
hooks:
- id: ruff

62
Cargo.lock generated
View File

@@ -735,7 +735,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.217-dev.0"
version = "0.0.221-dev.0"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1874,27 +1874,19 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.217"
version = "0.0.221"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
"assert_cmd",
"atty",
"bincode",
"bitflags",
"cachedir",
"cfg-if 1.0.0",
"chrono",
"clap 4.0.32",
"clap_complete_command",
"clearscreen",
"colored",
"console_error_panic_hook",
"console_log",
"criterion",
"dirs 4.0.0",
"fern",
"filetime",
"getrandom 0.2.8",
"glob",
"globset",
@@ -1906,13 +1898,10 @@ dependencies = [
"log",
"natord",
"nohash-hasher",
"notify",
"num-bigint",
"num-traits",
"once_cell",
"path-absolutize",
"quick-junit",
"rayon",
"regex",
"ropey",
"ruff_macros",
@@ -1924,25 +1913,57 @@ dependencies = [
"semver",
"serde",
"serde-wasm-bindgen",
"serde_json",
"shellexpand",
"similar",
"strum",
"strum_macros",
"test-case",
"textwrap",
"titlecase",
"toml_edit",
"update-informer",
"ureq",
"walkdir",
"wasm-bindgen",
"wasm-bindgen-test",
]
[[package]]
name = "ruff_cli"
version = "0.0.221"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
"assert_cmd",
"atty",
"bincode",
"cachedir",
"chrono",
"clap 4.0.32",
"clap_complete_command",
"clearscreen",
"colored",
"filetime",
"glob",
"ignore",
"itertools",
"log",
"notify",
"path-absolutize",
"quick-junit",
"rayon",
"regex",
"ruff",
"rustc-hash",
"serde",
"serde_json",
"similar",
"strum",
"textwrap",
"update-informer",
"ureq",
"walkdir",
]
[[package]]
name = "ruff_dev"
version = "0.0.217"
version = "0.0.221"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1950,6 +1971,7 @@ dependencies = [
"libcst",
"once_cell",
"ruff",
"ruff_cli",
"rustpython-ast",
"rustpython-common",
"rustpython-parser",
@@ -1962,7 +1984,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
version = "0.0.217"
version = "0.0.221"
dependencies = [
"once_cell",
"proc-macro2",

View File

@@ -2,11 +2,13 @@
members = [
"flake8_to_ruff",
"ruff_dev",
"ruff_cli",
]
default-members = [".", "ruff_cli"]
[package]
name = "ruff"
version = "0.0.217"
version = "0.0.221"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -19,22 +21,17 @@ license = "MIT"
[lib]
name = "ruff"
crate-type = ["cdylib", "rlib"]
doctest = false
[dependencies]
annotate-snippets = { version = "0.9.1", features = ["color"] }
anyhow = { version = "1.0.66" }
atty = { version = "0.2.14" }
bincode = { version = "1.3.3" }
bitflags = { version = "1.3.2" }
cachedir = { version = "0.3.0" }
cfg-if = { version = "1.0.0" }
chrono = { version = "0.4.21", default-features = false, features = ["clock"] }
clap = { version = "4.0.1", features = ["derive", "env"] }
clap_complete_command = { version = "0.4.0" }
colored = { version = "2.0.0" }
dirs = { version = "4.0.0" }
fern = { version = "0.6.1" }
filetime = { version = "0.2.17" }
glob = { version = "0.3.0" }
globset = { version = "0.4.9" }
ignore = { version = "0.4.18" }
@@ -43,15 +40,13 @@ libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "f2f0b7a487a87
log = { version = "0.4.17" }
natord = { version = "1.0.9" }
nohash-hasher = { version = "0.2.0" }
notify = { version = "5.0.0" }
num-bigint = { version = "0.4.3" }
num-traits = "0.2.15"
once_cell = { version = "1.16.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix_paths_on_wasm"] }
quick-junit = { version = "0.3.2" }
regex = { version = "1.6.0" }
ropey = { version = "1.5.0", features = ["cr_lines", "simd"], default-features = false }
ruff_macros = { version = "0.0.217", path = "ruff_macros" }
ruff_macros = { version = "0.0.221", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
@@ -59,20 +54,12 @@ rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPyth
schemars = { version = "0.8.11" }
semver = { version = "1.0.16" }
serde = { version = "1.0.147", features = ["derive"] }
serde_json = { version = "1.0.87" }
shellexpand = { version = "3.0.0" }
similar = { version = "2.2.1" }
strum = { version = "0.24.1", features = ["strum_macros"] }
strum_macros = { version = "0.24.3" }
textwrap = { version = "0.16.0" }
titlecase = { version = "2.2.1" }
toml_edit = { version = "0.17.1", features = ["easy"] }
walkdir = { version = "2.3.2" }
[target.'cfg(not(target_family = "wasm"))'.dependencies]
clearscreen = { version = "2.0.0" }
rayon = { version = "1.5.3" }
update-informer = { version = "0.6.0", default-features = false, features = ["pypi"], optional = true }
# https://docs.rs/getrandom/0.2.7/getrandom/#webassembly-support
# For (future) wasm-pack support
@@ -87,17 +74,11 @@ wasm-bindgen = { version = "0.2.83" }
[dev-dependencies]
insta = { version = "1.19.1", features = ["yaml"] }
test-case = { version = "2.2.2" }
ureq = { version = "2.5.0", features = [] }
wasm-bindgen-test = { version = "0.3.33" }
[target.'cfg(not(target_family = "wasm"))'.dev-dependencies]
assert_cmd = { version = "2.0.4" }
criterion = { version = "0.4.0" }
[features]
default = ["update-informer"]
update-informer = ["dep:update-informer"]
[profile.release]
panic = "abort"
lto = "thin"

170
README.md
View File

@@ -46,7 +46,9 @@ imports, and more.
Ruff is extremely actively developed and used in major open-source projects like:
- [pandas](https://github.com/pandas-dev/pandas)
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Apache Airflow](https://github.com/apache/airflow)
- [Bokeh](https://github.com/bokeh/bokeh)
- [Zulip](https://github.com/zulip/zulip)
- [Pydantic](https://github.com/pydantic/pydantic)
@@ -139,8 +141,6 @@ Ruff is available as [`ruff`](https://pypi.org/project/ruff/) on PyPI:
pip install ruff
```
[![Packaging status](https://repology.org/badge/vertical-allrepos/ruff-python-linter.svg?exclude_unsupported=1)](https://repology.org/project/ruff-python-linter/versions)
For **macOS Homebrew** and **Linuxbrew** users, Ruff is also available as [`ruff`](https://formulae.brew.sh/formula/ruff) on Homebrew:
```shell
@@ -159,6 +159,8 @@ For **Arch Linux** users, Ruff is also available as [`ruff`](https://archlinux.o
pacman -S ruff
```
[![Packaging status](https://repology.org/badge/vertical-allrepos/ruff-python-linter.svg?exclude_unsupported=1)](https://repology.org/project/ruff-python-linter/versions)
### Usage
To run Ruff, try any of the following:
@@ -180,7 +182,7 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.217'
rev: 'v0.0.221'
hooks:
- id: ruff
# Respect `exclude` and `extend-exclude` settings.
@@ -343,21 +345,21 @@ Options:
Disable cache reads
--isolated
Ignore all configuration files
--select <SELECT>
--select <RULE_CODE>
Comma-separated list of rule codes to enable (or ALL, to enable all rules)
--extend-select <EXTEND_SELECT>
--extend-select <RULE_CODE>
Like --select, but adds additional rule codes on top of the selected ones
--ignore <IGNORE>
--ignore <RULE_CODE>
Comma-separated list of rule codes to disable
--extend-ignore <EXTEND_IGNORE>
--extend-ignore <RULE_CODE>
Like --ignore, but adds additional rule codes on top of the ignored ones
--exclude <EXCLUDE>
--exclude <FILE_PATTERN>
List of paths, used to omit files and/or directories from analysis
--extend-exclude <EXTEND_EXCLUDE>
--extend-exclude <FILE_PATTERN>
Like --exclude, but adds additional files and directories on top of those already excluded
--fixable <FIXABLE>
--fixable <RULE_CODE>
List of rule codes to treat as eligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--unfixable <UNFIXABLE>
--unfixable <RULE_CODE>
List of rule codes to treat as ineligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--per-file-ignores <PER_FILE_IGNORES>
List of mappings from file pattern to code to exclude
@@ -597,6 +599,7 @@ For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI
| E902 | IOError | IOError: `...` | |
| E999 | SyntaxError | SyntaxError: `...` | |
| W292 | NoNewLineAtEndOfFile | No newline at end of file | 🛠 |
| W505 | DocLineTooLong | Doc line too long (89 > 88 characters) | |
| W605 | InvalidEscapeSequence | Invalid escape sequence: '\c' | 🛠 |
### mccabe (C90)
@@ -614,6 +617,7 @@ For more, see [isort](https://pypi.org/project/isort/5.10.1/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| I001 | UnsortedImports | Import block is un-sorted or un-formatted | 🛠 |
| I002 | MissingRequiredImport | Missing required import: `from __future__ import ...` | 🛠 |
### pydocstyle (D)
@@ -701,6 +705,7 @@ For more, see [pyupgrade](https://pypi.org/project/pyupgrade/3.2.0/) on PyPI.
| UP027 | RewriteListComprehension | Replace unpacked list comprehension with a generator expression | 🛠 |
| UP028 | RewriteYieldFrom | Replace `yield` over `for` loop with `yield from` | 🛠 |
| UP029 | UnnecessaryBuiltinImport | Unnecessary builtin import: `...` | 🛠 |
| UP030 | FormatLiterals | Use implicit references for positional format fields | 🛠 |
### pep8-naming (N)
@@ -777,6 +782,9 @@ For more, see [flake8-bandit](https://pypi.org/project/flake8-bandit/4.1.1/) on
| S324 | HashlibInsecureHashFunction | Probable use of insecure hash functions in `hashlib`: "..." | |
| S501 | RequestWithNoCertValidation | Probable use of `...` call with `verify=False` disabling SSL certificate checks | |
| S506 | UnsafeYAMLLoad | Probable use of unsafe `yaml.load`. Allows instantiation of arbitrary objects. Consider `yaml.safe_load`. | |
| S508 | SnmpInsecureVersion | The use of SNMPv1 and SNMPv2 is insecure. Use SNMPv3 if able. | |
| S509 | SnmpWeakCryptography | You should not use SNMPv3 without encryption. `noAuthNoPriv` & `authNoPriv` is insecure. | |
| S701 | Jinja2AutoescapeFalse | By default, jinja2 sets `autoescape` to `False`. Consider using `autoescape=True` or the `select_autoescape` function to mitigate XSS vulnerabilities. | |
### flake8-blind-except (BLE)
@@ -916,8 +924,8 @@ For more, see [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style
| PT001 | IncorrectFixtureParenthesesStyle | Use `@pytest.fixture()` over `@pytest.fixture` | 🛠 |
| PT002 | FixturePositionalArgs | Configuration for fixture `...` specified via positional args, use kwargs | |
| PT003 | ExtraneousScopeFunction | `scope='function'` is implied in `@pytest.fixture()` | |
| PT004 | MissingFixtureNameUnderscore | Fixture `...` does not return anything, add leading underscore | 🛠 |
| PT005 | IncorrectFixtureNameUnderscore | Fixture `...` returns a value, remove leading underscore | 🛠 |
| PT004 | MissingFixtureNameUnderscore | Fixture `...` does not return anything, add leading underscore | |
| PT005 | IncorrectFixtureNameUnderscore | Fixture `...` returns a value, remove leading underscore | |
| PT006 | ParametrizeNamesWrongType | Wrong name(s) type in `@pytest.mark.parametrize`, expected `tuple` | 🛠 |
| PT007 | ParametrizeValuesWrongType | Wrong values type in `@pytest.mark.parametrize` expected `list` of `tuple` | |
| PT008 | PatchWithLambda | Use `return_value=` instead of patching with `lambda` | |
@@ -945,10 +953,10 @@ For more, see [flake8-quotes](https://pypi.org/project/flake8-quotes/3.3.1/) on
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| Q000 | BadQuotesInlineString | Single quotes found but double quotes preferred | |
| Q001 | BadQuotesMultilineString | Single quote multiline found but double quotes preferred | |
| Q002 | BadQuotesDocstring | Single quote docstring found but double quotes preferred | |
| Q003 | AvoidQuoteEscape | Change outer quotes to avoid escaping inner quotes | |
| Q000 | BadQuotesInlineString | Single quotes found but double quotes preferred | 🛠 |
| Q001 | BadQuotesMultilineString | Single quote multiline found but double quotes preferred | 🛠 |
| Q002 | BadQuotesDocstring | Single quote docstring found but double quotes preferred | 🛠 |
| Q003 | AvoidQuoteEscape | Change outer quotes to avoid escaping inner quotes | 🛠 |
### flake8-return (RET)
@@ -971,6 +979,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| SIM115 | OpenFileWithContextHandler | Use context handler for opening files | |
| SIM101 | DuplicateIsinstanceCall | Multiple `isinstance` calls for `...`, merge into a single call | 🛠 |
| SIM102 | NestedIfStatements | Use a single `if` statement instead of nested `if` statements | |
| SIM103 | ReturnBoolConditionDirectly | Return the condition `...` directly | 🛠 |
@@ -980,6 +989,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| SIM109 | CompareWithTuple | Use `value in (..., ...)` instead of `value == ... or value == ...` | 🛠 |
| SIM110 | ConvertLoopToAny | Use `return any(x for x in y)` instead of `for` loop | 🛠 |
| SIM111 | ConvertLoopToAll | Use `return all(x for x in y)` instead of `for` loop | 🛠 |
| SIM112 | UseCapitalEnvironmentVariables | Use capitalized environment variable `...` instead of `...` | 🛠 |
| SIM117 | MultipleWithStatements | Use a single `with` statement with multiple contexts instead of nested `with` statements | |
| SIM118 | KeyInDict | Use `key in dict` instead of `key in dict.keys()` | 🛠 |
| SIM201 | NegateEqualOp | Use `left != right` instead of `not left == right` | 🛠 |
@@ -993,6 +1003,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| SIM222 | OrTrue | Use `True` instead of `... or True` | 🛠 |
| SIM223 | AndFalse | Use `False` instead of `... and False` | 🛠 |
| SIM300 | YodaConditions | Yoda conditions are discouraged, use `left == right` instead | 🛠 |
| SIM401 | DictGetWithDefault | Use `var = dict.get(key, "default")` instead of an `if` block | 🛠 |
### flake8-tidy-imports (TID)
@@ -1083,8 +1094,10 @@ For more, see [Pylint](https://pypi.org/project/pylint/2.15.7/) on PyPI.
| PLE1142 | AwaitOutsideAsync | `await` should be used within an async function | |
| PLR0206 | PropertyWithParameters | Cannot have defined parameters for properties | |
| PLR0402 | ConsiderUsingFromImport | Use `from ... import ...` in lieu of alias | |
| PLR0133 | ConstantComparison | Two constants compared in a comparison, consider replacing `0 == 0` | |
| PLR1701 | ConsiderMergingIsinstance | Merge these isinstance calls: `isinstance(..., (...))` | |
| PLR1722 | UseSysExit | Use `sys.exit()` instead of `exit` | 🛠 |
| PLR2004 | MagicValueComparison | Magic number used in comparison, consider replacing magic with a constant variable | |
| PLW0120 | UselessElseOnLoop | Else clause on loop without a break statement, remove the else and de-indent all the code inside it | |
| PLW0602 | GlobalVariableNotAssigned | Using global for `...` but no assignment is done | |
@@ -1243,6 +1256,18 @@ officially supported LSP server for Ruff.
However, Ruff is also available as part of the [coc-pyright](https://github.com/fannheyward/coc-pyright)
extension for `coc.nvim`.
<details>
<summary>With the <a href="https://github.com/dense-analysis/ale">ALE</a> plugin for (Neo)Vim.</summary>
```vim
let g:ale_linters = { "python": ["ruff"] }
let g:ale_fixers = {
\ "python": ["black", "ruff"],
\}
```
</details>
<details>
<summary>Ruff can also be integrated via <a href="https://github.com/neovim/nvim-lspconfig/blob/master/doc/server_configurations.md#efm"><code>efm</code></a> in just a <a href="https://github.com/JafarAbdi/myconfigs/blob/6f0b6b2450e92ec8fc50422928cd22005b919110/efm-langserver/config.yaml#L14-L20">few lines</a>.</summary>
<br>
@@ -1407,7 +1432,7 @@ Beyond the rule set, Ruff suffers from the following limitations vis-à-vis Flak
There are a few other minor incompatibilities between Ruff and the originating Flake8 plugins:
- Ruff doesn't implement the "opinionated" lint rules from `flake8-bugbear`.
- Ruff doesn't implement all the "opinionated" lint rules from `flake8-bugbear`.
- Depending on your project structure, Ruff and `isort` can differ in their detection of first-party
code. (This is often solved by modifying the `src` property, e.g., to `src = ["src"]`, if your
code is nested in a `src` directory.)
@@ -1817,6 +1842,8 @@ Exclusions are based on globs, and can be either:
`directory`). Note that these paths are relative to the project root
(e.g., the directory containing your `pyproject.toml`).
For more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
Note that you'll typically want to use
[`extend-exclude`](#extend-exclude) to modify the excluded paths.
@@ -1864,6 +1891,18 @@ line-length = 100
A list of file patterns to omit from linting, in addition to those
specified by `exclude`.
Exclusions are based on globs, and can be either:
- Single-path patterns, like `.mypy_cache` (to exclude any directory
named `.mypy_cache` in the tree), `foo.py` (to exclude any file named
`foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ).
- Relative patterns, like `directory/foo.py` (to exclude that specific
file) or `directory/*.py` (to exclude any Python files in
`directory`). Note that these paths are relative to the project root
(e.g., the directory containing your `pyproject.toml`).
For more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
**Default value**: `[]`
**Type**: `Vec<FilePattern>`
@@ -1883,6 +1922,12 @@ extend-exclude = ["tests", "src/bad.py"]
A list of rule codes or prefixes to ignore, in addition to those
specified by `ignore`.
Note that `extend-ignore` is applied after resolving rules from
`ignore`/`select` and a less specific rule in `extend-ignore`
would overwrite a more specific rule in `select`. It is
recommended to only use `extend-ignore` when extending a
`pyproject.toml` file via `extend`.
**Default value**: `[]`
**Type**: `Vec<RuleCodePrefix>`
@@ -1902,6 +1947,12 @@ extend-ignore = ["F841"]
A list of rule codes or prefixes to enable, in addition to those
specified by `select`.
Note that `extend-select` is applied after resolving rules from
`ignore`/`select` and a less specific rule in `extend-select`
would overwrite a more specific rule in `ignore`. It is
recommended to only use `extend-select` when extending a
`pyproject.toml` file via `extend`.
**Default value**: `[]`
**Type**: `Vec<RuleCodePrefix>`
@@ -2335,7 +2386,7 @@ unfixable = ["F401"]
Enable or disable automatic update checks (overridden by the
`--update-check` and `--no-update-check` command-line flags).
**Default value**: `true`
**Default value**: `false`
**Type**: `bool`
@@ -2343,7 +2394,7 @@ Enable or disable automatic update checks (overridden by the
```toml
[tool.ruff]
update-check = false
update-check = true
```
---
@@ -2839,6 +2890,24 @@ ignore-variadic-names = true
### `isort`
#### [`classes`](#classes)
An override list of tokens to always recognize as a Class for
`order-by-type` regardless of casing.
**Default value**: `[]`
**Type**: `Vec<String>`
**Example usage**:
```toml
[tool.ruff.isort]
classes = ["SVC"]
```
---
#### [`combine-as-imports`](#combine-as-imports)
Combines as imports on the same line. See isort's [`combine-as-imports`](https://pycqa.github.io/isort/docs/configuration/options.html#combine-as-imports)
@@ -2998,6 +3067,47 @@ order-by-type = true
---
#### [`relative-imports-order`](#relative-imports-order)
Whether to place "closer" imports (fewer `.` characters, most local)
before "further" imports (more `.` characters, least local), or vice
versa.
The default ("furthest-to-closest") is equivalent to isort's
`reverse-relative` default (`reverse-relative = false`); setting
this to "closest-to-furthest" is equivalent to isort's `reverse-relative
= true`.
**Default value**: `furthest-to-closest`
**Type**: `RelatveImportsOrder`
**Example usage**:
```toml
[tool.ruff.isort]
relative-imports-order = "closest-to-furthest"
```
---
#### [`required-imports`](#required-imports)
Add the specified import line to all files.
**Default value**: `[]`
**Type**: `Vec<String>`
**Example usage**:
```toml
[tool.ruff.isort]
required-imports = ["from __future__ import annotations"]
```
---
#### [`single-line-exclusions`](#single-line-exclusions)
One or more modules to exclude from the single line rule.
@@ -3137,6 +3247,24 @@ ignore-overlong-task-comments = true
---
#### [`max-doc-length`](#max-doc-length)
The maximum line length to allow for line-length violations within
documentation (`W505`), including standalone comments.
**Default value**: `None`
**Type**: `usize`
**Example usage**:
```toml
[tool.ruff.pycodestyle]
max-doc-length = 88
```
---
### `pydocstyle`
#### [`convention`](#convention)
@@ -3191,4 +3319,4 @@ MIT
## Contributing
Contributions are welcome and hugely appreciated. To get started, check out the
[contributing guidelines](https://github.com/charliermarsh/ruff/blob/main/.github/CONTRIBUTING.md).
[contributing guidelines](https://github.com/charliermarsh/ruff/blob/main/CONTRIBUTING.md).

View File

@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
version = "0.0.217"
version = "0.0.221"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.217"
version = "0.0.221"
dependencies = [
"anyhow",
"bincode",

View File

@@ -1,10 +1,11 @@
[package]
name = "flake8-to-ruff"
version = "0.0.217-dev.0"
version = "0.0.221-dev.0"
edition = "2021"
[lib]
name = "flake8_to_ruff"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }

View File

@@ -1,10 +1,13 @@
[build-system]
requires = ["maturin>=0.14,<0.15"]
requires = ["maturin>=0.14.10,<0.15"]
# We depend on >=0.14.10 because we specify name in
# [package.metadata.maturin] in ruff_cli/Cargo.toml.
build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.217"
version = "0.0.221"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },
@@ -35,6 +38,8 @@ urls = { repository = "https://github.com/charliermarsh/ruff" }
[tool.maturin]
bindings = "bin"
manifest-path = "ruff_cli/Cargo.toml"
python-source = "python"
strip = true
[tool.setuptools]

View File

@@ -0,0 +1,9 @@
class C:
from typing import overload
@overload
def f(self, x: int, y: int) -> None:
...
def f(self, x, y):
pass

View File

@@ -0,0 +1,6 @@
from pysnmp.hlapi import CommunityData
CommunityData("public", mpModel=0) # S508
CommunityData("public", mpModel=1) # S508
CommunityData("public", mpModel=2) # OK

View File

@@ -0,0 +1,7 @@
from pysnmp.hlapi import UsmUserData
insecure = UsmUserData("securityName") # S509
auth_no_priv = UsmUserData("securityName", "authName") # S509
less_insecure = UsmUserData("securityName", "authName", "privName") # OK

View File

@@ -0,0 +1,29 @@
import jinja2
from jinja2 import Environment, select_autoescape
templateLoader = jinja2.FileSystemLoader( searchpath="/" )
something = ''
Environment(loader=templateLoader, load=templateLoader, autoescape=True)
templateEnv = jinja2.Environment(autoescape=True,
loader=templateLoader )
Environment(loader=templateLoader, load=templateLoader, autoescape=something) # S701
templateEnv = jinja2.Environment(autoescape=False, loader=templateLoader ) # S701
Environment(loader=templateLoader,
load=templateLoader,
autoescape=False) # S701
Environment(loader=templateLoader, # S701
load=templateLoader)
Environment(loader=templateLoader, autoescape=select_autoescape())
Environment(loader=templateLoader,
autoescape=select_autoescape(['html', 'htm', 'xml']))
Environment(loader=templateLoader,
autoescape=jinja2.select_autoescape(['html', 'htm', 'xml']))
def fake_func():
return 'foobar'
Environment(loader=templateLoader, autoescape=fake_func()) # S701

View File

@@ -25,10 +25,10 @@ for x in range(3):
def check_inside_functions_too():
ls = [lambda: x for x in range(2)]
st = {lambda: x for x in range(2)}
gn = (lambda: x for x in range(2))
dt = {x: lambda: x for x in range(2)}
ls = [lambda: x for x in range(2)] # error
st = {lambda: x for x in range(2)} # error
gn = (lambda: x for x in range(2)) # error
dt = {x: lambda: x for x in range(2)} # error
async def pointless_async_iterable():
@@ -37,9 +37,9 @@ async def pointless_async_iterable():
async def container_for_problems():
async for x in pointless_async_iterable():
functions.append(lambda: x)
functions.append(lambda: x) # error
[lambda: x async for x in pointless_async_iterable()]
[lambda: x async for x in pointless_async_iterable()] # error
a = 10
@@ -47,10 +47,10 @@ b = 0
while True:
a = a_ = a - 1
b += 1
functions.append(lambda: a)
functions.append(lambda: a_)
functions.append(lambda: b)
functions.append(lambda: c) # not a name error because of late binding!
functions.append(lambda: a) # error
functions.append(lambda: a_) # error
functions.append(lambda: b) # error
functions.append(lambda: c) # error, but not a name error due to late binding
c: bool = a > 3
if not c:
break
@@ -58,7 +58,7 @@ while True:
# Nested loops should not duplicate reports
for j in range(2):
for k in range(3):
lambda: j * k
lambda: j * k # error
for j, k, l in [(1, 2, 3)]:
@@ -80,3 +80,95 @@ for var in range(2):
for i in range(3):
lambda: f"{i}"
# `query` is defined in the function, so also defining it in the loop should be OK.
for name in ["a", "b"]:
query = name
def myfunc(x):
query = x
query_post = x
_ = query
_ = query_post
query_post = name # in case iteration order matters
# Bug here because two dict comprehensions reference `name`, one of which is inside
# the lambda. This should be totally fine, of course.
_ = {
k: v
for k, v in reduce(
lambda data, event: merge_mappings(
[data, {name: f(caches, data, event) for name, f in xx}]
),
events,
{name: getattr(group, name) for name in yy},
).items()
if k in backfill_fields
}
# OK to define lambdas if they're immediately consumed, typically as the `key=`
# argument or in a consumed `filter()` (even if a comprehension is better style)
for x in range(2):
# It's not a complete get-out-of-linting-free construct - these should fail:
min([None, lambda: x], key=repr)
sorted([None, lambda: x], key=repr)
any(filter(bool, [None, lambda: x]))
list(filter(bool, [None, lambda: x]))
all(reduce(bool, [None, lambda: x]))
# But all these should be OK:
min(range(3), key=lambda y: x * y)
max(range(3), key=lambda y: x * y)
sorted(range(3), key=lambda y: x * y)
any(map(lambda y: x < y, range(3)))
all(map(lambda y: x < y, range(3)))
set(map(lambda y: x < y, range(3)))
list(map(lambda y: x < y, range(3)))
tuple(map(lambda y: x < y, range(3)))
sorted(map(lambda y: x < y, range(3)))
frozenset(map(lambda y: x < y, range(3)))
any(filter(lambda y: x < y, range(3)))
all(filter(lambda y: x < y, range(3)))
set(filter(lambda y: x < y, range(3)))
list(filter(lambda y: x < y, range(3)))
tuple(filter(lambda y: x < y, range(3)))
sorted(filter(lambda y: x < y, range(3)))
frozenset(filter(lambda y: x < y, range(3)))
any(reduce(lambda y: x | y, range(3)))
all(reduce(lambda y: x | y, range(3)))
set(reduce(lambda y: x | y, range(3)))
list(reduce(lambda y: x | y, range(3)))
tuple(reduce(lambda y: x | y, range(3)))
sorted(reduce(lambda y: x | y, range(3)))
frozenset(reduce(lambda y: x | y, range(3)))
import functools
any(functools.reduce(lambda y: x | y, range(3)))
all(functools.reduce(lambda y: x | y, range(3)))
set(functools.reduce(lambda y: x | y, range(3)))
list(functools.reduce(lambda y: x | y, range(3)))
tuple(functools.reduce(lambda y: x | y, range(3)))
sorted(functools.reduce(lambda y: x | y, range(3)))
frozenset(functools.reduce(lambda y: x | y, range(3)))
# OK because the lambda which references a loop variable is defined in a `return`
# statement, and after we return the loop variable can't be redefined.
# In principle we could do something fancy with `break`, but it's not worth it.
def iter_f(names):
for name in names:
if exists(name):
return lambda: name if exists(name) else None
if foo(name):
return [lambda: name] # known false alarm
if False:
return [lambda: i for i in range(3)] # error

View File

@@ -1,15 +1,16 @@
"""
Should emit:
B024 - on lines 17, 34, 52, 58, 69, 74, 79, 84, 89
B024 - on lines 18, 71, 82, 87, 92, 141
"""
import abc
import abc as notabc
from abc import ABC, ABCMeta
from abc import abstractmethod
from abc import abstractmethod, abstractproperty
from abc import abstractmethod as abstract
from abc import abstractmethod as abstractaoeuaoeuaoeu
from abc import abstractmethod as notabstract
from abc import abstractproperty as notabstract_property
import foo
@@ -49,12 +50,24 @@ class Base_6(ABC):
foo()
class Base_7(ABC): # error
class Base_7(ABC):
@notabstract
def method(self):
foo()
class Base_8(ABC):
@notabstract_property
def method(self):
foo()
class Base_9(ABC):
@abstractproperty
def method(self):
foo()
class MetaBase_1(metaclass=ABCMeta): # error
def method(self):
foo()

View File

@@ -1,11 +1,12 @@
"""
Should emit:
B027 - on lines 12, 15, 18, 22, 30
B027 - on lines 13, 16, 19, 23
"""
import abc
from abc import ABC
from abc import abstractmethod
from abc import abstractmethod, abstractproperty
from abc import abstractmethod as notabstract
from abc import abstractproperty as notabstract_property
class AbstractClass(ABC):
@@ -42,6 +43,18 @@ class AbstractClass(ABC):
def abstract_3(self):
...
@abc.abstractproperty
def abstract_4(self):
...
@abstractproperty
def abstract_5(self):
...
@notabstract_property
def abstract_6(self):
...
def body_1(self):
print("foo")
...

View File

@@ -2,3 +2,10 @@ x = list(x for x in range(3))
x = list(
x for x in range(3)
)
def list(*args, **kwargs):
return None
list(x for x in range(3))

View File

@@ -2,3 +2,10 @@ x = set(x for x in range(3))
x = set(
x for x in range(3)
)
def set(*args, **kwargs):
return None
set(x for x in range(3))

View File

@@ -1,5 +1,18 @@
s1 = set([1, 2])
s2 = set((1, 2))
s3 = set([])
s4 = set(())
s5 = set()
set([1, 2])
set((1, 2))
set([])
set(())
set()
set((1,))
set((
1,
))
set([
1,
])
set(
(1,)
)
set(
[1,]
)

View File

@@ -3,3 +3,10 @@ l = list()
d1 = dict()
d2 = dict(a=1)
d3 = dict(**d2)
def list():
return [1, 2, 3]
a = list()

View File

@@ -4,3 +4,10 @@ list(sorted(x))
reversed(sorted(x))
reversed(sorted(x, key=lambda e: e))
reversed(sorted(x, reverse=True))
def reversed(*args, **kwargs):
return None
reversed(sorted(x, reverse=True))

View File

@@ -31,3 +31,10 @@ class User(BaseModel):
@buzz.setter
def buzz(self, value: str | int) -> None:
...
class User:
bar: str = StringField()
foo: bool = BooleanField()
# ...
bar = StringField() # PIE794

View File

@@ -17,6 +17,15 @@ def fun_with_params_no_docstring(a, b="""
""" """docstring"""):
pass
def fun_with_params_no_docstring2(a, b=c[foo():], c=\
""" not a docstring """):
pass
def function_with_single_docstring(a):
"Single line docstring"
def double_inside_single(a):
'Double inside "single "'

View File

@@ -13,11 +13,19 @@ def foo2():
def fun_with_params_no_docstring(a, b='''
not a
not a
''' '''docstring'''):
pass
def fun_with_params_no_docstring2(a, b=c[foo():], c=\
''' not a docstring '''):
pass
def function_with_single_docstring(a):
'Single line docstring'
def double_inside_single(a):
"Double inside 'single '"

View File

@@ -1,4 +1,5 @@
this_should_raise_Q003 = 'This is a \'string\''
this_should_raise_Q003 = 'This is \\ a \\\'string\''
this_is_fine = '"This" is a \'string\''
this_is_fine = "This is a 'string'"
this_is_fine = "\"This\" is a 'string'"

View File

@@ -1,13 +1,13 @@
# Bad
# SIM108
if a:
b = c
else:
b = d
# Good
# OK
b = c if a else d
# https://github.com/MartinThoma/flake8-simplify/issues/115
# OK
if a:
b = c
elif c:
@@ -15,6 +15,7 @@ elif c:
else:
b = d
# OK
if True:
pass
elif a:
@@ -22,6 +23,7 @@ elif a:
else:
b = 2
# OK (false negative)
if True:
pass
else:
@@ -30,19 +32,62 @@ else:
else:
b = 2
import sys
# OK
if sys.version_info >= (3, 9):
randbytes = random.randbytes
else:
randbytes = _get_random_bytes
# OK
if sys.platform == "darwin":
randbytes = random.randbytes
else:
randbytes = _get_random_bytes
# OK
if sys.platform.startswith("linux"):
randbytes = random.randbytes
else:
randbytes = _get_random_bytes
# OK (includes comments)
if x > 0:
# test test
abc = x
else:
# test test test
abc = -x
# OK (too long)
if parser.errno == BAD_FIRST_LINE:
req = wrappers.Request(sock, server=self._server)
else:
req = wrappers.Request(
sock,
parser.get_method(),
parser.get_scheme() or _scheme,
parser.get_path(),
parser.get_version(),
parser.get_query_string(),
server=self._server,
)
# SIM108
if a:
b = cccccccccccccccccccccccccccccccccccc
else:
b = ddddddddddddddddddddddddddddddddddddd
# OK (too long)
if True:
if a:
b = cccccccccccccccccccccccccccccccccccc
else:
b = ddddddddddddddddddddddddddddddddddddd

View File

@@ -1,5 +1,6 @@
def f():
for x in iterable: # SIM110
# SIM110
for x in iterable:
if check(x):
return True
return False
@@ -20,14 +21,16 @@ def f():
def f():
for x in iterable: # SIM111
# SIM111
for x in iterable:
if check(x):
return False
return True
def f():
for x in iterable: # SIM111
# SIM111
for x in iterable:
if not x.is_empty():
return False
return True
@@ -45,3 +48,70 @@ def f():
if check(x):
return "foo"
return "bar"
def f():
# SIM110
for x in iterable:
if check(x):
return True
else:
return False
def f():
# SIM111
for x in iterable:
if check(x):
return False
else:
return True
def f():
# SIM110
for x in iterable:
if check(x):
return True
else:
return False
return True
def f():
# SIM111
for x in iterable:
if check(x):
return False
else:
return True
return False
def f():
for x in iterable:
if check(x):
return True
elif x.is_empty():
return True
return False
def f():
for x in iterable:
if check(x):
return True
else:
return True
return False
def f():
for x in iterable:
if check(x):
return True
elif x.is_empty():
return True
else:
return True
return False

View File

@@ -1,10 +1,18 @@
def f():
for x in iterable: # SIM110
# SIM110
for x in iterable:
if check(x):
return True
return False
def f():
for x in iterable:
if check(x):
return True
return True
def f():
for el in [1, 2, 3]:
if is_true(el):
@@ -13,21 +21,97 @@ def f():
def f():
for x in iterable: # SIM111
# SIM111
for x in iterable:
if check(x):
return False
return True
def f():
for x in iterable: # SIM 111
# SIM111
for x in iterable:
if not x.is_empty():
return False
return True
def f():
for x in iterable:
if check(x):
return False
return False
def f():
for x in iterable:
if check(x):
return "foo"
return "bar"
def f():
# SIM110
for x in iterable:
if check(x):
return True
else:
return False
def f():
# SIM111
for x in iterable:
if check(x):
return False
else:
return True
def f():
# SIM110
for x in iterable:
if check(x):
return True
else:
return False
return True
def f():
# SIM111
for x in iterable:
if check(x):
return False
else:
return True
return False
def f():
for x in iterable:
if check(x):
return True
elif x.is_empty():
return True
return False
def f():
for x in iterable:
if check(x):
return True
else:
return True
return False
def f():
for x in iterable:
if check(x):
return True
elif x.is_empty():
return True
else:
return True
return False

View File

@@ -0,0 +1,19 @@
import os
# Bad
os.environ['foo']
os.environ.get('foo')
os.environ.get('foo', 'bar')
os.getenv('foo')
# Good
os.environ['FOO']
os.environ.get('FOO')
os.environ.get('FOO', 'bar')
os.getenv('FOO')

View File

@@ -0,0 +1,6 @@
f = open('foo.txt') # SIM115
data = f.read()
f.close()
with open('foo.txt') as f: # OK
data = f.read()

View File

@@ -0,0 +1,87 @@
###
# Positive cases
###
# SIM401 (pattern-1)
if key in a_dict:
var = a_dict[key]
else:
var = "default1"
# SIM401 (pattern-2)
if key not in a_dict:
var = "default2"
else:
var = a_dict[key]
# SIM401 (default with a complex expression)
if key in a_dict:
var = a_dict[key]
else:
var = val1 + val2
# SIM401 (complex expression in key)
if keys[idx] in a_dict:
var = a_dict[keys[idx]]
else:
var = "default"
# SIM401 (complex expression in dict)
if key in dicts[idx]:
var = dicts[idx][key]
else:
var = "default"
# SIM401 (complex expression in var)
if key in a_dict:
vars[idx] = a_dict[key]
else:
vars[idx] = "default"
###
# Negative cases
###
# OK (false negative)
if not key in a_dict:
var = "default"
else:
var = a_dict[key]
# OK (different dict)
if key in a_dict:
var = other_dict[key]
else:
var = "default"
# OK (different key)
if key in a_dict:
var = a_dict[other_key]
else:
var = "default"
# OK (different var)
if key in a_dict:
var = a_dict[key]
else:
other_var = "default"
# OK (extra vars in body)
if key in a_dict:
var = a_dict[key]
var2 = value2
else:
var = "default"
# OK (extra vars in orelse)
if key in a_dict:
var = a_dict[key]
else:
var2 = value2
var = "default"
# OK (complex default value)
if key in a_dict:
var = a_dict[key]
else:
var = foo()

View File

@@ -181,3 +181,17 @@ def f(a: int, b: int) -> str:
def f(a, b):
return f"{a}{b}"
###
# Unused arguments on magic methods.
###
class C:
def __init__(self, x) -> None:
print("Hello, world!")
def __str__(self) -> str:
return "Hello, world!"
def __exit__(self, exc_type, exc_value, traceback) -> None:
print("Hello, world!")

View File

@@ -0,0 +1,4 @@
from sklearn.svm import func, SVC, CONST, Klass
from subprocess import N_CLASS, PIPE, Popen, STDOUT
from module import CLASS, Class, CONSTANT, function, BASIC, Apple
from torch.nn import SELU, AClass, A_CONSTANT

View File

@@ -0,0 +1,3 @@
from ... import a
from .. import b
from . import c

View File

@@ -0,0 +1,3 @@
#!/usr/bin/env python3
x = 1

View File

@@ -0,0 +1,3 @@
"""Hello, world!"""
x = 1

View File

@@ -0,0 +1 @@
"""Hello, world!"""

View File

@@ -0,0 +1,2 @@
"""Hello, world!"""; x = \
1; y = 2

View File

@@ -0,0 +1 @@
"""Hello, world!"""; x = 1

View File

@@ -0,0 +1,2 @@
from __future__ import generator_stop
import os

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env python3
"""Here's a top-level docstring that's over the limit."""
def f():
"""Here's a docstring that's also over the limit."""
x = 1 # Here's a comment that's over the limit, but it's not standalone.
# Here's a standalone comment that's over the limit.
print("Here's a string that's over the limit, but it's not a docstring.")
"This is also considered a docstring, and is over the limit."

View File

@@ -9,7 +9,6 @@ from .background import BackgroundTasks
# F401 `datastructures.UploadFile` imported but unused
from .datastructures import UploadFile as FileUpload
# OK
import applications as applications

View File

@@ -5,4 +5,3 @@ from warnings import warn
warnings.warn("this is ok")
warn("by itself is also ok")
logging.warning("this is fine")
log.warning("this is ok")

View File

@@ -2,14 +2,4 @@ import logging
from logging import warn
logging.warn("this is not ok")
log.warn("this is also not ok")
warn("not ok")
def foo():
from logging import warn
def warn():
pass
warn("has been redefined, but we will still report it")

View File

@@ -0,0 +1,59 @@
"""Check that magic values are not used in comparisons"""
if 100 == 100: # [comparison-of-constants]
pass
if 1 == 3: # [comparison-of-constants]
pass
if 1 != 3: # [comparison-of-constants]
pass
x = 0
if 4 == 3 == x: # [comparison-of-constants]
pass
if x == 0: # correct
pass
y = 1
if x == y: # correct
pass
if 1 > 0: # [comparison-of-constants]
pass
if x > 0: # correct
pass
if 1 >= 0: # [comparison-of-constants]
pass
if x >= 0: # correct
pass
if 1 < 0: # [comparison-of-constants]
pass
if x < 0: # correct
pass
if 1 <= 0: # [comparison-of-constants]
pass
if x <= 0: # correct
pass
word = "hello"
if word == "": # correct
pass
if "hello" == "": # [comparison-of-constants]
pass
truthy = True
if truthy == True: # correct
pass
if True == False: # [comparison-of-constants]
pass
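Since both operands are literals, the comparisons flagged above always evaluate the same way; the usual cleanup is to drop the check or spell out its constant result. A tiny sketch of that rewrite (illustrative only, not part of the fixture):

```python
# `if 1 > 0:` is always true, so either remove the guard entirely or make the
# constant result explicit.
if True:  # was: if 1 > 0:
    pass
```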

View File

@@ -0,0 +1,71 @@
"""Check that magic values are not used in comparisons"""
import cmath
user_input = 10
if 10 > user_input: # [magic-value-comparison]
pass
if 10 == 100: # [comparison-of-constants] R0133
pass
if 1 == 3: # [comparison-of-constants] R0133
pass
x = 0
if 4 == 3 == x: # [comparison-of-constants] R0133
pass
time_delta = 7224
ONE_HOUR = 3600
if time_delta > ONE_HOUR: # correct
pass
argc = 1
if argc != -1: # correct
pass
if argc != 0: # correct
pass
if argc != 1: # correct
pass
if argc != 2: # [magic-value-comparison]
pass
if __name__ == "__main__": # correct
pass
ADMIN_PASSWORD = "SUPERSECRET"
input_password = "password"
if input_password == "": # correct
pass
if input_password == ADMIN_PASSWORD: # correct
pass
if input_password == "Hunter2": # [magic-value-comparison]
pass
PI = 3.141592653589793238
pi_estimation = 3.14
if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
pass
if pi_estimation == PI: # correct
pass
HELLO_WORLD = b"Hello, World!"
user_input = b"Hello, There!"
if user_input == b"something": # [magic-value-comparison]
pass
if user_input == HELLO_WORLD: # correct
pass
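As with `ONE_HOUR` earlier in the fixture, the fix PLR2004 nudges toward is naming the magic value. A brief sketch for the first flagged case (`MINIMUM_INPUT` is a hypothetical name, not taken from the fixture):

```python
MINIMUM_INPUT = 10  # hypothetical constant name for the magic value 10
user_input = 10
if MINIMUM_INPUT > user_input:  # no longer a magic-value comparison
    pass
```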

View File

@@ -0,0 +1,8 @@
class SocketError(Exception):
pass
try:
raise SocketError()
except SocketError:
pass

View File

@@ -0,0 +1,36 @@
# Invalid calls; errors expected.
"{0}" "{1}" "{2}".format(1, 2, 3)
"a {3} complicated {1} string with {0} {2}".format(
"first", "second", "third", "fourth"
)
'{0}'.format(1)
'{0:x}'.format(30)
x = '{0}'.format(1)
'''{0}\n{1}\n'''.format(1, 2)
x = "foo {0}" \
"bar {1}".format(1, 2)
("{0}").format(1)
"\N{snowman} {0}".format(1)
'{' '0}'.format(1)
# These will not change because we are waiting for libcst to fix this issue:
# https://github.com/Instagram/LibCST/issues/846
print(
'foo{0}'
'bar{1}'.format(1, 2)
)
print(
'foo{0}' # ohai\n"
'bar{1}'.format(1, 2)
)
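For the simple cases above, the fix (pyupgrade's format-literals rule, presumably UP030 in Ruff's numbering given the schema changes in this diff) drops explicit positional indices that already appear in order. A sketch of the rewritten forms:

```python
# Explicit, in-order positional indices can simply be removed.
"{}".format(1)             # was: '{0}'.format(1)
"{:x}".format(30)          # was: '{0:x}'.format(30)
x = "foo {}" \
    "bar {}".format(1, 2)  # was: "foo {0}" "bar {1}".format(1, 2)
```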

View File

@@ -0,0 +1,23 @@
# Valid calls; no errors expected.
'{}'.format(1)
x = ('{0} {1}',)
'{0} {0}'.format(1)
'{0:<{1}}'.format(1, 4)
f"{0}".format(a)
f"{0}".format(1)
print(f"{0}".format(1))
# I did not include the following tests because ruff does not seem to work with
# invalid python syntax (which is a good thing)
# "{0}"format(1)
# '{'.format(1)", "'}'.format(1)
# ("{0}" # {1}\n"{2}").format(1, 2, 3)

View File

@@ -17,7 +17,7 @@ resources/test/project/examples/docs/docs/file.py:8:5: F841 Local variable `x` i
resources/test/project/project/file.py:1:8: F401 `os` imported but unused
resources/test/project/project/import_file.py:1:1: I001 Import block is un-sorted or un-formatted
Found 7 error(s).
6 potentially fixable with the --fix option.
7 potentially fixable with the --fix option.
```
Running from the project directory itself should exhibit the same behavior:
@@ -32,7 +32,7 @@ examples/docs/docs/file.py:8:5: F841 Local variable `x` is assigned to but never
project/file.py:1:8: F401 `os` imported but unused
project/import_file.py:1:1: I001 Import block is un-sorted or un-formatted
Found 7 error(s).
6 potentially fixable with the --fix option.
7 potentially fixable with the --fix option.
```
Running from the sub-package directory should exhibit the same behavior, but omit the top-level
@@ -43,7 +43,7 @@ files:
docs/file.py:1:1: I001 Import block is un-sorted or un-formatted
docs/file.py:8:5: F841 Local variable `x` is assigned to but never used
Found 2 error(s).
1 potentially fixable with the --fix option.
2 potentially fixable with the --fix option.
```
`--config` should force Ruff to use the specified `pyproject.toml` for all files, and resolve
@@ -74,7 +74,7 @@ docs/docs/file.py:1:1: I001 Import block is un-sorted or un-formatted
docs/docs/file.py:8:5: F841 Local variable `x` is assigned to but never used
excluded/script.py:5:5: F841 Local variable `x` is assigned to but never used
Found 4 error(s).
1 potentially fixable with the --fix option.
4 potentially fixable with the --fix option.
```
Passing an excluded directory directly should report errors in the contained files:

View File

@@ -40,7 +40,7 @@
]
},
"exclude": {
"description": "A list of file patterns to exclude from linting.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nNote that you'll typically want to use [`extend-exclude`](#extend-exclude) to modify the excluded paths.",
"description": "A list of file patterns to exclude from linting.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nFor more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).\n\nNote that you'll typically want to use [`extend-exclude`](#extend-exclude) to modify the excluded paths.",
"type": [
"array",
"null"
@@ -57,7 +57,7 @@
]
},
"extend-exclude": {
"description": "A list of file patterns to omit from linting, in addition to those specified by `exclude`.",
"description": "A list of file patterns to omit from linting, in addition to those specified by `exclude`.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nFor more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).",
"type": [
"array",
"null"
@@ -67,7 +67,7 @@
}
},
"extend-ignore": {
"description": "A list of rule codes or prefixes to ignore, in addition to those specified by `ignore`.",
"description": "A list of rule codes or prefixes to ignore, in addition to those specified by `ignore`.\n\nNote that `extend-ignore` is applied after resolving rules from `ignore`/`select` and a less specific rule in `extend-ignore` would overwrite a more specific rule in `select`. It is recommended to only use `extend-ignore` when extending a `pyproject.toml` file via `extend`.",
"type": [
"array",
"null"
@@ -77,7 +77,7 @@
}
},
"extend-select": {
"description": "A list of rule codes or prefixes to enable, in addition to those specified by `select`.",
"description": "A list of rule codes or prefixes to enable, in addition to those specified by `select`.\n\nNote that `extend-select` is applied after resolving rules from `ignore`/`select` and a less specific rule in `extend-select` would overwrite a more specific rule in `ignore`. It is recommended to only use `extend-select` when extending a `pyproject.toml` file via `extend`.",
"type": [
"array",
"null"
@@ -755,6 +755,16 @@
"IsortOptions": {
"type": "object",
"properties": {
"classes": {
"description": "An override list of tokens to always recognize as a Class for `order-by-type` regardless of casing.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"combine-as-imports": {
"description": "Combines as imports on the same line. See isort's [`combine-as-imports`](https://pycqa.github.io/isort/docs/configuration/options.html#combine-as-imports) option.",
"type": [
@@ -820,6 +830,27 @@
"null"
]
},
"relative-imports-order": {
"description": "Whether to place \"closer\" imports (fewer `.` characters, most local) before \"further\" imports (more `.` characters, least local), or vice versa.\n\nThe default (\"furthest-to-closest\") is equivalent to isort's `reverse-relative` default (`reverse-relative = false`); setting this to \"closest-to-furthest\" is equivalent to isort's `reverse-relative = true`.",
"anyOf": [
{
"$ref": "#/definitions/RelatveImportsOrder"
},
{
"type": "null"
}
]
},
"required-imports": {
"description": "Add the specified import line to all files.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"single-line-exclusions": {
"description": "One or more modules to exclude from the single line rule.",
"type": [
@@ -935,6 +966,15 @@
"boolean",
"null"
]
},
"max-doc-length": {
"description": "The maximum line length to allow for line-length violations within documentation (`W505`), including standalone comments.",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
},
"additionalProperties": false
@@ -988,6 +1028,24 @@
}
]
},
"RelatveImportsOrder": {
"oneOf": [
{
"description": "Place \"closer\" imports (fewer `.` characters, most local) before \"further\" imports (more `.` characters, least local).",
"type": "string",
"enum": [
"closest-to-furthest"
]
},
{
"description": "Place \"further\" imports (more `.` characters, least local) imports before \"closer\" imports (fewer `.` characters, most local).",
"type": "string",
"enum": [
"furthest-to-closest"
]
}
]
},
"RuleCodePrefix": {
"type": "string",
"enum": [
@@ -1268,6 +1326,7 @@
"I0",
"I00",
"I001",
"I002",
"I2",
"I25",
"I252",
@@ -1384,6 +1443,9 @@
"PLE1142",
"PLR",
"PLR0",
"PLR01",
"PLR013",
"PLR0133",
"PLR02",
"PLR020",
"PLR0206",
@@ -1396,6 +1458,10 @@
"PLR1701",
"PLR172",
"PLR1722",
"PLR2",
"PLR20",
"PLR200",
"PLR2004",
"PLW",
"PLW0",
"PLW01",
@@ -1493,6 +1559,11 @@
"S50",
"S501",
"S506",
"S508",
"S509",
"S7",
"S70",
"S701",
"SIM",
"SIM1",
"SIM10",
@@ -1506,6 +1577,8 @@
"SIM11",
"SIM110",
"SIM111",
"SIM112",
"SIM115",
"SIM117",
"SIM118",
"SIM2",
@@ -1525,6 +1598,9 @@
"SIM3",
"SIM30",
"SIM300",
"SIM4",
"SIM40",
"SIM401",
"T",
"T1",
"T10",
@@ -1592,10 +1668,15 @@
"UP027",
"UP028",
"UP029",
"UP03",
"UP030",
"W",
"W2",
"W29",
"W292",
"W5",
"W50",
"W505",
"W6",
"W60",
"W605",

62
ruff_cli/Cargo.toml Normal file
View File

@@ -0,0 +1,62 @@
[package]
name = "ruff_cli"
version = "0.0.221"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
documentation = "https://github.com/charliermarsh/ruff"
homepage = "https://github.com/charliermarsh/ruff"
repository = "https://github.com/charliermarsh/ruff"
readme = "../README.md"
license = "MIT"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[[bin]]
name = "ruff"
path = "src/main.rs"
doctest = false
[dependencies]
ruff = { path = ".." }
annotate-snippets = { version = "0.9.1", features = ["color"] }
anyhow = { version = "1.0.66" }
atty = { version = "0.2.14" }
bincode = { version = "1.3.3" }
cachedir = { version = "0.3.0" }
chrono = { version = "0.4.21", default-features = false, features = ["clock"] }
clap = { version = "4.0.1", features = ["derive", "env"] }
clap_complete_command = { version = "0.4.0" }
clearscreen = { version = "2.0.0" }
colored = { version = "2.0.0" }
filetime = { version = "0.2.17" }
glob = { version = "0.3.0" }
ignore = { version = "0.4.18" }
itertools = { version = "0.10.5" }
log = { version = "0.4.17" }
notify = { version = "5.0.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache"] }
quick-junit = { version = "0.3.2" }
rayon = { version = "1.5.3" }
regex = { version = "1.6.0" }
rustc-hash = { version = "1.1.0" }
serde = { version = "1.0.147", features = ["derive"] }
serde_json = { version = "1.0.87" }
similar = { version = "2.2.1" }
textwrap = { version = "0.16.0" }
update-informer = { version = "0.6.0", default-features = false, features = ["pypi"], optional = true }
walkdir = { version = "2.3.2" }
[dev-dependencies]
assert_cmd = { version = "2.0.4" }
strum = { version = "0.24.1" }
ureq = { version = "2.5.0", features = [] }
[features]
default = ["update-informer"]
update-informer = ["dep:update-informer"]
[package.metadata.maturin]
name = "ruff"
# Setting the name here is necessary for maturin to include the package in its builds.

124
ruff_cli/src/cache.rs Normal file
View File

@@ -0,0 +1,124 @@
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io::Write;
use std::path::Path;
use anyhow::Result;
use filetime::FileTime;
use log::error;
use path_absolutize::Absolutize;
use ruff::message::Message;
use ruff::settings::{flags, Settings};
use serde::{Deserialize, Serialize};
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
#[derive(Serialize, Deserialize)]
struct CacheMetadata {
mtime: i64,
}
#[derive(Serialize)]
struct CheckResultRef<'a> {
metadata: &'a CacheMetadata,
messages: &'a [Message],
}
#[derive(Deserialize)]
struct CheckResult {
metadata: CacheMetadata,
messages: Vec<Message>,
}
fn content_dir() -> &'static Path {
Path::new("content")
}
fn cache_key<P: AsRef<Path>>(path: P, settings: &Settings, autofix: flags::Autofix) -> u64 {
let mut hasher = DefaultHasher::new();
CARGO_PKG_VERSION.hash(&mut hasher);
path.as_ref().absolutize().unwrap().hash(&mut hasher);
settings.hash(&mut hasher);
autofix.hash(&mut hasher);
hasher.finish()
}
#[allow(dead_code)]
/// Initialize the cache at the specified `Path`.
pub fn init(path: &Path) -> Result<()> {
// Create the cache directories.
fs::create_dir_all(path.join(content_dir()))?;
// Add the CACHEDIR.TAG.
if !cachedir::is_tagged(path)? {
cachedir::add_tag(path)?;
}
// Add the .gitignore.
let gitignore_path = path.join(".gitignore");
if !gitignore_path.exists() {
let mut file = fs::File::create(gitignore_path)?;
file.write_all(b"*")?;
}
Ok(())
}
fn write_sync(cache_dir: &Path, key: u64, value: &[u8]) -> Result<(), std::io::Error> {
fs::write(
cache_dir.join(content_dir()).join(format!("{key:x}")),
value,
)
}
fn read_sync(cache_dir: &Path, key: u64) -> Result<Vec<u8>, std::io::Error> {
fs::read(cache_dir.join(content_dir()).join(format!("{key:x}")))
}
/// Get a value from the cache.
pub fn get<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
autofix: flags::Autofix,
) -> Option<Vec<Message>> {
let encoded = read_sync(&settings.cache_dir, cache_key(path, settings, autofix)).ok()?;
let (mtime, messages) = match bincode::deserialize::<CheckResult>(&encoded[..]) {
Ok(CheckResult {
metadata: CacheMetadata { mtime },
messages,
}) => (mtime, messages),
Err(e) => {
error!("Failed to deserialize encoded cache entry: {e:?}");
return None;
}
};
if FileTime::from_last_modification_time(metadata).unix_seconds() != mtime {
return None;
}
Some(messages)
}
/// Set a value in the cache.
pub fn set<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
autofix: flags::Autofix,
messages: &[Message],
) {
let check_result = CheckResultRef {
metadata: &CacheMetadata {
mtime: FileTime::from_last_modification_time(metadata).unix_seconds(),
},
messages,
};
if let Err(e) = write_sync(
&settings.cache_dir,
cache_key(path, settings, autofix),
&bincode::serialize(&check_result).unwrap(),
) {
error!("Failed to write to cache: {e:?}");
}
}

View File

@@ -2,17 +2,21 @@ use std::path::{Path, PathBuf};
use clap::{command, Parser};
use regex::Regex;
use rustc_hash::FxHashMap;
use crate::fs;
use crate::logging::LogLevel;
use crate::registry::{RuleCode, RuleCodePrefix};
use crate::settings::types::{
use ruff::logging::LogLevel;
use ruff::registry::{RuleCode, RuleCodePrefix};
use ruff::resolver::ConfigProcessor;
use ruff::settings::types::{
FilePattern, PatternPrefixPair, PerFileIgnore, PythonVersion, SerializationFormat,
};
use ruff::{fs, mccabe};
use rustc_hash::FxHashMap;
#[derive(Debug, Parser)]
#[command(author, about = "Ruff: An extremely fast Python linter.")]
#[command(
author,
name = "ruff",
about = "Ruff: An extremely fast Python linter."
)]
#[command(version)]
#[allow(clippy::struct_excessive_bools)]
pub struct Cli {
@@ -61,33 +65,33 @@ pub struct Cli {
pub isolated: bool,
/// Comma-separated list of rule codes to enable (or ALL, to enable all
/// rules).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub select: Option<Vec<RuleCodePrefix>>,
/// Like --select, but adds additional rule codes on top of the selected
/// ones.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub extend_select: Option<Vec<RuleCodePrefix>>,
/// Comma-separated list of rule codes to disable.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub ignore: Option<Vec<RuleCodePrefix>>,
/// Like --ignore, but adds additional rule codes on top of the ignored
/// ones.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub extend_ignore: Option<Vec<RuleCodePrefix>>,
/// List of paths, used to omit files and/or directories from analysis.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "FILE_PATTERN")]
pub exclude: Option<Vec<FilePattern>>,
/// Like --exclude, but adds additional files and directories on top of
/// those already excluded.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "FILE_PATTERN")]
pub extend_exclude: Option<Vec<FilePattern>>,
/// List of rule codes to treat as eligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub fixable: Option<Vec<RuleCodePrefix>>,
/// List of rule codes to treat as ineligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub unfixable: Option<Vec<RuleCodePrefix>>,
/// List of mappings from file pattern to code to exclude
#[arg(long, value_delimiter = ',')]
@@ -344,6 +348,87 @@ pub struct Overrides {
pub update_check: Option<bool>,
}
impl ConfigProcessor for &Overrides {
fn process_config(&self, config: &mut ruff::settings::configuration::Configuration) {
if let Some(cache_dir) = &self.cache_dir {
config.cache_dir = Some(cache_dir.clone());
}
if let Some(dummy_variable_rgx) = &self.dummy_variable_rgx {
config.dummy_variable_rgx = Some(dummy_variable_rgx.clone());
}
if let Some(exclude) = &self.exclude {
config.exclude = Some(exclude.clone());
}
if let Some(extend_exclude) = &self.extend_exclude {
config.extend_exclude.extend(extend_exclude.clone());
}
if let Some(fix) = &self.fix {
config.fix = Some(*fix);
}
if let Some(fix_only) = &self.fix_only {
config.fix_only = Some(*fix_only);
}
if let Some(fixable) = &self.fixable {
config.fixable = Some(fixable.clone());
}
if let Some(format) = &self.format {
config.format = Some(*format);
}
if let Some(force_exclude) = &self.force_exclude {
config.force_exclude = Some(*force_exclude);
}
if let Some(ignore) = &self.ignore {
config.ignore = Some(ignore.clone());
}
if let Some(line_length) = &self.line_length {
config.line_length = Some(*line_length);
}
if let Some(max_complexity) = &self.max_complexity {
config.mccabe = Some(mccabe::settings::Options {
max_complexity: Some(*max_complexity),
});
}
if let Some(per_file_ignores) = &self.per_file_ignores {
config.per_file_ignores = Some(collect_per_file_ignores(per_file_ignores.clone()));
}
if let Some(respect_gitignore) = &self.respect_gitignore {
config.respect_gitignore = Some(*respect_gitignore);
}
if let Some(select) = &self.select {
config.select = Some(select.clone());
}
if let Some(show_source) = &self.show_source {
config.show_source = Some(*show_source);
}
if let Some(target_version) = &self.target_version {
config.target_version = Some(*target_version);
}
if let Some(unfixable) = &self.unfixable {
config.unfixable = Some(unfixable.clone());
}
if let Some(update_check) = &self.update_check {
config.update_check = Some(*update_check);
}
// Special-case: `extend_ignore` and `extend_select` are parallel arrays, so
// push an empty array if only one of the two is provided.
match (&self.extend_ignore, &self.extend_select) {
(Some(extend_ignore), Some(extend_select)) => {
config.extend_ignore.push(extend_ignore.clone());
config.extend_select.push(extend_select.clone());
}
(Some(extend_ignore), None) => {
config.extend_ignore.push(extend_ignore.clone());
config.extend_select.push(Vec::new());
}
(None, Some(extend_select)) => {
config.extend_ignore.push(Vec::new());
config.extend_select.push(extend_select.clone());
}
(None, None) => {}
}
}
}
/// Map the CLI settings to a `LogLevel`.
pub fn extract_log_level(cli: &Arguments) -> LogLevel {
if cli.silent {

View File

@@ -11,22 +11,22 @@ use log::{debug, error};
use path_absolutize::path_dedot;
#[cfg(not(target_family = "wasm"))]
use rayon::prelude::*;
use rustpython_ast::Location;
use ruff::cache::CACHE_DIR_NAME;
use ruff::linter::add_noqa_to_path;
use ruff::logging::LogLevel;
use ruff::message::{Location, Message};
use ruff::registry::RuleCode;
use ruff::resolver::{FileDiscovery, PyprojectDiscovery};
use ruff::settings::flags;
use ruff::settings::types::SerializationFormat;
use ruff::{fix, fs, packaging, resolver, violations, warn_user_once};
use serde::Serialize;
use walkdir::WalkDir;
use crate::autofix::fixer;
use crate::cache::CACHE_DIR_NAME;
use crate::cache;
use crate::cli::Overrides;
use crate::diagnostics::{lint_path, lint_stdin, Diagnostics};
use crate::iterators::par_iter;
use crate::linter::{add_noqa_to_path, lint_path, lint_stdin, Diagnostics};
use crate::logging::LogLevel;
use crate::message::Message;
use crate::registry::RuleCode;
use crate::resolver::{FileDiscovery, PyprojectDiscovery};
use crate::settings::flags;
use crate::settings::types::SerializationFormat;
use crate::{cache, fs, packages, resolver, violations, warn_user_once};
/// Run the linter over a collection of files.
pub fn run(
@@ -35,7 +35,7 @@ pub fn run(
file_strategy: &FileDiscovery,
overrides: &Overrides,
cache: flags::Cache,
autofix: fixer::Mode,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Collect all the Python files to check.
let start = Instant::now();
@@ -77,7 +77,7 @@ pub fn run(
};
// Discover the package root for each Python file.
let package_roots = packages::detect_package_roots(
let package_roots = packaging::detect_package_roots(
&paths
.iter()
.flatten()
@@ -156,7 +156,7 @@ pub fn run_stdin(
pyproject_strategy: &PyprojectDiscovery,
file_strategy: &FileDiscovery,
overrides: &Overrides,
autofix: fixer::Mode,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
if let Some(filename) = filename {
if !resolver::python_file_at_path(filename, pyproject_strategy, file_strategy, overrides)? {
@@ -169,7 +169,7 @@ pub fn run_stdin(
};
let package_root = filename
.and_then(Path::parent)
.and_then(packages::detect_package_root);
.and_then(packaging::detect_package_root);
let stdin = read_from_stdin()?;
let mut diagnostics = lint_stdin(filename, package_root, &stdin, settings, autofix)?;
diagnostics.messages.sort_unstable();
@@ -288,7 +288,7 @@ struct Explanation<'a> {
}
/// Explain a `RuleCode` to the user.
pub fn explain(code: &RuleCode, format: &SerializationFormat) -> Result<()> {
pub fn explain(code: &RuleCode, format: SerializationFormat) -> Result<()> {
match format {
SerializationFormat::Text | SerializationFormat::Grouped => {
println!(

155
ruff_cli/src/diagnostics.rs Normal file
View File

@@ -0,0 +1,155 @@
#![cfg_attr(target_family = "wasm", allow(dead_code))]
use std::fs::write;
use std::io;
use std::io::Write;
use std::ops::AddAssign;
use std::path::Path;
use anyhow::Result;
use log::debug;
use ruff::linter::{lint_fix, lint_only};
use ruff::message::Message;
use ruff::settings::{flags, Settings};
use ruff::{fix, fs};
use similar::TextDiff;
use crate::cache;
#[derive(Debug, Default)]
pub struct Diagnostics {
pub messages: Vec<Message>,
pub fixed: usize,
}
impl Diagnostics {
pub fn new(messages: Vec<Message>) -> Self {
Self { messages, fixed: 0 }
}
}
impl AddAssign for Diagnostics {
fn add_assign(&mut self, other: Self) {
self.messages.extend(other.messages);
self.fixed += other.fixed;
}
}
/// Lint the source code at the given `Path`.
pub fn lint_path(
path: &Path,
package: Option<&Path>,
settings: &Settings,
cache: flags::Cache,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Validate the `Settings` and return any errors.
settings.validate()?;
// Check the cache.
// TODO(charlie): `fixer::Mode::Apply` and `fixer::Mode::Diff` both have
// side-effects that aren't captured in the cache. (In practice, it's fine
// to cache `fixer::Mode::Apply`, since a file either has no fixes, or we'll
// write the fixes to disk, thus invalidating the cache. But it's a bit hard
// to reason about. We need to come up with a better solution here.)
let metadata = if matches!(cache, flags::Cache::Enabled)
&& matches!(autofix, fix::FixMode::None | fix::FixMode::Generate)
{
let metadata = path.metadata()?;
if let Some(messages) = cache::get(path, &metadata, settings, autofix.into()) {
debug!("Cache hit for: {}", path.to_string_lossy());
return Ok(Diagnostics::new(messages));
}
Some(metadata)
} else {
None
};
// Read the file from disk.
let contents = fs::read_file(path)?;
// Lint the file.
let (messages, fixed) = if matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff) {
let (transformed, fixed, messages) = lint_fix(&contents, path, package, settings)?;
if fixed > 0 {
if matches!(autofix, fix::FixMode::Apply) {
write(path, transformed)?;
} else if matches!(autofix, fix::FixMode::Diff) {
let mut stdout = io::stdout().lock();
TextDiff::from_lines(&contents, &transformed)
.unified_diff()
.header(&fs::relativize_path(path), &fs::relativize_path(path))
.to_writer(&mut stdout)?;
stdout.write_all(b"\n")?;
stdout.flush()?;
}
}
(messages, fixed)
} else {
let messages = lint_only(&contents, path, package, settings, autofix.into())?;
let fixed = 0;
(messages, fixed)
};
// Re-populate the cache.
if let Some(metadata) = metadata {
cache::set(path, &metadata, settings, autofix.into(), &messages);
}
Ok(Diagnostics { messages, fixed })
}
/// Generate `Diagnostic`s from source code content derived from
/// stdin.
pub fn lint_stdin(
path: Option<&Path>,
package: Option<&Path>,
contents: &str,
settings: &Settings,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Validate the `Settings` and return any errors.
settings.validate()?;
// Lint the inputs.
let (messages, fixed) = if matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff) {
let (transformed, fixed, messages) = lint_fix(
contents,
path.unwrap_or_else(|| Path::new("-")),
package,
settings,
)?;
if matches!(autofix, fix::FixMode::Apply) {
// Write the contents to stdout, regardless of whether any errors were fixed.
io::stdout().write_all(transformed.as_bytes())?;
} else if matches!(autofix, fix::FixMode::Diff) {
// But only write a diff if it's non-empty.
if fixed > 0 {
let text_diff = TextDiff::from_lines(contents, &transformed);
let mut unified_diff = text_diff.unified_diff();
if let Some(path) = path {
unified_diff.header(&fs::relativize_path(path), &fs::relativize_path(path));
}
let mut stdout = io::stdout().lock();
unified_diff.to_writer(&mut stdout)?;
stdout.write_all(b"\n")?;
stdout.flush()?;
}
}
(messages, fixed)
} else {
let messages = lint_only(
contents,
path.unwrap_or_else(|| Path::new("-")),
package,
settings,
autofix.into(),
)?;
let fixed = 0;
(messages, fixed)
};
Ok(Diagnostics { messages, fixed })
}

6
ruff_cli/src/lib.rs Normal file
View File

@@ -0,0 +1,6 @@
#![allow(clippy::must_use_candidate, dead_code)]
mod cli;
// used by ruff_dev::generate_cli_help
pub use cli::Cli;

View File

@@ -1,24 +1,40 @@
#![allow(
clippy::match_same_arms,
clippy::missing_errors_doc,
clippy::module_name_repetitions,
clippy::too_many_lines
)]
#![forbid(unsafe_code)]
use std::io::{self};
use std::path::{Path, PathBuf};
use std::process::ExitCode;
use std::sync::mpsc::channel;
use ::ruff::autofix::fixer;
use ::ruff::cli::{extract_log_level, Cli, Overrides};
use ::ruff::logging::{set_up_logging, LogLevel};
use ::ruff::printer::{Printer, Violations};
use ::ruff::resolver::{resolve_settings, FileDiscovery, PyprojectDiscovery, Relativity};
use ::ruff::resolver::{
resolve_settings_with_processor, ConfigProcessor, FileDiscovery, PyprojectDiscovery, Relativity,
};
use ::ruff::settings::configuration::Configuration;
use ::ruff::settings::types::SerializationFormat;
use ::ruff::settings::{pyproject, Settings};
#[cfg(feature = "update-informer")]
use ::ruff::updates;
use ::ruff::{commands, warn_user_once};
use ::ruff::{fix, fs, warn_user_once};
use anyhow::Result;
use clap::{CommandFactory, Parser};
use cli::{extract_log_level, Cli, Overrides};
use colored::Colorize;
use notify::{recommended_watcher, RecursiveMode, Watcher};
use path_absolutize::path_dedot;
use printer::{Printer, Violations};
mod cache;
mod cli;
mod commands;
mod diagnostics;
mod iterators;
mod printer;
#[cfg(all(feature = "update-informer"))]
pub mod updates;
/// Resolve the relevant settings strategy and defaults for the current
/// invocation.
@@ -31,14 +47,14 @@ fn resolve(
if isolated {
// First priority: if we're running in isolated mode, use the default settings.
let mut config = Configuration::default();
config.apply(overrides.clone());
overrides.process_config(&mut config);
let settings = Settings::from_configuration(config, &path_dedot::CWD)?;
Ok(PyprojectDiscovery::Fixed(settings))
} else if let Some(pyproject) = config {
// Second priority: the user specified a `pyproject.toml` file. Use that
// `pyproject.toml` for _all_ configuration, and resolve paths relative to the
// current working directory. (This matches ESLint's behavior.)
let settings = resolve_settings(pyproject, &Relativity::Cwd, Some(overrides))?;
let settings = resolve_settings_with_processor(pyproject, &Relativity::Cwd, overrides)?;
Ok(PyprojectDiscovery::Fixed(settings))
} else if let Some(pyproject) = pyproject::find_settings_toml(
stdin_filename
@@ -50,14 +66,14 @@ fn resolve(
// that directory. (With `Strategy::Hierarchical`, we'll end up finding
// the "closest" `pyproject.toml` file for every Python file later on,
// so these act as the "default" settings.)
let settings = resolve_settings(&pyproject, &Relativity::Parent, Some(overrides))?;
let settings = resolve_settings_with_processor(&pyproject, &Relativity::Parent, overrides)?;
Ok(PyprojectDiscovery::Hierarchical(settings))
} else if let Some(pyproject) = pyproject::find_user_settings_toml() {
// Fourth priority: find a user-specific `pyproject.toml`, but resolve all paths
// relative the current working directory. (With `Strategy::Hierarchical`, we'll
// end up the "closest" `pyproject.toml` file for every Python file later on, so
// these act as the "default" settings.)
let settings = resolve_settings(&pyproject, &Relativity::Cwd, Some(overrides))?;
let settings = resolve_settings_with_processor(&pyproject, &Relativity::Cwd, overrides)?;
Ok(PyprojectDiscovery::Hierarchical(settings))
} else {
// Fallback: load Ruff's default settings, and resolve all paths relative to the
@@ -65,13 +81,13 @@ fn resolve(
// "closest" `pyproject.toml` file for every Python file later on, so these act
// as the "default" settings.)
let mut config = Configuration::default();
config.apply(overrides.clone());
overrides.process_config(&mut config);
let settings = Settings::from_configuration(config, &path_dedot::CWD)?;
Ok(PyprojectDiscovery::Hierarchical(settings))
}
}
pub(crate) fn inner_main() -> Result<ExitCode> {
pub fn main() -> Result<ExitCode> {
// Extract command-line arguments.
let (cli, overrides) = Cli::parse().partition();
let log_level = extract_log_level(&cli);
@@ -129,7 +145,7 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
};
if let Some(code) = cli.explain {
commands::explain(&code, &format)?;
commands::explain(&code, format)?;
return Ok(ExitCode::SUCCESS);
}
if cli.show_settings {
@@ -152,13 +168,13 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
// but not apply fixes. That would allow us to avoid special-casing JSON
// here.
let autofix = if cli.diff {
fixer::Mode::Diff
fix::FixMode::Diff
} else if fix || fix_only {
fixer::Mode::Apply
fix::FixMode::Apply
} else if matches!(format, SerializationFormat::Json) {
fixer::Mode::Generate
fix::FixMode::Generate
} else {
fixer::Mode::None
fix::FixMode::None
};
let violations = if cli.diff || fix_only {
Violations::Hide
@@ -176,7 +192,7 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
let printer = Printer::new(&format, &log_level, &autofix, &violations);
if cli.watch {
if !matches!(autofix, fixer::Mode::None) {
if !matches!(autofix, fix::FixMode::None) {
warn_user_once!("--fix is not enabled in watch mode.");
}
if format != SerializationFormat::Text {
@@ -193,9 +209,9 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
&file_strategy,
&overrides,
cache.into(),
fixer::Mode::None,
fix::FixMode::None,
)?;
printer.write_continuously(&messages)?;
printer.write_continuously(&messages);
// Configure the file watcher.
let (tx, rx) = channel();
@@ -223,9 +239,9 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
&file_strategy,
&overrides,
cache.into(),
fixer::Mode::None,
fix::FixMode::None,
)?;
printer.write_continuously(&messages)?;
printer.write_continuously(&messages);
}
}
Err(err) => return Err(err.into()),
@@ -243,7 +259,7 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
// Generate lint violations.
let diagnostics = if is_stdin {
commands::run_stdin(
cli.stdin_filename.as_deref(),
cli.stdin_filename.map(fs::normalize_path).as_deref(),
&pyproject_strategy,
&file_strategy,
&overrides,
@@ -263,7 +279,7 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
// Always try to print violations (the printer itself may suppress output),
// unless we're writing fixes via stdin (in which case, the transformed
// source code goes to stdout).
if !(is_stdin && matches!(autofix, fixer::Mode::Apply | fixer::Mode::Diff)) {
if !(is_stdin && matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff)) {
printer.write_once(&diagnostics)?;
}

View File

@@ -6,18 +6,16 @@ use annotate_snippets::snippet::{Annotation, AnnotationType, Slice, Snippet, Sou
use anyhow::Result;
use colored::Colorize;
use itertools::iterate;
use rustpython_parser::ast::Location;
use ruff::fs::relativize_path;
use ruff::logging::LogLevel;
use ruff::message::{Location, Message};
use ruff::registry::RuleCode;
use ruff::settings::types::SerializationFormat;
use ruff::{fix, notify_user};
use serde::Serialize;
use serde_json::json;
use crate::autofix::fixer;
use crate::fs::relativize_path;
use crate::linter::Diagnostics;
use crate::logging::LogLevel;
use crate::message::Message;
use crate::notify_user;
use crate::registry::RuleCode;
use crate::settings::types::SerializationFormat;
use crate::diagnostics::Diagnostics;
/// Enum to control whether lint violations are shown to the user.
pub enum Violations {
@@ -46,7 +44,7 @@ struct ExpandedMessage<'a> {
pub struct Printer<'a> {
format: &'a SerializationFormat,
log_level: &'a LogLevel,
autofix: &'a fixer::Mode,
autofix: &'a fix::FixMode,
violations: &'a Violations,
}
@@ -54,7 +52,7 @@ impl<'a> Printer<'a> {
pub fn new(
format: &'a SerializationFormat,
log_level: &'a LogLevel,
autofix: &'a fixer::Mode,
autofix: &'a fix::FixMode,
violations: &'a Violations,
) -> Self {
Self {
@@ -84,7 +82,7 @@ impl<'a> Printer<'a> {
println!("Found {remaining} error(s).");
}
if !matches!(self.autofix, fixer::Mode::Apply) {
if !matches!(self.autofix, fix::FixMode::Apply) {
let num_fixable = diagnostics
.messages
.iter()
@@ -98,9 +96,9 @@ impl<'a> Printer<'a> {
Violations::Hide => {
let fixed = diagnostics.fixed;
if fixed > 0 {
if matches!(self.autofix, fixer::Mode::Apply) {
if matches!(self.autofix, fix::FixMode::Apply) {
println!("Fixed {fixed} error(s).");
} else if matches!(self.autofix, fixer::Mode::Diff) {
} else if matches!(self.autofix, fix::FixMode::Diff) {
println!("Would fix {fixed} error(s).");
}
}
@@ -241,7 +239,7 @@ impl<'a> Printer<'a> {
"::error title=Ruff \
({}),file={},line={},col={},endLine={},endColumn={}::{}",
message.kind.code(),
relativize_path(Path::new(&message.filename)),
message.filename,
message.location.row(),
message.location.column(),
message.end_location.row(),
@@ -265,7 +263,7 @@ impl<'a> Printer<'a> {
"severity": "major",
"fingerprint": message.kind.code(),
"location": {
"path": relativize_path(Path::new(&message.filename)),
"path": message.filename,
"lines": {
"begin": message.location.row(),
"end": message.end_location.row()
@@ -283,9 +281,9 @@ impl<'a> Printer<'a> {
Ok(())
}
pub fn write_continuously(&self, diagnostics: &Diagnostics) -> Result<()> {
pub fn write_continuously(&self, diagnostics: &Diagnostics) {
if matches!(self.log_level, LogLevel::Silent) {
return Ok(());
return;
}
if self.log_level >= &LogLevel::Default {
@@ -303,10 +301,9 @@ impl<'a> Printer<'a> {
print_message(message);
}
}
Ok(())
}
#[allow(clippy::unused_self)]
pub fn clear_screen(&self) -> Result<()> {
#[cfg(not(target_family = "wasm"))]
clearscreen::clear()?;

View File

@@ -9,7 +9,7 @@ use std::time::Duration;
use std::{fs, process, str};
use anyhow::{anyhow, Context, Result};
use assert_cmd::{crate_name, Command};
use assert_cmd::Command;
use itertools::Itertools;
use log::info;
use ruff::logging::{set_up_logging, LogLevel};
@@ -24,6 +24,8 @@ struct Blackd {
client: ureq::Agent,
}
const BIN_NAME: &str = "ruff";
impl Blackd {
pub fn new() -> Result<Self> {
// Get free TCP port to run on
@@ -104,7 +106,7 @@ fn run_test(path: &Path, blackd: &Blackd, ruff_args: &[&str]) -> Result<()> {
let input = fs::read(path)?;
// Step 1: Run `ruff` on the input.
let step_1 = &Command::cargo_bin(crate_name!())?
let step_1 = &Command::cargo_bin(BIN_NAME)?
.args(ruff_args)
.write_stdin(input)
.assert()
@@ -121,7 +123,7 @@ fn run_test(path: &Path, blackd: &Blackd, ruff_args: &[&str]) -> Result<()> {
let step_2_output = blackd.check(&step_1_output)?;
// Step 3: Re-run `ruff` on the input.
let step_3 = &Command::cargo_bin(crate_name!())?
let step_3 = &Command::cargo_bin(BIN_NAME)?
.args(ruff_args)
.write_stdin(step_2_output.clone())
.assert();

View File

@@ -3,11 +3,14 @@
use std::str;
use anyhow::Result;
use assert_cmd::{crate_name, Command};
use assert_cmd::Command;
use path_absolutize::path_dedot;
const BIN_NAME: &str = "ruff";
#[test]
fn test_stdin_success() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
cmd.args(["-", "--format", "text"])
.write_stdin("")
.assert()
@@ -17,7 +20,7 @@ fn test_stdin_success() -> Result<()> {
#[test]
fn test_stdin_error() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text"])
.write_stdin("import os\n")
@@ -33,7 +36,7 @@ fn test_stdin_error() -> Result<()> {
#[test]
fn test_stdin_filename() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--stdin-filename", "F401.py"])
.write_stdin("import os\n")
@@ -49,7 +52,7 @@ fn test_stdin_filename() -> Result<()> {
#[test]
fn test_stdin_json() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "json", "--stdin-filename", "F401.py"])
.write_stdin("import os\n")
@@ -57,41 +60,44 @@ fn test_stdin_json() -> Result<()> {
.failure();
assert_eq!(
str::from_utf8(&output.get_output().stdout)?,
r#"[
{
format!(
r#"[
{{
"code": "F401",
"message": "`os` imported but unused",
"fix": {
"fix": {{
"content": "",
"message": "Remove unused import: `os`",
"location": {
"location": {{
"row": 1,
"column": 0
},
"end_location": {
}},
"end_location": {{
"row": 2,
"column": 0
}
},
"location": {
}}
}},
"location": {{
"row": 1,
"column": 8
},
"end_location": {
}},
"end_location": {{
"row": 1,
"column": 10
},
"filename": "F401.py"
}
}},
"filename": "{}/F401.py"
}}
]
"#
"#,
path_dedot::CWD.to_str().unwrap()
)
);
Ok(())
}
#[test]
fn test_stdin_autofix() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.write_stdin("import os\nimport sys\n\nprint(sys.version)\n")
@@ -106,7 +112,7 @@ fn test_stdin_autofix() -> Result<()> {
#[test]
fn test_stdin_autofix_when_not_fixable_should_still_print_contents() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.write_stdin("import os\nimport sys\n\nif (1, 2):\n print(sys.version)\n")
@@ -121,7 +127,7 @@ fn test_stdin_autofix_when_not_fixable_should_still_print_contents() -> Result<(
#[test]
fn test_stdin_autofix_when_no_issues_should_still_print_contents() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.write_stdin("import sys\n\nprint(sys.version)\n")
@@ -136,7 +142,7 @@ fn test_stdin_autofix_when_no_issues_should_still_print_contents() -> Result<()>
#[test]
fn test_show_source() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--show-source"])
.write_stdin("l = 1")

View File

@@ -1,8 +1,12 @@
[package]
name = "ruff_dev"
version = "0.0.217"
version = "0.0.221"
edition = "2021"
[lib]
name = "ruff_dev"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }
clap = { version = "4.0.1", features = ["derive"] }
@@ -10,6 +14,7 @@ itertools = { version = "0.10.5" }
libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "f2f0b7a487a8725d161fe8b3ed73a6758b21e177" }
once_cell = { version = "1.16.0" }
ruff = { path = ".." }
ruff_cli = { path = "../ruff_cli" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }

View File

@@ -2,7 +2,7 @@
use anyhow::Result;
use clap::{Args, CommandFactory};
use ruff::cli::Cli as MainCli;
use ruff_cli::Cli as MainCli;
use crate::utils::replace_readme_section;

View File

@@ -5,9 +5,7 @@ use std::path::PathBuf;
use anyhow::Result;
use clap::Args;
use ruff::source_code_generator::SourceCodeGenerator;
use ruff::source_code_locator::SourceCodeLocator;
use ruff::source_code_style::SourceCodeStyleDetector;
use ruff::source_code::{Generator, Locator, Stylist};
use rustpython_parser::parser;
#[derive(Args)]
@@ -20,9 +18,9 @@ pub struct Cli {
pub fn main(cli: &Cli) -> Result<()> {
let contents = fs::read_to_string(&cli.file)?;
let python_ast = parser::parse_program(&contents, &cli.file.to_string_lossy())?;
let locator = SourceCodeLocator::new(&contents);
let stylist = SourceCodeStyleDetector::from_contents(&contents, &locator);
let mut generator: SourceCodeGenerator = (&stylist).into();
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let mut generator: Generator = (&stylist).into();
generator.unparse_suite(&python_ast);
println!("{}", generator.generate());
Ok(())

View File

@@ -1,10 +1,11 @@
[package]
name = "ruff_macros"
version = "0.0.217"
version = "0.0.221"
edition = "2021"
[lib]
proc-macro = true
doctest = false
[dependencies]
once_cell = { version = "1.17.0" }

View File

@@ -12,9 +12,12 @@
)]
#![forbid(unsafe_code)]
use syn::{parse_macro_input, DeriveInput};
use proc_macro2::Span;
use quote::quote;
use syn::{parse_macro_input, DeriveInput, Ident};
mod config;
mod prefixes;
mod rule_code_prefix;
#[proc_macro_derive(ConfigurationOptions, attributes(option, doc, option_group))]
@@ -34,3 +37,23 @@ pub fn derive_rule_code_prefix(input: proc_macro::TokenStream) -> proc_macro::To
.unwrap_or_else(syn::Error::into_compile_error)
.into()
}
#[proc_macro]
pub fn origin_by_code(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let ident = parse_macro_input!(item as Ident).to_string();
let mut iter = prefixes::PREFIX_TO_ORIGIN.iter();
let origin = loop {
let (prefix, origin) = iter
.next()
.unwrap_or_else(|| panic!("code doesn't start with any recognized prefix: {ident}"));
if ident.starts_with(prefix) {
break origin;
}
};
let prefix = Ident::new(origin, Span::call_site());
quote! {
RuleOrigin::#prefix
}
.into()
}

View File

@@ -0,0 +1,53 @@
// Longer prefixes should come first so that you can find an origin for a code
// by simply picking the first entry that starts with the given prefix.
pub const PREFIX_TO_ORIGIN: &[(&str, &str)] = &[
("ANN", "Flake8Annotations"),
("ARG", "Flake8UnusedArguments"),
("A", "Flake8Builtins"),
("BLE", "Flake8BlindExcept"),
("B", "Flake8Bugbear"),
("C4", "Flake8Comprehensions"),
("C9", "McCabe"),
("DTZ", "Flake8Datetimez"),
("D", "Pydocstyle"),
("ERA", "Eradicate"),
("EM", "Flake8ErrMsg"),
("E", "Pycodestyle"),
("FBT", "Flake8BooleanTrap"),
("F", "Pyflakes"),
("ICN", "Flake8ImportConventions"),
("ISC", "Flake8ImplicitStrConcat"),
("I", "Isort"),
("N", "PEP8Naming"),
("PD", "PandasVet"),
("PGH", "PygrepHooks"),
("PL", "Pylint"),
("PT", "Flake8PytestStyle"),
("Q", "Flake8Quotes"),
("RET", "Flake8Return"),
("SIM", "Flake8Simplify"),
("S", "Flake8Bandit"),
("T10", "Flake8Debugger"),
("T20", "Flake8Print"),
("TID", "Flake8TidyImports"),
("UP", "Pyupgrade"),
("W", "Pycodestyle"),
("YTT", "Flake82020"),
("PIE", "Flake8Pie"),
("RUF", "Ruff"),
];
#[cfg(test)]
mod tests {
use super::PREFIX_TO_ORIGIN;
#[test]
fn order() {
for (idx, (prefix, _)) in PREFIX_TO_ORIGIN.iter().enumerate() {
for (prior_prefix, _) in PREFIX_TO_ORIGIN[..idx].iter() {
assert!(!prefix.starts_with(prior_prefix));
}
}
}
}
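To make the ordering requirement above concrete: lookup walks the table and takes the first entry whose prefix matches, so a shorter prefix listed too early would shadow longer ones. A quick illustration (sketched in Python for brevity, with a two-entry table rather than the real one):

```python
# If ("A", ...) came before ("ANN", ...), "ANN001" would be misattributed.
PREFIX_TO_ORIGIN = [("ANN", "Flake8Annotations"), ("A", "Flake8Builtins")]

def origin_for(code: str) -> str:
    return next(origin for prefix, origin in PREFIX_TO_ORIGIN if code.startswith(prefix))

assert origin_for("ANN001") == "Flake8Annotations"
assert origin_for("A003") == "Flake8Builtins"
```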

View File

@@ -34,7 +34,7 @@ def main(*, plugin: str, url: str) -> None:
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/rules.rs"), "w+") as fp:
fp.write("use crate::checkers::ast::Checker;\n")
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/mod.rs"), "w+") as fp:
fp.write("pub mod rules;\n")
fp.write("pub(crate) mod rules;\n")
fp.write("\n")
fp.write(
"""#[cfg(test)]

View File

@@ -116,33 +116,6 @@ impl Violation for %s {
fp.write("\n")
has_written = True
# Add the relevant code-to-origin pair to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
content = fp.read()
seen_impl = False
has_written = False
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")
if has_written:
continue
if line.startswith("impl RuleCode"):
seen_impl = True
continue
if not seen_impl:
continue
if line.strip() == f"// {origin}":
indent = line.split("//")[0]
fp.write(f"{indent}RuleCode::{code} => RuleOrigin::{pascal_case(origin)},")
fp.write("\n")
has_written = True
if __name__ == "__main__":
parser = argparse.ArgumentParser(

View File

@@ -1,5 +1,12 @@
use rustpython_ast::{Expr, Stmt, StmtKind};
pub fn name(stmt: &Stmt) -> &str {
match &stmt.node {
StmtKind::FunctionDef { name, .. } | StmtKind::AsyncFunctionDef { name, .. } => name,
_ => panic!("Expected StmtKind::FunctionDef | StmtKind::AsyncFunctionDef"),
}
}
pub fn decorator_list(stmt: &Stmt) -> &Vec<Expr> {
match &stmt.node {
StmtKind::FunctionDef { decorator_list, .. }

View File

@@ -388,6 +388,12 @@ impl<'a> From<&'a Box<Expr>> for Box<ComparableExpr<'a>> {
}
}
impl<'a> From<&'a Box<Expr>> for ComparableExpr<'a> {
fn from(expr: &'a Box<Expr>) -> Self {
(&**expr).into()
}
}
impl<'a> From<&'a Expr> for ComparableExpr<'a> {
fn from(expr: &'a Expr) -> Self {
match &expr.node {

View File

@@ -1,10 +1,8 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::Expr;
use crate::ast::helpers::{
collect_call_paths, dealias_call_path, match_call_path, to_module_and_member,
};
use crate::ast::helpers::to_call_path;
use crate::ast::types::{Scope, ScopeKind};
use crate::checkers::ast::Checker;
const CLASS_METHODS: [&str; 3] = ["__new__", "__init_subclass__", "__class_getitem__"];
const METACLASS_BASES: [(&str, &str); 2] = [("", "type"), ("abc", "ABCMeta")];
@@ -18,11 +16,10 @@ pub enum FunctionType {
/// Classify a function based on its scope, name, and decorators.
pub fn classify(
checker: &Checker,
scope: &Scope,
name: &str,
decorator_list: &[Expr],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
classmethod_decorators: &[String],
staticmethod_decorators: &[String],
) -> FunctionType {
@@ -33,17 +30,18 @@ pub fn classify(
if CLASS_METHODS.contains(&name)
|| scope.bases.iter().any(|expr| {
// The class itself extends a known metaclass, so all methods are class methods.
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
METACLASS_BASES
.iter()
.any(|(module, member)| match_call_path(&call_path, module, member, from_imports))
checker.resolve_call_path(expr).map_or(false, |call_path| {
METACLASS_BASES
.iter()
.any(|(module, member)| call_path == [*module, *member])
})
})
|| decorator_list.iter().any(|expr| {
// The method is decorated with a class method decorator (like `@classmethod`).
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
classmethod_decorators.iter().any(|decorator| {
let (module, member) = to_module_and_member(decorator);
match_call_path(&call_path, module, member, from_imports)
checker.resolve_call_path(expr).map_or(false, |call_path| {
classmethod_decorators
.iter()
.any(|decorator| call_path == to_call_path(decorator))
})
})
{
@@ -51,10 +49,10 @@ pub fn classify(
} else if decorator_list.iter().any(|expr| {
// The method is decorated with a static method decorator (like
// `@staticmethod`).
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
staticmethod_decorators.iter().any(|decorator| {
let (module, member) = to_module_and_member(decorator);
match_call_path(&call_path, module, member, from_imports)
checker.resolve_call_path(expr).map_or(false, |call_path| {
staticmethod_decorators
.iter()
.any(|decorator| call_path == to_call_path(decorator))
})
}) {
FunctionType::StaticMethod

View File

@@ -12,9 +12,8 @@ use rustpython_parser::lexer::Tok;
use rustpython_parser::token::StringKind;
use crate::ast::types::{Binding, BindingKind, Range};
use crate::source_code_generator::SourceCodeGenerator;
use crate::source_code_style::SourceCodeStyleDetector;
use crate::SourceCodeLocator;
use crate::checkers::ast::Checker;
use crate::source_code::{Generator, Locator, Stylist};
/// Create an `Expr` with default location from an `ExprKind`.
pub fn create_expr(node: ExprKind) -> Expr {
@@ -27,15 +26,15 @@ pub fn create_stmt(node: StmtKind) -> Stmt {
}
/// Generate source code from an `Expr`.
pub fn unparse_expr(expr: &Expr, stylist: &SourceCodeStyleDetector) -> String {
let mut generator: SourceCodeGenerator = stylist.into();
pub fn unparse_expr(expr: &Expr, stylist: &Stylist) -> String {
let mut generator: Generator = stylist.into();
generator.unparse_expr(expr, 0);
generator.generate()
}
/// Generate source code from an `Stmt`.
pub fn unparse_stmt(stmt: &Stmt, stylist: &SourceCodeStyleDetector) -> String {
let mut generator: SourceCodeGenerator = stylist.into();
pub fn unparse_stmt(stmt: &Stmt, stylist: &Stylist) -> String {
let mut generator: Generator = stylist.into();
generator.unparse_stmt(stmt);
generator.generate()
}
@@ -56,150 +55,42 @@ fn collect_call_path_inner<'a>(expr: &'a Expr, parts: &mut Vec<&'a str>) {
}
}
/// Convert an `Expr` to its call path (like `List`, or `typing.List`).
pub fn compose_call_path(expr: &Expr) -> Option<String> {
let segments = collect_call_paths(expr);
if segments.is_empty() {
None
} else {
Some(segments.join("."))
}
}
/// Convert an `Expr` to its call path segments (like ["typing", "List"]).
pub fn collect_call_paths(expr: &Expr) -> Vec<&str> {
pub fn collect_call_path(expr: &Expr) -> Vec<&str> {
let mut segments = vec![];
collect_call_path_inner(expr, &mut segments);
segments
}
/// Rewrite any import aliases on a call path.
pub fn dealias_call_path<'a>(
call_path: Vec<&'a str>,
import_aliases: &FxHashMap<&str, &'a str>,
) -> Vec<&'a str> {
if let Some(head) = call_path.first() {
if let Some(origin) = import_aliases.get(head) {
let tail = &call_path[1..];
let mut call_path: Vec<&str> = vec![];
call_path.extend(origin.split('.'));
call_path.extend(tail);
call_path
} else {
call_path
}
/// Convert an `Expr` to its call path (like `List`, or `typing.List`).
pub fn compose_call_path(expr: &Expr) -> Option<String> {
let call_path = collect_call_path(expr);
if call_path.is_empty() {
None
} else {
call_path
Some(format_call_path(&call_path))
}
}
/// Return `true` if the `Expr` is a reference to `${module}.${target}`.
///
/// Useful for, e.g., ensuring that a `Union` reference represents
/// `typing.Union`.
pub fn match_module_member(
expr: &Expr,
module: &str,
member: &str,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> bool {
match_call_path(
&dealias_call_path(collect_call_paths(expr), import_aliases),
module,
member,
from_imports,
)
}
/// Return `true` if the `call_path` is a reference to `${module}.${target}`.
///
/// Optimized version of `match_module_member` for pre-computed call paths.
pub fn match_call_path(
call_path: &[&str],
module: &str,
member: &str,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
) -> bool {
// If we have no segments, we can't ever match.
let num_segments = call_path.len();
if num_segments == 0 {
return false;
}
// If the last segment doesn't match the member, we can't ever match.
if call_path[num_segments - 1] != member {
return false;
}
// We now only need the module path, so throw out the member name.
let call_path = &call_path[..num_segments - 1];
let num_segments = call_path.len();
// Case (1): It's a builtin (like `list`).
// Case (2a): We imported from the parent (`from typing.re import Match`,
// `Match`).
// Case (2b): We imported star from the parent (`from typing.re import *`,
// `Match`).
if num_segments == 0 {
module.is_empty()
|| from_imports.get(module).map_or(false, |imports| {
imports.contains(member) || imports.contains("*")
})
/// Format a call path for display.
pub fn format_call_path(call_path: &[&str]) -> String {
if call_path
.first()
.expect("Unable to format empty call path")
.is_empty()
{
call_path[1..].join(".")
} else {
let components: Vec<&str> = module.split('.').collect();
// Case (3a): it's a fully qualified call path (`import typing`,
// `typing.re.Match`). Case (3b): it's a fully qualified call path (`import
// typing.re`, `typing.re.Match`).
if components == call_path {
return true;
}
// Case (4): We imported from the grandparent (`from typing import re`,
// `re.Match`)
let num_matches = (0..components.len())
.take(num_segments)
.take_while(|i| components[components.len() - 1 - i] == call_path[num_segments - 1 - i])
.count();
if num_matches > 0 {
let cut = components.len() - num_matches;
// TODO(charlie): Rewrite to avoid this allocation.
let module = components[..cut].join(".");
let member = components[cut];
if from_imports
.get(&module.as_str())
.map_or(false, |imports| imports.contains(member))
{
return true;
}
}
false
call_path.join(".")
}
}
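// A small sketch of the display convention above: an empty leading segment
// (the form `to_call_path` below produces for unqualified names) is dropped
// when the path is formatted.
#[test]
fn format_call_path_sketch() {
    assert_eq!(format_call_path(&["typing", "List"]), "typing.List");
    assert_eq!(format_call_path(&["", "list"]), "list");
}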
/// Return `true` if the `Expr` contains a reference to `${module}.${target}`.
pub fn contains_call_path(
expr: &Expr,
module: &str,
member: &str,
import_aliases: &FxHashMap<&str, &str>,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
) -> bool {
pub fn contains_call_path(checker: &Checker, expr: &Expr, target: &[&str]) -> bool {
any_over_expr(expr, &|expr| {
let call_path = collect_call_paths(expr);
if !call_path.is_empty() {
if match_call_path(
&dealias_call_path(call_path, import_aliases),
module,
member,
from_imports,
) {
return true;
}
}
false
checker
.resolve_call_path(expr)
.map_or(false, |call_path| call_path == target)
})
}
@@ -391,13 +282,13 @@ pub fn extract_handler_names(handlers: &[Excepthandler]) -> Vec<Vec<&str>> {
if let Some(type_) = type_ {
if let ExprKind::Tuple { elts, .. } = &type_.node {
for type_ in elts {
let call_path = collect_call_paths(type_);
let call_path = collect_call_path(type_);
if !call_path.is_empty() {
handler_names.push(call_path);
}
}
} else {
let call_path = collect_call_paths(type_);
let call_path = collect_call_path(type_);
if !call_path.is_empty() {
handler_names.push(call_path);
}
@@ -430,6 +321,13 @@ pub fn collect_arg_names<'a>(arguments: &'a Arguments) -> FxHashSet<&'a str> {
arg_names
}
/// Returns `true` if a statement or expression includes at least one comment.
pub fn has_comments<T>(located: &Located<T>, locator: &Locator) -> bool {
lexer::make_tokenizer(&locator.slice_source_code_range(&Range::from_located(located)))
.flatten()
.any(|(_, tok, _)| matches!(tok, Tok::Comment(..)))
}
/// Returns `true` if a call is an argumented `super` invocation.
pub fn is_super_call_with_arguments(func: &Expr, args: &[Expr]) -> bool {
if let ExprKind::Name { id, .. } = &func.node {
@@ -453,12 +351,37 @@ pub fn format_import_from(level: Option<&usize>, module: Option<&str>) -> String
module_name
}
/// Format the member reference name for a relative import.
pub fn format_import_from_member(
level: Option<&usize>,
module: Option<&str>,
member: &str,
) -> String {
let mut full_name = String::with_capacity(
level.map_or(0, |level| *level)
+ module.as_ref().map_or(0, |module| module.len())
+ 1
+ member.len(),
);
if let Some(level) = level {
for _ in 0..*level {
full_name.push('.');
}
}
if let Some(module) = module {
full_name.push_str(module);
full_name.push('.');
}
full_name.push_str(member);
full_name
}
/// Split a target string (like `typing.List`) into (`typing`, `List`).
pub fn to_module_and_member(target: &str) -> (&str, &str) {
if let Some(index) = target.rfind('.') {
(&target[..index], &target[index + 1..])
pub fn to_call_path(target: &str) -> Vec<&str> {
if target.contains('.') {
target.split('.').collect()
} else {
("", target)
vec!["", target]
}
}
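// Brief usage sketches for the two relative-import helpers above, derived
// directly from their implementations.
#[test]
fn relative_import_helpers_sketch() {
    assert_eq!(
        format_import_from_member(Some(&2), Some("foo"), "bar"),
        "..foo.bar"
    );
    assert_eq!(format_import_from_member(None, Some("foo"), "bar"), "foo.bar");
    assert_eq!(to_call_path("typing.List"), vec!["typing", "List"]);
    assert_eq!(to_call_path("list"), vec!["", "list"]);
}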
@@ -476,14 +399,14 @@ pub fn to_absolute(relative: Location, base: Location) -> Location {
}
/// Return `true` if a `Stmt` has leading content.
pub fn match_leading_content(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn match_leading_content(stmt: &Stmt, locator: &Locator) -> bool {
let range = Range::new(Location::new(stmt.location.row(), 0), stmt.location);
let prefix = locator.slice_source_code_range(&range);
prefix.chars().any(|char| !char.is_whitespace())
}
/// Return `true` if a `Stmt` has trailing content.
pub fn match_trailing_content(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn match_trailing_content(stmt: &Stmt, locator: &Locator) -> bool {
let range = Range::new(
stmt.end_location.unwrap(),
Location::new(stmt.end_location.unwrap().row() + 1, 0),
@@ -501,7 +424,7 @@ pub fn match_trailing_content(stmt: &Stmt, locator: &SourceCodeLocator) -> bool
}
/// Return the number of trailing empty lines following a statement.
pub fn count_trailing_lines(stmt: &Stmt, locator: &SourceCodeLocator) -> usize {
pub fn count_trailing_lines(stmt: &Stmt, locator: &Locator) -> usize {
let suffix =
locator.slice_source_code_at(&Location::new(stmt.end_location.unwrap().row() + 1, 0));
suffix
@@ -513,7 +436,7 @@ pub fn count_trailing_lines(stmt: &Stmt, locator: &SourceCodeLocator) -> usize {
/// Return the appropriate visual `Range` for any message that spans a `Stmt`.
/// Specifically, this method returns the range of a function or class name,
/// rather than that of the entire function or class body.
pub fn identifier_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Range {
pub fn identifier_range(stmt: &Stmt, locator: &Locator) -> Range {
if matches!(
stmt.node,
StmtKind::ClassDef { .. }
@@ -532,7 +455,7 @@ pub fn identifier_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Range {
}
/// Like `identifier_range`, but accepts a `Binding`.
pub fn binding_range(binding: &Binding, locator: &SourceCodeLocator) -> Range {
pub fn binding_range(binding: &Binding, locator: &Locator) -> Range {
if matches!(
binding.kind,
BindingKind::ClassDefinition | BindingKind::FunctionDefinition
@@ -548,7 +471,7 @@ pub fn binding_range(binding: &Binding, locator: &SourceCodeLocator) -> Range {
}
// Return the ranges of `Name` tokens within a specified node.
pub fn find_names<T>(located: &Located<T>, locator: &SourceCodeLocator) -> Vec<Range> {
pub fn find_names<T>(located: &Located<T>, locator: &Locator) -> Vec<Range> {
let contents = locator.slice_source_code_range(&Range::from_located(located));
lexer::make_tokenizer_located(&contents, located.location)
.flatten()
@@ -561,10 +484,7 @@ pub fn find_names<T>(located: &Located<T>, locator: &SourceCodeLocator) -> Vec<R
}
/// Return the `Range` of `name` in `Excepthandler`.
pub fn excepthandler_name_range(
handler: &Excepthandler,
locator: &SourceCodeLocator,
) -> Option<Range> {
pub fn excepthandler_name_range(handler: &Excepthandler, locator: &Locator) -> Option<Range> {
let ExcepthandlerKind::ExceptHandler {
name, type_, body, ..
} = &handler.node;
@@ -587,7 +507,7 @@ pub fn excepthandler_name_range(
}
/// Return the `Range` of `except` in `Excepthandler`.
pub fn except_range(handler: &Excepthandler, locator: &SourceCodeLocator) -> Range {
pub fn except_range(handler: &Excepthandler, locator: &Locator) -> Range {
let ExcepthandlerKind::ExceptHandler { body, type_, .. } = &handler.node;
let end = if let Some(type_) = type_ {
type_.location
@@ -612,7 +532,7 @@ pub fn except_range(handler: &Excepthandler, locator: &SourceCodeLocator) -> Ran
}
/// Find f-strings that don't contain any formatted values in a `JoinedStr`.
pub fn find_useless_f_strings(expr: &Expr, locator: &SourceCodeLocator) -> Vec<(Range, Range)> {
pub fn find_useless_f_strings(expr: &Expr, locator: &Locator) -> Vec<(Range, Range)> {
let contents = locator.slice_source_code_range(&Range::from_located(expr));
lexer::make_tokenizer_located(&contents, expr.location)
.flatten()
@@ -649,7 +569,7 @@ pub fn find_useless_f_strings(expr: &Expr, locator: &SourceCodeLocator) -> Vec<(
}
/// Return the `Range` of `else` in `For`, `AsyncFor`, and `While` statements.
pub fn else_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Range> {
pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
match &stmt.node {
StmtKind::For { body, orelse, .. }
| StmtKind::AsyncFor { body, orelse, .. }
@@ -683,7 +603,7 @@ pub fn else_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Range> {
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_continuation(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn preceded_by_continuation(stmt: &Stmt, locator: &Locator) -> bool {
// Does the previous line end in a continuation? This will have a specific
// false-positive, which is that if the previous line ends in a comment, it
// will be treated as a continuation. So we should only use this information to
@@ -704,16 +624,31 @@ pub fn preceded_by_continuation(stmt: &Stmt, locator: &SourceCodeLocator) -> boo
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &Locator) -> bool {
match_leading_content(stmt, locator) || preceded_by_continuation(stmt, locator)
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements following it.
pub fn followed_by_multi_statement_line(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn followed_by_multi_statement_line(stmt: &Stmt, locator: &Locator) -> bool {
match_trailing_content(stmt, locator)
}
/// Return `true` if a `Stmt` is a docstring.
pub fn is_docstring_stmt(stmt: &Stmt) -> bool {
if let StmtKind::Expr { value } = &stmt.node {
matches!(
value.node,
ExprKind::Constant {
value: Constant::Str { .. },
..
}
)
} else {
false
}
}
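// A minimal sketch, assuming only the `rustpython_parser` and `anyhow` crates
// used by the tests at the bottom of this file: only a bare string-constant
// expression statement counts as a docstring.
#[test]
fn is_docstring_stmt_sketch() -> anyhow::Result<()> {
    let program =
        rustpython_parser::parser::parse_program("\"\"\"Docstring.\"\"\"\nx = 1", "<filename>")?;
    assert!(is_docstring_stmt(&program[0]));
    assert!(!is_docstring_stmt(&program[1]));
    Ok(())
}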
#[derive(Default)]
/// A simple representation of a call's positional and keyword arguments.
pub struct SimpleCallArgs<'a> {
@@ -759,188 +694,48 @@ impl<'a> SimpleCallArgs<'a> {
}
None
}
/// Get the number of positional and keyword arguments used.
#[allow(clippy::len_without_is_empty)]
pub fn len(&self) -> usize {
self.args.len() + self.kwargs.len()
}
}
#[cfg(test)]
mod tests {
use anyhow::Result;
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::Location;
use rustpython_parser::parser;
use crate::ast::helpers::{
else_range, identifier_range, match_module_member, match_trailing_content,
};
use crate::ast::helpers::{else_range, identifier_range, match_trailing_content};
use crate::ast::types::Range;
use crate::source_code_locator::SourceCodeLocator;
#[test]
fn builtin() -> Result<()> {
let expr = parser::parse_expression("list", "<filename>")?;
assert!(match_module_member(
&expr,
"",
"list",
&FxHashMap::default(),
&FxHashMap::default(),
));
Ok(())
}
#[test]
fn fully_qualified() -> Result<()> {
let expr = parser::parse_expression("typing.re.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::default(),
));
Ok(())
}
#[test]
fn unimported() -> Result<()> {
let expr = parser::parse_expression("Match", "<filename>")?;
assert!(!match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::default(),
));
let expr = parser::parse_expression("re.Match", "<filename>")?;
assert!(!match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::default(),
));
Ok(())
}
#[test]
fn from_star() -> Result<()> {
let expr = parser::parse_expression("Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["*"]))]),
&FxHashMap::default()
));
Ok(())
}
#[test]
fn from_parent() -> Result<()> {
let expr = parser::parse_expression("Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["Match"]))]),
&FxHashMap::default()
));
Ok(())
}
#[test]
fn from_grandparent() -> Result<()> {
let expr = parser::parse_expression("re.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing", FxHashSet::from_iter(["re"]))]),
&FxHashMap::default()
));
let expr = parser::parse_expression("match.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re.match",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["match"]))]),
&FxHashMap::default()
));
let expr = parser::parse_expression("re.match.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re.match",
"Match",
&FxHashMap::from_iter([("typing", FxHashSet::from_iter(["re"]))]),
&FxHashMap::default()
));
Ok(())
}
#[test]
fn from_alias() -> Result<()> {
let expr = parser::parse_expression("IMatch", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["Match"]))]),
&FxHashMap::from_iter([("IMatch", "Match")]),
));
Ok(())
}
#[test]
fn from_aliased_parent() -> Result<()> {
let expr = parser::parse_expression("t.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::from_iter([("t", "typing.re")]),
));
Ok(())
}
#[test]
fn from_aliased_grandparent() -> Result<()> {
let expr = parser::parse_expression("t.re.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::from_iter([("t", "typing")]),
));
Ok(())
}
use crate::source_code::Locator;
#[test]
fn trailing_content() -> Result<()> {
let contents = "x = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
let contents = "x = 1; y = 2";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(match_trailing_content(stmt, &locator));
let contents = "x = 1 ";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
let contents = "x = 1 # Comment";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
let contents = r#"
@@ -950,7 +745,7 @@ y = 2
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
Ok(())
@@ -961,7 +756,7 @@ y = 2
let contents = "def f(): pass".trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 4), Location::new(1, 5),)
@@ -975,7 +770,7 @@ def \
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(2, 2), Location::new(2, 3),)
@@ -984,7 +779,7 @@ def \
let contents = "class Class(): pass".trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 6), Location::new(1, 11),)
@@ -993,7 +788,7 @@ def \
let contents = "class Class: pass".trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 6), Location::new(1, 11),)
@@ -1007,7 +802,7 @@ class Class():
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(2, 6), Location::new(2, 11),)
@@ -1016,7 +811,7 @@ class Class():
let contents = r#"x = y + 1"#.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 0), Location::new(1, 9),)
@@ -1036,7 +831,7 @@ else:
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
let range = else_range(stmt, &locator).unwrap();
assert_eq!(range.location.row(), 3);
assert_eq!(range.location.column(), 0);

View File

@@ -74,7 +74,6 @@ pub enum ScopeKind<'a> {
Function(FunctionDef<'a>),
Generator,
Module,
Arg,
Lambda(Lambda<'a>),
}
@@ -105,7 +104,7 @@ impl<'a> Scope<'a> {
}
#[derive(Clone, Debug)]
pub enum BindingKind {
pub enum BindingKind<'a> {
Annotation,
Argument,
Assignment,
@@ -119,14 +118,14 @@ pub enum BindingKind {
Export(Vec<String>),
FutureImportation,
StarImportation(Option<usize>, Option<String>),
Importation(String, String),
FromImportation(String, String),
SubmoduleImportation(String, String),
Importation(&'a str, &'a str),
FromImportation(&'a str, String),
SubmoduleImportation(&'a str, &'a str),
}
#[derive(Clone, Debug)]
pub struct Binding<'a> {
pub kind: BindingKind,
pub kind: BindingKind<'a>,
pub range: Range,
/// The statement in which the `Binding` was defined.
pub source: Option<RefEquality<'a, Stmt>>,
@@ -169,19 +168,26 @@ impl<'a> Binding<'a> {
pub fn redefines(&self, existing: &'a Binding) -> bool {
match &self.kind {
BindingKind::Importation(_, full_name) | BindingKind::FromImportation(_, full_name) => {
if let BindingKind::SubmoduleImportation(_, existing_full_name) = &existing.kind {
return full_name == existing_full_name;
BindingKind::Importation(.., full_name) => {
if let BindingKind::SubmoduleImportation(.., existing) = &existing.kind {
return full_name == existing;
}
}
BindingKind::SubmoduleImportation(_, full_name) => {
if let BindingKind::Importation(_, existing_full_name)
| BindingKind::FromImportation(_, existing_full_name)
| BindingKind::SubmoduleImportation(_, existing_full_name) = &existing.kind
{
return full_name == existing_full_name;
BindingKind::FromImportation(.., full_name) => {
if let BindingKind::SubmoduleImportation(.., existing) = &existing.kind {
return full_name == existing;
}
}
BindingKind::SubmoduleImportation(.., full_name) => match &existing.kind {
BindingKind::Importation(.., existing)
| BindingKind::SubmoduleImportation(.., existing) => {
return full_name == existing;
}
BindingKind::FromImportation(.., existing) => {
return full_name == existing;
}
_ => {}
},
BindingKind::Annotation => {
return false;
}

View File

@@ -1,225 +1 @@
use std::borrow::Cow;
use std::collections::BTreeSet;
use itertools::Itertools;
use ropey::RopeBuilder;
use rustpython_parser::ast::Location;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::registry::Diagnostic;
use crate::source_code_locator::SourceCodeLocator;
#[derive(Debug, Copy, Clone, Hash)]
pub enum Mode {
Generate,
Apply,
Diff,
None,
}
impl From<bool> for Mode {
fn from(value: bool) -> Self {
if value {
Mode::Apply
} else {
Mode::None
}
}
}
/// Auto-fix errors in a file, and write the fixed source code to disk.
pub fn fix_file<'a>(
diagnostics: &'a [Diagnostic],
locator: &'a SourceCodeLocator<'a>,
) -> Option<(Cow<'a, str>, usize)> {
if diagnostics.iter().all(|check| check.fix.is_none()) {
return None;
}
Some(apply_fixes(
diagnostics.iter().filter_map(|check| check.fix.as_ref()),
locator,
))
}
/// Apply a series of fixes.
fn apply_fixes<'a>(
fixes: impl Iterator<Item = &'a Fix>,
locator: &'a SourceCodeLocator<'a>,
) -> (Cow<'a, str>, usize) {
let mut output = RopeBuilder::new();
let mut last_pos: Location = Location::new(1, 0);
let mut applied: BTreeSet<&Fix> = BTreeSet::default();
let mut num_fixed: usize = 0;
for fix in fixes.sorted_by_key(|fix| fix.location) {
// If we already applied an identical fix as part of another correction, skip
// any re-application.
if applied.contains(&fix) {
num_fixed += 1;
continue;
}
// Best-effort approach: if this fix overlaps with a fix we've already applied,
// skip it.
if last_pos > fix.location {
continue;
}
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice_source_code_range(&Range::new(last_pos, fix.location));
output.append(&slice);
// Add the patch itself.
output.append(&fix.content);
// Track that the fix was applied.
last_pos = fix.end_location;
applied.insert(fix);
num_fixed += 1;
}
// Add the remaining content.
let slice = locator.slice_source_code_at(&last_pos);
output.append(&slice);
(Cow::from(output.finish()), num_fixed)
}
#[cfg(test)]
mod tests {
use rustpython_parser::ast::Location;
use crate::autofix::fixer::apply_fixes;
use crate::autofix::Fix;
use crate::SourceCodeLocator;
#[test]
fn empty_file() {
let fixes = vec![];
let locator = SourceCodeLocator::new(r#""#);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(contents, "");
assert_eq!(fixed, 0);
}
#[test]
fn apply_single_replacement() {
let fixes = vec![Fix {
content: "Bar".to_string(),
location: Location::new(1, 8),
end_location: Location::new(1, 14),
}];
let locator = SourceCodeLocator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A(Bar):
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
#[test]
fn apply_single_removal() {
let fixes = vec![Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
}];
let locator = SourceCodeLocator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 1);
}
#[test]
fn apply_double_removal() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 16),
},
Fix {
content: String::new(),
location: Location::new(1, 16),
end_location: Location::new(1, 23),
},
];
let locator = SourceCodeLocator::new(
r#"
class A(object, object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 2);
}
#[test]
fn ignore_overlapping_fixes() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
},
Fix {
content: "ignored".to_string(),
location: Location::new(1, 9),
end_location: Location::new(1, 11),
},
];
let locator = SourceCodeLocator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
}

View File

@@ -9,10 +9,10 @@ use crate::ast::helpers;
use crate::ast::helpers::to_absolute;
use crate::ast::types::Range;
use crate::ast::whitespace::LinesWithTrailingNewline;
use crate::autofix::Fix;
use crate::cst::helpers::compose_module_path;
use crate::cst::matchers::match_module;
use crate::source_code_locator::SourceCodeLocator;
use crate::fix::Fix;
use crate::source_code::Locator;
/// Determine if a body contains only a single statement, taking into account
/// deleted.
@@ -78,7 +78,7 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
/// Return the location of a trailing semicolon following a `Stmt`, if it's part
/// of a multi-statement line.
fn trailing_semicolon(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Location> {
fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
let trimmed = line.trim();
@@ -100,7 +100,7 @@ fn trailing_semicolon(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Locati
}
/// Find the next valid break for a `Stmt` after a semicolon.
fn next_stmt_break(semicolon: Location, locator: &SourceCodeLocator) -> Location {
fn next_stmt_break(semicolon: Location, locator: &Locator) -> Location {
let start_location = Location::new(semicolon.row(), semicolon.column() + 1);
let contents = locator.slice_source_code_at(&start_location);
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
@@ -133,7 +133,7 @@ fn next_stmt_break(semicolon: Location, locator: &SourceCodeLocator) -> Location
}
/// Return `true` if a `Stmt` occurs at the end of a file.
fn is_end_of_file(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
fn is_end_of_file(stmt: &Stmt, locator: &Locator) -> bool {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
contents.is_empty()
}
@@ -155,7 +155,7 @@ pub fn delete_stmt(
stmt: &Stmt,
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &SourceCodeLocator,
locator: &Locator,
) -> Result<Fix> {
if parent
.map(|parent| is_lone_child(stmt, parent, deleted))
@@ -197,7 +197,7 @@ pub fn remove_unused_imports<'a>(
stmt: &Stmt,
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &SourceCodeLocator,
locator: &Locator,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(stmt));
let mut tree = match_module(&module_text)?;
@@ -210,14 +210,17 @@ pub fn remove_unused_imports<'a>(
Some(SmallStatement::Import(import_body)) => (&mut import_body.names, None),
Some(SmallStatement::ImportFrom(import_body)) => {
if let ImportNames::Aliases(names) = &mut import_body.names {
(names, import_body.module.as_ref())
(
names,
Some((&import_body.relative, import_body.module.as_ref())),
)
} else if let ImportNames::Star(..) = &import_body.names {
// Special-case: if the import is a `from ... import *`, then we delete the
// entire statement.
let mut found_star = false;
for unused_import in unused_imports {
let full_name = match import_body.module.as_ref() {
Some(module_name) => format!("{}.*", compose_module_path(module_name),),
Some(module_name) => format!("{}.*", compose_module_path(module_name)),
None => "*".to_string(),
};
if unused_import == full_name {
@@ -246,11 +249,25 @@ pub fn remove_unused_imports<'a>(
for unused_import in unused_imports {
let alias_index = aliases.iter().position(|alias| {
let full_name = match import_module {
Some(module_name) => format!(
"{}.{}",
compose_module_path(module_name),
compose_module_path(&alias.name)
),
Some((relative, module)) => {
let module = module.map(compose_module_path);
let member = compose_module_path(&alias.name);
let mut full_name = String::with_capacity(
relative.len()
+ module.as_ref().map_or(0, std::string::String::len)
+ member.len()
+ 1,
);
for _ in 0..relative.len() {
full_name.push('.');
}
if let Some(module) = module {
full_name.push_str(&module);
full_name.push('.');
}
full_name.push_str(&member);
full_name
}
None => compose_module_path(&alias.name),
};
full_name == unused_import
@@ -299,20 +316,20 @@ mod tests {
use rustpython_parser::parser;
use crate::autofix::helpers::{next_stmt_break, trailing_semicolon};
use crate::source_code_locator::SourceCodeLocator;
use crate::source_code::Locator;
#[test]
fn find_semicolon() -> Result<()> {
let contents = "x = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(trailing_semicolon(stmt, &locator), None);
let contents = "x = 1; y = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
trailing_semicolon(stmt, &locator),
Some(Location::new(1, 5))
@@ -321,7 +338,7 @@ mod tests {
let contents = "x = 1 ; y = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
trailing_semicolon(stmt, &locator),
Some(Location::new(1, 6))
@@ -334,7 +351,7 @@ x = 1 \
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
trailing_semicolon(stmt, &locator),
Some(Location::new(2, 2))
@@ -346,14 +363,14 @@ x = 1 \
#[test]
fn find_next_stmt_break() {
let contents = "x = 1; y = 1";
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
next_stmt_break(Location::new(1, 4), &locator),
Location::new(1, 5)
);
let contents = "x = 1 ; y = 1";
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
next_stmt_break(Location::new(1, 5), &locator),
Location::new(1, 6)
@@ -364,7 +381,7 @@ x = 1 \
; y = 1
"#
.trim();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
next_stmt_break(Location::new(2, 2), &locator),
Location::new(2, 4)

View File

@@ -1,38 +1,210 @@
use std::borrow::Cow;
use std::collections::BTreeSet;
use itertools::Itertools;
use ropey::RopeBuilder;
use rustpython_ast::Location;
use serde::{Deserialize, Serialize};
use crate::ast::types::Range;
use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::source_code::Locator;
pub mod fixer;
pub mod helpers;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
pub struct Fix {
pub content: String,
pub location: Location,
pub end_location: Location,
/// Auto-fix errors in a file, and write the fixed source code to disk.
pub fn fix_file<'a>(
diagnostics: &'a [Diagnostic],
locator: &'a Locator<'a>,
) -> Option<(Cow<'a, str>, usize)> {
if diagnostics.iter().all(|check| check.fix.is_none()) {
return None;
}
Some(apply_fixes(
diagnostics.iter().filter_map(|check| check.fix.as_ref()),
locator,
))
}
impl Fix {
pub fn deletion(start: Location, end: Location) -> Self {
Self {
/// Apply a series of fixes.
fn apply_fixes<'a>(
fixes: impl Iterator<Item = &'a Fix>,
locator: &'a Locator<'a>,
) -> (Cow<'a, str>, usize) {
let mut output = RopeBuilder::new();
let mut last_pos: Location = Location::new(1, 0);
let mut applied: BTreeSet<&Fix> = BTreeSet::default();
let mut num_fixed: usize = 0;
for fix in fixes.sorted_by_key(|fix| fix.location) {
// If we already applied an identical fix as part of another correction, skip
// any re-application.
if applied.contains(&fix) {
num_fixed += 1;
continue;
}
// Best-effort approach: if this fix overlaps with a fix we've already applied,
// skip it.
if last_pos > fix.location {
continue;
}
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice_source_code_range(&Range::new(last_pos, fix.location));
output.append(&slice);
// Add the patch itself.
output.append(&fix.content);
// Track that the fix was applied.
last_pos = fix.end_location;
applied.insert(fix);
num_fixed += 1;
}
// Add the remaining content.
let slice = locator.slice_source_code_at(&last_pos);
output.append(&slice);
(Cow::from(output.finish()), num_fixed)
}
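// A standalone sketch (plain byte offsets instead of `Location`s, names
// hypothetical) of the merge strategy `apply_fixes` uses: sort fixes by start,
// skip any fix that overlaps one already applied, and stitch the untouched
// slices back together.
fn apply_sorted(source: &str, mut fixes: Vec<(usize, usize, &str)>) -> String {
    fixes.sort_by_key(|(start, _, _)| *start);
    let mut output = String::new();
    let mut last_end = 0;
    for (start, end, replacement) in fixes {
        if start < last_end {
            // Best-effort: this fix overlaps one we already applied, so skip it.
            continue;
        }
        output.push_str(&source[last_end..start]);
        output.push_str(replacement);
        last_end = end;
    }
    output.push_str(&source[last_end..]);
    output
}
// e.g. apply_sorted("class A(object): ...", vec![(7, 15, "")]) == "class A: ..."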
#[cfg(test)]
mod tests {
use rustpython_parser::ast::Location;
use crate::autofix::apply_fixes;
use crate::fix::Fix;
use crate::source_code::Locator;
#[test]
fn empty_file() {
let fixes = vec![];
let locator = Locator::new(r#""#);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(contents, "");
assert_eq!(fixed, 0);
}
#[test]
fn apply_single_replacement() {
let fixes = vec![Fix {
content: "Bar".to_string(),
location: Location::new(1, 8),
end_location: Location::new(1, 14),
}];
let locator = Locator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A(Bar):
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
#[test]
fn apply_single_removal() {
let fixes = vec![Fix {
content: String::new(),
location: start,
end_location: end,
}
location: Location::new(1, 7),
end_location: Location::new(1, 15),
}];
let locator = Locator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 1);
}
pub fn replacement(content: String, start: Location, end: Location) -> Self {
Self {
content,
location: start,
end_location: end,
}
#[test]
fn apply_double_removal() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 16),
},
Fix {
content: String::new(),
location: Location::new(1, 16),
end_location: Location::new(1, 23),
},
];
let locator = Locator::new(
r#"
class A(object, object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 2);
}
pub fn insertion(content: String, at: Location) -> Self {
Self {
content,
location: at,
end_location: at,
}
#[test]
fn ignore_overlapping_fixes() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
},
Fix {
content: "ignored".to_string(),
location: Location::new(1, 9),
end_location: Location::new(1, 11),
},
];
let locator = Locator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
}

View File

@@ -1,132 +1,9 @@
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io::Write;
use std::path::{Path, PathBuf};
use anyhow::Result;
use filetime::FileTime;
use log::error;
use path_absolutize::Absolutize;
use serde::{Deserialize, Serialize};
use crate::message::Message;
use crate::settings::{flags, Settings};
pub const CACHE_DIR_NAME: &str = ".ruff_cache";
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
#[derive(Serialize, Deserialize)]
struct CacheMetadata {
mtime: i64,
}
#[derive(Serialize)]
struct CheckResultRef<'a> {
metadata: &'a CacheMetadata,
messages: &'a [Message],
}
#[derive(Deserialize)]
struct CheckResult {
metadata: CacheMetadata,
messages: Vec<Message>,
}
/// Return the cache directory for a given project root. Defers to the
/// `RUFF_CACHE_DIR` environment variable, if set.
pub fn cache_dir(project_root: &Path) -> PathBuf {
project_root.join(CACHE_DIR_NAME)
}
fn content_dir() -> &'static Path {
Path::new("content")
}
fn cache_key<P: AsRef<Path>>(path: P, settings: &Settings, autofix: flags::Autofix) -> u64 {
let mut hasher = DefaultHasher::new();
CARGO_PKG_VERSION.hash(&mut hasher);
path.as_ref().absolutize().unwrap().hash(&mut hasher);
settings.hash(&mut hasher);
autofix.hash(&mut hasher);
hasher.finish()
}
/// Initialize the cache at the specified `Path`.
pub fn init(path: &Path) -> Result<()> {
// Create the cache directories.
fs::create_dir_all(path.join(content_dir()))?;
// Add the CACHEDIR.TAG.
if !cachedir::is_tagged(path)? {
cachedir::add_tag(path)?;
}
// Add the .gitignore.
let gitignore_path = path.join(".gitignore");
if !gitignore_path.exists() {
let mut file = fs::File::create(gitignore_path)?;
file.write_all(b"*")?;
}
Ok(())
}
fn write_sync(cache_dir: &Path, key: u64, value: &[u8]) -> Result<(), std::io::Error> {
fs::write(
cache_dir.join(content_dir()).join(format!("{key:x}")),
value,
)
}
fn read_sync(cache_dir: &Path, key: u64) -> Result<Vec<u8>, std::io::Error> {
fs::read(cache_dir.join(content_dir()).join(format!("{key:x}")))
}
/// Get a value from the cache.
pub fn get<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
autofix: flags::Autofix,
) -> Option<Vec<Message>> {
let encoded = read_sync(&settings.cache_dir, cache_key(path, settings, autofix)).ok()?;
let (mtime, messages) = match bincode::deserialize::<CheckResult>(&encoded[..]) {
Ok(CheckResult {
metadata: CacheMetadata { mtime },
messages,
}) => (mtime, messages),
Err(e) => {
error!("Failed to deserialize encoded cache entry: {e:?}");
return None;
}
};
if FileTime::from_last_modification_time(metadata).unix_seconds() != mtime {
return None;
}
Some(messages)
}
/// Set a value in the cache.
pub fn set<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
autofix: flags::Autofix,
messages: &[Message],
) {
let check_result = CheckResultRef {
metadata: &CacheMetadata {
mtime: FileTime::from_last_modification_time(metadata).unix_seconds(),
},
messages,
};
if let Err(e) = write_sync(
&settings.cache_dir,
cache_key(path, settings, autofix),
&bincode::serialize(&check_result).unwrap(),
) {
error!("Failed to write to cache: {e:?}");
}
}
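// A minimal sketch of how the cache key above is composed (names and the
// `bool` stand-in for `flags::Autofix` are illustrative only): the crate
// version, absolute path, settings, and autofix flag all feed one hasher, so
// changing any of them invalidates the entry.
fn cache_key_sketch(version: &str, absolute_path: &str, settings_hash: u64, autofix: bool) -> u64 {
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};
    let mut hasher = DefaultHasher::new();
    version.hash(&mut hasher);
    absolute_path.hash(&mut hasher);
    settings_hash.hash(&mut hasher);
    autofix.hash(&mut hasher);
    hasher.finish()
}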

File diff suppressed because it is too large.

View File

@@ -7,47 +7,49 @@ use rustpython_parser::ast::Suite;
use crate::ast::visitor::Visitor;
use crate::directives::IsortDirectives;
use crate::isort;
use crate::isort::track::ImportTracker;
use crate::registry::Diagnostic;
use crate::isort::track::{Block, ImportTracker};
use crate::registry::{Diagnostic, RuleCode};
use crate::settings::{flags, Settings};
use crate::source_code_locator::SourceCodeLocator;
use crate::source_code_style::SourceCodeStyleDetector;
fn check_import_blocks(
tracker: ImportTracker,
locator: &SourceCodeLocator,
settings: &Settings,
stylist: &SourceCodeStyleDetector,
autofix: flags::Autofix,
package: Option<&Path>,
) -> Vec<Diagnostic> {
let mut diagnostics = vec![];
for block in tracker.into_iter() {
if !block.imports.is_empty() {
if let Some(diagnostic) =
isort::rules::check_imports(&block, locator, settings, stylist, autofix, package)
{
diagnostics.push(diagnostic);
}
}
}
diagnostics
}
use crate::source_code::{Locator, Stylist};
#[allow(clippy::too_many_arguments)]
pub fn check_imports(
python_ast: &Suite,
locator: &SourceCodeLocator,
locator: &Locator,
directives: &IsortDirectives,
settings: &Settings,
stylist: &SourceCodeStyleDetector,
stylist: &Stylist,
autofix: flags::Autofix,
path: &Path,
package: Option<&Path>,
) -> Vec<Diagnostic> {
let mut tracker = ImportTracker::new(locator, directives, path);
for stmt in python_ast {
tracker.visit_stmt(stmt);
// Extract all imports from the AST.
let tracker = {
let mut tracker = ImportTracker::new(locator, directives, path);
for stmt in python_ast {
tracker.visit_stmt(stmt);
}
tracker
};
let blocks: Vec<&Block> = tracker.iter().collect();
// Enforce import rules.
let mut diagnostics = vec![];
if settings.enabled.contains(&RuleCode::I001) {
for block in &blocks {
if !block.imports.is_empty() {
if let Some(diagnostic) = isort::rules::organize_imports(
block, locator, settings, stylist, autofix, package,
) {
diagnostics.push(diagnostic);
}
}
}
}
check_import_blocks(tracker, locator, settings, stylist, autofix, package)
if settings.enabled.contains(&RuleCode::I002) {
diagnostics.extend(isort::rules::add_required_imports(
&blocks, python_ast, locator, settings, autofix,
));
}
diagnostics
}

View File

@@ -1,6 +1,6 @@
//! Lint rules based on checking raw physical lines.
use crate::pycodestyle::rules::{line_too_long, no_newline_at_end_of_file};
use crate::pycodestyle::rules::{doc_line_too_long, line_too_long, no_newline_at_end_of_file};
use crate::pygrep_hooks::rules::{blanket_noqa, blanket_type_ignore};
use crate::pyupgrade::rules::unnecessary_coding_comment;
use crate::registry::{Diagnostic, RuleCode};
@@ -9,18 +9,21 @@ use crate::settings::{flags, Settings};
pub fn check_lines(
contents: &str,
commented_lines: &[usize],
doc_lines: &[usize],
settings: &Settings,
autofix: flags::Autofix,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
let enforce_unnecessary_coding_comment = settings.enabled.contains(&RuleCode::UP009);
let enforce_blanket_noqa = settings.enabled.contains(&RuleCode::PGH004);
let enforce_blanket_type_ignore = settings.enabled.contains(&RuleCode::PGH003);
let enforce_doc_line_too_long = settings.enabled.contains(&RuleCode::W505);
let enforce_line_too_long = settings.enabled.contains(&RuleCode::E501);
let enforce_no_newline_at_end_of_file = settings.enabled.contains(&RuleCode::W292);
let enforce_blanket_type_ignore = settings.enabled.contains(&RuleCode::PGH003);
let enforce_blanket_noqa = settings.enabled.contains(&RuleCode::PGH004);
let enforce_unnecessary_coding_comment = settings.enabled.contains(&RuleCode::UP009);
let mut commented_lines_iter = commented_lines.iter().peekable();
let mut doc_lines_iter = doc_lines.iter().peekable();
for (index, line) in contents.lines().enumerate() {
while commented_lines_iter
.next_if(|lineno| &(index + 1) == *lineno)
@@ -40,18 +43,25 @@ pub fn check_lines(
}
if enforce_blanket_type_ignore {
if commented_lines.contains(&(index + 1)) {
if let Some(diagnostic) = blanket_type_ignore(index, line) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = blanket_type_ignore(index, line) {
diagnostics.push(diagnostic);
}
}
if enforce_blanket_noqa {
if commented_lines.contains(&(index + 1)) {
if let Some(diagnostic) = blanket_noqa(index, line) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = blanket_noqa(index, line) {
diagnostics.push(diagnostic);
}
}
}
while doc_lines_iter
.next_if(|lineno| &(index + 1) == *lineno)
.is_some()
{
if enforce_doc_line_too_long {
if let Some(diagnostic) = doc_line_too_long(index, line, settings) {
diagnostics.push(diagnostic);
}
}
}
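// A self-contained sketch (std only, names hypothetical) of the pattern used
// above: `commented_lines` and `doc_lines` are sorted 1-based line numbers, so
// a peekable iterator advanced with `next_if` stays in lockstep with the
// enumerate loop instead of re-scanning the slice for every line.
fn lines_at(contents: &str, interesting: &[usize]) -> Vec<String> {
    let mut matched = Vec::new();
    let mut interesting_iter = interesting.iter().peekable();
    for (index, line) in contents.lines().enumerate() {
        while interesting_iter
            .next_if(|lineno| &(index + 1) == *lineno)
            .is_some()
        {
            matched.push(line.to_string());
        }
    }
    matched
}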
@@ -90,6 +100,7 @@ mod tests {
check_lines(
line,
&[],
&[],
&Settings {
line_length,
..Settings::for_rule(RuleCode::E501)

View File

@@ -6,7 +6,7 @@ use nohash_hasher::IntMap;
use rustpython_parser::ast::Location;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::fix::Fix;
use crate::noqa::{is_file_exempt, Directive};
use crate::registry::{Diagnostic, DiagnosticKind, RuleCode, CODE_REDIRECTS};
use crate::settings::{flags, Settings};

View File

@@ -5,12 +5,12 @@ use rustpython_parser::lexer::{LexResult, Tok};
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{Diagnostic, RuleCode};
use crate::ruff::rules::Context;
use crate::settings::flags;
use crate::source_code_locator::SourceCodeLocator;
use crate::{eradicate, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff, Settings};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
use crate::{eradicate, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff};
pub fn check_tokens(
locator: &SourceCodeLocator,
locator: &Locator,
tokens: &[LexResult],
settings: &Settings,
autofix: flags::Autofix,
@@ -67,7 +67,8 @@ pub fn check_tokens(
start,
end,
is_docstring,
&settings.flake8_quotes,
settings,
autofix,
) {
if settings.enabled.contains(diagnostic.kind.code()) {
diagnostics.push(diagnostic);

View File

@@ -1,5 +1,7 @@
use anyhow::{bail, Result};
use libcst_native::{Expr, Import, ImportFrom, Module, SmallStatement, Statement};
use libcst_native::{
Call, Expr, Expression, Import, ImportFrom, Module, SmallStatement, Statement,
};
pub fn match_module(module_text: &str) -> Result<Module> {
match libcst_native::parse_module(module_text, None) {
@@ -8,6 +10,13 @@ pub fn match_module(module_text: &str) -> Result<Module> {
}
}
pub fn match_expression(expression_text: &str) -> Result<Expression> {
match libcst_native::parse_expression(expression_text) {
Ok(expression) => Ok(expression),
Err(_) => bail!("Failed to extract CST from source"),
}
}
pub fn match_expr<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Expr<'b>> {
if let Some(Statement::Simple(expr)) = module.body.first_mut() {
if let Some(SmallStatement::Expr(expr)) = expr.body.first_mut() {
@@ -43,3 +52,11 @@ pub fn match_import_from<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut I
bail!("Expected Statement::Simple")
}
}
pub fn match_call<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut Call<'b>> {
if let Expression::Call(call) = expression {
Ok(call)
} else {
bail!("Expected SmallStatement::Expr")
}
}
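// A hedged usage sketch for the new matchers, assuming the `anyhow::Result`
// import above: parse a standalone expression, then narrow it to a `Call` node
// so its arguments can be inspected or rewritten before re-rendering.
fn match_call_sketch() -> Result<()> {
    let mut expression = match_expression("foo(1, 2)")?;
    let call = match_call(&mut expression)?;
    assert_eq!(call.args.len(), 2);
    Ok(())
}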

View File

@@ -6,7 +6,7 @@ use rustpython_ast::Location;
use rustpython_parser::lexer::{LexResult, Tok};
use crate::registry::LintSource;
use crate::Settings;
use crate::settings::Settings;
bitflags! {
pub struct Flags: u32 {
@@ -33,6 +33,7 @@ impl Flags {
pub struct IsortDirectives {
pub exclusions: IntSet<usize>,
pub splits: Vec<usize>,
pub skip_file: bool,
}
pub struct Directives {
@@ -89,17 +90,11 @@ pub fn extract_noqa_line_for(lxr: &[LexResult]) -> IntMap<usize, usize> {
pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
let mut exclusions: IntSet<usize> = IntSet::default();
let mut splits: Vec<usize> = Vec::default();
let mut skip_file: bool = false;
let mut off: Option<Location> = None;
let mut last: Option<Location> = None;
for &(start, ref tok, end) in lxr.iter().flatten() {
last = Some(end);
// No need to keep processing, but we do need to determine the last token.
if skip_file {
continue;
}
let Tok::Comment(comment_text) = tok else {
continue;
};
@@ -111,7 +106,10 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
if comment_text == "# isort: split" {
splits.push(start.row());
} else if comment_text == "# isort: skip_file" || comment_text == "# isort:skip_file" {
skip_file = true;
return IsortDirectives {
skip_file: true,
..IsortDirectives::default()
};
} else if off.is_some() {
if comment_text == "# isort: on" {
if let Some(start) = off {
@@ -130,14 +128,7 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
}
}
if skip_file {
// Enforce `isort: skip_file`.
if let Some(end) = last {
for row in 1..=end.row() {
exclusions.insert(row);
}
}
} else if let Some(start) = off {
if let Some(start) = off {
// Enforce unterminated `isort: off`.
if let Some(end) = last {
for row in start.row() + 1..=end.row() {
@@ -145,7 +136,11 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
}
}
}
IsortDirectives { exclusions, splits }
IsortDirectives {
exclusions,
splits,
..IsortDirectives::default()
}
}
#[cfg(test)]
@@ -283,10 +278,7 @@ x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([1, 2, 3, 4])
);
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
let contents = "# isort: off
x = 1
@@ -295,10 +287,7 @@ y = 2
# isort: skip_file
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([1, 2, 3, 4, 5, 6])
);
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
}
#[test]

src/doc_lines.rs (new file, 58 lines)
View File

@@ -0,0 +1,58 @@
//! Doc line extraction. In this context, a doc line is a line consisting of a
//! standalone comment or a constant string statement.
use rustpython_ast::{Constant, ExprKind, Stmt, StmtKind, Suite};
use rustpython_parser::lexer::{LexResult, Tok};
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
/// Extract doc lines (standalone comments) from a token sequence.
pub fn doc_lines_from_tokens(lxr: &[LexResult]) -> Vec<usize> {
let mut doc_lines: Vec<usize> = Vec::default();
let mut prev: Option<usize> = None;
for (start, tok, end) in lxr.iter().flatten() {
if matches!(tok, Tok::Indent | Tok::Dedent | Tok::Newline) {
continue;
}
if matches!(tok, Tok::Comment(..)) {
if let Some(prev) = prev {
if start.row() > prev {
doc_lines.push(start.row());
}
} else {
doc_lines.push(start.row());
}
}
prev = Some(end.row());
}
doc_lines
}
#[derive(Default)]
struct StringLinesVisitor {
string_lines: Vec<usize>,
}
impl Visitor<'_> for StringLinesVisitor {
fn visit_stmt(&mut self, stmt: &Stmt) {
if let StmtKind::Expr { value } = &stmt.node {
if let ExprKind::Constant {
value: Constant::Str(..),
..
} = &value.node
{
self.string_lines
.extend(value.location.row()..=value.end_location.unwrap().row());
}
}
visitor::walk_stmt(self, stmt);
}
}
/// Extract doc lines (standalone strings) from an AST.
pub fn doc_lines_from_ast(python_ast: &Suite) -> Vec<usize> {
let mut visitor = StringLinesVisitor::default();
visitor.visit_body(python_ast);
visitor.string_lines
}
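// A hedged usage sketch, assuming the `rustpython_parser` and `anyhow` crates
// used elsewhere in this diff: a constant-string statement contributes every
// physical line it spans, so a two-line module docstring yields both rows.
#[test]
fn doc_lines_from_ast_sketch() -> anyhow::Result<()> {
    let program =
        rustpython_parser::parser::parse_program("\"\"\"Summary.\n\"\"\"\nx = 1", "<filename>")?;
    assert_eq!(doc_lines_from_ast(&program), vec![1, 2]);
    Ok(())
}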

View File

@@ -1,5 +1,5 @@
pub mod detection;
pub mod rules;
pub(crate) mod detection;
pub(crate) mod rules;
#[cfg(test)]
mod tests {

View File

@@ -1,11 +1,12 @@
use rustpython_ast::Location;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::eradicate::detection::comment_contains_code;
use crate::registry::RuleCode;
use crate::settings::flags;
use crate::{violations, Diagnostic, Settings, SourceCodeLocator};
use crate::fix::Fix;
use crate::registry::{Diagnostic, RuleCode};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
use crate::violations;
fn is_standalone_comment(line: &str) -> bool {
for char in line.chars() {
@@ -20,7 +21,7 @@ fn is_standalone_comment(line: &str) -> bool {
/// ERA001
pub fn commented_out_code(
locator: &SourceCodeLocator,
locator: &Locator,
start: Location,
end: Location,
settings: &Settings,

src/fix.rs (new file, 53 lines)
View File

@@ -0,0 +1,53 @@
use rustpython_ast::Location;
use serde::{Deserialize, Serialize};
#[derive(Debug, Copy, Clone, Hash)]
pub enum FixMode {
Generate,
Apply,
Diff,
None,
}
impl From<bool> for FixMode {
fn from(value: bool) -> Self {
if value {
FixMode::Apply
} else {
FixMode::None
}
}
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
pub struct Fix {
pub content: String,
pub location: Location,
pub end_location: Location,
}
impl Fix {
pub fn deletion(start: Location, end: Location) -> Self {
Self {
content: String::new(),
location: start,
end_location: end,
}
}
pub fn replacement(content: String, start: Location, end: Location) -> Self {
Self {
content,
location: start,
end_location: end,
}
}
pub fn insertion(content: String, at: Location) -> Self {
Self {
content,
location: at,
end_location: at,
}
}
}
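// Brief usage sketches for the constructors above, mirroring the autofix tests
// elsewhere in this diff (positions are (row, column) `Location`s):
fn example_fixes() -> Vec<Fix> {
    vec![
        // Replace `object` with `Bar` in `class A(object):`.
        Fix::replacement("Bar".to_string(), Location::new(1, 8), Location::new(1, 14)),
        // Delete `(object)` from `class A(object):`.
        Fix::deletion(Location::new(1, 7), Location::new(1, 15)),
        // Insert an import at the very start of the file.
        Fix::insertion("import os\n".to_string(), Location::new(1, 0)),
    ]
}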

View File

@@ -1,4 +1,4 @@
pub mod rules;
pub(crate) mod rules;
#[cfg(test)]
mod tests {

View File

@@ -1,20 +1,15 @@
use num_bigint::BigInt;
use rustpython_ast::{Cmpop, Constant, Expr, ExprKind, Located};
use crate::ast::helpers::match_module_member;
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::{Diagnostic, RuleCode};
use crate::violations;
fn is_sys(checker: &Checker, expr: &Expr, target: &str) -> bool {
match_module_member(
expr,
"sys",
target,
&checker.from_imports,
&checker.import_aliases,
)
checker
.resolve_call_path(expr)
.map_or(false, |path| path == ["sys", target])
}
/// YTT101, YTT102, YTT301, YTT303
@@ -187,13 +182,10 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
/// YTT202
pub fn name_or_attribute(checker: &mut Checker, expr: &Expr) {
if match_module_member(
expr,
"six",
"PY3",
&checker.from_imports,
&checker.import_aliases,
) {
if checker
.resolve_call_path(expr)
.map_or(false, |path| path == ["six", "PY3"])
{
checker.diagnostics.push(Diagnostic::new(
violations::SixPY3Referenced,
Range::from_located(expr),

View File

@@ -4,11 +4,11 @@ use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::source_code_locator::SourceCodeLocator;
use crate::fix::Fix;
use crate::source_code::Locator;
/// ANN204
pub fn add_return_none_annotation(locator: &SourceCodeLocator, stmt: &Stmt) -> Result<Fix> {
pub fn add_return_none_annotation(locator: &Locator, stmt: &Stmt) -> Result<Fix> {
let range = Range::from_located(stmt);
let contents = locator.slice_source_code_range(&range);

View File

@@ -1,6 +1,6 @@
mod fixes;
pub mod helpers;
pub mod rules;
pub(crate) mod helpers;
pub(crate) mod rules;
pub mod settings;
#[cfg(test)]
@@ -9,9 +9,10 @@ mod tests {
use anyhow::Result;
use crate::flake8_annotations;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::{flake8_annotations, Settings};
use crate::settings::Settings;
#[test]
fn defaults() -> Result<()> {
@@ -144,4 +145,22 @@ mod tests {
insta::assert_yaml_snapshot!(diagnostics);
Ok(())
}
#[test]
fn allow_nested_overload() -> Result<()> {
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_annotations/allow_nested_overload.py"),
&Settings {
..Settings::for_rules(vec![
RuleCode::ANN201,
RuleCode::ANN202,
RuleCode::ANN204,
RuleCode::ANN205,
RuleCode::ANN206,
])
},
)?;
insta::assert_yaml_snapshot!(diagnostics);
Ok(())
}
}

Some files were not shown because too many files have changed in this diff.