Compare commits

...

61 Commits

Author SHA1 Message Date
Charlie Marsh
3a3a5fcd81 Remove -dev suffix from flake8_to_ruff 2023-01-15 22:45:14 -05:00
Charlie Marsh
e8577d5e26 Bump version to 0.0.223 2023-01-15 22:44:01 -05:00
Charlie Marsh
bcb1e6ba20 Add flake8-commas to the README 2023-01-15 22:43:29 -05:00
Charlie Marsh
15403522c1 Avoid triggering SIM117 for async with statements (#1903)
Actually, it looks like _none_ of the existing rules should be triggered on async `with` statements.

Closes #1902.
2023-01-15 21:42:36 -05:00
messense
cb4f305ced Lock stdout once when printing diagnostics (#1901)
https://doc.rust-lang.org/stable/std/io/struct.Stdout.html

> Each handle shares a global buffer of data to be written to the standard output stream.
> Access is also synchronized via a lock and
> explicit control over locking is available via the [`lock`](https://doc.rust-lang.org/stable/std/io/struct.Stdout.html#method.lock) method.
2023-01-15 21:04:00 -05:00
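To illustrate the pattern this commit describes, here is a minimal Rust sketch of acquiring the stdout lock once and writing a whole batch of messages through the held handle; the `Diagnostic` struct is invented for the example and is not ruff's actual diagnostic type.

```rust
use std::io::{self, Write};

// `Diagnostic` is a stand-in type for this sketch only.
struct Diagnostic {
    location: String,
    message: String,
}

fn print_diagnostics(diagnostics: &[Diagnostic]) -> io::Result<()> {
    // Lock stdout once up front instead of re-locking implicitly on every
    // `println!`, so a large batch of diagnostics is written without
    // repeated synchronization.
    let stdout = io::stdout();
    let mut handle = stdout.lock();
    for diagnostic in diagnostics {
        writeln!(handle, "{}: {}", diagnostic.location, diagnostic.message)?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    print_diagnostics(&[Diagnostic {
        location: "example.py:1:1".to_string(),
        message: "F401 `os` imported but unused".to_string(),
    }])
}
```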
Charlie Marsh
d71a615b18 Buffer diagnostic writes to stdout (#1900) 2023-01-15 19:34:15 -05:00
Charlie Marsh
dfc2a34878 Remove rogue println 2023-01-15 18:59:59 -05:00
Charlie Marsh
7608087776 Don't require docstrings for setters and deleters (#1899) 2023-01-15 18:57:38 -05:00
Charlie Marsh
228f033e15 Skip noqa checker if no diagnostics are found (#1898) 2023-01-15 18:53:00 -05:00
Martin Fischer
d75d6d7c7c refactor: Split CliSettings from Settings
We want to automatically derive Hash for the library settings, which
requires us to split off all the settings unused by the library
(since these shouldn't affect the hash used by ruff_cli::cache).
2023-01-15 15:19:42 -05:00
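A minimal sketch of the kind of split described above: settings that affect lint results derive `Hash` and feed the cache key, while CLI-only settings live in a separate struct that the hash never sees. The field names are illustrative, not ruff's actual settings.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative fields only; not ruff's real settings.
#[derive(Hash)]
struct Settings {
    // Settings that influence lint results, and therefore the cache key.
    line_length: usize,
    target_version: String,
}

#[allow(dead_code)]
struct CliSettings {
    // CLI-only settings that must not affect the cache hash.
    fix: bool,
    quiet: bool,
}

// Only the library settings participate in the hash used by the cache.
fn cache_key(settings: &Settings) -> u64 {
    let mut hasher = DefaultHasher::new();
    settings.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let settings = Settings {
        line_length: 88,
        target_version: "py310".to_string(),
    };
    let _cli = CliSettings { fix: false, quiet: false };
    println!("cache key: {:x}", cache_key(&settings));
}
```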
Martin Fischer
ef80ab205c Mark Settings::for_rule(s) as test-only 2023-01-15 15:19:42 -05:00
Ran Benita
d3041587ad Implement flake8-commas (#1872)
Implements [flake8-commas](https://github.com/PyCQA/flake8-commas). Fixes #1058.

The plugin is mostly redundant with Black (and also deprecated upstream), but very useful for projects which can't/won't use an auto-formatter. 

This linter works on tokens. Before porting to Rust, I cleaned up the Python code ([link](https://gist.github.com/bluetech/7c5dcbdec4a73dd5a74d4bc09c72b8b9)) and made sure the tests pass. In the Rust version I tried to add explanatory comments, to the best of my understanding of the original logic.

Some changes I did make:

- Got rid of rule C814 - "missing trailing comma in Python 2". Ruff doesn't support Python 2.
- Merged rules C815 - "missing trailing comma in Python 3.5+" and C816 - "missing trailing comma in Python 3.6+" into C812 - "missing trailing comma". These Python versions are outdated, so it didn't seem worth the complication.
- Added autofixes for C812 and C819.

Autofix is missing for C818 - "trailing comma on bare tuple prohibited". It needs to turn e.g. `x = 1,` into `x = (1, )`, which is a bit difficult to do with tokens only, so I skipped it for now.

I ran the rules on cpython/Lib and on a big internal code base and it works as intended (though I only sampled the diffs).
2023-01-15 14:03:32 -05:00
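To make the token-based approach concrete, here is a toy Rust sketch of the core C812 idea (a closing bracket that starts a new line should be preceded by a trailing comma). The `Tok` enum is invented for the example; the real rule consumes RustPython's token stream and handles many more cases (single-line literals, unpacking, subscripts, and so on).

```rust
// `Tok` is invented for this example; the real rule works on RustPython's
// token stream.
enum Tok {
    Comma { line: usize },
    Open { line: usize },
    Close { line: usize },
    Item { line: usize },
}

fn line(tok: &Tok) -> usize {
    match tok {
        Tok::Comma { line } | Tok::Open { line } | Tok::Close { line } | Tok::Item { line } => {
            *line
        }
    }
}

/// Report the lines missing a trailing comma: a closing bracket that starts
/// a new line should be preceded by a comma, not by an item.
fn missing_trailing_commas(tokens: &[Tok]) -> Vec<usize> {
    let mut diagnostics = Vec::new();
    for window in tokens.windows(2) {
        let (prev, next) = (&window[0], &window[1]);
        if matches!(next, Tok::Close { .. })
            && line(next) > line(prev)
            && matches!(prev, Tok::Item { .. })
        {
            diagnostics.push(line(prev));
        }
    }
    diagnostics
}

fn main() {
    // Token-level shape of:
    //     bad_list = [
    //         1,
    //         2,
    //         3
    //     ]
    let tokens = [
        Tok::Open { line: 1 },
        Tok::Item { line: 2 },
        Tok::Comma { line: 2 },
        Tok::Item { line: 3 },
        Tok::Comma { line: 3 },
        Tok::Item { line: 4 },
        Tok::Close { line: 5 },
    ];
    println!("missing trailing comma on lines: {:?}", missing_trailing_commas(&tokens));
}
```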
Harutaka Kawamura
8d912404b7 Use more precise error ranges for RET505~508 (#1895) 2023-01-15 13:54:24 -05:00
Tom Fryers
85bdb45eca Improve magic value message wording (#1892)
The message previously specified 'number', but the error applies to more types.
2023-01-15 12:53:02 -05:00
messense
c7d0d26981 Update add plugin/rule scripts (#1889)
Adjusted some file locations and changed to use [`pathlib`](https://docs.python.org/3/library/pathlib.html) instead of `os.path`.
2023-01-15 12:49:42 -05:00
Charlie Marsh
5c6753e69e Remove some Clippy allows (#1888) 2023-01-15 02:32:36 -05:00
Charlie Marsh
3791ca721a Add a dedicated token indexer for continuations and comments (#1886)
The primary motivation is that we can now robustly detect `\` continuations due to the addition of `Tok::NonLogicalNewline`. This PR generalizes the approach we took to comments (track all lines that contain any comments), and applies it to continuations too.
2023-01-15 01:57:31 -05:00
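A rough sketch of that indexing approach, using an invented token type rather than RustPython's actual `Tok`: record the set of lines that contain a comment and the set of lines that end in a `\` continuation, using the non-logical-newline information from the tokenizer.

```rust
use std::collections::BTreeSet;

// A simplified token type for this sketch only; the real indexer consumes
// RustPython's `Tok` stream together with source locations.
enum Token {
    Comment { line: usize },
    // A newline that does not end a logical line, as surfaced by the new
    // `Tok::NonLogicalNewline` token.
    NonLogicalNewline { line: usize },
    Other { line: usize },
}

#[derive(Debug, Default)]
struct Indexer {
    comment_lines: BTreeSet<usize>,
    continuation_lines: BTreeSet<usize>,
}

fn index(tokens: &[Token], source_lines: &[&str]) -> Indexer {
    let mut indexer = Indexer::default();
    for token in tokens {
        match token {
            Token::Comment { line } => {
                indexer.comment_lines.insert(*line);
            }
            Token::NonLogicalNewline { line } => {
                // Newlines inside brackets are also non-logical; only record
                // the ones produced by an explicit `\` continuation.
                if source_lines
                    .get(*line - 1)
                    .map_or(false, |text| text.trim_end().ends_with('\\'))
                {
                    indexer.continuation_lines.insert(*line);
                }
            }
            Token::Other { .. } => {}
        }
    }
    indexer
}

fn main() {
    let source = ["x = 1 + \\", "    2  # total", "y = 3"];
    let tokens = [
        Token::Other { line: 1 },
        Token::NonLogicalNewline { line: 1 },
        Token::Other { line: 2 },
        Token::Comment { line: 2 },
        Token::Other { line: 3 },
    ];
    println!("{:?}", index(&tokens, &source));
}
```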
Charlie Marsh
2c644619e0 Convert confusable violations to named fields (#1887)
See: #1871.
2023-01-15 01:56:18 -05:00
Martin Fischer
81996f1bcc Convert define_rule_mapping! to a procedural macro
define_rule_mapping! was previously implemented as a declarative macro,
which, however, partially relied on an origin_by_code! proc macro
because declarative macros cannot match on substrings of identifiers.

Currently all define_rule_mapping! lines look like the following:

    TID251 => violations::BannedApi,
    TID252 => violations::BannedRelativeImport,

We want to break up violations.rs, moving the violation definitions to
the respective rule modules. To do this we want to change the previous
lines to:

    TID251 => rules::flake8_tidy_imports::banned_api::BannedApi,
    TID252 => rules::flake8_tidy_imports::relative_imports::RelativeImport,

This however doesn't work because the define_rule_mapping! macro is
currently defined as:

    ($($code:ident => $mod:ident::$name:ident,)+) => { ... }

That is, it only supported $module::$name but not longer paths with
multiple modules. If we defined `=> $path:path`[1] instead, then we
could no longer access the last path segment, which we need because
we use it for the DiagnosticKind variant names. And
`$path:path::$last:ident` doesn't work either because it would be
ambiguous (Rust wouldn't know where the path ends ... so path fragments
have to be followed by some punctuation/keyword that may not be part of
paths). And we also cannot just introduce a procedural macro like
path_basename!(...) because the following is not valid Rust code:

    enum Foo { foo!(...), }

(macros cannot be called in the place where you define variants.)

So we have to convert define_rule_mapping! into a proc macro in order to
support paths of arbitrary length, and this commit implements that.

[1]: https://doc.rust-lang.org/reference/macros-by-example.html#metavariables
2023-01-15 01:54:57 -05:00
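As a small illustration of what the procedural macro can do that the declarative macro cannot, the sketch below parses an arbitrarily long path with `syn` and recovers its last segment for use as a variant name. It assumes the `syn` and `quote` crates as dependencies and is not the actual `define_rule_mapping!` implementation, just the path-parsing step in isolation.

```rust
use quote::format_ident;
use syn::Path;

fn variant_name(input: &str) -> syn::Ident {
    // A declarative macro cannot split `$path:path` any further, but a proc
    // macro can parse the tokens into a `syn::Path` and inspect its segments.
    let path: Path = syn::parse_str(input).expect("expected a Rust path");
    let last = path.segments.last().expect("path has at least one segment");
    // e.g. `rules::flake8_tidy_imports::banned_api::BannedApi` -> `BannedApi`
    format_ident!("{}", last.ident)
}

fn main() {
    println!(
        "{}",
        variant_name("rules::flake8_tidy_imports::banned_api::BannedApi")
    );
}
```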
Charlie Marsh
e3cc918b93 Bump version to 0.0.222 2023-01-14 23:34:53 -05:00
Charlie Marsh
d864477876 Turn doc references into links (#1878) 2023-01-14 23:27:45 -05:00
Charlie Marsh
e1ced89624 Document breaking --max-complexity change 2023-01-14 23:22:38 -05:00
Martin Fischer
4470d7ba04 Make the CI check for broken links in the Rust docs (#1883) 2023-01-14 23:18:17 -05:00
Harutaka Kawamura
2a1601749f Fix range of SIM201, 202, and 208 (#1880)
Before

```
resources/test/fixtures/flake8_simplify/SIM208.py:1:13: SIM208 Use `a` instead of `not (not a)`
  |
1 | if not (not a):  # SIM208
  |             ^ SIM208
  |
  = help: Replace with `a`

resources/test/fixtures/flake8_simplify/SIM208.py:4:14: SIM208 Use `a == b` instead of `not (not a == b)`
  |
4 | if not (not (a == b)):  # SIM208
  |              ^^^^^^ SIM208
  |
  = help: Replace with `a == b`
```

After

```
resources/test/fixtures/flake8_simplify/SIM208.py:1:4: SIM208 Use `a` instead of `not (not a)`
  |
1 | if not (not a):  # SIM208
  |    ^^^^^^^^^^^ SIM208
  |
  = help: Replace with `a`

resources/test/fixtures/flake8_simplify/SIM208.py:4:4: SIM208 Use `a == b` instead of `not (not a == b)`
  |
4 | if not (not (a == b)):  # SIM208
  |    ^^^^^^^^^^^^^^^^^^ SIM208
  |
  = help: Replace with `a == b`
```
2023-01-14 21:17:32 -05:00
Charlie Marsh
a01edad1c4 Remove --max-complexity from the CLI (#1877) 2023-01-14 18:27:23 -05:00
Martin Fischer
a81ac6705d Make ruff::source_code::{Generator, Locator, Stylist} private 2023-01-14 18:23:59 -05:00
Martin Fischer
d77675f30d Make ruff::{ast, autofix, directives, rustpython_helpers} private 2023-01-14 18:23:59 -05:00
Martin Fischer
fe7658199d Make ruff::violations private 2023-01-14 18:23:59 -05:00
Martin Fischer
cfa25ea4b0 Make ruff::rules private 2023-01-14 18:23:59 -05:00
Martin Fischer
c7f0f3b237 Regenerate insta snapshots 2023-01-14 11:48:02 -05:00
Martin Fischer
3b36030461 Introduce ruff::rules module
Resolves #1547.
2023-01-14 11:48:02 -05:00
Charlie Marsh
812df77246 Add Dagster and SnowCLI 2023-01-14 10:45:16 -05:00
Martin Fischer
69b356e9b9 Add top-level doc comments for crates
Test by running:

    cargo doc --no-deps --all --open
2023-01-14 10:11:30 -05:00
Martin Fischer
033d7d7e91 Disable doc generation for the ruff_cli binary 2023-01-14 10:11:30 -05:00
Martin Fischer
a181ca7a3d Reduce the API of ruff_cli to ruff_cli::help() 2023-01-14 10:11:30 -05:00
Martin Fischer
92124001d5 Turn ruff_dev into a bin-only crate 2023-01-14 10:11:30 -05:00
Martin Fischer
06b389c5bc Turn flake8_to_ruff into a bin-only crate 2023-01-14 10:11:30 -05:00
Thomas MK
9dc66b5a65 Split up the table corresponding to the pylint rules (#1868)
This makes it easier to see which rules you're enabling when selecting
one of the pylint codes (like `PLC`). This also makes it clearer what
those abbreviations stand for. When I first saw the pylint section, I
was very confused by that, so others might be as well.

See it rendered here:
https://github.com/thomkeh/ruff/blob/patch-1/README.md#pylint-plc-ple-plr-plw
2023-01-14 08:07:02 -05:00
Ran Benita
3447dd3615 Bump RustPython (#1836)
This bumps RustPython so we can use the new `NonLogicalNewline` token.
A couple of rules needed a fix due to the new token. There might be more
that are not caught by tests (anything working with tokens directly with
lookaheads); I hope not.
2023-01-14 08:03:27 -05:00
Harutaka Kawamura
42cb106377 Improve SIM117 (#1867)
This PR makes the following changes to improve `SIM117`:

- Avoid emitting `SIM117` multiple times within the same `with`
statement.
- Adjust the error range.


## Example

```python
with A() as a:  # SIM117
    with B() as b:
        with C() as c:
            print("hello")
```

### Current

```
resources/test/fixtures/flake8_simplify/SIM117.py:5:1: SIM117 Use a single `with` statement with multiple contexts instead of nested `with` statements
  |
5 | / with A() as a:  # SIM117
6 | |     with B() as b:
7 | |         with C() as c:
8 | |             print("hello")
  | |__________________________^ SIM117
  |

resources/test/fixtures/flake8_simplify/SIM117.py:6:5: SIM117 Use a single `with` statement with multiple contexts instead of nested `with` statements
  |
6 |       with B() as b:
  |  _____^
7 | |         with C() as c:
8 | |             print("hello")
  | |__________________________^ SIM117
  |
```

### Improved

```
resources/test/fixtures/flake8_simplify/SIM117.py:5:1: SIM117 Use a single `with` statement with multiple contexts instead of nested `with` statements
  |
5 | / with A() as a:  # SIM117
6 | |     with B() as b:
7 | |         with C() as c:
  | |______________________^ SIM117
  |
```

Signed-off-by: harupy <hkawamura0130@gmail.com>
2023-01-14 07:59:24 -05:00
Charlie Marsh
027382f891 Add support for namespace packages (#1859)
Closes #1817.
2023-01-14 07:31:57 -05:00
Charlie Marsh
931d41bff1 Revert "Bump version to 0.0.222"
This reverts commit 852aab5758.
2023-01-13 23:56:29 -05:00
Charlie Marsh
852aab5758 Bump version to 0.0.222 2023-01-13 23:50:08 -05:00
Charlie Marsh
27fe4873f2 Fix placement of update feature flag 2023-01-13 23:46:32 -05:00
Charlie Marsh
ee6c81d02a Bump version to 0.0.221 2023-01-13 23:33:15 -05:00
Charlie Marsh
59542344e2 Avoid unnecessary allocations for module names (#1863) 2023-01-13 23:26:34 -05:00
Martin Fischer
7b1ce72f86 Actually fix wasm-pack build command (#1862)
I initially attempted to run `wasm-pack build -p ruff` which gave the
error message:

Error: crate directory is missing a `Cargo.toml` file; is `-p` the wrong
directory?

I interpreted that as wasm-pack looking for the "ruff" directory because
I specified -p ruff; however, the wasm-pack build usage is actually:

    wasm-pack build [FLAGS] [OPTIONS] <path> <cargo-build-options>

And I was missing the `<path>` argument. So this actually wasn't at all
a bug in wasm-pack but just a confusing error message. And the symlink
hack I introduced in the previous commit didn't actually work ... I had just
accidentally omitted the `-p` when testing (which ended up with `ruff`
being the <path> argument) ... CLIs are fun.
2023-01-13 23:20:20 -05:00
Martin Fischer
156e09536e Add workaround for wasm-pack bug to fix the playground CI (#1861)
Fixes #1860.
2023-01-13 22:46:51 -05:00
Charlie Marsh
22341c4ae4 Move repology down 2023-01-13 22:29:26 -05:00
Colin Delahunty
e4993bd7e2 Added ALE (#1857)
Fixes the sub issue I brought up in #1829.
2023-01-13 21:39:51 -05:00
Martin Fischer
82aff5f9ec Split off ruff_cli crate from ruff library
This lets you test the ruff linters or use the ruff library
without having to compile the ~100 additional dependencies
that are needed by the CLI.

Because we set the following in the [workspace] section of Cargo.toml:

   default-members = [".", "ruff_cli"]

`cargo run` still runs the CLI and `cargo test` still tests
the code in src/ as well as the code in the new ruff_cli crate.
(But you can now also run `cargo test -p ruff` to only test the linters.)
2023-01-13 21:37:54 -05:00
Charlie Marsh
403a004e03 Refactor import-tracking to leverage existing AST bindings (#1856)
This PR refactors our import-tracking logic to leverage our existing
logic for tracking bindings. It's both a significant simplification and a
significant improvement (as we can now track reassignments), and it closes
out a bunch of subtle bugs.

Though the AST tracks all bindings (e.g., when parsing `import os as
foo`, we bind the name `foo` to a `BindingKind::Importation` that points
to the `os` module), when I went to implement import tracking (e.g., to
ensure that if the user references `List`, it's actually `typing.List`),
I added a parallel system specifically for this use-case.

That was a mistake, for a few reasons:

1. It didn't track reassignments, so if you had `from typing import
List`, but `List` was later overridden, we'd still consider any
reference to `List` to be `typing.List`.
2. It required a bunch of extra logic, including complex logic to try to
optimize the lookups, since it's such a hot codepath.
3. There were a few bugs in the implementation that were just hard to
correct under the existing abstractions (e.g., if you did `from typing
import Optional as Foo`, then we'd treat any reference to `Foo` _or_
`Optional` as `typing.Optional`, even though, in that case, `Optional`
was really unbound).

The new implementation goes through our existing binding tracking: when
we get a reference, we find the appropriate binding given the current
scope stack, and normalize it back to its original target.

Closes #1690.
Closes #1790.
2023-01-13 20:39:54 -05:00
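A toy model of the binding-based lookup described above: resolve a referenced name against the scope stack and, if it still points at an import binding, normalize it back to its original qualified target; a reassignment shadows the import. The types here are simplified stand-ins for ruff's actual binding machinery.

```rust
use std::collections::HashMap;

// Simplified stand-ins for ruff's binding types.
#[derive(Debug)]
enum Binding {
    // `from typing import List` / `import os as foo` record the fully
    // qualified name the binding refers to.
    Importation { qualified_name: String },
    // Any other assignment to the name.
    Assignment,
}

#[derive(Default)]
struct Scope {
    bindings: HashMap<String, Binding>,
}

/// Resolve a referenced name against the scope stack (innermost scope last)
/// and, if it is still bound to an import, normalize it back to its original
/// qualified target.
fn resolve(scopes: &[Scope], name: &str) -> Option<String> {
    for scope in scopes.iter().rev() {
        if let Some(binding) = scope.bindings.get(name) {
            return match binding {
                Binding::Importation { qualified_name } => Some(qualified_name.clone()),
                // A reassignment shadows the import, so the reference no
                // longer means e.g. `typing.List`.
                Binding::Assignment => None,
            };
        }
    }
    None
}

fn main() {
    // `from typing import List` binds `List` to `typing.List`.
    let mut module = Scope::default();
    module.bindings.insert(
        "List".to_string(),
        Binding::Importation { qualified_name: "typing.List".to_string() },
    );
    let scopes = vec![module];
    assert_eq!(resolve(&scopes, "List").as_deref(), Some("typing.List"));

    // `List = [1, 2, 3]` in a nested scope shadows the import.
    let mut function = Scope::default();
    function.bindings.insert("List".to_string(), Binding::Assignment);
    let mut scopes = scopes;
    scopes.push(function);
    assert_eq!(resolve(&scopes, "List"), None);
}
```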
Charlie Marsh
0b92849996 Improve spacing preservation for C405 fixes (#1855)
We now preserve the spacing of the more common form:

```py
set((
    1,
))
```

Rather than the less common form:

```py
set(
    (1,)
)
```
2023-01-13 13:11:08 -05:00
Charlie Marsh
12440ede9c Remove non-magic trailing comma from tuple (#1854)
Closes #1821.
2023-01-13 12:56:42 -05:00
max0x53
fc3f722df5 Implement PLR0133 (ComparisonOfConstants) (#1841)
This PR adds [Pylint
`R0133`](https://pylint.pycqa.org/en/latest/user_guide/messages/refactor/comparison-of-constants.html)

Feel free to suggest changes and additions; I have tried to maintain
parity with the Pylint implementation
[`comparison_checker.py`](https://github.com/PyCQA/pylint/blob/main/pylint/checkers/base/comparison_checker.py#L247)

See #970
2023-01-13 12:14:35 -05:00
Maksudul Haque
84ef7a0171 [isort] Add classes Config Option (#1849)
ref https://github.com/charliermarsh/ruff/issues/1819
2023-01-13 12:13:01 -05:00
Nicola Soranzo
66b1d09362 Clarify that some flake8-bugbear opinionated rules are already implemented (#1847)
E.g. B904 and B905.
2023-01-13 11:49:05 -05:00
Maksudul Haque
3ae01db226 [flake8-bugbear] Fix False Positives for B024 & B027 (#1851)
closes https://github.com/charliermarsh/ruff/issues/1848
2023-01-13 11:46:17 -05:00
Charlie Marsh
048e5774e8 Use absolute paths for --stdin-filename matching (#1843)
Non-basename glob matches (e.g., for `--per-file-ignores`) assume that
the path has been converted to an absolute path. (We do this for
filenames as part of the directory traversal.) For filenames passed via
stdin, though, we're missing this conversion. So `--per-file-ignores`
that rely on the _basename_ worked as expected, but directory paths did
not.

Closes #1840.
2023-01-12 21:01:05 -05:00
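A minimal sketch of the fix, using only the standard library: make a relative `--stdin-filename` absolute (as directory traversal already does for discovered files) before any per-file-ignore matching. Ruff itself uses the `path-absolutize` crate, which appears among the dependencies later in this diff.

```rust
use std::env;
use std::io;
use std::path::{Path, PathBuf};

/// Make a path absolute the same way paths discovered during directory
/// traversal already are, so directory-based per-file-ignore patterns can
/// match it.
fn absolutize(filename: &Path) -> io::Result<PathBuf> {
    if filename.is_absolute() {
        Ok(filename.to_path_buf())
    } else {
        Ok(env::current_dir()?.join(filename))
    }
}

fn main() -> io::Result<()> {
    // A filename passed via `--stdin-filename` is often relative; matching
    // `tests/**/*.py`-style patterns against it only works reliably once
    // both sides are absolute.
    let stdin_filename = Path::new("tests/fixtures/example.py");
    println!("{}", absolutize(stdin_filename)?.display());
    Ok(())
}
```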
max0x53
b47e8e6770 Implement PLR2004 (MagicValueComparison) (#1828)
This PR adds [Pylint
`R2004`](https://pylint.pycqa.org/en/latest/user_guide/messages/refactor/magic-value-comparison.html#magic-value-comparison-r2004)

Feel free to suggest changes and additions; I have tried to maintain
parity with the Pylint implementation
[`magic_value.py`](https://github.com/PyCQA/pylint/blob/main/pylint/extensions/magic_value.py)

See #970
2023-01-12 19:44:18 -05:00
Jan Katins
ef17c82998 Document the way extend-ignore/select are applied (#1839)
Closes: https://github.com/charliermarsh/ruff/issues/1838
2023-01-12 19:44:03 -05:00
938 changed files with 7599 additions and 4685 deletions

View File

@@ -60,7 +60,7 @@ jobs:
target: wasm32-unknown-unknown
- uses: Swatinem/rust-cache@v1
- run: cargo clippy --workspace --all-targets --all-features -- -D warnings -W clippy::pedantic
- run: cargo clippy --workspace --target wasm32-unknown-unknown --all-features -- -D warnings -W clippy::pedantic
- run: cargo clippy -p ruff --target wasm32-unknown-unknown --all-features -- -D warnings -W clippy::pedantic
cargo-test:
name: "cargo test"
@@ -79,7 +79,10 @@ jobs:
run: |
cargo insta test --all --delete-unreferenced-snapshots
git diff --exit-code
- run: cargo test --package ruff --test black_compatibility_test -- --ignored
- run: cargo test --package ruff_cli --test black_compatibility_test -- --ignored
# Check for broken links in the documentation.
# Setting RUSTDOCFLAGS because `cargo doc --check` isn't yet implemented (https://github.com/rust-lang/cargo/issues/10025).
- run: RUSTDOCFLAGS="-D warnings" cargo doc --all --no-deps
# TODO(charlie): Re-enable the `wasm-pack` tests.
# See: https://github.com/charliermarsh/ruff/issues/1425

View File

@@ -31,7 +31,7 @@ jobs:
- uses: jetli/wasm-pack-action@v0.4.0
- uses: jetli/wasm-bindgen-action@v0.2.0
- name: "Run wasm-pack"
run: wasm-pack build --target web --out-dir playground/src/pkg
run: wasm-pack build --target web --out-dir playground/src/pkg . -- -p ruff
- name: "Install Node dependencies"
run: npm ci
working-directory: playground

View File

@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.220
rev: v0.0.223
hooks:
- id: ruff

View File

@@ -1,5 +1,20 @@
# Breaking Changes
## 0.0.222
### `--max-complexity` has been removed from the CLI ([#1877](https://github.com/charliermarsh/ruff/pull/1877))
The McCabe plugin's `--max-complexity` setting has been removed from the CLI, for consistency with
the treatment of other, similar settings.
To set the maximum complexity, use the `max-complexity` property in your `pyproject.toml` file,
like so:
```toml
[tool.ruff.mccabe]
max-complexity = 10
```
## 0.0.181
### Files excluded by `.gitignore` are now ignored ([#1234](https://github.com/charliermarsh/ruff/pull/1234))

121
Cargo.lock generated
View File

@@ -735,7 +735,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.220-dev.0"
version = "0.0.223"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1364,6 +1364,27 @@ dependencies = [
"libc",
]
[[package]]
name = "num_enum"
version = "0.5.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf5395665662ef45796a4ff5486c5d41d29e0c09640af4c5f17fd94ee2c119c9"
dependencies = [
"num_enum_derive",
]
[[package]]
name = "num_enum_derive"
version = "0.5.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b0498641e53dd6ac1a4f22547548caa6864cc4933784319cd1775271c5a46ce"
dependencies = [
"proc-macro-crate",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "once_cell"
version = "1.17.0"
@@ -1621,6 +1642,17 @@ dependencies = [
"termtree",
]
[[package]]
name = "proc-macro-crate"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eda0fc3b0fb7c975631757e14d9049da17374063edb6ebbcbc54d880d4fe94e9"
dependencies = [
"once_cell",
"thiserror",
"toml",
]
[[package]]
name = "proc-macro-error"
version = "1.0.4"
@@ -1874,27 +1906,19 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.220"
version = "0.0.223"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
"assert_cmd",
"atty",
"bincode",
"bitflags",
"cachedir",
"cfg-if 1.0.0",
"chrono",
"clap 4.0.32",
"clap_complete_command",
"clearscreen",
"colored",
"console_error_panic_hook",
"console_log",
"criterion",
"dirs 4.0.0",
"fern",
"filetime",
"getrandom 0.2.8",
"glob",
"globset",
@@ -1906,13 +1930,10 @@ dependencies = [
"log",
"natord",
"nohash-hasher",
"notify",
"num-bigint",
"num-traits",
"once_cell",
"path-absolutize",
"quick-junit",
"rayon",
"regex",
"ropey",
"ruff_macros",
@@ -1924,25 +1945,57 @@ dependencies = [
"semver",
"serde",
"serde-wasm-bindgen",
"serde_json",
"shellexpand",
"similar",
"strum",
"strum_macros",
"test-case",
"textwrap",
"titlecase",
"toml_edit",
"update-informer",
"ureq",
"walkdir",
"wasm-bindgen",
"wasm-bindgen-test",
]
[[package]]
name = "ruff_cli"
version = "0.0.223"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
"assert_cmd",
"atty",
"bincode",
"cachedir",
"chrono",
"clap 4.0.32",
"clap_complete_command",
"clearscreen",
"colored",
"filetime",
"glob",
"ignore",
"itertools",
"log",
"notify",
"path-absolutize",
"quick-junit",
"rayon",
"regex",
"ruff",
"rustc-hash",
"serde",
"serde_json",
"similar",
"strum",
"textwrap",
"update-informer",
"ureq",
"walkdir",
]
[[package]]
name = "ruff_dev"
version = "0.0.220"
version = "0.0.223"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1950,6 +2003,7 @@ dependencies = [
"libcst",
"once_cell",
"ruff",
"ruff_cli",
"rustpython-ast",
"rustpython-common",
"rustpython-parser",
@@ -1962,7 +2016,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
version = "0.0.220"
version = "0.0.223"
dependencies = [
"once_cell",
"proc-macro2",
@@ -2005,8 +2059,8 @@ dependencies = [
[[package]]
name = "rustpython-ast"
version = "0.1.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=d532160333ffeb6dbeca2c2728c2391cd1e53b7f#d532160333ffeb6dbeca2c2728c2391cd1e53b7f"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
dependencies = [
"num-bigint",
"rustpython-common",
@@ -2015,8 +2069,8 @@ dependencies = [
[[package]]
name = "rustpython-common"
version = "0.0.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=d532160333ffeb6dbeca2c2728c2391cd1e53b7f#d532160333ffeb6dbeca2c2728c2391cd1e53b7f"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
dependencies = [
"ascii",
"bitflags",
@@ -2040,8 +2094,8 @@ dependencies = [
[[package]]
name = "rustpython-compiler-core"
version = "0.1.2"
source = "git+https://github.com/RustPython/RustPython.git?rev=d532160333ffeb6dbeca2c2728c2391cd1e53b7f#d532160333ffeb6dbeca2c2728c2391cd1e53b7f"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
dependencies = [
"bincode",
"bitflags",
@@ -2050,15 +2104,15 @@ dependencies = [
"lz4_flex",
"num-bigint",
"num-complex",
"num_enum",
"serde",
"static_assertions",
"thiserror",
]
[[package]]
name = "rustpython-parser"
version = "0.1.2"
source = "git+https://github.com/RustPython/RustPython.git?rev=d532160333ffeb6dbeca2c2728c2391cd1e53b7f#d532160333ffeb6dbeca2c2728c2391cd1e53b7f"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
dependencies = [
"ahash",
"anyhow",
@@ -2496,6 +2550,15 @@ dependencies = [
"regex",
]
[[package]]
name = "toml"
version = "0.5.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1333c76748e868a4d9d1017b5ab53171dfd095f70c712fdb4653a406547f598f"
dependencies = [
"serde",
]
[[package]]
name = "toml_datetime"
version = "0.5.0"

View File

@@ -2,11 +2,13 @@
members = [
"flake8_to_ruff",
"ruff_dev",
"ruff_cli",
]
default-members = [".", "ruff_cli"]
[package]
name = "ruff"
version = "0.0.220"
version = "0.0.223"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -22,20 +24,14 @@ crate-type = ["cdylib", "rlib"]
doctest = false
[dependencies]
annotate-snippets = { version = "0.9.1", features = ["color"] }
anyhow = { version = "1.0.66" }
atty = { version = "0.2.14" }
bincode = { version = "1.3.3" }
bitflags = { version = "1.3.2" }
cachedir = { version = "0.3.0" }
cfg-if = { version = "1.0.0" }
chrono = { version = "0.4.21", default-features = false, features = ["clock"] }
clap = { version = "4.0.1", features = ["derive", "env"] }
clap_complete_command = { version = "0.4.0" }
colored = { version = "2.0.0" }
dirs = { version = "4.0.0" }
fern = { version = "0.6.1" }
filetime = { version = "0.2.17" }
glob = { version = "0.3.0" }
globset = { version = "0.4.9" }
ignore = { version = "0.4.18" }
@@ -44,36 +40,26 @@ libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "f2f0b7a487a87
log = { version = "0.4.17" }
natord = { version = "1.0.9" }
nohash-hasher = { version = "0.2.0" }
notify = { version = "5.0.0" }
num-bigint = { version = "0.4.3" }
num-traits = "0.2.15"
once_cell = { version = "1.16.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix_paths_on_wasm"] }
quick-junit = { version = "0.3.2" }
regex = { version = "1.6.0" }
ropey = { version = "1.5.0", features = ["cr_lines", "simd"], default-features = false }
ruff_macros = { version = "0.0.220", path = "ruff_macros" }
ruff_macros = { version = "0.0.223", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
schemars = { version = "0.8.11" }
semver = { version = "1.0.16" }
serde = { version = "1.0.147", features = ["derive"] }
serde_json = { version = "1.0.87" }
shellexpand = { version = "3.0.0" }
similar = { version = "2.2.1" }
strum = { version = "0.24.1", features = ["strum_macros"] }
strum_macros = { version = "0.24.3" }
textwrap = { version = "0.16.0" }
titlecase = { version = "2.2.1" }
toml_edit = { version = "0.17.1", features = ["easy"] }
walkdir = { version = "2.3.2" }
[target.'cfg(not(target_family = "wasm"))'.dependencies]
clearscreen = { version = "2.0.0" }
rayon = { version = "1.5.3" }
update-informer = { version = "0.6.0", default-features = false, features = ["pypi"], optional = true }
# https://docs.rs/getrandom/0.2.7/getrandom/#webassembly-support
# For (future) wasm-pack support
@@ -88,17 +74,11 @@ wasm-bindgen = { version = "0.2.83" }
[dev-dependencies]
insta = { version = "1.19.1", features = ["yaml"] }
test-case = { version = "2.2.2" }
ureq = { version = "2.5.0", features = [] }
wasm-bindgen-test = { version = "0.3.33" }
[target.'cfg(not(target_family = "wasm"))'.dev-dependencies]
assert_cmd = { version = "2.0.4" }
criterion = { version = "0.4.0" }
[features]
default = ["update-informer"]
update-informer = ["dep:update-informer"]
[profile.release]
panic = "abort"
lto = "thin"

112
README.md
View File

@@ -51,16 +51,18 @@ Ruff is extremely actively developed and used in major open-source projects like
- [Apache Airflow](https://github.com/apache/airflow)
- [Bokeh](https://github.com/bokeh/bokeh)
- [Zulip](https://github.com/zulip/zulip)
- [Dagster](https://github.com/dagster-io/dagster)
- [Pydantic](https://github.com/pydantic/pydantic)
- [Sphinx](https://github.com/sphinx-doc/sphinx)
- [Hatch](https://github.com/pypa/hatch)
- [Jupyter](https://github.com/jupyter-server/jupyter_server)
- [Synapse](https://github.com/matrix-org/synapse)
- [Synapse (Matrix)](https://github.com/matrix-org/synapse)
- [Saleor](https://github.com/saleor/saleor)
- [Polars](https://github.com/pola-rs/polars)
- [Ibis](https://github.com/ibis-project/ibis)
- [OpenBB](https://github.com/OpenBB-finance/OpenBBTerminal)
- [`pyca/cryptography`](https://github.com/pyca/cryptography)
- [Cryptography (PyCA)](https://github.com/pyca/cryptography)
- [SnowCLI (Snowflake)](https://github.com/Snowflake-Labs/snowcli)
Read the [launch blog post](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
@@ -121,6 +123,7 @@ of [Conda](https://docs.conda.io/en/latest/):
1. [pygrep-hooks (PGH)](#pygrep-hooks-pgh)
1. [Pylint (PLC, PLE, PLR, PLW)](#pylint-plc-ple-plr-plw)
1. [flake8-pie (PIE)](#flake8-pie-pie)
1. [flake8-commas (COM)](#flake8-commas-com)
1. [Ruff-specific rules (RUF)](#ruff-specific-rules-ruf)<!-- End auto-generated table of contents. -->
1. [Editor Integrations](#editor-integrations)
1. [FAQ](#faq)
@@ -141,8 +144,6 @@ Ruff is available as [`ruff`](https://pypi.org/project/ruff/) on PyPI:
pip install ruff
```
[![Packaging status](https://repology.org/badge/vertical-allrepos/ruff-python-linter.svg?exclude_unsupported=1)](https://repology.org/project/ruff-python-linter/versions)
For **macOS Homebrew** and **Linuxbrew** users, Ruff is also available as [`ruff`](https://formulae.brew.sh/formula/ruff) on Homebrew:
```shell
@@ -161,6 +162,8 @@ For **Arch Linux** users, Ruff is also available as [`ruff`](https://archlinux.o
pacman -S ruff
```
[![Packaging status](https://repology.org/badge/vertical-allrepos/ruff-python-linter.svg?exclude_unsupported=1)](https://repology.org/project/ruff-python-linter/versions)
### Usage
To run Ruff, try any of the following:
@@ -182,7 +185,7 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.220'
rev: 'v0.0.223'
hooks:
- id: ruff
# Respect `exclude` and `extend-exclude` settings.
@@ -383,8 +386,6 @@ Options:
The minimum Python version that should be supported
--line-length <LINE_LENGTH>
Set the line-length for length-associated rules and automatic formatting
--max-complexity <MAX_COMPLEXITY>
Maximum McCabe complexity allowed for a given function
--add-noqa
Enable automatic additions of `noqa` directives to failing lines
--clean
@@ -581,6 +582,7 @@ For more, see [Pyflakes](https://pypi.org/project/pyflakes/2.5.0/) on PyPI.
For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI.
#### Error (E)
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| E401 | MultipleImportsOnOneLine | Multiple imports on one line | |
@@ -598,6 +600,10 @@ For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI
| E743 | AmbiguousFunctionName | Ambiguous function name: `...` | |
| E902 | IOError | IOError: `...` | |
| E999 | SyntaxError | SyntaxError: `...` | |
#### Warning (W)
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| W292 | NoNewLineAtEndOfFile | No newline at end of file | 🛠 |
| W505 | DocLineTooLong | Doc line too long (89 > 88 characters) | |
| W605 | InvalidEscapeSequence | Invalid escape sequence: '\c' | 🛠 |
@@ -979,7 +985,6 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| SIM115 | OpenFileWithContextHandler | Use context handler for opening files | |
| SIM101 | DuplicateIsinstanceCall | Multiple `isinstance` calls for `...`, merge into a single call | 🛠 |
| SIM102 | NestedIfStatements | Use a single `if` statement instead of nested `if` statements | |
| SIM103 | ReturnBoolConditionDirectly | Return the condition `...` directly | 🛠 |
@@ -990,6 +995,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| SIM110 | ConvertLoopToAny | Use `return any(x for x in y)` instead of `for` loop | 🛠 |
| SIM111 | ConvertLoopToAll | Use `return all(x for x in y)` instead of `for` loop | 🛠 |
| SIM112 | UseCapitalEnvironmentVariables | Use capitalized environment variable `...` instead of `...` | 🛠 |
| SIM115 | OpenFileWithContextHandler | Use context handler for opening files | |
| SIM117 | MultipleWithStatements | Use a single `with` statement with multiple contexts instead of nested `with` statements | |
| SIM118 | KeyInDict | Use `key in dict` instead of `key in dict.keys()` | 🛠 |
| SIM201 | NegateEqualOp | Use `left != right` instead of `not left == right` | 🛠 |
@@ -1084,18 +1090,33 @@ For more, see [pygrep-hooks](https://github.com/pre-commit/pygrep-hooks) on GitH
For more, see [Pylint](https://pypi.org/project/pylint/2.15.7/) on PyPI.
#### Convention (PLC)
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| PLC0414 | UselessImportAlias | Import alias does not rename original package | 🛠 |
| PLC2201 | MisplacedComparisonConstant | Comparison should be ... | 🛠 |
| PLC3002 | UnnecessaryDirectLambdaCall | Lambda expression called directly. Execute the expression inline instead. | |
#### Error (PLE)
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| PLE0117 | NonlocalWithoutBinding | Nonlocal name `...` found without binding | |
| PLE0118 | UsedPriorGlobalDeclaration | Name `...` is used prior to global declaration on line 1 | |
| PLE1142 | AwaitOutsideAsync | `await` should be used within an async function | |
#### Refactor (PLR)
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| PLR0133 | ConstantComparison | Two constants compared in a comparison, consider replacing `0 == 0` | |
| PLR0206 | PropertyWithParameters | Cannot have defined parameters for properties | |
| PLR0402 | ConsiderUsingFromImport | Use `from ... import ...` in lieu of alias | |
| PLR1701 | ConsiderMergingIsinstance | Merge these isinstance calls: `isinstance(..., (...))` | |
| PLR1722 | UseSysExit | Use `sys.exit()` instead of `exit` | 🛠 |
| PLR2004 | MagicValueComparison | Magic value used in comparison, consider replacing magic with a constant variable | |
#### Warning (PLW)
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| PLW0120 | UselessElseOnLoop | Else clause on loop without a break statement, remove the else and de-indent all the code inside it | |
| PLW0602 | GlobalVariableNotAssigned | Using global for `...` but no assignment is done | |
@@ -1109,6 +1130,16 @@ For more, see [flake8-pie](https://pypi.org/project/flake8-pie/0.16.0/) on PyPI.
| PIE794 | DupeClassFieldDefinitions | Class field `...` is defined multiple times | 🛠 |
| PIE807 | PreferListBuiltin | Prefer `list()` over useless lambda | 🛠 |
### flake8-commas (COM)
For more, see [flake8-commas](https://pypi.org/project/flake8-commas/2.1.0/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| COM812 | TrailingCommaMissing | Trailing comma missing | 🛠 |
| COM818 | TrailingCommaOnBareTupleProhibited | Trailing comma on bare tuple prohibited | |
| COM819 | TrailingCommaProhibited | Trailing comma prohibited | 🛠 |
### Ruff-specific rules (RUF)
| Code | Name | Message | Fix |
@@ -1254,6 +1285,18 @@ officially supported LSP server for Ruff.
However, Ruff is also available as part of the [coc-pyright](https://github.com/fannheyward/coc-pyright)
extension for `coc.nvim`.
<details>
<summary>With the <a href="https://github.com/dense-analysis/ale">ALE</a> plugin for (Neo)Vim.</summary>
```vim
let g:ale_linters = { "python": ["ruff"] }
let g:ale_fixers = {
\ "python": ["black", "ruff"],
\}
```
</details>
<details>
<summary>Ruff can also be integrated via <a href="https://github.com/neovim/nvim-lspconfig/blob/master/doc/server_configurations.md#efm"><code>efm</code></a> in just a <a href="https://github.com/JafarAbdi/myconfigs/blob/6f0b6b2450e92ec8fc50422928cd22005b919110/efm-langserver/config.yaml#L14-L20">few lines</a>.</summary>
<br>
@@ -1381,6 +1424,7 @@ natively, including:
- [`flake8-boolean-trap`](https://pypi.org/project/flake8-boolean-trap/)
- [`flake8-bugbear`](https://pypi.org/project/flake8-bugbear/)
- [`flake8-builtins`](https://pypi.org/project/flake8-builtins/)
- [`flake8-commas`](https://pypi.org/project/flake8-commas/)
- [`flake8-comprehensions`](https://pypi.org/project/flake8-comprehensions/)
- [`flake8-datetimez`](https://pypi.org/project/flake8-datetimez/)
- [`flake8-debugger`](https://pypi.org/project/flake8-debugger/)
@@ -1418,7 +1462,7 @@ Beyond the rule set, Ruff suffers from the following limitations vis-à-vis Flak
There are a few other minor incompatibilities between Ruff and the originating Flake8 plugins:
- Ruff doesn't implement the "opinionated" lint rules from `flake8-bugbear`.
- Ruff doesn't implement all the "opinionated" lint rules from `flake8-bugbear`.
- Depending on your project structure, Ruff and `isort` can differ in their detection of first-party
code. (This is often solved by modifying the `src` property, e.g., to `src = ["src"]`, if your
code is nested in a `src` directory.)
@@ -1446,6 +1490,7 @@ Today, Ruff can be used to replace Flake8 when used with any of the following pl
- [`flake8-boolean-trap`](https://pypi.org/project/flake8-boolean-trap/)
- [`flake8-bugbear`](https://pypi.org/project/flake8-bugbear/)
- [`flake8-builtins`](https://pypi.org/project/flake8-builtins/)
- [`flake8-commas`](https://pypi.org/project/flake8-commas/)
- [`flake8-comprehensions`](https://pypi.org/project/flake8-comprehensions/)
- [`flake8-datetimez`](https://pypi.org/project/flake8-datetimez/)
- [`flake8-debugger`](https://pypi.org/project/flake8-debugger/)
@@ -1908,6 +1953,12 @@ extend-exclude = ["tests", "src/bad.py"]
A list of rule codes or prefixes to ignore, in addition to those
specified by `ignore`.
Note that `extend-ignore` is applied after resolving rules from
`ignore`/`select` and a less specific rule in `extend-ignore`
would overwrite a more specific rule in `select`. It is
recommended to only use `extend-ignore` when extending a
`pyproject.toml` file via `extend`.
**Default value**: `[]`
**Type**: `Vec<RuleCodePrefix>`
@@ -1927,6 +1978,12 @@ extend-ignore = ["F841"]
A list of rule codes or prefixes to enable, in addition to those
specified by `select`.
Note that `extend-select` is applied after resolving rules from
`ignore`/`select` and a less specific rule in `extend-select`
would overwrite a more specific rule in `ignore`. It is
recommended to only use `extend-select` when extending a
`pyproject.toml` file via `extend`.
**Default value**: `[]`
**Type**: `Vec<RuleCodePrefix>`
@@ -2128,6 +2185,25 @@ line-length = 120
---
#### [`namespace-packages`](#namespace-packages)
Mark the specified directories as namespace packages. For the purpose of
module resolution, Ruff will treat those directories as if they
contained an `__init__.py` file.
**Default value**: `[]`
**Type**: `Vec<PathBuf>`
**Example usage**:
```toml
[tool.ruff]
namespace-packages = ["airflow/providers"]
```
---
#### [`per-file-ignores`](#per-file-ignores)
A list of mappings from file pattern to rule codes or prefixes to
@@ -2864,6 +2940,24 @@ ignore-variadic-names = true
### `isort`
#### [`classes`](#classes)
An override list of tokens to always recognize as a Class for
`order-by-type` regardless of casing.
**Default value**: `[]`
**Type**: `Vec<String>`
**Example usage**:
```toml
[tool.ruff.isort]
classes = ["SVC"]
```
---
#### [`combine-as-imports`](#combine-as-imports)
Combines as imports on the same line. See isort's [`combine-as-imports`](https://pycqa.github.io/isort/docs/configuration/options.html#combine-as-imports)

View File

@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
version = "0.0.220"
version = "0.0.223"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.220"
version = "0.0.223"
dependencies = [
"anyhow",
"bincode",

View File

@@ -1,12 +1,8 @@
[package]
name = "flake8-to-ruff"
version = "0.0.220-dev.0"
version = "0.0.223"
edition = "2021"
[lib]
name = "flake8_to_ruff"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }
clap = { version = "4.0.1", features = ["derive"] }

View File

@@ -1,18 +0,0 @@
#![allow(
clippy::collapsible_else_if,
clippy::collapsible_if,
clippy::implicit_hasher,
clippy::match_same_arms,
clippy::missing_errors_doc,
clippy::missing_panics_doc,
clippy::module_name_repetitions,
clippy::must_use_candidate,
clippy::similar_names,
clippy::too_many_lines
)]
#![forbid(unsafe_code)]
pub mod black;
pub mod converter;
mod parser;
pub mod plugin;

View File

@@ -1,4 +1,4 @@
//! Utility to generate Ruff's pyproject.toml section from a Flake8 INI file.
//! Utility to generate Ruff's `pyproject.toml` section from a Flake8 INI file.
#![allow(
clippy::collapsible_else_if,
clippy::collapsible_if,
@@ -18,9 +18,7 @@ use std::path::PathBuf;
use anyhow::Result;
use clap::Parser;
use configparser::ini::Ini;
use flake8_to_ruff::black::parse_black_options;
use flake8_to_ruff::converter;
use flake8_to_ruff::plugin::Plugin;
use ruff::flake8_to_ruff;
#[derive(Parser)]
#[command(
@@ -38,7 +36,7 @@ struct Cli {
pyproject: Option<PathBuf>,
/// List of plugins to enable.
#[arg(long, value_delimiter = ',')]
plugin: Option<Vec<Plugin>>,
plugin: Option<Vec<flake8_to_ruff::Plugin>>,
}
fn main() -> Result<()> {
@@ -52,12 +50,12 @@ fn main() -> Result<()> {
// Read the pyproject.toml file.
let black = cli
.pyproject
.map(parse_black_options)
.map(flake8_to_ruff::parse_black_options)
.transpose()?
.flatten();
// Create Ruff's pyproject.toml section.
let pyproject = converter::convert(&config, black.as_ref(), cli.plugin)?;
let pyproject = flake8_to_ruff::convert(&config, black.as_ref(), cli.plugin)?;
println!("{}", toml_edit::easy::to_string_pretty(&pyproject)?);
Ok(())

View File

@@ -0,0 +1,45 @@
The MIT License (MIT)
Copyright (c) 2017 Thomas Grainger.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Portions of this flake8-commas Software may utilize the following
copyrighted material, the use of which is hereby acknowledged.
Original flake8-commas: https://github.com/trevorcreech/flake8-commas/commit/e8563b71b1d5442e102c8734c11cb5202284293d
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -1,10 +1,13 @@
[build-system]
requires = ["maturin>=0.14,<0.15"]
requires = ["maturin>=0.14.10,<0.15"]
# We depend on >=0.14.10 because we specify name in
# [package.metadata.maturin] in ruff_cli/Cargo.toml.
build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.220"
version = "0.0.223"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },
@@ -35,6 +38,7 @@ urls = { repository = "https://github.com/charliermarsh/ruff" }
[tool.maturin]
bindings = "bin"
manifest-path = "ruff_cli/Cargo.toml"
python-source = "python"
strip = true

View File

@@ -0,0 +1,9 @@
class C:
from typing import overload
@overload
def f(self, x: int, y: int) -> None:
...
def f(self, x, y):
pass

View File

@@ -1,15 +1,16 @@
"""
Should emit:
B024 - on lines 17, 34, 52, 58, 69, 74, 79, 84, 89
B024 - on lines 18, 71, 82, 87, 92, 141
"""
import abc
import abc as notabc
from abc import ABC, ABCMeta
from abc import abstractmethod
from abc import abstractmethod, abstractproperty
from abc import abstractmethod as abstract
from abc import abstractmethod as abstractaoeuaoeuaoeu
from abc import abstractmethod as notabstract
from abc import abstractproperty as notabstract_property
import foo
@@ -49,12 +50,24 @@ class Base_6(ABC):
foo()
class Base_7(ABC): # error
class Base_7(ABC):
@notabstract
def method(self):
foo()
class Base_8(ABC):
@notabstract_property
def method(self):
foo()
class Base_9(ABC):
@abstractproperty
def method(self):
foo()
class MetaBase_1(metaclass=ABCMeta): # error
def method(self):
foo()

View File

@@ -1,11 +1,12 @@
"""
Should emit:
B027 - on lines 12, 15, 18, 22, 30
B027 - on lines 13, 16, 19, 23
"""
import abc
from abc import ABC
from abc import abstractmethod
from abc import abstractmethod, abstractproperty
from abc import abstractmethod as notabstract
from abc import abstractproperty as notabstract_property
class AbstractClass(ABC):
@@ -42,6 +43,18 @@ class AbstractClass(ABC):
def abstract_3(self):
...
@abc.abstractproperty
def abstract_4(self):
...
@abstractproperty
def abstract_5(self):
...
@notabstract_property
def abstract_6(self):
...
def body_1(self):
print("foo")
...

View File

@@ -0,0 +1,628 @@
# ==> bad_function_call.py <==
bad_function_call(
param1='test',
param2='test'
)
# ==> bad_list.py <==
bad_list = [
1,
2,
3
]
bad_list_with_comment = [
1,
2,
3
# still needs a comma!
]
bad_list_with_extra_empty = [
1,
2,
3
]
# ==> bare.py <==
bar = 1, 2
foo = 1
foo = (1,)
foo = 1,
bar = 1; foo = bar,
foo = (
3,
4,
)
foo = 3,
class A(object):
foo = 3
bar = 10,
foo_bar = 2
a = ('a',)
from foo import bar, baz
group_by = function_call('arg'),
group_by = ('foobar' * 3),
def foo():
return False,
# ==> callable_before_parenth_form.py <==
def foo(
bar,
):
pass
{'foo': foo}['foo'](
bar
)
{'foo': foo}['foo'](
bar,
)
(foo)(
bar
)
(foo)[0](
bar,
)
[foo][0](
bar
)
[foo][0](
bar,
)
# ==> comment_good_dict.py <==
multiline_good_dict = {
"good": 123, # this is a good number
}
# ==> dict_comprehension.py <==
not_a_dict = {
x: y
for x, y in ((1, 2), (3, 4))
}
# ==> good_empty_comma_context.py <==
def func2(
):
pass
func2(
)
func2(
)
[
]
[
]
(
)
(
)
{
}
# ==> good_list.py <==
stuff = [
'a',
'b',
# more stuff will go here
]
more_stuff = [
'a',
'b',
]
# ==> keyword_before_parenth_form/base_bad.py <==
from x import (
y
)
assert(
SyntaxWarning,
ThrownHere,
Anyway
)
# async await is fine outside an async def
# ruff: RustPython tokenizer treats async/await as keywords, not applicable.
# def await(
# foo
# ):
# async(
# foo
# )
# def async(
# foo
# ):
# await(
# foo
# )
# ==> keyword_before_parenth_form/base.py <==
from x import (
y,
)
assert(
SyntaxWarning,
ThrownHere,
Anyway,
)
assert (
foo
)
assert (
foo and
bar
)
if(
foo and
bar
):
pass
elif(
foo and
bar
):
pass
for x in(
[1,2,3]
):
print(x)
(x for x in (
[1, 2, 3]
))
(
'foo'
) is (
'foo'
)
if (
foo and
bar
) or not (
foo
) or (
spam
):
pass
def xyz():
raise(
Exception()
)
def abc():
return(
3
)
while(
False
):
pass
with(
loop
):
pass
def foo():
yield (
"foo"
)
# async await is fine outside an async def
# ruff: RustPython tokenizer treats async/await as keywords, not applicable.
# def await(
# foo,
# ):
# async(
# foo,
# )
# def async(
# foo,
# ):
# await(
# foo,
# )
# ==> keyword_before_parenth_form/py3.py <==
# Syntax error in Py2
def foo():
yield from (
foo
)
# ==> list_comprehension.py <==
not_a_list = [
s.strip()
for s in 'foo, bar, baz'.split(',')
]
# ==> multiline_bad_dict.py <==
multiline_bad_dict = {
"bad": 123
}
# ==> multiline_bad_function_def.py <==
def func_good(
a = 3,
b = 2):
pass
def func_bad(
a = 3,
b = 2
):
pass
# ==> multiline_bad_function_one_param.py <==
def func(
a = 3
):
pass
func(
a = 3
)
# ==> multiline_bad_or_dict.py <==
multiline_bad_or_dict = {
"good": True or False,
"bad": 123
}
# ==> multiline_good_dict.py <==
multiline_good_dict = {
"good": 123,
}
# ==> multiline_good_single_keyed_for_dict.py <==
good_dict = {
"good": x for x in y
}
# ==> multiline_if.py <==
if (
foo
and bar
):
print("Baz")
# ==> multiline_index_access.py <==
multiline_index_access[
"good"
]
multiline_index_access_after_function()[
"good"
]
multiline_index_access_after_inline_index_access['first'][
"good"
]
multiline_index_access[
"probably fine",
]
[0, 1, 2][
"good"
]
[0, 1, 2][
"probably fine",
]
multiline_index_access[
"probably fine",
"not good"
]
multiline_index_access[
"fine",
"fine",
:
"not good"
]
# ==> multiline_string.py <==
s = (
'this' +
'is a string'
)
s2 = (
'this'
'is a also a string'
)
t = (
'this' +
'is a tuple',
)
t2 = (
'this'
'is also a tuple',
)
# ==> multiline_subscript_slice.py <==
multiline_index_access[
"fine",
"fine"
:
"not fine"
]
multiline_index_access[
"fine"
"fine"
:
"fine"
:
"fine"
]
multiline_index_access[
"fine"
"fine",
:
"fine",
:
"fine",
]
multiline_index_access[
"fine"
"fine",
:
"fine"
:
"fine",
"not fine"
]
multiline_index_access[
"fine"
"fine",
:
"fine",
"fine"
:
"fine",
]
multiline_index_access[
lambda fine,
fine,
fine: (0,)
:
lambda fine,
fine,
fine: (0,),
"fine"
:
"fine",
]
# ==> one_line_dict.py <==
one_line_dict = {"good": 123}
# ==> parenth_form.py <==
parenth_form = (
a +
b +
c
)
parenth_form_with_lambda = (
lambda x, y: 0
)
parenth_form_with_default_lambda = (
lambda x=(
lambda
x,
y,
:
0
),
y = {a: b},
:
0
)
# ==> prohibited.py <==
foo = ['a', 'b', 'c',]
bar = { a: b,}
def bah(ham, spam,):
pass
(0,)
(0, 1,)
foo = ['a', 'b', 'c', ]
bar = { a: b, }
def bah(ham, spam, ):
pass
(0, )
(0, 1, )
image[:, :, 0]
image[:,]
image[:,:,]
lambda x, :
# ==> unpack.py <==
def function(
foo,
bar,
**kwargs
):
pass
def function(
foo,
bar,
*args
):
pass
def function(
foo,
bar,
*args,
extra_kwarg
):
pass
result = function(
foo,
bar,
**kwargs
)
result = function(
foo,
bar,
**not_called_kwargs
)
def foo(
ham,
spam,
*args,
kwarg_only
):
pass
# In python 3.5 if it's not a function def, commas are mandatory.
foo(
**kwargs
)
{
**kwargs
}
(
*args
)
{
*args
}
[
*args
]
def foo(
ham,
spam,
*args
):
pass
def foo(
ham,
spam,
**kwargs
):
pass
def foo(
ham,
spam,
*args,
kwarg_only
):
pass
# In python 3.5 if it's not a function def, commas are mandatory.
foo(
**kwargs,
)
{
**kwargs,
}
(
*args,
)
{
*args,
}
[
*args,
]
result = function(
foo,
bar,
**{'ham': spam}
)

View File

@@ -1,5 +1,18 @@
s1 = set([1, 2])
s2 = set((1, 2))
s3 = set([])
s4 = set(())
s5 = set()
set([1, 2])
set((1, 2))
set([])
set(())
set()
set((1,))
set((
1,
))
set([
1,
])
set(
(1,)
)
set(
[1,]
)

View File

@@ -2,6 +2,11 @@ with A() as a: # SIM117
with B() as b:
print("hello")
with A(): # SIM117
with B():
with C():
print("hello")
with A() as a:
a()
with B() as b:
@@ -11,3 +16,15 @@ with A() as a:
with B() as b:
print("hello")
a()
async with A() as a:
with B() as b:
print("hello")
with A() as a:
async with B() as b:
print("hello")
async with A() as a:
async with B() as b:
print("hello")

View File

@@ -0,0 +1,4 @@
from sklearn.svm import func, SVC, CONST, Klass
from subprocess import N_CLASS, PIPE, Popen, STDOUT
from module import CLASS, Class, CONSTANT, function, BASIC, Apple
from torch.nn import SELU, AClass, A_CONSTANT

View File

@@ -0,0 +1,17 @@
class PropertyWithSetter:
@property
def foo(self) -> str:
"""Docstring for foo."""
return "foo"
@foo.setter
def foo(self, value: str) -> None:
pass
@foo.deleter
def foo(self):
pass
@foo
def foo(self, value: str) -> None:
pass

View File

@@ -9,7 +9,6 @@ from .background import BackgroundTasks
# F401 `datastructures.UploadFile` imported but unused
from .datastructures import UploadFile as FileUpload
# OK
import applications as applications

View File

@@ -5,4 +5,3 @@ from warnings import warn
warnings.warn("this is ok")
warn("by itself is also ok")
logging.warning("this is fine")
log.warning("this is ok")

View File

@@ -2,14 +2,4 @@ import logging
from logging import warn
logging.warn("this is not ok")
log.warn("this is also not ok")
warn("not ok")
def foo():
from logging import warn
def warn():
pass
warn("has been redefined, but we will still report it")

View File

@@ -0,0 +1,59 @@
"""Check that magic values are not used in comparisons"""
if 100 == 100: # [comparison-of-constants]
pass
if 1 == 3: # [comparison-of-constants]
pass
if 1 != 3: # [comparison-of-constants]
pass
x = 0
if 4 == 3 == x: # [comparison-of-constants]
pass
if x == 0: # correct
pass
y = 1
if x == y: # correct
pass
if 1 > 0: # [comparison-of-constants]
pass
if x > 0: # correct
pass
if 1 >= 0: # [comparison-of-constants]
pass
if x >= 0: # correct
pass
if 1 < 0: # [comparison-of-constants]
pass
if x < 0: # correct
pass
if 1 <= 0: # [comparison-of-constants]
pass
if x <= 0: # correct
pass
word = "hello"
if word == "": # correct
pass
if "hello" == "": # [comparison-of-constants]
pass
truthy = True
if truthy == True: # correct
pass
if True == False: # [comparison-of-constants]
pass

View File

@@ -0,0 +1,71 @@
"""Check that magic values are not used in comparisons"""
import cmath
user_input = 10
if 10 > user_input: # [magic-value-comparison]
pass
if 10 == 100: # [comparison-of-constants] R0133
pass
if 1 == 3: # [comparison-of-constants] R0133
pass
x = 0
if 4 == 3 == x: # [comparison-of-constants] R0133
pass
time_delta = 7224
ONE_HOUR = 3600
if time_delta > ONE_HOUR: # correct
pass
argc = 1
if argc != -1: # correct
pass
if argc != 0: # correct
pass
if argc != 1: # correct
pass
if argc != 2: # [magic-value-comparison]
pass
if __name__ == "__main__": # correct
pass
ADMIN_PASSWORD = "SUPERSECRET"
input_password = "password"
if input_password == "": # correct
pass
if input_password == ADMIN_PASSWORD: # correct
pass
if input_password == "Hunter2": # [magic-value-comparison]
pass
PI = 3.141592653589793238
pi_estimation = 3.14
if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
pass
if pi_estimation == PI: # correct
pass
HELLO_WORLD = b"Hello, World!"
user_input = b"Hello, There!"
if user_input == b"something": # [magic-value-comparison]
pass
if user_input == HELLO_WORLD: # correct
pass

View File

@@ -67,7 +67,7 @@
}
},
"extend-ignore": {
"description": "A list of rule codes or prefixes to ignore, in addition to those specified by `ignore`.",
"description": "A list of rule codes or prefixes to ignore, in addition to those specified by `ignore`.\n\nNote that `extend-ignore` is applied after resolving rules from `ignore`/`select` and a less specific rule in `extend-ignore` would overwrite a more specific rule in `select`. It is recommended to only use `extend-ignore` when extending a `pyproject.toml` file via `extend`.",
"type": [
"array",
"null"
@@ -77,7 +77,7 @@
}
},
"extend-select": {
"description": "A list of rule codes or prefixes to enable, in addition to those specified by `select`.",
"description": "A list of rule codes or prefixes to enable, in addition to those specified by `select`.\n\nNote that `extend-select` is applied after resolving rules from `ignore`/`select` and a less specific rule in `extend-select` would overwrite a more specific rule in `ignore`. It is recommended to only use `extend-select` when extending a `pyproject.toml` file via `extend`.",
"type": [
"array",
"null"
@@ -285,6 +285,16 @@
}
]
},
"namespace-packages": {
"description": "Mark the specified directories as namespace packages. For the purpose of module resolution, Ruff will treat those directories as if they contained an `__init__.py` file.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"pep8-naming": {
"description": "Options for the `pep8-naming` plugin.",
"anyOf": [
@@ -755,6 +765,16 @@
"IsortOptions": {
"type": "object",
"properties": {
"classes": {
"description": "An override list of tokens to always recognize as a Class for `order-by-type` regardless of casing.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"combine-as-imports": {
"description": "Combines as imports on the same line. See isort's [`combine-as-imports`](https://pycqa.github.io/isort/docs/configuration/options.html#combine-as-imports) option.",
"type": [
@@ -1136,6 +1156,12 @@
"C9",
"C90",
"C901",
"COM",
"COM8",
"COM81",
"COM812",
"COM818",
"COM819",
"D",
"D1",
"D10",
@@ -1433,6 +1459,9 @@
"PLE1142",
"PLR",
"PLR0",
"PLR01",
"PLR013",
"PLR0133",
"PLR02",
"PLR020",
"PLR0206",
@@ -1445,6 +1474,10 @@
"PLR1701",
"PLR172",
"PLR1722",
"PLR2",
"PLR20",
"PLR200",
"PLR2004",
"PLW",
"PLW0",
"PLW01",

ruff_cli/Cargo.toml Normal file (68 lines)
View File

@@ -0,0 +1,68 @@
[package]
name = "ruff_cli"
version = "0.0.223"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
documentation = "https://github.com/charliermarsh/ruff"
homepage = "https://github.com/charliermarsh/ruff"
repository = "https://github.com/charliermarsh/ruff"
readme = "../README.md"
license = "MIT"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[[bin]]
name = "ruff"
path = "src/main.rs"
doctest = false
# Since the name of the binary is the same as the name of the `ruff` crate
# running `cargo doc --no-deps --all` results in an `output filename collision`
# See also https://github.com/rust-lang/cargo/issues/6313.
# We therefore disable the documentation generation for the binary.
doc = false
[dependencies]
ruff = { path = ".." }
annotate-snippets = { version = "0.9.1", features = ["color"] }
anyhow = { version = "1.0.66" }
atty = { version = "0.2.14" }
bincode = { version = "1.3.3" }
cachedir = { version = "0.3.0" }
chrono = { version = "0.4.21", default-features = false, features = ["clock"] }
clap = { version = "4.0.1", features = ["derive", "env"] }
clap_complete_command = { version = "0.4.0" }
clearscreen = { version = "2.0.0" }
colored = { version = "2.0.0" }
filetime = { version = "0.2.17" }
glob = { version = "0.3.0" }
ignore = { version = "0.4.18" }
itertools = { version = "0.10.5" }
log = { version = "0.4.17" }
notify = { version = "5.0.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache"] }
quick-junit = { version = "0.3.2" }
rayon = { version = "1.5.3" }
regex = { version = "1.6.0" }
rustc-hash = { version = "1.1.0" }
serde = { version = "1.0.147", features = ["derive"] }
serde_json = { version = "1.0.87" }
similar = { version = "2.2.1" }
textwrap = { version = "0.16.0" }
update-informer = { version = "0.6.0", default-features = false, features = ["pypi"], optional = true }
walkdir = { version = "2.3.2" }
[dev-dependencies]
assert_cmd = { version = "2.0.4" }
strum = { version = "0.24.1" }
ureq = { version = "2.5.0", features = [] }
[features]
default = ["update-informer"]
update-informer = ["dep:update-informer"]
[package.metadata.maturin]
name = "ruff"
# Setting the name here is necessary for maturin to include the package in its builds.

ruff_cli/src/cache.rs Normal file (128 lines)
View File

@@ -0,0 +1,128 @@
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io::Write;
use std::path::Path;
use anyhow::Result;
use filetime::FileTime;
use log::error;
use path_absolutize::Absolutize;
use ruff::message::Message;
use ruff::settings::{flags, AllSettings, Settings};
use serde::{Deserialize, Serialize};
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
#[derive(Serialize, Deserialize)]
struct CacheMetadata {
mtime: i64,
}
#[derive(Serialize)]
struct CheckResultRef<'a> {
metadata: &'a CacheMetadata,
messages: &'a [Message],
}
#[derive(Deserialize)]
struct CheckResult {
metadata: CacheMetadata,
messages: Vec<Message>,
}
fn content_dir() -> &'static Path {
Path::new("content")
}
fn cache_key<P: AsRef<Path>>(path: P, settings: &Settings, autofix: flags::Autofix) -> u64 {
let mut hasher = DefaultHasher::new();
CARGO_PKG_VERSION.hash(&mut hasher);
path.as_ref().absolutize().unwrap().hash(&mut hasher);
settings.hash(&mut hasher);
autofix.hash(&mut hasher);
hasher.finish()
}
#[allow(dead_code)]
/// Initialize the cache at the specified `Path`.
pub fn init(path: &Path) -> Result<()> {
// Create the cache directories.
fs::create_dir_all(path.join(content_dir()))?;
// Add the CACHEDIR.TAG.
if !cachedir::is_tagged(path)? {
cachedir::add_tag(path)?;
}
// Add the .gitignore.
let gitignore_path = path.join(".gitignore");
if !gitignore_path.exists() {
let mut file = fs::File::create(gitignore_path)?;
file.write_all(b"*")?;
}
Ok(())
}
fn write_sync(cache_dir: &Path, key: u64, value: &[u8]) -> Result<(), std::io::Error> {
fs::write(
cache_dir.join(content_dir()).join(format!("{key:x}")),
value,
)
}
fn read_sync(cache_dir: &Path, key: u64) -> Result<Vec<u8>, std::io::Error> {
fs::read(cache_dir.join(content_dir()).join(format!("{key:x}")))
}
/// Get a value from the cache.
pub fn get<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &AllSettings,
autofix: flags::Autofix,
) -> Option<Vec<Message>> {
let encoded = read_sync(
&settings.cli.cache_dir,
cache_key(path, &settings.lib, autofix),
)
.ok()?;
let (mtime, messages) = match bincode::deserialize::<CheckResult>(&encoded[..]) {
Ok(CheckResult {
metadata: CacheMetadata { mtime },
messages,
}) => (mtime, messages),
Err(e) => {
error!("Failed to deserialize encoded cache entry: {e:?}");
return None;
}
};
if FileTime::from_last_modification_time(metadata).unix_seconds() != mtime {
return None;
}
Some(messages)
}
/// Set a value in the cache.
pub fn set<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &AllSettings,
autofix: flags::Autofix,
messages: &[Message],
) {
let check_result = CheckResultRef {
metadata: &CacheMetadata {
mtime: FileTime::from_last_modification_time(metadata).unix_seconds(),
},
messages,
};
if let Err(e) = write_sync(
&settings.cli.cache_dir,
cache_key(path, &settings.lib, autofix),
&bincode::serialize(&check_result).unwrap(),
) {
error!("Failed to write to cache: {e:?}");
}
}
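Taken together, `get` and `set` bracket a lint run: the key hashes the Ruff version, absolute path, library settings, and autofix mode, and a cached entry is only honored when the file's mtime still matches. A hedged call-site sketch (the real wiring lives in `ruff_cli/src/diagnostics.rs` below; `lint` is a placeholder, and `flags::Autofix` is assumed to be `Copy`):

fn lint_with_cache(
    path: &std::path::Path,
    settings: &AllSettings,
    autofix: flags::Autofix,
) -> anyhow::Result<Vec<Message>> {
    let metadata = std::fs::metadata(path)?;
    // Cache hit: the file's mtime matches the stored entry, so reuse the messages.
    if let Some(messages) = cache::get(path, &metadata, settings, autofix) {
        return Ok(messages);
    }
    let messages = lint(path)?; // placeholder for the actual lint call
    cache::set(path, &metadata, settings, autofix, &messages);
    Ok(messages)
}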

View File

@@ -2,18 +2,21 @@ use std::path::{Path, PathBuf};
use clap::{command, Parser};
use regex::Regex;
use rustc_hash::FxHashMap;
use crate::logging::LogLevel;
use crate::registry::{RuleCode, RuleCodePrefix};
use crate::resolver::ConfigProcessor;
use crate::settings::types::{
use ruff::fs;
use ruff::logging::LogLevel;
use ruff::registry::{RuleCode, RuleCodePrefix};
use ruff::resolver::ConfigProcessor;
use ruff::settings::types::{
FilePattern, PatternPrefixPair, PerFileIgnore, PythonVersion, SerializationFormat,
};
use crate::{fs, mccabe};
use rustc_hash::FxHashMap;
#[derive(Debug, Parser)]
#[command(author, about = "Ruff: An extremely fast Python linter.")]
#[command(
author,
name = "ruff",
about = "Ruff: An extremely fast Python linter."
)]
#[command(version)]
#[allow(clippy::struct_excessive_bools)]
pub struct Cli {
@@ -134,9 +137,6 @@ pub struct Cli {
/// formatting.
#[arg(long)]
pub line_length: Option<usize>,
/// Maximum McCabe complexity allowed for a given function.
#[arg(long)]
pub max_complexity: Option<usize>,
/// Enable automatic additions of `noqa` directives to failing lines.
#[arg(
long,
@@ -263,7 +263,6 @@ impl Cli {
fixable: self.fixable,
ignore: self.ignore,
line_length: self.line_length,
max_complexity: self.max_complexity,
per_file_ignores: self.per_file_ignores,
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
@@ -329,7 +328,6 @@ pub struct Overrides {
pub fixable: Option<Vec<RuleCodePrefix>>,
pub ignore: Option<Vec<RuleCodePrefix>>,
pub line_length: Option<usize>,
pub max_complexity: Option<usize>,
pub per_file_ignores: Option<Vec<PatternPrefixPair>>,
pub respect_gitignore: Option<bool>,
pub select: Option<Vec<RuleCodePrefix>>,
@@ -346,7 +344,7 @@ pub struct Overrides {
}
impl ConfigProcessor for &Overrides {
fn process_config(&self, config: &mut crate::settings::configuration::Configuration) {
fn process_config(&self, config: &mut ruff::settings::configuration::Configuration) {
if let Some(cache_dir) = &self.cache_dir {
config.cache_dir = Some(cache_dir.clone());
}
@@ -380,11 +378,6 @@ impl ConfigProcessor for &Overrides {
if let Some(line_length) = &self.line_length {
config.line_length = Some(*line_length);
}
if let Some(max_complexity) = &self.max_complexity {
config.mccabe = Some(mccabe::settings::Options {
max_complexity: Some(*max_complexity),
});
}
if let Some(per_file_ignores) = &self.per_file_ignores {
config.per_file_ignores = Some(collect_per_file_ignores(per_file_ignores.clone()));
}

View File

@@ -11,22 +11,22 @@ use log::{debug, error};
use path_absolutize::path_dedot;
#[cfg(not(target_family = "wasm"))]
use rayon::prelude::*;
use rustpython_ast::Location;
use ruff::cache::CACHE_DIR_NAME;
use ruff::linter::add_noqa_to_path;
use ruff::logging::LogLevel;
use ruff::message::{Location, Message};
use ruff::registry::RuleCode;
use ruff::resolver::{FileDiscovery, PyprojectDiscovery};
use ruff::settings::flags;
use ruff::settings::types::SerializationFormat;
use ruff::{fix, fs, packaging, resolver, warn_user_once, IOError};
use serde::Serialize;
use walkdir::WalkDir;
use crate::cache::CACHE_DIR_NAME;
use crate::cache;
use crate::cli::Overrides;
use crate::diagnostics::{lint_path, lint_stdin, Diagnostics};
use crate::iterators::par_iter;
use crate::linter::add_noqa_to_path;
use crate::logging::LogLevel;
use crate::message::Message;
use crate::registry::RuleCode;
use crate::resolver::{FileDiscovery, PyprojectDiscovery};
use crate::settings::flags;
use crate::settings::types::SerializationFormat;
use crate::{cache, fix, fs, packaging, resolver, violations, warn_user_once};
/// Run the linter over a collection of files.
pub fn run(
@@ -56,19 +56,19 @@ pub fn run(
if matches!(cache, flags::Cache::Enabled) {
match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => {
if let Err(e) = cache::init(&settings.cache_dir) {
if let Err(e) = cache::init(&settings.cli.cache_dir) {
error!(
"Failed to initialize cache at {}: {e:?}",
settings.cache_dir.to_string_lossy()
settings.cli.cache_dir.to_string_lossy()
);
}
}
PyprojectDiscovery::Hierarchical(default) => {
for settings in std::iter::once(default).chain(resolver.iter()) {
if let Err(e) = cache::init(&settings.cache_dir) {
if let Err(e) = cache::init(&settings.cli.cache_dir) {
error!(
"Failed to initialize cache at {}: {e:?}",
settings.cache_dir.to_string_lossy()
settings.cli.cache_dir.to_string_lossy()
);
}
}
@@ -83,6 +83,8 @@ pub fn run(
.flatten()
.map(ignore::DirEntry::path)
.collect::<Vec<_>>(),
&resolver,
pyproject_strategy,
);
let start = Instant::now();
@@ -95,7 +97,7 @@ pub fn run(
.parent()
.and_then(|parent| package_roots.get(parent))
.and_then(|package| *package);
let settings = resolver.resolve(path, pyproject_strategy);
let settings = resolver.resolve_all(path, pyproject_strategy);
lint_path(path, package, settings, cache, autofix)
.map_err(|e| (Some(path.to_owned()), e.to_string()))
}
@@ -114,7 +116,7 @@ pub fn run(
let settings = resolver.resolve(path, pyproject_strategy);
if settings.enabled.contains(&RuleCode::E902) {
Diagnostics::new(vec![Message {
kind: violations::IOError(message).into(),
kind: IOError(message).into(),
location: Location::default(),
end_location: Location::default(),
fix: None,
@@ -169,9 +171,9 @@ pub fn run_stdin(
};
let package_root = filename
.and_then(Path::parent)
.and_then(packaging::detect_package_root);
.and_then(|path| packaging::detect_package_root(path, &settings.lib.namespace_packages));
let stdin = read_from_stdin()?;
let mut diagnostics = lint_stdin(filename, package_root, &stdin, settings, autofix)?;
let mut diagnostics = lint_stdin(filename, package_root, &stdin, &settings.lib, autofix)?;
diagnostics.messages.sort_unstable();
Ok(diagnostics)
}
@@ -288,7 +290,7 @@ struct Explanation<'a> {
}
/// Explain a `RuleCode` to the user.
pub fn explain(code: &RuleCode, format: &SerializationFormat) -> Result<()> {
pub fn explain(code: &RuleCode, format: SerializationFormat) -> Result<()> {
match format {
SerializationFormat::Text | SerializationFormat::Grouped => {
println!(

View File

@@ -7,12 +7,13 @@ use std::path::Path;
use anyhow::Result;
use log::debug;
use ruff::linter::{lint_fix, lint_only};
use ruff::message::Message;
use ruff::settings::{flags, AllSettings, Settings};
use ruff::{fix, fs};
use similar::TextDiff;
use crate::linter::{lint_fix, lint_only};
use crate::message::Message;
use crate::settings::{flags, Settings};
use crate::{cache, fix, fs};
use crate::cache;
#[derive(Debug, Default)]
pub struct Diagnostics {
@@ -37,12 +38,12 @@ impl AddAssign for Diagnostics {
pub fn lint_path(
path: &Path,
package: Option<&Path>,
settings: &Settings,
settings: &AllSettings,
cache: flags::Cache,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Validate the `Settings` and return any errors.
settings.validate()?;
settings.lib.validate()?;
// Check the cache.
// TODO(charlie): `fixer::Mode::Apply` and `fixer::Mode::Diff` both have
@@ -68,7 +69,7 @@ pub fn lint_path(
// Lint the file.
let (messages, fixed) = if matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff) {
let (transformed, fixed, messages) = lint_fix(&contents, path, package, settings)?;
let (transformed, fixed, messages) = lint_fix(&contents, path, package, &settings.lib)?;
if fixed > 0 {
if matches!(autofix, fix::FixMode::Apply) {
write(path, transformed)?;
@@ -84,7 +85,7 @@ pub fn lint_path(
}
(messages, fixed)
} else {
let messages = lint_only(&contents, path, package, settings, autofix.into())?;
let messages = lint_only(&contents, path, package, &settings.lib, autofix.into())?;
let fixed = 0;
(messages, fixed)
};

ruff_cli/src/lib.rs Normal file (14 lines)
View File

@@ -0,0 +1,14 @@
//! This library only exists to enable the Ruff internal tooling (`ruff_dev`)
//! to automatically update the `ruff --help` output in the `README.md`.
//!
//! For the actual Ruff library, see [`ruff`].
#![allow(clippy::must_use_candidate, dead_code)]
mod cli;
use clap::CommandFactory;
/// Returns the output of `ruff --help`.
pub fn help() -> String {
cli::Cli::command().render_help().to_string()
}
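The single public entry point, `help()`, is what `ruff_dev` calls when regenerating the README (see the `generate_cli_help.rs` change further down). A minimal sketch of that consumer, omitting the `trim_lines`/`replace_readme_section` handling:

fn print_cli_help() {
    // Render the `ruff --help` text via the new ruff_cli library entry point.
    let output = ruff_cli::help();
    print!("{output}");
}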

View File

@@ -1,25 +1,41 @@
#![allow(
clippy::match_same_arms,
clippy::missing_errors_doc,
clippy::module_name_repetitions,
clippy::too_many_lines
)]
#![forbid(unsafe_code)]
use std::io::{self};
use std::path::{Path, PathBuf};
use std::process::ExitCode;
use std::sync::mpsc::channel;
use ::ruff::cli::{extract_log_level, Cli, Overrides};
use ::ruff::logging::{set_up_logging, LogLevel};
use ::ruff::printer::{Printer, Violations};
use ::ruff::resolver::{
resolve_settings_with_processor, ConfigProcessor, FileDiscovery, PyprojectDiscovery, Relativity,
};
use ::ruff::settings::configuration::Configuration;
use ::ruff::settings::pyproject;
use ::ruff::settings::types::SerializationFormat;
use ::ruff::settings::{pyproject, Settings};
#[cfg(feature = "update-informer")]
use ::ruff::updates;
use ::ruff::{commands, fix, warn_user_once};
use ::ruff::{fix, fs, warn_user_once};
use anyhow::Result;
use clap::{CommandFactory, Parser};
use cli::{extract_log_level, Cli, Overrides};
use colored::Colorize;
use notify::{recommended_watcher, RecursiveMode, Watcher};
use path_absolutize::path_dedot;
use printer::{Printer, Violations};
use ruff::settings::{AllSettings, CliSettings};
mod cache;
mod cli;
mod commands;
mod diagnostics;
mod iterators;
mod printer;
#[cfg(all(feature = "update-informer"))]
pub mod updates;
/// Resolve the relevant settings strategy and defaults for the current
/// invocation.
@@ -33,7 +49,7 @@ fn resolve(
// First priority: if we're running in isolated mode, use the default settings.
let mut config = Configuration::default();
overrides.process_config(&mut config);
let settings = Settings::from_configuration(config, &path_dedot::CWD)?;
let settings = AllSettings::from_configuration(config, &path_dedot::CWD)?;
Ok(PyprojectDiscovery::Fixed(settings))
} else if let Some(pyproject) = config {
// Second priority: the user specified a `pyproject.toml` file. Use that
@@ -67,12 +83,12 @@ fn resolve(
// as the "default" settings.)
let mut config = Configuration::default();
overrides.process_config(&mut config);
let settings = Settings::from_configuration(config, &path_dedot::CWD)?;
let settings = AllSettings::from_configuration(config, &path_dedot::CWD)?;
Ok(PyprojectDiscovery::Hierarchical(settings))
}
}
pub(crate) fn inner_main() -> Result<ExitCode> {
pub fn main() -> Result<ExitCode> {
// Extract command-line arguments.
let (cli, overrides) = Cli::parse().partition();
let log_level = extract_log_level(&cli);
@@ -98,39 +114,35 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
// Validate the `Settings` and return any errors.
match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.validate()?,
PyprojectDiscovery::Hierarchical(settings) => settings.validate()?,
PyprojectDiscovery::Fixed(settings) => settings.lib.validate()?,
PyprojectDiscovery::Hierarchical(settings) => settings.lib.validate()?,
};
// Extract options that are included in `Settings`, but only apply at the top
// level.
let file_strategy = FileDiscovery {
force_exclude: match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.force_exclude,
PyprojectDiscovery::Hierarchical(settings) => settings.force_exclude,
PyprojectDiscovery::Fixed(settings) => settings.lib.force_exclude,
PyprojectDiscovery::Hierarchical(settings) => settings.lib.force_exclude,
},
respect_gitignore: match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.respect_gitignore,
PyprojectDiscovery::Hierarchical(settings) => settings.respect_gitignore,
PyprojectDiscovery::Fixed(settings) => settings.lib.respect_gitignore,
PyprojectDiscovery::Hierarchical(settings) => settings.lib.respect_gitignore,
},
};
let (fix, fix_only, format, update_check) = match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => (
settings.fix,
settings.fix_only,
settings.format,
settings.update_check,
),
PyprojectDiscovery::Hierarchical(settings) => (
settings.fix,
settings.fix_only,
settings.format,
settings.update_check,
),
let CliSettings {
fix,
fix_only,
format,
update_check,
..
} = match &pyproject_strategy {
PyprojectDiscovery::Fixed(settings) => settings.cli.clone(),
PyprojectDiscovery::Hierarchical(settings) => settings.cli.clone(),
};
if let Some(code) = cli.explain {
commands::explain(&code, &format)?;
commands::explain(&code, format)?;
return Ok(ExitCode::SUCCESS);
}
if cli.show_settings {
@@ -185,7 +197,7 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
}
// Perform an initial run instantly.
printer.clear_screen()?;
Printer::clear_screen()?;
printer.write_to_user("Starting linter in watch mode...\n");
let messages = commands::run(
@@ -215,7 +227,7 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
.unwrap_or_default()
});
if py_changed {
printer.clear_screen()?;
Printer::clear_screen()?;
printer.write_to_user("File change detected...\n");
let messages = commands::run(
@@ -244,7 +256,7 @@ pub(crate) fn inner_main() -> Result<ExitCode> {
// Generate lint violations.
let diagnostics = if is_stdin {
commands::run_stdin(
cli.stdin_filename.as_deref(),
cli.stdin_filename.map(fs::normalize_path).as_deref(),
&pyproject_strategy,
&file_strategy,
&overrides,

View File

@@ -1,4 +1,6 @@
use std::collections::BTreeMap;
use std::io;
use std::io::{BufWriter, Write};
use std::path::Path;
use annotate_snippets::display_list::{DisplayList, FormatOptions};
@@ -6,17 +8,16 @@ use annotate_snippets::snippet::{Annotation, AnnotationType, Slice, Snippet, Sou
use anyhow::Result;
use colored::Colorize;
use itertools::iterate;
use rustpython_parser::ast::Location;
use ruff::fs::relativize_path;
use ruff::logging::LogLevel;
use ruff::message::{Location, Message};
use ruff::registry::RuleCode;
use ruff::settings::types::SerializationFormat;
use ruff::{fix, notify_user};
use serde::Serialize;
use serde_json::json;
use crate::diagnostics::Diagnostics;
use crate::fs::relativize_path;
use crate::logging::LogLevel;
use crate::message::Message;
use crate::registry::RuleCode;
use crate::settings::types::SerializationFormat;
use crate::{fix, notify_user};
/// Enum to control whether lint violations are shown to the user.
pub enum Violations {
@@ -70,7 +71,7 @@ impl<'a> Printer<'a> {
}
}
fn post_text(&self, diagnostics: &Diagnostics) {
fn post_text<T: Write>(&self, stdout: &mut T, diagnostics: &Diagnostics) -> Result<()> {
if self.log_level >= &LogLevel::Default {
match self.violations {
Violations::Show => {
@@ -78,9 +79,12 @@ impl<'a> Printer<'a> {
let remaining = diagnostics.messages.len();
let total = fixed + remaining;
if fixed > 0 {
println!("Found {total} error(s) ({fixed} fixed, {remaining} remaining).");
writeln!(
stdout,
"Found {total} error(s) ({fixed} fixed, {remaining} remaining)."
)?;
} else if remaining > 0 {
println!("Found {remaining} error(s).");
writeln!(stdout, "Found {remaining} error(s).")?;
}
if !matches!(self.autofix, fix::FixMode::Apply) {
@@ -90,7 +94,10 @@ impl<'a> Printer<'a> {
.filter(|message| message.kind.fixable())
.count();
if num_fixable > 0 {
println!("{num_fixable} potentially fixable with the --fix option.");
writeln!(
stdout,
"{num_fixable} potentially fixable with the --fix option."
)?;
}
}
}
@@ -98,14 +105,15 @@ impl<'a> Printer<'a> {
let fixed = diagnostics.fixed;
if fixed > 0 {
if matches!(self.autofix, fix::FixMode::Apply) {
println!("Fixed {fixed} error(s).");
writeln!(stdout, "Fixed {fixed} error(s).")?;
} else if matches!(self.autofix, fix::FixMode::Diff) {
println!("Would fix {fixed} error(s).");
writeln!(stdout, "Would fix {fixed} error(s).")?;
}
}
}
}
}
Ok(())
}
pub fn write_once(&self, diagnostics: &Diagnostics) -> Result<()> {
@@ -114,18 +122,21 @@ impl<'a> Printer<'a> {
}
if matches!(self.violations, Violations::Hide) {
let mut stdout = BufWriter::new(io::stdout().lock());
if matches!(
self.format,
SerializationFormat::Text | SerializationFormat::Grouped
) {
self.post_text(diagnostics);
self.post_text(&mut stdout, diagnostics)?;
}
return Ok(());
}
let mut stdout = BufWriter::new(io::stdout().lock());
match self.format {
SerializationFormat::Json => {
println!(
writeln!(
stdout,
"{}",
serde_json::to_string_pretty(
&diagnostics
@@ -146,7 +157,7 @@ impl<'a> Printer<'a> {
})
.collect::<Vec<_>>()
)?
);
)?;
}
SerializationFormat::Junit => {
use quick_junit::{NonSuccessKind, Report, TestCase, TestCaseStatus, TestSuite};
@@ -181,14 +192,14 @@ impl<'a> Printer<'a> {
}
report.add_test_suite(test_suite);
}
println!("{}", report.to_string().unwrap());
writeln!(stdout, "{}", report.to_string().unwrap())?;
}
SerializationFormat::Text => {
for message in &diagnostics.messages {
print_message(message);
print_message(&mut stdout, message)?;
}
self.post_text(diagnostics);
self.post_text(&mut stdout, diagnostics)?;
}
SerializationFormat::Grouped => {
for (filename, messages) in group_messages_by_filename(&diagnostics.messages) {
@@ -210,21 +221,25 @@ impl<'a> Printer<'a> {
);
// Print the filename.
println!("{}:", relativize_path(Path::new(&filename)).underline());
writeln!(
stdout,
"{}:",
relativize_path(Path::new(&filename)).underline()
)?;
// Print each message.
for message in messages {
print_grouped_message(message, row_length, column_length);
print_grouped_message(&mut stdout, message, row_length, column_length)?;
}
println!();
writeln!(stdout)?;
}
self.post_text(diagnostics);
self.post_text(&mut stdout, diagnostics)?;
}
SerializationFormat::Github => {
// Generate error workflow command in GitHub Actions format.
// See: https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-error-message
diagnostics.messages.iter().for_each(|message| {
for message in &diagnostics.messages {
let label = format!(
"{}{}{}{}{}{} {} {}",
relativize_path(Path::new(&message.filename)),
@@ -236,7 +251,8 @@ impl<'a> Printer<'a> {
message.kind.code().as_ref(),
message.kind.body(),
);
println!(
writeln!(
stdout,
"::error title=Ruff \
({}),file={},line={},col={},endLine={},endColumn={}::{}",
message.kind.code(),
@@ -246,13 +262,13 @@ impl<'a> Printer<'a> {
message.end_location.row(),
message.end_location.column(),
label,
);
});
)?;
}
}
SerializationFormat::Gitlab => {
// Generate JSON with errors in GitLab CI format
// https://docs.gitlab.com/ee/ci/testing/code_quality.html#implementing-a-custom-tool
println!(
writeln!(stdout,
"{}",
serde_json::to_string_pretty(
&diagnostics
@@ -275,10 +291,12 @@ impl<'a> Printer<'a> {
)
.collect::<Vec<_>>()
)?
);
)?;
}
}
stdout.flush()?;
Ok(())
}
@@ -294,19 +312,21 @@ impl<'a> Printer<'a> {
);
}
let mut stdout = BufWriter::new(io::stdout().lock());
if !diagnostics.messages.is_empty() {
if self.log_level >= &LogLevel::Default {
println!();
writeln!(stdout)?;
}
for message in &diagnostics.messages {
print_message(message);
print_message(&mut stdout, message)?;
}
}
stdout.flush()?;
Ok(())
}
pub fn clear_screen(&self) -> Result<()> {
pub fn clear_screen() -> Result<()> {
#[cfg(not(target_family = "wasm"))]
clearscreen::clear()?;
Ok(())
@@ -332,7 +352,7 @@ fn num_digits(n: usize) -> usize {
}
/// Print a single `Message` with full details.
fn print_message(message: &Message) {
fn print_message<T: Write>(stdout: &mut T, message: &Message) -> Result<()> {
let label = format!(
"{}{}{}{}{}{} {} {}",
relativize_path(Path::new(&message.filename)).bold(),
@@ -344,7 +364,7 @@ fn print_message(message: &Message) {
message.kind.code().as_ref().red().bold(),
message.kind.body(),
);
println!("{label}");
writeln!(stdout, "{label}")?;
if let Some(source) = &message.source {
let commit = message.kind.commit();
let footer = if commit.is_some() {
@@ -356,7 +376,6 @@ fn print_message(message: &Message) {
} else {
vec![]
};
let snippet = Snippet {
title: Some(Annotation {
label: None,
@@ -386,13 +405,19 @@ fn print_message(message: &Message) {
// Skip the first line, since we format the `label` ourselves.
let message = DisplayList::from(snippet).to_string();
let (_, message) = message.split_once('\n').unwrap();
println!("{message}\n");
writeln!(stdout, "{message}\n")?;
}
Ok(())
}
/// Print a grouped `Message`, assumed to be printed in a group with others from
/// the same file.
fn print_grouped_message(message: &Message, row_length: usize, column_length: usize) {
fn print_grouped_message<T: Write>(
stdout: &mut T,
message: &Message,
row_length: usize,
column_length: usize,
) -> Result<()> {
let label = format!(
" {}{}{}{}{} {} {}",
" ".repeat(row_length - num_digits(message.location.row())),
@@ -403,7 +428,7 @@ fn print_grouped_message(message: &Message, row_length: usize, column_length: usize) {
message.kind.code().as_ref().red().bold(),
message.kind.body(),
);
println!("{label}");
writeln!(stdout, "{label}")?;
if let Some(source) = &message.source {
let commit = message.kind.commit();
let footer = if commit.is_some() {
@@ -415,7 +440,6 @@ fn print_grouped_message(message: &Message, row_length: usize, column_length: usize) {
} else {
vec![]
};
let snippet = Snippet {
title: Some(Annotation {
label: None,
@@ -446,6 +470,7 @@ fn print_grouped_message(message: &Message, row_length: usize, column_length: usize) {
let message = DisplayList::from(snippet).to_string();
let (_, message) = message.split_once('\n').unwrap();
let message = textwrap::indent(message, " ");
println!("{message}");
writeln!(stdout, "{message}")?;
}
Ok(())
}
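The recurring edit in this file is one pattern: take the stdout lock once, wrap it in a `BufWriter`, write with `writeln!`, and flush at the end, rather than paying a lock (and potential flush) per `println!` call. A self-contained sketch of that pattern:

use std::io::{self, BufWriter, Write};

// Lock stdout once, buffer all writes, flush at the end.
fn write_lines(lines: &[String]) -> io::Result<()> {
    let mut stdout = BufWriter::new(io::stdout().lock());
    for line in lines {
        writeln!(stdout, "{line}")?;
    }
    stdout.flush()
}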

View File

@@ -9,7 +9,7 @@ use std::time::Duration;
use std::{fs, process, str};
use anyhow::{anyhow, Context, Result};
use assert_cmd::{crate_name, Command};
use assert_cmd::Command;
use itertools::Itertools;
use log::info;
use ruff::logging::{set_up_logging, LogLevel};
@@ -24,6 +24,8 @@ struct Blackd {
client: ureq::Agent,
}
const BIN_NAME: &str = "ruff";
impl Blackd {
pub fn new() -> Result<Self> {
// Get free TCP port to run on
@@ -104,7 +106,7 @@ fn run_test(path: &Path, blackd: &Blackd, ruff_args: &[&str]) -> Result<()> {
let input = fs::read(path)?;
// Step 1: Run `ruff` on the input.
let step_1 = &Command::cargo_bin(crate_name!())?
let step_1 = &Command::cargo_bin(BIN_NAME)?
.args(ruff_args)
.write_stdin(input)
.assert()
@@ -121,7 +123,7 @@ fn run_test(path: &Path, blackd: &Blackd, ruff_args: &[&str]) -> Result<()> {
let step_2_output = blackd.check(&step_1_output)?;
// Step 3: Re-run `ruff` on the input.
let step_3 = &Command::cargo_bin(crate_name!())?
let step_3 = &Command::cargo_bin(BIN_NAME)?
.args(ruff_args)
.write_stdin(step_2_output.clone())
.assert();
@@ -178,7 +180,7 @@ fn test_ruff_black_compatibility() -> Result<()> {
// problem. Ruff would add a `# noqa: W292` after the first run, black introduces a
// newline, and ruff removes the `# noqa: W292` again.
.filter(|origin| *origin != RuleOrigin::Ruff)
.map(|origin| origin.codes().iter().map(AsRef::as_ref).join(","))
.map(|origin| origin.prefixes().as_list(","))
.join(",");
let ruff_args = [
"-",

View File

@@ -3,11 +3,14 @@
use std::str;
use anyhow::Result;
use assert_cmd::{crate_name, Command};
use assert_cmd::Command;
use path_absolutize::path_dedot;
const BIN_NAME: &str = "ruff";
#[test]
fn test_stdin_success() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
cmd.args(["-", "--format", "text"])
.write_stdin("")
.assert()
@@ -17,7 +20,7 @@ fn test_stdin_success() -> Result<()> {
#[test]
fn test_stdin_error() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text"])
.write_stdin("import os\n")
@@ -33,7 +36,7 @@ fn test_stdin_error() -> Result<()> {
#[test]
fn test_stdin_filename() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--stdin-filename", "F401.py"])
.write_stdin("import os\n")
@@ -49,7 +52,7 @@ fn test_stdin_filename() -> Result<()> {
#[test]
fn test_stdin_json() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "json", "--stdin-filename", "F401.py"])
.write_stdin("import os\n")
@@ -57,41 +60,44 @@ fn test_stdin_json() -> Result<()> {
.failure();
assert_eq!(
str::from_utf8(&output.get_output().stdout)?,
r#"[
{
format!(
r#"[
{{
"code": "F401",
"message": "`os` imported but unused",
"fix": {
"fix": {{
"content": "",
"message": "Remove unused import: `os`",
"location": {
"location": {{
"row": 1,
"column": 0
},
"end_location": {
}},
"end_location": {{
"row": 2,
"column": 0
}
},
"location": {
}}
}},
"location": {{
"row": 1,
"column": 8
},
"end_location": {
}},
"end_location": {{
"row": 1,
"column": 10
},
"filename": "F401.py"
}
}},
"filename": "{}/F401.py"
}}
]
"#
"#,
path_dedot::CWD.to_str().unwrap()
)
);
Ok(())
}
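Because the expected JSON is now built with `format!`, every literal brace in the payload is escaped by doubling it, while `{}` placeholders stay single. A tiny reminder of that escaping rule:

// In Rust format strings, literal braces are doubled; placeholders are not.
let cwd = "/tmp";
let line = format!(r#"{{"filename": "{cwd}/F401.py"}}"#);
assert_eq!(line, r#"{"filename": "/tmp/F401.py"}"#);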
#[test]
fn test_stdin_autofix() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.write_stdin("import os\nimport sys\n\nprint(sys.version)\n")
@@ -106,7 +112,7 @@ fn test_stdin_autofix() -> Result<()> {
#[test]
fn test_stdin_autofix_when_not_fixable_should_still_print_contents() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.write_stdin("import os\nimport sys\n\nif (1, 2):\n print(sys.version)\n")
@@ -121,7 +127,7 @@ fn test_stdin_autofix_when_not_fixable_should_still_print_contents() -> Result<()> {
#[test]
fn test_stdin_autofix_when_no_issues_should_still_print_contents() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.write_stdin("import sys\n\nprint(sys.version)\n")
@@ -136,7 +142,7 @@ fn test_stdin_autofix_when_no_issues_should_still_print_contents() -> Result<()>
#[test]
fn test_show_source() -> Result<()> {
let mut cmd = Command::cargo_bin(crate_name!())?;
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--show-source"])
.write_stdin("l = 1")

View File

@@ -1,12 +1,8 @@
[package]
name = "ruff_dev"
version = "0.0.220"
version = "0.0.223"
edition = "2021"
[lib]
name = "ruff_dev"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }
clap = { version = "4.0.1", features = ["derive"] }
@@ -14,9 +10,10 @@ itertools = { version = "0.10.5" }
libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "f2f0b7a487a8725d161fe8b3ed73a6758b21e177" }
once_cell = { version = "1.16.0" }
ruff = { path = ".." }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
ruff_cli = { path = "../ruff_cli" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
schemars = { version = "0.8.11" }
serde_json = {version="1.0.91"}
strum = { version = "0.24.1", features = ["strum_macros"] }

View File

@@ -1,8 +1,7 @@
//! Generate CLI help.
use anyhow::Result;
use clap::{Args, CommandFactory};
use ruff::cli::Cli as MainCli;
use clap::Args;
use crate::utils::replace_readme_section;
@@ -21,8 +20,7 @@ fn trim_lines(s: &str) -> String {
}
pub fn main(cli: &Cli) -> Result<()> {
let mut cmd = MainCli::command();
let output = trim_lines(cmd.render_help().to_string().trim());
let output = trim_lines(ruff_cli::help().trim());
if cli.dry_run {
print!("{output}");

View File

@@ -2,8 +2,7 @@
use anyhow::Result;
use clap::Args;
use itertools::Itertools;
use ruff::registry::{RuleCode, RuleOrigin};
use ruff::registry::{Prefixes, RuleCodePrefix, RuleOrigin};
use strum::IntoEnumIterator;
use crate::utils::replace_readme_section;
@@ -21,12 +20,33 @@ pub struct Cli {
pub(crate) dry_run: bool,
}
fn generate_table(table_out: &mut String, prefix: &RuleCodePrefix) {
table_out.push_str("| Code | Name | Message | Fix |");
table_out.push('\n');
table_out.push_str("| ---- | ---- | ------- | --- |");
table_out.push('\n');
for rule_code in prefix.codes() {
let kind = rule_code.kind();
let fix_token = if kind.fixable() { "🛠" } else { "" };
table_out.push_str(&format!(
"| {} | {} | {} | {} |",
kind.code().as_ref(),
kind.as_ref(),
kind.summary().replace('|', r"\|"),
fix_token
));
table_out.push('\n');
}
table_out.push('\n');
}
pub fn main(cli: &Cli) -> Result<()> {
// Generate the table string.
let mut table_out = String::new();
let mut toc_out = String::new();
for origin in RuleOrigin::iter() {
let codes_csv: String = origin.codes().iter().map(AsRef::as_ref).join(", ");
let prefixes = origin.prefixes();
let codes_csv: String = prefixes.as_list(", ");
table_out.push_str(&format!("### {} ({codes_csv})", origin.title()));
table_out.push('\n');
table_out.push('\n');
@@ -50,26 +70,16 @@ pub fn main(cli: &Cli) -> Result<()> {
table_out.push('\n');
}
table_out.push_str("| Code | Name | Message | Fix |");
table_out.push('\n');
table_out.push_str("| ---- | ---- | ------- | --- |");
table_out.push('\n');
for rule_code in RuleCode::iter() {
if rule_code.origin() == origin {
let kind = rule_code.kind();
let fix_token = if kind.fixable() { "🛠" } else { "" };
table_out.push_str(&format!(
"| {} | {} | {} | {} |",
kind.code().as_ref(),
kind.as_ref(),
kind.summary().replace('|', r"\|"),
fix_token
));
table_out.push('\n');
match prefixes {
Prefixes::Single(prefix) => generate_table(&mut table_out, &prefix),
Prefixes::Multiple(entries) => {
for (prefix, category) in entries {
table_out.push_str(&format!("#### {category} ({})", prefix.as_ref()));
table_out.push('\n');
generate_table(&mut table_out, &prefix);
}
}
}
table_out.push('\n');
}
if cli.dry_run {

View File

@@ -1,24 +0,0 @@
#![allow(
clippy::collapsible_else_if,
clippy::collapsible_if,
clippy::implicit_hasher,
clippy::match_same_arms,
clippy::missing_errors_doc,
clippy::missing_panics_doc,
clippy::module_name_repetitions,
clippy::must_use_candidate,
clippy::similar_names,
clippy::too_many_lines
)]
#![forbid(unsafe_code)]
pub mod generate_all;
pub mod generate_cli_help;
pub mod generate_json_schema;
pub mod generate_options;
pub mod generate_rules_table;
pub mod print_ast;
pub mod print_cst;
pub mod print_tokens;
pub mod round_trip;
mod utils;

View File

@@ -1,3 +1,6 @@
//! This crate implements an internal CLI for developers of Ruff.
//!
//! Within the ruff repository you can run it with `cargo dev`.
#![allow(
clippy::collapsible_else_if,
clippy::collapsible_if,
@@ -12,12 +15,19 @@
)]
#![forbid(unsafe_code)]
mod generate_all;
mod generate_cli_help;
mod generate_json_schema;
mod generate_options;
mod generate_rules_table;
mod print_ast;
mod print_cst;
mod print_tokens;
mod round_trip;
mod utils;
use anyhow::Result;
use clap::{Parser, Subcommand};
use ruff_dev::{
generate_all, generate_cli_help, generate_json_schema, generate_options, generate_rules_table,
print_ast, print_cst, print_tokens, round_trip,
};
#[derive(Parser)]
#[command(author, version, about, long_about = None)]

View File

@@ -5,8 +5,7 @@ use std::path::PathBuf;
use anyhow::Result;
use clap::Args;
use ruff::source_code::{Generator, Locator, Stylist};
use rustpython_parser::parser;
use ruff::source_code::round_trip;
#[derive(Args)]
pub struct Cli {
@@ -17,11 +16,6 @@ pub struct Cli {
pub fn main(cli: &Cli) -> Result<()> {
let contents = fs::read_to_string(&cli.file)?;
let python_ast = parser::parse_program(&contents, &cli.file.to_string_lossy())?;
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let mut generator: Generator = (&stylist).into();
generator.unparse_suite(&python_ast);
println!("{}", generator.generate());
println!("{}", round_trip(&contents, &cli.file.to_string_lossy())?);
Ok(())
}

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_macros"
version = "0.0.220"
version = "0.0.223"
edition = "2021"
[lib]

View File

@@ -0,0 +1,133 @@
use proc_macro2::Span;
use quote::quote;
use syn::parse::Parse;
use syn::{Ident, Path, Token};
pub fn define_rule_mapping(mapping: Mapping) -> proc_macro2::TokenStream {
let mut rulecode_variants = quote!();
let mut diagkind_variants = quote!();
let mut rulecode_kind_match_arms = quote!();
let mut rulecode_origin_match_arms = quote!();
let mut diagkind_code_match_arms = quote!();
let mut diagkind_body_match_arms = quote!();
let mut diagkind_fixable_match_arms = quote!();
let mut diagkind_commit_match_arms = quote!();
let mut from_impls_for_diagkind = quote!();
for (code, path, name) in mapping.entries {
rulecode_variants.extend(quote! {#code,});
diagkind_variants.extend(quote! {#name(#path),});
rulecode_kind_match_arms.extend(
quote! {RuleCode::#code => DiagnosticKind::#name(<#path as Violation>::placeholder()),},
);
let origin = get_origin(&code);
rulecode_origin_match_arms.extend(quote! {RuleCode::#code => RuleOrigin::#origin,});
diagkind_code_match_arms.extend(quote! {DiagnosticKind::#name(..) => &RuleCode::#code, });
diagkind_body_match_arms
.extend(quote! {DiagnosticKind::#name(x) => Violation::message(x), });
diagkind_fixable_match_arms
.extend(quote! {DiagnosticKind::#name(x) => x.autofix_title_formatter().is_some(),});
diagkind_commit_match_arms.extend(
quote! {DiagnosticKind::#name(x) => x.autofix_title_formatter().map(|f| f(x)), },
);
from_impls_for_diagkind.extend(quote! {
impl From<#path> for DiagnosticKind {
fn from(x: #path) -> Self {
DiagnosticKind::#name(x)
}
}
});
}
quote! {
#[derive(
AsRefStr,
RuleCodePrefix,
EnumIter,
EnumString,
Debug,
Display,
PartialEq,
Eq,
Clone,
Serialize,
Deserialize,
Hash,
PartialOrd,
Ord,
)]
pub enum RuleCode { #rulecode_variants }
#[derive(AsRefStr, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub enum DiagnosticKind { #diagkind_variants }
impl RuleCode {
/// A placeholder representation of the `DiagnosticKind` for the diagnostic.
pub fn kind(&self) -> DiagnosticKind {
match self { #rulecode_kind_match_arms }
}
pub fn origin(&self) -> RuleOrigin {
match self { #rulecode_origin_match_arms }
}
}
impl DiagnosticKind {
/// A four-letter shorthand code for the diagnostic.
pub fn code(&self) -> &'static RuleCode {
match self { #diagkind_code_match_arms }
}
/// The body text for the diagnostic.
pub fn body(&self) -> String {
match self { #diagkind_body_match_arms }
}
/// Whether the diagnostic is (potentially) fixable.
pub fn fixable(&self) -> bool {
match self { #diagkind_fixable_match_arms }
}
/// The message used to describe the fix action for a given `DiagnosticKind`.
pub fn commit(&self) -> Option<String> {
match self { #diagkind_commit_match_arms }
}
}
#from_impls_for_diagkind
}
}
fn get_origin(ident: &Ident) -> Ident {
let ident = ident.to_string();
let mut iter = crate::prefixes::PREFIX_TO_ORIGIN.iter();
let origin = loop {
let (prefix, origin) = iter
.next()
.unwrap_or_else(|| panic!("code doesn't start with any recognized prefix: {ident}"));
if ident.starts_with(prefix) {
break origin;
}
};
Ident::new(origin, Span::call_site())
}
pub struct Mapping {
entries: Vec<(Ident, Path, Ident)>,
}
impl Parse for Mapping {
fn parse(input: syn::parse::ParseStream) -> syn::Result<Self> {
let mut entries = Vec::new();
while !input.is_empty() {
let code: Ident = input.parse()?;
let _: Token![=>] = input.parse()?;
let path: Path = input.parse()?;
let name = path.segments.last().unwrap().ident.clone();
let _: Token![,] = input.parse()?;
entries.push((code, path, name));
}
Ok(Mapping { entries })
}
}
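The macro consumes a comma-terminated list of `CODE => path::to::Violation` entries and expands them into the `RuleCode` and `DiagnosticKind` enums plus the match-arm plumbing above. A hedged sketch of an invocation (the real, much longer list lives in `src/registry.rs`; these two entries are only illustrative):

define_rule_mapping!(
    // code => violation struct; the struct name becomes the DiagnosticKind variant
    E501 => violations::LineTooLong,
    F401 => violations::UnusedImport,
);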

View File

@@ -1,3 +1,4 @@
//! This crate implements internal macros for the `ruff` library.
#![allow(
clippy::collapsible_else_if,
clippy::collapsible_if,
@@ -12,11 +13,10 @@
)]
#![forbid(unsafe_code)]
use proc_macro2::Span;
use quote::quote;
use syn::{parse_macro_input, DeriveInput, Ident};
use syn::{parse_macro_input, DeriveInput};
mod config;
mod define_rule_mapping;
mod prefixes;
mod rule_code_prefix;
@@ -39,21 +39,7 @@ pub fn derive_rule_code_prefix(input: proc_macro::TokenStream) -> proc_macro::To
}
#[proc_macro]
pub fn origin_by_code(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let ident = parse_macro_input!(item as Ident).to_string();
let mut iter = prefixes::PREFIX_TO_ORIGIN.iter();
let origin = loop {
let (prefix, origin) = iter
.next()
.unwrap_or_else(|| panic!("code doesn't start with any recognized prefix: {ident}"));
if ident.starts_with(prefix) {
break origin;
}
};
let prefix = Ident::new(origin, Span::call_site());
quote! {
RuleOrigin::#prefix
}
.into()
pub fn define_rule_mapping(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let mapping = parse_macro_input!(item as define_rule_mapping::Mapping);
define_rule_mapping::define_rule_mapping(mapping).into()
}

View File

@@ -9,6 +9,7 @@ pub const PREFIX_TO_ORIGIN: &[(&str, &str)] = &[
("B", "Flake8Bugbear"),
("C4", "Flake8Comprehensions"),
("C9", "McCabe"),
("COM", "Flake8Commas"),
("DTZ", "Flake8Datetimez"),
("D", "Pydocstyle"),
("ERA", "Eradicate"),

View File

@@ -10,8 +10,9 @@ Example usage:
import argparse
import os
from pathlib import Path
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
ROOT_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
def dir_name(plugin: str) -> str:
@@ -25,15 +26,16 @@ def pascal_case(plugin: str) -> str:
def main(*, plugin: str, url: str) -> None:
# Create the test fixture folder.
os.makedirs(
os.path.join(ROOT_DIR, f"resources/test/fixtures/{dir_name(plugin)}"),
ROOT_DIR / "resources/test/fixtures" / dir_name(plugin),
exist_ok=True,
)
# Create the Rust module.
os.makedirs(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}"), exist_ok=True)
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/rules.rs"), "w+") as fp:
rust_module = ROOT_DIR / "src/rules" / dir_name(plugin)
os.makedirs(rust_module, exist_ok=True)
with open(rust_module / "rules.rs", "w+") as fp:
fp.write("use crate::checkers::ast::Checker;\n")
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/mod.rs"), "w+") as fp:
with open(rust_module / "mod.rs", "w+") as fp:
fp.write("pub(crate) mod rules;\n")
fp.write("\n")
fp.write(
@@ -65,15 +67,14 @@ mod tests {
% dir_name(plugin)
)
# Add the plugin to `lib.rs`.
with open(os.path.join(ROOT_DIR, "src/lib.rs"), "a") as fp:
fp.write(f"mod {dir_name(plugin)};")
# Add the plugin to `rules/mod.rs`.
with open(ROOT_DIR / "src/rules/mod.rs", "a") as fp:
fp.write(f"pub mod {dir_name(plugin)};")
# Add the relevant sections to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/registry.rs").read_text()
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
with open(ROOT_DIR / "src/registry.rs", "w") as fp:
for line in content.splitlines():
if line.strip() == "// Ruff":
indent = line.split("// Ruff")[0]
@@ -108,10 +109,9 @@ mod tests {
fp.write("\n")
# Add the relevant section to `src/violations.rs`.
with open(os.path.join(ROOT_DIR, "src/violations.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/violations.rs").read_text()
with open(os.path.join(ROOT_DIR, "src/violations.rs"), "w") as fp:
with open(ROOT_DIR / "src/violations.rs", "w") as fp:
for line in content.splitlines():
if line.strip() == "// Ruff":
indent = line.split("// Ruff")[0]

View File

@@ -11,8 +11,9 @@ Example usage:
import argparse
import os
from pathlib import Path
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
ROOT_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
def dir_name(origin: str) -> str:
@@ -32,16 +33,16 @@ def snake_case(name: str) -> str:
def main(*, name: str, code: str, origin: str) -> None:
# Create a test fixture.
with open(
os.path.join(ROOT_DIR, f"resources/test/fixtures/{dir_name(origin)}/{code}.py"),
ROOT_DIR / "resources/test/fixtures" / dir_name(origin) / f"{code}.py",
"a",
):
pass
# Add the relevant `#testcase` macro.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/mod.rs")) as fp:
content = fp.read()
mod_rs = ROOT_DIR / "src/rules" / dir_name(origin) / "mod.rs"
content = mod_rs.read_text()
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/mod.rs"), "w") as fp:
with open(mod_rs, "w") as fp:
for line in content.splitlines():
if line.strip() == "fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {":
indent = line.split("fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {")[0]
@@ -52,7 +53,7 @@ def main(*, name: str, code: str, origin: str) -> None:
fp.write("\n")
# Add the relevant rule function.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/rules.rs"), "a") as fp:
with open(ROOT_DIR / "src/rules" / dir_name(origin) / "rules.rs", "a") as fp:
fp.write(
f"""
/// {code}
@@ -62,10 +63,9 @@ pub fn {snake_case(name)}(checker: &mut Checker) {{}}
fp.write("\n")
# Add the relevant struct to `src/violations.rs`.
with open(os.path.join(ROOT_DIR, "src/violations.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/violations.rs").read_text()
with open(os.path.join(ROOT_DIR, "src/violations.rs"), "w") as fp:
with open(ROOT_DIR / "src/violations.rs", "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")
@@ -90,12 +90,11 @@ impl Violation for %s {
fp.write("\n")
# Add the relevant code-to-violation pair to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
content = fp.read()
content = (ROOT_DIR / "src/registry.rs").read_text()
seen_macro = False
has_written = False
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
with open(ROOT_DIR / "src/registry.rs", "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")

View File

@@ -1,5 +1,5 @@
//! An equivalent object hierarchy to the `Expr` hierarchy, but with the ability
//! to compare expressions for equality (via `Eq` and `Hash`).
//! An equivalent object hierarchy to the [`Expr`] hierarchy, but with the
//! ability to compare expressions for equality (via [`Eq`] and [`Hash`]).
use num_bigint::BigInt;
use rustpython_ast::{

View File

@@ -1,10 +1,8 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::Expr;
use crate::ast::helpers::{
collect_call_paths, dealias_call_path, match_call_path, to_module_and_member,
};
use crate::ast::helpers::to_call_path;
use crate::ast::types::{Scope, ScopeKind};
use crate::checkers::ast::Checker;
const CLASS_METHODS: [&str; 3] = ["__new__", "__init_subclass__", "__class_getitem__"];
const METACLASS_BASES: [(&str, &str); 2] = [("", "type"), ("abc", "ABCMeta")];
@@ -18,11 +16,10 @@ pub enum FunctionType {
/// Classify a function based on its scope, name, and decorators.
pub fn classify(
checker: &Checker,
scope: &Scope,
name: &str,
decorator_list: &[Expr],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
classmethod_decorators: &[String],
staticmethod_decorators: &[String],
) -> FunctionType {
@@ -33,17 +30,18 @@ pub fn classify(
if CLASS_METHODS.contains(&name)
|| scope.bases.iter().any(|expr| {
// The class itself extends a known metaclass, so all methods are class methods.
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
METACLASS_BASES
.iter()
.any(|(module, member)| match_call_path(&call_path, module, member, from_imports))
checker.resolve_call_path(expr).map_or(false, |call_path| {
METACLASS_BASES
.iter()
.any(|(module, member)| call_path == [*module, *member])
})
})
|| decorator_list.iter().any(|expr| {
// The method is decorated with a class method decorator (like `@classmethod`).
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
classmethod_decorators.iter().any(|decorator| {
let (module, member) = to_module_and_member(decorator);
match_call_path(&call_path, module, member, from_imports)
checker.resolve_call_path(expr).map_or(false, |call_path| {
classmethod_decorators
.iter()
.any(|decorator| call_path == to_call_path(decorator))
})
})
{
@@ -51,10 +49,10 @@ pub fn classify(
} else if decorator_list.iter().any(|expr| {
// The method is decorated with a static method decorator (like
// `@staticmethod`).
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
staticmethod_decorators.iter().any(|decorator| {
let (module, member) = to_module_and_member(decorator);
match_call_path(&call_path, module, member, from_imports)
checker.resolve_call_path(expr).map_or(false, |call_path| {
staticmethod_decorators
.iter()
.any(|decorator| call_path == to_call_path(decorator))
})
}) {
FunctionType::StaticMethod
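After this refactor the caller no longer threads `from_imports`/`import_aliases` through; it hands over the `Checker`, which resolves call paths itself. A hedged call-site sketch (the settings field names are assumptions, not taken from this diff):

// Hedged sketch; the exact fields on `checker.settings` are assumed.
let function_type = function_type::classify(
    checker,
    scope,
    name,
    decorator_list,
    &checker.settings.pep8_naming.classmethod_decorators,
    &checker.settings.pep8_naming.staticmethod_decorators,
);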

View File

@@ -12,7 +12,8 @@ use rustpython_parser::lexer::Tok;
use rustpython_parser::token::StringKind;
use crate::ast::types::{Binding, BindingKind, Range};
use crate::source_code::{Generator, Locator, Stylist};
use crate::checkers::ast::Checker;
use crate::source_code::{Generator, Indexer, Locator, Stylist};
/// Create an `Expr` with default location from an `ExprKind`.
pub fn create_expr(node: ExprKind) -> Expr {
@@ -54,150 +55,42 @@ fn collect_call_path_inner<'a>(expr: &'a Expr, parts: &mut Vec<&'a str>) {
}
}
/// Convert an `Expr` to its call path (like `List`, or `typing.List`).
pub fn compose_call_path(expr: &Expr) -> Option<String> {
let segments = collect_call_paths(expr);
if segments.is_empty() {
None
} else {
Some(segments.join("."))
}
}
/// Convert an `Expr` to its call path segments (like ["typing", "List"]).
pub fn collect_call_paths(expr: &Expr) -> Vec<&str> {
pub fn collect_call_path(expr: &Expr) -> Vec<&str> {
let mut segments = vec![];
collect_call_path_inner(expr, &mut segments);
segments
}
/// Rewrite any import aliases on a call path.
pub fn dealias_call_path<'a>(
call_path: Vec<&'a str>,
import_aliases: &FxHashMap<&str, &'a str>,
) -> Vec<&'a str> {
if let Some(head) = call_path.first() {
if let Some(origin) = import_aliases.get(head) {
let tail = &call_path[1..];
let mut call_path: Vec<&str> = vec![];
call_path.extend(origin.split('.'));
call_path.extend(tail);
call_path
} else {
call_path
}
/// Convert an `Expr` to its call path (like `List`, or `typing.List`).
pub fn compose_call_path(expr: &Expr) -> Option<String> {
let call_path = collect_call_path(expr);
if call_path.is_empty() {
None
} else {
call_path
Some(format_call_path(&call_path))
}
}
/// Return `true` if the `Expr` is a reference to `${module}.${target}`.
///
/// Useful for, e.g., ensuring that a `Union` reference represents
/// `typing.Union`.
pub fn match_module_member(
expr: &Expr,
module: &str,
member: &str,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> bool {
match_call_path(
&dealias_call_path(collect_call_paths(expr), import_aliases),
module,
member,
from_imports,
)
}
/// Return `true` if the `call_path` is a reference to `${module}.${target}`.
///
/// Optimized version of `match_module_member` for pre-computed call paths.
pub fn match_call_path(
call_path: &[&str],
module: &str,
member: &str,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
) -> bool {
// If we have no segments, we can't ever match.
let num_segments = call_path.len();
if num_segments == 0 {
return false;
}
// If the last segment doesn't match the member, we can't ever match.
if call_path[num_segments - 1] != member {
return false;
}
// We now only need the module path, so throw out the member name.
let call_path = &call_path[..num_segments - 1];
let num_segments = call_path.len();
// Case (1): It's a builtin (like `list`).
// Case (2a): We imported from the parent (`from typing.re import Match`,
// `Match`).
// Case (2b): We imported star from the parent (`from typing.re import *`,
// `Match`).
if num_segments == 0 {
module.is_empty()
|| from_imports.get(module).map_or(false, |imports| {
imports.contains(member) || imports.contains("*")
})
/// Format a call path for display.
pub fn format_call_path(call_path: &[&str]) -> String {
if call_path
.first()
.expect("Unable to format empty call path")
.is_empty()
{
call_path[1..].join(".")
} else {
let components: Vec<&str> = module.split('.').collect();
// Case (3a): it's a fully qualified call path (`import typing`,
// `typing.re.Match`). Case (3b): it's a fully qualified call path (`import
// typing.re`, `typing.re.Match`).
if components == call_path {
return true;
}
// Case (4): We imported from the grandparent (`from typing import re`,
// `re.Match`)
let num_matches = (0..components.len())
.take(num_segments)
.take_while(|i| components[components.len() - 1 - i] == call_path[num_segments - 1 - i])
.count();
if num_matches > 0 {
let cut = components.len() - num_matches;
// TODO(charlie): Rewrite to avoid this allocation.
let module = components[..cut].join(".");
let member = components[cut];
if from_imports
.get(&module.as_str())
.map_or(false, |imports| imports.contains(member))
{
return true;
}
}
false
call_path.join(".")
}
}
/// Return `true` if the `Expr` contains a reference to `${module}.${target}`.
pub fn contains_call_path(
expr: &Expr,
module: &str,
member: &str,
import_aliases: &FxHashMap<&str, &str>,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
) -> bool {
pub fn contains_call_path(checker: &Checker, expr: &Expr, target: &[&str]) -> bool {
any_over_expr(expr, &|expr| {
let call_path = collect_call_paths(expr);
if !call_path.is_empty() {
if match_call_path(
&dealias_call_path(call_path, import_aliases),
module,
member,
from_imports,
) {
return true;
}
}
false
checker
.resolve_call_path(expr)
.map_or(false, |call_path| call_path == target)
})
}
@@ -304,7 +197,7 @@ where
static DUNDER_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"__[^\s]+__").unwrap());
/// Return `true` if the `Stmt` is an assignment to a dunder (like `__all__`).
/// Return `true` if the [`Stmt`] is an assignment to a dunder (like `__all__`).
pub fn is_assignment_to_a_dunder(stmt: &Stmt) -> bool {
// Check whether it's an assignment to a dunder, with or without a type
// annotation. This is what pycodestyle (as of 2.9.1) does.
@@ -326,7 +219,7 @@ pub fn is_assignment_to_a_dunder(stmt: &Stmt) -> bool {
}
}
/// Return `true` if the `Expr` is a singleton (`None`, `True`, `False`, or
/// Return `true` if the [`Expr`] is a singleton (`None`, `True`, `False`, or
/// `...`).
pub fn is_singleton(expr: &Expr) -> bool {
matches!(
@@ -338,7 +231,7 @@ pub fn is_singleton(expr: &Expr) -> bool {
)
}
/// Return `true` if the `Expr` is a constant or tuple of constants.
/// Return `true` if the [`Expr`] is a constant or tuple of constants.
pub fn is_constant(expr: &Expr) -> bool {
match &expr.node {
ExprKind::Constant { .. } => true,
@@ -347,13 +240,13 @@ pub fn is_constant(expr: &Expr) -> bool {
}
}
/// Return `true` if the `Expr` is a non-singleton constant.
/// Return `true` if the [`Expr`] is a non-singleton constant.
pub fn is_constant_non_singleton(expr: &Expr) -> bool {
is_constant(expr) && !is_singleton(expr)
}
/// Return the `Keyword` with the given name, if it's present in the list of
/// `Keyword` arguments.
/// Return the [`Keyword`] with the given name, if it's present in the list of
/// [`Keyword`] arguments.
pub fn find_keyword<'a>(keywords: &'a [Keyword], keyword_name: &str) -> Option<&'a Keyword> {
keywords.iter().find(|keyword| {
let KeywordData { arg, .. } = &keyword.node;
@@ -361,7 +254,7 @@ pub fn find_keyword<'a>(keywords: &'a [Keyword], keyword_name: &str) -> Option<&
})
}
/// Return `true` if an `Expr` is `None`.
/// Return `true` if an [`Expr`] is `None`.
pub fn is_const_none(expr: &Expr) -> bool {
matches!(
&expr.node,
@@ -389,13 +282,13 @@ pub fn extract_handler_names(handlers: &[Excepthandler]) -> Vec<Vec<&str>> {
if let Some(type_) = type_ {
if let ExprKind::Tuple { elts, .. } = &type_.node {
for type_ in elts {
let call_path = collect_call_paths(type_);
let call_path = collect_call_path(type_);
if !call_path.is_empty() {
handler_names.push(call_path);
}
}
} else {
let call_path = collect_call_paths(type_);
let call_path = collect_call_path(type_);
if !call_path.is_empty() {
handler_names.push(call_path);
}
@@ -458,12 +351,37 @@ pub fn format_import_from(level: Option<&usize>, module: Option<&str>) -> String
module_name
}
/// Format the member reference name for a relative import.
pub fn format_import_from_member(
level: Option<&usize>,
module: Option<&str>,
member: &str,
) -> String {
let mut full_name = String::with_capacity(
level.map_or(0, |level| *level)
+ module.as_ref().map_or(0, |module| module.len())
+ 1
+ member.len(),
);
if let Some(level) = level {
for _ in 0..*level {
full_name.push('.');
}
}
if let Some(module) = module {
full_name.push_str(module);
full_name.push('.');
}
full_name.push_str(member);
full_name
}
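A rough sketch of what `format_import_from_member` produces; the argument values are illustrative only:

    // `from ..foo import bar` yields "..foo.bar"; `from . import baz` yields ".baz".
    assert_eq!(format_import_from_member(Some(&2), Some("foo"), "bar"), "..foo.bar");
    assert_eq!(format_import_from_member(Some(&1), None, "baz"), ".baz");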
/// Split a target string (like `typing.List`) into (`typing`, `List`).
pub fn to_module_and_member(target: &str) -> (&str, &str) {
if let Some(index) = target.rfind('.') {
(&target[..index], &target[index + 1..])
} else {
("", target)
}
}
pub fn to_call_path(target: &str) -> Vec<&str> {
if target.contains('.') {
target.split('.').collect()
} else {
vec!["", target]
}
}
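For comparison, a hedged sketch of the old versus new return shapes (values illustrative):

    // Old: to_module_and_member("typing.List") == ("typing", "List")
    // New: dotted names split fully; unqualified names get an empty leading segment.
    assert_eq!(to_call_path("typing.List"), vec!["typing", "List"]);
    assert_eq!(to_call_path("int"), vec!["", "int"]);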
@@ -508,7 +426,7 @@ pub fn match_trailing_content(stmt: &Stmt, locator: &Locator) -> bool {
/// Return the number of trailing empty lines following a statement.
pub fn count_trailing_lines(stmt: &Stmt, locator: &Locator) -> usize {
let suffix =
locator.slice_source_code_at(&Location::new(stmt.end_location.unwrap().row() + 1, 0));
locator.slice_source_code_at(Location::new(stmt.end_location.unwrap().row() + 1, 0));
suffix
.lines()
.take_while(|line| line.trim().is_empty())
@@ -683,31 +601,62 @@ pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
}
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_continuation(stmt: &Stmt, locator: &Locator) -> bool {
// Does the previous line end in a continuation? This will have a specific
// false-positive, which is that if the previous line ends in a comment, it
// will be treated as a continuation. So we should only use this information to
// make conservative choices.
// TODO(charlie): Come up with a more robust strategy.
if stmt.location.row() > 1 {
let range = Range::new(
Location::new(stmt.location.row() - 1, 0),
Location::new(stmt.location.row(), 0),
);
let line = locator.slice_source_code_range(&range);
if line.trim_end().ends_with('\\') {
return true;
}
}
false
/// Return the `Range` of the first `Tok::Colon` token in a `Range`.
pub fn first_colon_range(range: Range, locator: &Locator) -> Option<Range> {
let contents = locator.slice_source_code_range(&range);
let range = lexer::make_tokenizer_located(&contents, range.location)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Colon))
.map(|(location, _, end_location)| Range {
location,
end_location,
});
range
}
/// Return the `Range` of the first `Elif` or `Else` token in an `If` statement.
pub fn elif_else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
let StmtKind::If { body, orelse, .. } = &stmt.node else {
return None;
};
let start = body
.last()
.expect("Expected body to be non-empty")
.end_location
.unwrap();
let end = match &orelse[..] {
[Stmt {
node: StmtKind::If { test, .. },
..
}] => test.location,
[stmt, ..] => stmt.location,
_ => return None,
};
let contents = locator.slice_source_code_range(&Range::new(start, end));
let range = lexer::make_tokenizer_located(&contents, start)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Elif | Tok::Else))
.map(|(location, _, end_location)| Range {
location,
end_location,
});
range
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &Locator) -> bool {
match_leading_content(stmt, locator) || preceded_by_continuation(stmt, locator)
}
pub fn preceded_by_continuation(stmt: &Stmt, indexer: &Indexer) -> bool {
stmt.location.row() > 1
&& indexer
.continuation_lines()
.contains(&(stmt.location.row() - 1))
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &Locator, indexer: &Indexer) -> bool {
match_leading_content(stmt, locator) || preceded_by_continuation(stmt, indexer)
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
@@ -786,159 +735,15 @@ impl<'a> SimpleCallArgs<'a> {
#[cfg(test)]
mod tests {
use anyhow::Result;
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::Location;
use rustpython_parser::parser;
use crate::ast::helpers::{
else_range, identifier_range, match_module_member, match_trailing_content,
elif_else_range, else_range, first_colon_range, identifier_range, match_trailing_content,
};
use crate::ast::types::Range;
use crate::source_code::Locator;
#[test]
fn builtin() -> Result<()> {
let expr = parser::parse_expression("list", "<filename>")?;
assert!(match_module_member(
&expr,
"",
"list",
&FxHashMap::default(),
&FxHashMap::default(),
));
Ok(())
}
#[test]
fn fully_qualified() -> Result<()> {
let expr = parser::parse_expression("typing.re.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::default(),
));
Ok(())
}
#[test]
fn unimported() -> Result<()> {
let expr = parser::parse_expression("Match", "<filename>")?;
assert!(!match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::default(),
));
let expr = parser::parse_expression("re.Match", "<filename>")?;
assert!(!match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::default(),
));
Ok(())
}
#[test]
fn from_star() -> Result<()> {
let expr = parser::parse_expression("Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["*"]))]),
&FxHashMap::default()
));
Ok(())
}
#[test]
fn from_parent() -> Result<()> {
let expr = parser::parse_expression("Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["Match"]))]),
&FxHashMap::default()
));
Ok(())
}
#[test]
fn from_grandparent() -> Result<()> {
let expr = parser::parse_expression("re.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing", FxHashSet::from_iter(["re"]))]),
&FxHashMap::default()
));
let expr = parser::parse_expression("match.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re.match",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["match"]))]),
&FxHashMap::default()
));
let expr = parser::parse_expression("re.match.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re.match",
"Match",
&FxHashMap::from_iter([("typing", FxHashSet::from_iter(["re"]))]),
&FxHashMap::default()
));
Ok(())
}
#[test]
fn from_alias() -> Result<()> {
let expr = parser::parse_expression("IMatch", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::from_iter([("typing.re", FxHashSet::from_iter(["Match"]))]),
&FxHashMap::from_iter([("IMatch", "Match")]),
));
Ok(())
}
#[test]
fn from_aliased_parent() -> Result<()> {
let expr = parser::parse_expression("t.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::from_iter([("t", "typing.re")]),
));
Ok(())
}
#[test]
fn from_aliased_grandparent() -> Result<()> {
let expr = parser::parse_expression("t.re.Match", "<filename>")?;
assert!(match_module_member(
&expr,
"typing.re",
"Match",
&FxHashMap::default(),
&FxHashMap::from_iter([("t", "typing")]),
));
Ok(())
}
#[test]
fn trailing_content() -> Result<()> {
let contents = "x = 1";
@@ -1066,4 +871,54 @@ else:
assert_eq!(range.end_location.column(), 4);
Ok(())
}
#[test]
fn test_first_colon_range() {
let contents = "with a: pass";
let locator = Locator::new(contents);
let range = first_colon_range(
Range::new(Location::new(1, 0), Location::new(1, contents.len())),
&locator,
)
.unwrap();
assert_eq!(range.location.row(), 1);
assert_eq!(range.location.column(), 6);
assert_eq!(range.end_location.row(), 1);
assert_eq!(range.end_location.column(), 7);
}
#[test]
fn test_elif_else_range() -> Result<()> {
let contents = "
if a:
...
elif b:
...
"
.trim_start();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = Locator::new(contents);
let range = elif_else_range(stmt, &locator).unwrap();
assert_eq!(range.location.row(), 3);
assert_eq!(range.location.column(), 0);
assert_eq!(range.end_location.row(), 3);
assert_eq!(range.end_location.column(), 4);
let contents = "
if a:
...
else:
...
"
.trim_start();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = Locator::new(contents);
let range = elif_else_range(stmt, &locator).unwrap();
assert_eq!(range.location.row(), 3);
assert_eq!(range.location.column(), 0);
assert_eq!(range.end_location.row(), 3);
assert_eq!(range.end_location.column(), 4);
Ok(())
}
}

View File

@@ -96,7 +96,7 @@ impl<'a> Visitor<'a> for GlobalVisitor<'a> {
}
}
/// Extract a map from global name to its last-defining `Stmt`.
/// Extract a map from global name to its last-defining [`Stmt`].
pub fn extract_globals(body: &[Stmt]) -> FxHashMap<&str, &Stmt> {
let mut visitor = GlobalVisitor::default();
for stmt in body {
@@ -197,12 +197,12 @@ pub fn is_unpacking_assignment(parent: &Stmt, child: &Expr) -> bool {
pub type LocatedCmpop<U = ()> = Located<Cmpop, U>;
/// Extract all `Cmpop` operators from a source code snippet, with appropriate
/// Extract all [`Cmpop`] operators from a source code snippet, with appropriate
/// ranges.
///
/// `RustPython` doesn't include line and column information on `Cmpop` nodes.
/// `RustPython` doesn't include line and column information on [`Cmpop`] nodes.
/// `CPython` doesn't either. This method iterates over the token stream and
/// re-identifies `Cmpop` nodes, annotating them with valid ranges.
/// re-identifies [`Cmpop`] nodes, annotating them with valid ranges.
pub fn locate_cmpops(contents: &str) -> Vec<LocatedCmpop> {
let mut tok_iter = lexer::make_tokenizer(contents).flatten().peekable();
let mut ops: Vec<LocatedCmpop> = vec![];

View File

@@ -104,7 +104,7 @@ impl<'a> Scope<'a> {
}
#[derive(Clone, Debug)]
pub enum BindingKind {
pub enum BindingKind<'a> {
Annotation,
Argument,
Assignment,
@@ -118,16 +118,16 @@ pub enum BindingKind {
Export(Vec<String>),
FutureImportation,
StarImportation(Option<usize>, Option<String>),
Importation(String, String),
FromImportation(String, String),
SubmoduleImportation(String, String),
Importation(&'a str, &'a str),
FromImportation(&'a str, String),
SubmoduleImportation(&'a str, &'a str),
}
#[derive(Clone, Debug)]
pub struct Binding<'a> {
pub kind: BindingKind,
pub kind: BindingKind<'a>,
pub range: Range,
/// The statement in which the `Binding` was defined.
/// The statement in which the [`Binding`] was defined.
pub source: Option<RefEquality<'a, Stmt>>,
/// Tuple of (scope index, range) indicating the scope and range at which
/// the binding was last used.
@@ -168,19 +168,26 @@ impl<'a> Binding<'a> {
pub fn redefines(&self, existing: &'a Binding) -> bool {
match &self.kind {
BindingKind::Importation(_, full_name) | BindingKind::FromImportation(_, full_name) => {
if let BindingKind::SubmoduleImportation(_, existing_full_name) = &existing.kind {
return full_name == existing_full_name;
}
}
BindingKind::SubmoduleImportation(_, full_name) => {
if let BindingKind::Importation(_, existing_full_name)
| BindingKind::FromImportation(_, existing_full_name)
| BindingKind::SubmoduleImportation(_, existing_full_name) = &existing.kind
{
return full_name == existing_full_name;
}
}
BindingKind::Importation(.., full_name) => {
if let BindingKind::SubmoduleImportation(.., existing) = &existing.kind {
return full_name == existing;
}
}
BindingKind::FromImportation(.., full_name) => {
if let BindingKind::SubmoduleImportation(.., existing) = &existing.kind {
return full_name == existing;
}
}
BindingKind::SubmoduleImportation(.., full_name) => match &existing.kind {
BindingKind::Importation(.., existing)
| BindingKind::SubmoduleImportation(.., existing) => {
return full_name == existing;
}
BindingKind::FromImportation(.., existing) => {
return full_name == existing;
}
_ => {}
},
BindingKind::Annotation => {
return false;
}

View File

@@ -12,7 +12,7 @@ use crate::ast::whitespace::LinesWithTrailingNewline;
use crate::cst::helpers::compose_module_path;
use crate::cst::matchers::match_module;
use crate::fix::Fix;
use crate::source_code::Locator;
use crate::source_code::{Indexer, Locator};
/// Determine if a body contains only a single statement, taking into account
/// deleted statements.
@@ -79,7 +79,7 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
/// Return the location of a trailing semicolon following a `Stmt`, if it's part
/// of a multi-statement line.
fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
let contents = locator.slice_source_code_at(stmt.end_location.unwrap());
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
let trimmed = line.trim();
if trimmed.starts_with(';') {
@@ -102,7 +102,7 @@ fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
/// Find the next valid break for a `Stmt` after a semicolon.
fn next_stmt_break(semicolon: Location, locator: &Locator) -> Location {
let start_location = Location::new(semicolon.row(), semicolon.column() + 1);
let contents = locator.slice_source_code_at(&start_location);
let contents = locator.slice_source_code_at(start_location);
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
let trimmed = line.trim();
// Skip past any continuations.
@@ -134,7 +134,7 @@ fn next_stmt_break(semicolon: Location, locator: &Locator) -> Location {
/// Return `true` if a `Stmt` occurs at the end of a file.
fn is_end_of_file(stmt: &Stmt, locator: &Locator) -> bool {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
let contents = locator.slice_source_code_at(stmt.end_location.unwrap());
contents.is_empty()
}
@@ -156,6 +156,7 @@ pub fn delete_stmt(
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &Locator,
indexer: &Indexer,
) -> Result<Fix> {
if parent
.map(|parent| is_lone_child(stmt, parent, deleted))
@@ -175,7 +176,7 @@ pub fn delete_stmt(
Fix::deletion(stmt.location, next)
} else if helpers::match_leading_content(stmt, locator) {
Fix::deletion(stmt.location, stmt.end_location.unwrap())
} else if helpers::preceded_by_continuation(stmt, locator) {
} else if helpers::preceded_by_continuation(stmt, indexer) {
if is_end_of_file(stmt, locator) && stmt.location.column() == 0 {
// Special-case: a file can't end in a continuation.
Fix::replacement("\n".to_string(), stmt.location, stmt.end_location.unwrap())
@@ -198,6 +199,7 @@ pub fn remove_unused_imports<'a>(
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &Locator,
indexer: &Indexer,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(stmt));
let mut tree = match_module(&module_text)?;
@@ -210,14 +212,17 @@ pub fn remove_unused_imports<'a>(
Some(SmallStatement::Import(import_body)) => (&mut import_body.names, None),
Some(SmallStatement::ImportFrom(import_body)) => {
if let ImportNames::Aliases(names) = &mut import_body.names {
(names, import_body.module.as_ref())
(
names,
Some((&import_body.relative, import_body.module.as_ref())),
)
} else if let ImportNames::Star(..) = &import_body.names {
// Special-case: if the import is a `from ... import *`, then we delete the
// entire statement.
let mut found_star = false;
for unused_import in unused_imports {
let full_name = match import_body.module.as_ref() {
Some(module_name) => format!("{}.*", compose_module_path(module_name),),
Some(module_name) => format!("{}.*", compose_module_path(module_name)),
None => "*".to_string(),
};
if unused_import == full_name {
@@ -232,7 +237,7 @@ pub fn remove_unused_imports<'a>(
if !found_star {
bail!("Expected \'*\' for unused import");
}
return delete_stmt(stmt, parent, deleted, locator);
return delete_stmt(stmt, parent, deleted, locator, indexer);
} else {
bail!("Expected: ImportNames::Aliases | ImportNames::Star");
}
@@ -246,11 +251,25 @@ pub fn remove_unused_imports<'a>(
for unused_import in unused_imports {
let alias_index = aliases.iter().position(|alias| {
let full_name = match import_module {
Some(module_name) => format!(
"{}.{}",
compose_module_path(module_name),
compose_module_path(&alias.name)
),
Some((relative, module)) => {
let module = module.map(compose_module_path);
let member = compose_module_path(&alias.name);
let mut full_name = String::with_capacity(
relative.len()
+ module.as_ref().map_or(0, std::string::String::len)
+ member.len()
+ 1,
);
for _ in 0..relative.len() {
full_name.push('.');
}
if let Some(module) = module {
full_name.push_str(&module);
full_name.push('.');
}
full_name.push_str(&member);
full_name
}
None => compose_module_path(&alias.name),
};
full_name == unused_import
@@ -279,7 +298,7 @@ pub fn remove_unused_imports<'a>(
}
if aliases.is_empty() {
delete_stmt(stmt, parent, deleted, locator)
delete_stmt(stmt, parent, deleted, locator, indexer)
} else {
let mut state = CodegenState::default();
tree.codegen(&mut state);

View File

@@ -66,7 +66,7 @@ fn apply_fixes<'a>(
}
// Add the remaining content.
let slice = locator.slice_source_code_at(&last_pos);
let slice = locator.slice_source_code_at(last_pos);
output.append(&slice);
(Cow::from(output.finish()), num_fixed)

View File

@@ -1,134 +1,9 @@
#![cfg_attr(target_family = "wasm", allow(dead_code))]
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io::Write;
use std::path::{Path, PathBuf};
use anyhow::Result;
use filetime::FileTime;
use log::error;
use path_absolutize::Absolutize;
use serde::{Deserialize, Serialize};
use crate::message::Message;
use crate::settings::{flags, Settings};
pub const CACHE_DIR_NAME: &str = ".ruff_cache";
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
#[derive(Serialize, Deserialize)]
struct CacheMetadata {
mtime: i64,
}
#[derive(Serialize)]
struct CheckResultRef<'a> {
metadata: &'a CacheMetadata,
messages: &'a [Message],
}
#[derive(Deserialize)]
struct CheckResult {
metadata: CacheMetadata,
messages: Vec<Message>,
}
/// Return the cache directory for a given project root. Defers to the
/// `RUFF_CACHE_DIR` environment variable, if set.
pub fn cache_dir(project_root: &Path) -> PathBuf {
project_root.join(CACHE_DIR_NAME)
}
fn content_dir() -> &'static Path {
Path::new("content")
}
fn cache_key<P: AsRef<Path>>(path: P, settings: &Settings, autofix: flags::Autofix) -> u64 {
let mut hasher = DefaultHasher::new();
CARGO_PKG_VERSION.hash(&mut hasher);
path.as_ref().absolutize().unwrap().hash(&mut hasher);
settings.hash(&mut hasher);
autofix.hash(&mut hasher);
hasher.finish()
}
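A std-only sketch of the same key-composition idea (the function name and arguments here are hypothetical; the real `cache_key` hashes the package version, the absolutized path, the `Settings`, and the autofix flag):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn toy_cache_key(version: &str, path: &str, settings_fingerprint: u64, autofix: bool) -> u64 {
        let mut hasher = DefaultHasher::new();
        // Any change to the version, path, settings, or autofix mode yields a new key.
        version.hash(&mut hasher);
        path.hash(&mut hasher);
        settings_fingerprint.hash(&mut hasher);
        autofix.hash(&mut hasher);
        hasher.finish()
    }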
#[allow(dead_code)]
/// Initialize the cache at the specified `Path`.
pub fn init(path: &Path) -> Result<()> {
// Create the cache directories.
fs::create_dir_all(path.join(content_dir()))?;
// Add the CACHEDIR.TAG.
if !cachedir::is_tagged(path)? {
cachedir::add_tag(path)?;
}
// Add the .gitignore.
let gitignore_path = path.join(".gitignore");
if !gitignore_path.exists() {
let mut file = fs::File::create(gitignore_path)?;
file.write_all(b"*")?;
}
Ok(())
}
fn write_sync(cache_dir: &Path, key: u64, value: &[u8]) -> Result<(), std::io::Error> {
fs::write(
cache_dir.join(content_dir()).join(format!("{key:x}")),
value,
)
}
fn read_sync(cache_dir: &Path, key: u64) -> Result<Vec<u8>, std::io::Error> {
fs::read(cache_dir.join(content_dir()).join(format!("{key:x}")))
}
/// Get a value from the cache.
pub fn get<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
autofix: flags::Autofix,
) -> Option<Vec<Message>> {
let encoded = read_sync(&settings.cache_dir, cache_key(path, settings, autofix)).ok()?;
let (mtime, messages) = match bincode::deserialize::<CheckResult>(&encoded[..]) {
Ok(CheckResult {
metadata: CacheMetadata { mtime },
messages,
}) => (mtime, messages),
Err(e) => {
error!("Failed to deserialize encoded cache entry: {e:?}");
return None;
}
};
if FileTime::from_last_modification_time(metadata).unix_seconds() != mtime {
return None;
}
Some(messages)
}
/// Set a value in the cache.
pub fn set<P: AsRef<Path>>(
path: P,
metadata: &fs::Metadata,
settings: &Settings,
autofix: flags::Autofix,
messages: &[Message],
) {
let check_result = CheckResultRef {
metadata: &CacheMetadata {
mtime: FileTime::from_last_modification_time(metadata).unix_seconds(),
},
messages,
};
if let Err(e) = write_sync(
&settings.cache_dir,
cache_key(path, settings, autofix),
&bincode::serialize(&check_result).unwrap(),
) {
error!("Failed to write to cache: {e:?}");
}
}

View File

@@ -14,9 +14,7 @@ use rustpython_parser::ast::{
};
use rustpython_parser::parser;
use crate::ast::helpers::{
binding_range, collect_call_paths, dealias_call_path, extract_handler_names, match_call_path,
};
use crate::ast::helpers::{binding_range, collect_call_path, extract_handler_names};
use crate::ast::operations::extract_all_names;
use crate::ast::relocate::relocate_expr;
use crate::ast::types::{
@@ -31,20 +29,20 @@ use crate::python::future::ALL_FEATURE_NAMES;
use crate::python::typing;
use crate::python::typing::SubscriptKind;
use crate::registry::{Diagnostic, RuleCode};
use crate::rules::{
flake8_2020, flake8_annotations, flake8_bandit, flake8_blind_except, flake8_boolean_trap,
flake8_bugbear, flake8_builtins, flake8_comprehensions, flake8_datetimez, flake8_debugger,
flake8_errmsg, flake8_implicit_str_concat, flake8_import_conventions, flake8_pie, flake8_print,
flake8_pytest_style, flake8_return, flake8_simplify, flake8_tidy_imports,
flake8_unused_arguments, mccabe, pandas_vet, pep8_naming, pycodestyle, pydocstyle, pyflakes,
pygrep_hooks, pylint, pyupgrade, ruff,
};
use crate::settings::types::PythonVersion;
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::violations::DeferralKeyword;
use crate::visibility::{module_visibility, transition_scope, Modifier, Visibility, VisibleScope};
use crate::{
autofix, docstrings, flake8_2020, flake8_annotations, flake8_bandit, flake8_blind_except,
flake8_boolean_trap, flake8_bugbear, flake8_builtins, flake8_comprehensions, flake8_datetimez,
flake8_debugger, flake8_errmsg, flake8_implicit_str_concat, flake8_import_conventions,
flake8_pie, flake8_print, flake8_pytest_style, flake8_return, flake8_simplify,
flake8_tidy_imports, flake8_unused_arguments, mccabe, noqa, pandas_vet, pep8_naming,
pycodestyle, pydocstyle, pyflakes, pygrep_hooks, pylint, pyupgrade, ruff, violations,
visibility,
};
use crate::{autofix, docstrings, noqa, violations, visibility};
const GLOBAL_SCOPE_INDEX: usize = 0;
@@ -59,17 +57,15 @@ pub struct Checker<'a> {
pub(crate) settings: &'a Settings,
pub(crate) noqa_line_for: &'a IntMap<usize, usize>,
pub(crate) locator: &'a Locator<'a>,
pub(crate) style: &'a Stylist<'a>,
pub(crate) stylist: &'a Stylist<'a>,
pub(crate) indexer: &'a Indexer,
// Computed diagnostics.
pub(crate) diagnostics: Vec<Diagnostic>,
// Function and class definition tracking (e.g., for docstring enforcement).
definitions: Vec<(Definition<'a>, Visibility)>,
definitions: Vec<(Definition<'a>, Visibility, DeferralContext<'a>)>,
// Edit tracking.
// TODO(charlie): Instead of exposing deletions, wrap in a public API.
pub(crate) deletions: FxHashSet<RefEquality<'a, Stmt>>,
// Import tracking.
pub(crate) from_imports: FxHashMap<&'a str, FxHashSet<&'a str>>,
pub(crate) import_aliases: FxHashMap<&'a str, &'a str>,
// Retain all scopes and parent nodes, along with a stack of indexes to track which are active
// at various points in time.
pub(crate) parents: Vec<RefEquality<'a, Stmt>>,
@@ -103,6 +99,7 @@ pub struct Checker<'a> {
}
impl<'a> Checker<'a> {
#[allow(clippy::too_many_arguments)]
pub fn new(
settings: &'a Settings,
noqa_line_for: &'a IntMap<usize, usize>,
@@ -111,6 +108,7 @@ impl<'a> Checker<'a> {
path: &'a Path,
locator: &'a Locator,
style: &'a Stylist,
indexer: &'a Indexer,
) -> Checker<'a> {
Checker {
settings,
@@ -119,12 +117,11 @@ impl<'a> Checker<'a> {
noqa,
path,
locator,
style,
stylist: style,
indexer,
diagnostics: vec![],
definitions: vec![],
deletions: FxHashSet::default(),
from_imports: FxHashMap::default(),
import_aliases: FxHashMap::default(),
parents: vec![],
depths: FxHashMap::default(),
child_to_parent: FxHashMap::default(),
@@ -167,28 +164,28 @@ impl<'a> Checker<'a> {
/// Return `true` if the `Expr` is a reference to `typing.${target}`.
pub fn match_typing_expr(&self, expr: &Expr, target: &str) -> bool {
let call_path = dealias_call_path(collect_call_paths(expr), &self.import_aliases);
self.match_typing_call_path(&call_path, target)
self.resolve_call_path(expr).map_or(false, |call_path| {
self.match_typing_call_path(&call_path, target)
})
}
/// Return `true` if the call path is a reference to `typing.${target}`.
pub fn match_typing_call_path(&self, call_path: &[&str], target: &str) -> bool {
if match_call_path(call_path, "typing", target, &self.from_imports) {
if call_path == ["typing", target] {
return true;
}
if typing::TYPING_EXTENSIONS.contains(target) {
if match_call_path(call_path, "typing_extensions", target, &self.from_imports) {
if call_path == ["typing_extensions", target] {
return true;
}
}
if self
.settings
.typing_modules
.iter()
.any(|module| match_call_path(call_path, module, target, &self.from_imports))
{
if self.settings.typing_modules.iter().any(|module| {
let mut module = module.split('.').collect::<Vec<_>>();
module.push(target);
call_path == module.as_slice()
}) {
return true;
}
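A small sketch of the comparison performed for user-configured `typing_modules`; the module name below is hypothetical:

    let module = "airflow.typing_compat"; // hypothetical `typing_modules` entry
    let target = "Protocol";
    let mut expected: Vec<&str> = module.split('.').collect();
    expected.push(target);
    // A resolved call path equal to this slice would count as a typing reference.
    assert_eq!(expected, vec!["airflow", "typing_compat", "Protocol"]);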
@@ -209,6 +206,54 @@ impl<'a> Checker<'a> {
})
}
pub fn resolve_call_path<'b>(&'a self, value: &'b Expr) -> Option<Vec<&'a str>>
where
'b: 'a,
{
let call_path = collect_call_path(value);
if let Some(head) = call_path.first() {
if let Some(binding) = self.find_binding(head) {
match &binding.kind {
BindingKind::Importation(.., name) => {
// Ignore relative imports.
if name.starts_with('.') {
return None;
}
let mut source_path: Vec<&str> = name.split('.').collect();
source_path.extend(call_path.iter().skip(1));
return Some(source_path);
}
BindingKind::SubmoduleImportation(name, ..) => {
// Ignore relative imports.
if name.starts_with('.') {
return None;
}
let mut source_path: Vec<&str> = name.split('.').collect();
source_path.extend(call_path.iter().skip(1));
return Some(source_path);
}
BindingKind::FromImportation(.., name) => {
// Ignore relative imports.
if name.starts_with('.') {
return None;
}
let mut source_path: Vec<&str> = name.split('.').collect();
source_path.extend(call_path.iter().skip(1));
return Some(source_path);
}
BindingKind::Builtin => {
let mut source_path: Vec<&str> = Vec::with_capacity(call_path.len() + 1);
source_path.push("");
source_path.extend(call_path);
return Some(source_path);
}
_ => {}
}
}
}
None
}
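A self-contained sketch of the resolution idea above (a standalone helper, not the real `Checker` API): the dotted target of the binding replaces the first segment of the collected call path.

    fn resolve<'a>(binding_target: &'a str, call_path: &[&'a str]) -> Vec<&'a str> {
        // e.g. binding `np` -> "numpy", call path ["np", "array"] -> ["numpy", "array"]
        let mut source_path: Vec<&'a str> = binding_target.split('.').collect();
        source_path.extend(call_path.iter().skip(1).copied());
        source_path
    }

    assert_eq!(resolve("numpy", &["np", "array"]), vec!["numpy", "array"]);
    assert_eq!(resolve("typing.List", &["List"]), vec!["typing", "List"]);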
/// Return `true` if a `RuleCode` is disabled by a `noqa` directive.
pub fn is_ignored(&self, code: &RuleCode, lineno: usize) -> bool {
// TODO(charlie): `noqa` directives are mostly enforced in `check_lines.rs`.
@@ -415,13 +460,11 @@ where
if self.settings.enabled.contains(&RuleCode::N804) {
if let Some(diagnostic) =
pep8_naming::rules::invalid_first_argument_name_for_class_method(
self,
self.current_scope(),
name,
decorator_list,
args,
&self.from_imports,
&self.import_aliases,
&self.settings.pep8_naming,
)
{
self.diagnostics.push(diagnostic);
@@ -431,13 +474,11 @@ where
if self.settings.enabled.contains(&RuleCode::N805) {
if let Some(diagnostic) =
pep8_naming::rules::invalid_first_argument_name_for_method(
self,
self.current_scope(),
name,
decorator_list,
args,
&self.from_imports,
&self.import_aliases,
&self.settings.pep8_naming,
)
{
self.diagnostics.push(diagnostic);
@@ -706,10 +747,7 @@ where
self.add_binding(
name,
Binding {
kind: BindingKind::SubmoduleImportation(
name.to_string(),
full_name.to_string(),
),
kind: BindingKind::SubmoduleImportation(name, full_name),
used: None,
range: Range::from_located(alias),
source: Some(self.current_stmt().clone()),
@@ -728,10 +766,7 @@ where
self.add_binding(
name,
Binding {
kind: BindingKind::Importation(
name.to_string(),
full_name.to_string(),
),
kind: BindingKind::Importation(name, full_name),
// Treat explicit re-export as usage (e.g., `import applications
// as applications`).
used: if alias
@@ -788,12 +823,6 @@ where
}
if let Some(asname) = &alias.node.asname {
for alias in names {
if let Some(asname) = &alias.node.asname {
self.import_aliases.insert(asname, &alias.node.name);
}
}
let name = alias.node.name.split('.').last().unwrap();
if self.settings.enabled.contains(&RuleCode::N811) {
if let Some(diagnostic) =
@@ -890,25 +919,6 @@ where
module,
level,
} => {
// Track `import from` statements, to ensure that we can correctly attribute
// references like `from typing import Union`.
if self.settings.enabled.contains(&RuleCode::UP023) {
pyupgrade::rules::replace_c_element_tree(self, stmt);
}
if level.map(|level| level == 0).unwrap_or(true) {
if let Some(module) = module {
self.from_imports
.entry(module)
.or_insert_with(FxHashSet::default)
.extend(names.iter().map(|alias| alias.node.name.as_str()));
}
for alias in names {
if let Some(asname) = &alias.node.asname {
self.import_aliases.insert(asname, &alias.node.name);
}
}
}
if self.settings.enabled.contains(&RuleCode::E402) {
if self.seen_import_boundary && stmt.location.column() == 0 {
self.diagnostics.push(Diagnostic::new(
@@ -926,6 +936,9 @@ where
if self.settings.enabled.contains(&RuleCode::UP026) {
pyupgrade::rules::rewrite_mock_import(self, stmt);
}
if self.settings.enabled.contains(&RuleCode::UP023) {
pyupgrade::rules::replace_c_element_tree(self, stmt);
}
if self.settings.enabled.contains(&RuleCode::UP029) {
if let Some(module) = module.as_deref() {
pyupgrade::rules::unnecessary_builtin_import(self, stmt, module, names);
@@ -1057,15 +1070,16 @@ where
// be "foo.bar". Given `from foo import bar as baz`, `name` would be "baz"
// and `full_name` would be "foo.bar".
let name = alias.node.asname.as_ref().unwrap_or(&alias.node.name);
let full_name = match module {
None => alias.node.name.to_string(),
Some(parent) => format!("{parent}.{}", alias.node.name),
};
let full_name = helpers::format_import_from_member(
level.as_ref(),
module.as_deref(),
&alias.node.name,
);
let range = Range::from_located(alias);
self.add_binding(
name,
Binding {
kind: BindingKind::FromImportation(name.to_string(), full_name),
kind: BindingKind::FromImportation(name, full_name),
// Treat explicit re-export as usage (e.g., `from .applications
// import FastAPI as FastAPI`).
used: if alias
@@ -1265,7 +1279,7 @@ where
}
}
}
StmtKind::With { items, body, .. } | StmtKind::AsyncWith { items, body, .. } => {
StmtKind::With { items, body, .. } => {
if self.settings.enabled.contains(&RuleCode::B017) {
flake8_bugbear::rules::assert_raises_exception(self, stmt, items);
}
@@ -1273,7 +1287,12 @@ where
flake8_pytest_style::rules::complex_raises(self, stmt, items, body);
}
if self.settings.enabled.contains(&RuleCode::SIM117) {
flake8_simplify::rules::multiple_with_statements(self, stmt);
flake8_simplify::rules::multiple_with_statements(
self,
stmt,
body,
self.current_stmt_parent().map(Into::into),
);
}
}
StmtKind::While { body, orelse, .. } => {
@@ -1453,8 +1472,11 @@ where
pyupgrade::rules::rewrite_yield_from(self, stmt);
}
let scope = transition_scope(&self.visible_scope, stmt, &Documentable::Function);
self.definitions
.push((definition, scope.visibility.clone()));
self.definitions.push((
definition,
scope.visibility.clone(),
(self.scope_stack.clone(), self.parents.clone()),
));
self.visible_scope = scope;
// If any global bindings don't already exist in the global scope, add it.
@@ -1511,8 +1533,11 @@ where
&Documentable::Class,
);
let scope = transition_scope(&self.visible_scope, stmt, &Documentable::Class);
self.definitions
.push((definition, scope.visibility.clone()));
self.definitions.push((
definition,
scope.visibility.clone(),
(self.scope_stack.clone(), self.parents.clone()),
));
self.visible_scope = scope;
// If any global bindings don't already exist in the global scope, add it.
@@ -1708,13 +1733,9 @@ where
&& !self.settings.pyupgrade.keep_runtime_typing
&& self.annotations_future_enabled
&& self.in_annotation))
&& typing::is_pep585_builtin(
expr,
&self.from_imports,
&self.import_aliases,
)
&& typing::is_pep585_builtin(self, expr)
{
pyupgrade::rules::use_pep585_annotation(self, expr, id);
pyupgrade::rules::use_pep585_annotation(self, expr);
}
self.handle_node_load(expr);
@@ -1752,9 +1773,9 @@ where
|| (self.settings.target_version >= PythonVersion::Py37
&& self.annotations_future_enabled
&& self.in_annotation))
&& typing::is_pep585_builtin(expr, &self.from_imports, &self.import_aliases)
&& typing::is_pep585_builtin(self, expr)
{
pyupgrade::rules::use_pep585_annotation(self, expr, attr);
pyupgrade::rules::use_pep585_annotation(self, expr);
}
if self.settings.enabled.contains(&RuleCode::UP016) {
@@ -1822,12 +1843,7 @@ where
}
if self.settings.enabled.contains(&RuleCode::TID251) {
flake8_tidy_imports::rules::banned_attribute_access(
self,
&dealias_call_path(collect_call_paths(expr), &self.import_aliases),
expr,
&self.settings.flake8_tidy_imports.banned_api,
);
flake8_tidy_imports::rules::banned_attribute_access(self, expr);
}
}
ExprKind::Call {
@@ -1976,96 +1992,36 @@ where
}
}
if self.settings.enabled.contains(&RuleCode::S103) {
if let Some(diagnostic) = flake8_bandit::rules::bad_file_permissions(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::bad_file_permissions(self, func, args, keywords);
}
if self.settings.enabled.contains(&RuleCode::S501) {
if let Some(diagnostic) = flake8_bandit::rules::request_with_no_cert_validation(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::request_with_no_cert_validation(
self, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::S506) {
if let Some(diagnostic) = flake8_bandit::rules::unsafe_yaml_load(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::unsafe_yaml_load(self, func, args, keywords);
}
if self.settings.enabled.contains(&RuleCode::S508) {
if let Some(diagnostic) = flake8_bandit::rules::snmp_insecure_version(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::snmp_insecure_version(self, func, args, keywords);
}
if self.settings.enabled.contains(&RuleCode::S509) {
if let Some(diagnostic) = flake8_bandit::rules::snmp_weak_cryptography(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::snmp_weak_cryptography(self, func, args, keywords);
}
if self.settings.enabled.contains(&RuleCode::S701) {
if let Some(diagnostic) = flake8_bandit::rules::jinja2_autoescape_false(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::jinja2_autoescape_false(self, func, args, keywords);
}
if self.settings.enabled.contains(&RuleCode::S106) {
self.diagnostics
.extend(flake8_bandit::rules::hardcoded_password_func_arg(keywords));
}
if self.settings.enabled.contains(&RuleCode::S324) {
if let Some(diagnostic) = flake8_bandit::rules::hashlib_insecure_hash_functions(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::hashlib_insecure_hash_functions(
self, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::S113) {
if let Some(diagnostic) = flake8_bandit::rules::request_without_timeout(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_bandit::rules::request_without_timeout(self, func, args, keywords);
}
// flake8-comprehensions
@@ -2157,14 +2113,7 @@ where
// flake8-debugger
if self.settings.enabled.contains(&RuleCode::T100) {
if let Some(diagnostic) = flake8_debugger::rules::debugger_call(
expr,
func,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
flake8_debugger::rules::debugger_call(self, expr, func);
}
// pandas-vet
@@ -2191,7 +2140,7 @@ where
if let BindingKind::Importation(.., module) =
&binding.kind
{
module != "pandas"
module != &"pandas"
} else {
matches!(
binding.kind,
@@ -2590,6 +2539,14 @@ where
);
}
if self.settings.enabled.contains(&RuleCode::PLR0133) {
pylint::rules::constant_comparison(self, left, ops, comparators);
}
if self.settings.enabled.contains(&RuleCode::PLR2004) {
pylint::rules::magic_value_comparison(self, left, comparators);
}
if self.settings.enabled.contains(&RuleCode::SIM118) {
flake8_simplify::rules::key_in_dict_compare(self, expr, left, ops, comparators);
}
@@ -2739,15 +2696,19 @@ where
args,
keywords,
} => {
let call_path = dealias_call_path(collect_call_paths(func), &self.import_aliases);
if self.match_typing_call_path(&call_path, "ForwardRef") {
let call_path = self.resolve_call_path(func);
if call_path.as_ref().map_or(false, |call_path| {
self.match_typing_call_path(call_path, "ForwardRef")
}) {
self.visit_expr(func);
for expr in args {
self.in_type_definition = true;
self.visit_expr(expr);
self.in_type_definition = prev_in_type_definition;
}
} else if self.match_typing_call_path(&call_path, "cast") {
} else if call_path.as_ref().map_or(false, |call_path| {
self.match_typing_call_path(call_path, "cast")
}) {
self.visit_expr(func);
if !args.is_empty() {
self.in_type_definition = true;
@@ -2757,14 +2718,18 @@ where
for expr in args.iter().skip(1) {
self.visit_expr(expr);
}
} else if self.match_typing_call_path(&call_path, "NewType") {
} else if call_path.as_ref().map_or(false, |call_path| {
self.match_typing_call_path(call_path, "NewType")
}) {
self.visit_expr(func);
for expr in args.iter().skip(1) {
self.in_type_definition = true;
self.visit_expr(expr);
self.in_type_definition = prev_in_type_definition;
}
} else if self.match_typing_call_path(&call_path, "TypeVar") {
} else if call_path.as_ref().map_or(false, |call_path| {
self.match_typing_call_path(call_path, "TypeVar")
}) {
self.visit_expr(func);
for expr in args.iter().skip(1) {
self.in_type_definition = true;
@@ -2785,7 +2750,9 @@ where
}
}
}
} else if self.match_typing_call_path(&call_path, "NamedTuple") {
} else if call_path.as_ref().map_or(false, |call_path| {
self.match_typing_call_path(call_path, "NamedTuple")
}) {
self.visit_expr(func);
// Ex) NamedTuple("a", [("a", int)])
@@ -2821,7 +2788,9 @@ where
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
}
} else if self.match_typing_call_path(&call_path, "TypedDict") {
} else if call_path.as_ref().map_or(false, |call_path| {
self.match_typing_call_path(call_path, "TypedDict")
}) {
self.visit_expr(func);
// Ex) TypedDict("a", {"a": int})
@@ -2847,12 +2816,11 @@ where
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
}
} else if ["Arg", "DefaultArg", "NamedArg", "DefaultNamedArg"]
.iter()
.any(|target| {
match_call_path(&call_path, "mypy_extensions", target, &self.from_imports)
})
{
} else if call_path.as_ref().map_or(false, |call_path| {
["Arg", "DefaultArg", "NamedArg", "DefaultNamedArg"]
.iter()
.any(|target| *call_path == ["mypy_extensions", target])
}) {
self.visit_expr(func);
// Ex) DefaultNamedArg(bool | None, name="some_prop_name")
@@ -2886,13 +2854,7 @@ where
self.in_subscript = true;
visitor::walk_expr(self, expr);
} else {
match typing::match_annotated_subscript(
value,
&self.from_imports,
&self.import_aliases,
self.settings.typing_modules.iter().map(String::as_str),
|member| self.is_builtin(member),
) {
match typing::match_annotated_subscript(self, value) {
Some(subscript) => {
match subscript {
// Ex) Optional[int]
@@ -3419,23 +3381,37 @@ impl<'a> Checker<'a> {
// import pyarrow as pa
// import pyarrow.csv
// print(pa.csv.read_csv("test.csv"))
if let BindingKind::Importation(name, full_name)
| BindingKind::FromImportation(name, full_name)
| BindingKind::SubmoduleImportation(name, full_name) =
&self.bindings[*index].kind
{
let has_alias = full_name
.split('.')
.last()
.map(|segment| segment != name)
.unwrap_or_default();
if has_alias {
// Mark the sub-importation as used.
if let Some(index) = scope.values.get(full_name.as_str()) {
self.bindings[*index].used =
Some((scope_id, Range::from_located(expr)));
match &self.bindings[*index].kind {
BindingKind::Importation(name, full_name)
| BindingKind::SubmoduleImportation(name, full_name) => {
let has_alias = full_name
.split('.')
.last()
.map(|segment| &segment != name)
.unwrap_or_default();
if has_alias {
// Mark the sub-importation as used.
if let Some(index) = scope.values.get(full_name) {
self.bindings[*index].used =
Some((scope_id, Range::from_located(expr)));
}
}
}
BindingKind::FromImportation(name, full_name) => {
let has_alias = full_name
.split('.')
.last()
.map(|segment| &segment != name)
.unwrap_or_default();
if has_alias {
// Mark the sub-importation as used.
if let Some(index) = scope.values.get(full_name.as_str()) {
self.bindings[*index].used =
Some((scope_id, Range::from_located(expr)));
}
}
}
_ => {}
}
return;
@@ -3689,6 +3665,7 @@ impl<'a> Checker<'a> {
docstring,
},
self.visible_scope.visibility.clone(),
(self.scope_stack.clone(), self.parents.clone()),
));
docstring.is_some()
}
@@ -3960,9 +3937,12 @@ impl<'a> Checker<'a> {
{
let binding = &self.bindings[*index];
let (BindingKind::Importation(_, full_name)
| BindingKind::SubmoduleImportation(_, full_name)
| BindingKind::FromImportation(_, full_name)) = &binding.kind else { continue; };
let full_name = match &binding.kind {
BindingKind::Importation(.., full_name) => full_name,
BindingKind::FromImportation(.., full_name) => full_name.as_str(),
BindingKind::SubmoduleImportation(.., full_name) => full_name,
_ => continue,
};
// Skip used exports from `__all__`
if binding.used.is_some()
@@ -4025,6 +4005,7 @@ impl<'a> Checker<'a> {
parent,
&deleted,
self.locator,
self.indexer,
) {
Ok(fix) => {
if fix.content.is_empty() || fix.content == "pass" {
@@ -4142,7 +4123,10 @@ impl<'a> Checker<'a> {
let mut overloaded_name: Option<String> = None;
self.definitions.reverse();
while let Some((definition, visibility)) = self.definitions.pop() {
while let Some((definition, visibility, (scopes, parents))) = self.definitions.pop() {
self.scope_stack = scopes.clone();
self.parents = parents.clone();
// flake8-annotations
if enforce_annotations {
// TODO(charlie): This should be even stricter, in that an overload
@@ -4317,6 +4301,7 @@ pub fn check_ast(
python_ast: &Suite,
locator: &Locator,
stylist: &Stylist,
indexer: &Indexer,
noqa_line_for: &IntMap<usize, usize>,
settings: &Settings,
autofix: flags::Autofix,
@@ -4331,6 +4316,7 @@ pub fn check_ast(
path,
locator,
stylist,
indexer,
);
checker.push_scope(Scope::new(ScopeKind::Module));
checker.bind_builtins();
@@ -4355,13 +4341,13 @@ pub fn check_ast(
let mut allocator = vec![];
checker.check_deferred_string_type_definitions(&mut allocator);
// Check docstrings.
checker.check_definitions();
// Reset the scope to module-level, and check all consumed scopes.
checker.scope_stack = vec![GLOBAL_SCOPE_INDEX];
checker.pop_scope();
checker.check_dead_scopes();
// Check docstrings.
checker.check_definitions();
checker.diagnostics
}

View File

@@ -6,16 +6,17 @@ use rustpython_parser::ast::Suite;
use crate::ast::visitor::Visitor;
use crate::directives::IsortDirectives;
use crate::isort;
use crate::isort::track::{Block, ImportTracker};
use crate::registry::{Diagnostic, RuleCode};
use crate::rules::isort;
use crate::rules::isort::track::{Block, ImportTracker};
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
#[allow(clippy::too_many_arguments)]
pub fn check_imports(
python_ast: &Suite,
locator: &Locator,
indexer: &Indexer,
directives: &IsortDirectives,
settings: &Settings,
stylist: &Stylist,
@@ -39,7 +40,7 @@ pub fn check_imports(
for block in &blocks {
if !block.imports.is_empty() {
if let Some(diagnostic) = isort::rules::organize_imports(
block, locator, settings, stylist, autofix, package,
block, locator, indexer, settings, stylist, autofix, package,
) {
diagnostics.push(diagnostic);
}

View File

@@ -1,9 +1,11 @@
//! Lint rules based on checking raw physical lines.
use crate::pycodestyle::rules::{doc_line_too_long, line_too_long, no_newline_at_end_of_file};
use crate::pygrep_hooks::rules::{blanket_noqa, blanket_type_ignore};
use crate::pyupgrade::rules::unnecessary_coding_comment;
use crate::registry::{Diagnostic, RuleCode};
use crate::rules::pycodestyle::rules::{
doc_line_too_long, line_too_long, no_newline_at_end_of_file,
};
use crate::rules::pygrep_hooks::rules::{blanket_noqa, blanket_type_ignore};
use crate::rules::pyupgrade::rules::unnecessary_coding_comment;
use crate::settings::{flags, Settings};
pub fn check_lines(

View File

@@ -4,10 +4,12 @@ use rustpython_parser::lexer::{LexResult, Tok};
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{Diagnostic, RuleCode};
use crate::ruff::rules::Context;
use crate::rules::ruff::rules::Context;
use crate::rules::{
eradicate, flake8_commas, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff,
};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
use crate::{eradicate, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff};
pub fn check_tokens(
locator: &Locator,
@@ -28,6 +30,9 @@ pub fn check_tokens(
let enforce_invalid_escape_sequence = settings.enabled.contains(&RuleCode::W605);
let enforce_implicit_string_concatenation = settings.enabled.contains(&RuleCode::ISC001)
|| settings.enabled.contains(&RuleCode::ISC002);
let enforce_trailing_comma = settings.enabled.contains(&RuleCode::COM812)
|| settings.enabled.contains(&RuleCode::COM818)
|| settings.enabled.contains(&RuleCode::COM819);
let mut state_machine = StateMachine::default();
for &(start, ref tok, end) in tokens.iter().flatten() {
@@ -105,7 +110,16 @@ pub fn check_tokens(
// ISC001, ISC002
if enforce_implicit_string_concatenation {
diagnostics.extend(
flake8_implicit_str_concat::rules::implicit(tokens, locator)
flake8_implicit_str_concat::rules::implicit(tokens)
.into_iter()
.filter(|diagnostic| settings.enabled.contains(diagnostic.kind.code())),
);
}
// COM812, COM818, COM819
if enforce_trailing_comma {
diagnostics.extend(
flake8_commas::rules::trailing_commas(tokens, locator)
.into_iter()
.filter(|diagnostic| settings.enabled.contains(diagnostic.kind.code())),
);

View File

@@ -37,14 +37,12 @@ pub struct IsortDirectives {
}
pub struct Directives {
pub commented_lines: Vec<usize>,
pub noqa_line_for: IntMap<usize, usize>,
pub isort: IsortDirectives,
}
pub fn extract_directives(lxr: &[LexResult], flags: Flags) -> Directives {
Directives {
commented_lines: extract_commented_lines(lxr),
noqa_line_for: if flags.contains(Flags::NOQA) {
extract_noqa_line_for(lxr)
} else {
@@ -58,16 +56,6 @@ pub fn extract_directives(lxr: &[LexResult], flags: Flags) -> Directives {
}
}
pub fn extract_commented_lines(lxr: &[LexResult]) -> Vec<usize> {
let mut commented_lines = Vec::new();
for (start, tok, ..) in lxr.iter().flatten() {
if matches!(tok, Tok::Comment(_)) {
commented_lines.push(start.row());
}
}
commented_lines
}
/// Extract a mapping from logical line to noqa line.
pub fn extract_noqa_line_for(lxr: &[LexResult]) -> IntMap<usize, usize> {
let mut noqa_line_for: IntMap<usize, usize> = IntMap::default();

View File

@@ -1,6 +0,0 @@
---
source: src/flake8_annotations/mod.rs
expression: checks
---
[]

View File

@@ -1,67 +0,0 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Constant, Expr, ExprKind, Keyword};
use crate::ast::helpers::{match_module_member, SimpleCallArgs};
use crate::ast::types::Range;
use crate::flake8_bandit::helpers::string_literal;
use crate::registry::Diagnostic;
use crate::violations;
const WEAK_HASHES: [&str; 4] = ["md4", "md5", "sha", "sha1"];
fn is_used_for_security(call_args: &SimpleCallArgs) -> bool {
match call_args.get_argument("usedforsecurity", None) {
Some(expr) => !matches!(
&expr.node,
ExprKind::Constant {
value: Constant::Bool(false),
..
}
),
_ => true,
}
}
/// S324
pub fn hashlib_insecure_hash_functions(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
if match_module_member(func, "hashlib", "new", from_imports, import_aliases) {
let call_args = SimpleCallArgs::new(args, keywords);
if !is_used_for_security(&call_args) {
return None;
}
if let Some(name_arg) = call_args.get_argument("name", Some(0)) {
let hash_func_name = string_literal(name_arg)?;
if WEAK_HASHES.contains(&hash_func_name.to_lowercase().as_str()) {
return Some(Diagnostic::new(
violations::HashlibInsecureHashFunction(hash_func_name.to_string()),
Range::from_located(name_arg),
));
}
}
} else {
for func_name in &WEAK_HASHES {
if match_module_member(func, "hashlib", func_name, from_imports, import_aliases) {
let call_args = SimpleCallArgs::new(args, keywords);
if !is_used_for_security(&call_args) {
return None;
}
return Some(Diagnostic::new(
violations::HashlibInsecureHashFunction((*func_name).to_string()),
Range::from_located(func),
));
}
}
}
None
}

View File

@@ -1,70 +0,0 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, ExprKind, Keyword};
use rustpython_parser::ast::Constant;
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
const REQUESTS_HTTP_VERBS: [&str; 7] = ["get", "options", "head", "post", "put", "patch", "delete"];
const HTTPX_METHODS: [&str; 11] = [
"get",
"options",
"head",
"post",
"put",
"patch",
"delete",
"request",
"stream",
"Client",
"AsyncClient",
];
/// S501
pub fn request_with_no_cert_validation(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
let call_args = SimpleCallArgs::new(args, keywords);
for func_name in &REQUESTS_HTTP_VERBS {
if match_call_path(&call_path, "requests", func_name, from_imports) {
if let Some(verify_arg) = call_args.get_argument("verify", None) {
if let ExprKind::Constant {
value: Constant::Bool(false),
..
} = &verify_arg.node
{
return Some(Diagnostic::new(
violations::RequestWithNoCertValidation("requests".to_string()),
Range::from_located(verify_arg),
));
}
}
}
}
for func_name in &HTTPX_METHODS {
if match_call_path(&call_path, "httpx", func_name, from_imports) {
if let Some(verify_arg) = call_args.get_argument("verify", None) {
if let ExprKind::Constant {
value: Constant::Bool(false),
..
} = &verify_arg.node
{
return Some(Diagnostic::new(
violations::RequestWithNoCertValidation("httpx".to_string()),
Range::from_located(verify_arg),
));
}
}
}
}
None
}

View File

@@ -1,46 +0,0 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, ExprKind, Keyword};
use rustpython_parser::ast::Constant;
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
const HTTP_VERBS: [&str; 7] = ["get", "options", "head", "post", "put", "patch", "delete"];
/// S113
pub fn request_without_timeout(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
for func_name in &HTTP_VERBS {
if match_call_path(&call_path, "requests", func_name, from_imports) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(timeout_arg) = call_args.get_argument("timeout", None) {
if let Some(timeout) = match &timeout_arg.node {
ExprKind::Constant {
value: value @ Constant::None,
..
} => Some(value.to_string()),
_ => None,
} {
return Some(Diagnostic::new(
violations::RequestWithoutTimeout(Some(timeout)),
Range::from_located(timeout_arg),
));
}
} else {
return Some(Diagnostic::new(
violations::RequestWithoutTimeout(None),
Range::from_located(func),
));
}
}
}
None
}

View File

@@ -1,30 +0,0 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, Keyword};
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
/// S509
pub fn snmp_weak_cryptography(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
if match_call_path(&call_path, "pysnmp.hlapi", "UsmUserData", from_imports) {
let call_args = SimpleCallArgs::new(args, keywords);
if call_args.len() < 3 {
return Some(Diagnostic::new(
violations::SnmpWeakCryptography,
Range::from_located(func),
));
}
}
None
}

View File

@@ -1,51 +0,0 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, ExprKind, Keyword};
use crate::ast::helpers::{match_module_member, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
/// S506
pub fn unsafe_yaml_load(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
if match_module_member(func, "yaml", "load", from_imports, import_aliases) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(loader_arg) = call_args.get_argument("Loader", Some(1)) {
if !match_module_member(
loader_arg,
"yaml",
"SafeLoader",
from_imports,
import_aliases,
) && !match_module_member(
loader_arg,
"yaml",
"CSafeLoader",
from_imports,
import_aliases,
) {
let loader = match &loader_arg.node {
ExprKind::Attribute { attr, .. } => Some(attr.to_string()),
ExprKind::Name { id, .. } => Some(id.to_string()),
_ => None,
};
return Some(Diagnostic::new(
violations::UnsafeYAMLLoad(loader),
Range::from_located(loader_arg),
));
}
} else {
return Some(Diagnostic::new(
violations::UnsafeYAMLLoad(None),
Range::from_located(func),
));
}
}
None
}

View File

@@ -1,174 +0,0 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Arguments, Constant, Expr, ExprKind, Operator};
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
use crate::violations;
const MUTABLE_FUNCS: &[(&str, &str)] = &[
("", "dict"),
("", "list"),
("", "set"),
("collections", "Counter"),
("collections", "OrderedDict"),
("collections", "defaultdict"),
("collections", "deque"),
];
const IMMUTABLE_TYPES: &[(&str, &str)] = &[
("", "bool"),
("", "bytes"),
("", "complex"),
("", "float"),
("", "frozenset"),
("", "int"),
("", "object"),
("", "range"),
("", "str"),
("collections.abc", "Sized"),
("typing", "LiteralString"),
("typing", "Sized"),
];
const IMMUTABLE_GENERIC_TYPES: &[(&str, &str)] = &[
("", "tuple"),
("collections.abc", "ByteString"),
("collections.abc", "Collection"),
("collections.abc", "Container"),
("collections.abc", "Iterable"),
("collections.abc", "Mapping"),
("collections.abc", "Reversible"),
("collections.abc", "Sequence"),
("collections.abc", "Set"),
("typing", "AbstractSet"),
("typing", "ByteString"),
("typing", "Callable"),
("typing", "Collection"),
("typing", "Container"),
("typing", "FrozenSet"),
("typing", "Iterable"),
("typing", "Literal"),
("typing", "Mapping"),
("typing", "Never"),
("typing", "NoReturn"),
("typing", "Reversible"),
("typing", "Sequence"),
("typing", "Tuple"),
];
pub fn is_mutable_func(
expr: &Expr,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> bool {
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
MUTABLE_FUNCS
.iter()
.any(|(module, member)| match_call_path(&call_path, module, member, from_imports))
}
fn is_mutable_expr(
expr: &Expr,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> bool {
match &expr.node {
ExprKind::List { .. }
| ExprKind::Dict { .. }
| ExprKind::Set { .. }
| ExprKind::ListComp { .. }
| ExprKind::DictComp { .. }
| ExprKind::SetComp { .. } => true,
ExprKind::Call { func, .. } => is_mutable_func(func, from_imports, import_aliases),
_ => false,
}
}
fn is_immutable_annotation(
expr: &Expr,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> bool {
match &expr.node {
ExprKind::Name { .. } | ExprKind::Attribute { .. } => {
let call_path = dealias_call_path(collect_call_paths(expr), import_aliases);
IMMUTABLE_TYPES
.iter()
.chain(IMMUTABLE_GENERIC_TYPES)
.any(|(module, member)| match_call_path(&call_path, module, member, from_imports))
}
ExprKind::Subscript { value, slice, .. } => {
let call_path = dealias_call_path(collect_call_paths(value), import_aliases);
if IMMUTABLE_GENERIC_TYPES
.iter()
.any(|(module, member)| match_call_path(&call_path, module, member, from_imports))
{
true
} else if match_call_path(&call_path, "typing", "Union", from_imports) {
if let ExprKind::Tuple { elts, .. } = &slice.node {
elts.iter()
.all(|elt| is_immutable_annotation(elt, from_imports, import_aliases))
} else {
false
}
} else if match_call_path(&call_path, "typing", "Optional", from_imports) {
is_immutable_annotation(slice, from_imports, import_aliases)
} else if match_call_path(&call_path, "typing", "Annotated", from_imports) {
if let ExprKind::Tuple { elts, .. } = &slice.node {
elts.first().map_or(false, |elt| {
is_immutable_annotation(elt, from_imports, import_aliases)
})
} else {
false
}
} else {
false
}
}
ExprKind::BinOp {
left,
op: Operator::BitOr,
right,
} => {
is_immutable_annotation(left, from_imports, import_aliases)
&& is_immutable_annotation(right, from_imports, import_aliases)
}
ExprKind::Constant {
value: Constant::None,
..
} => true,
_ => false,
}
}
/// B006
pub fn mutable_argument_default(checker: &mut Checker, arguments: &Arguments) {
// Scan in reverse order so that `zip` right-aligns defaults with their trailing arguments
for (arg, default) in arguments
.kwonlyargs
.iter()
.rev()
.zip(arguments.kw_defaults.iter().rev())
.chain(
arguments
.args
.iter()
.rev()
.chain(arguments.posonlyargs.iter().rev())
.zip(arguments.defaults.iter().rev()),
)
{
if is_mutable_expr(default, &checker.from_imports, &checker.import_aliases)
&& arg.node.annotation.as_ref().map_or(true, |expr| {
!is_immutable_annotation(expr, &checker.from_imports, &checker.import_aliases)
})
{
checker.diagnostics.push(Diagnostic::new(
violations::MutableArgumentDefault,
Range::from_located(default),
));
}
}
}
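For illustration, a minimal Python sketch of signatures this B006 check would report or skip, based on the mutability tables above:

from typing import Sequence

def f(x=[]): ...                    # B006: mutable default (list literal)
def g(x=dict()): ...                # B006: mutable default (call to a mutable constructor)
def h(x: Sequence[int] = []): ...   # not flagged: annotation is in the immutable-generics table
def i(x=()): ...                    # not flagged: the default itself is immutable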


@@ -1,73 +0,0 @@
---
source: src/flake8_comprehensions/mod.rs
expression: checks
---
- kind:
UnnecessaryLiteralSet: list
location:
row: 1
column: 5
end_location:
row: 1
column: 16
fix:
content: "{1, 2}"
location:
row: 1
column: 5
end_location:
row: 1
column: 16
parent: ~
- kind:
UnnecessaryLiteralSet: tuple
location:
row: 2
column: 5
end_location:
row: 2
column: 16
fix:
content: "{1, 2}"
location:
row: 2
column: 5
end_location:
row: 2
column: 16
parent: ~
- kind:
UnnecessaryLiteralSet: list
location:
row: 3
column: 5
end_location:
row: 3
column: 12
fix:
content: set()
location:
row: 3
column: 5
end_location:
row: 3
column: 12
parent: ~
- kind:
UnnecessaryLiteralSet: tuple
location:
row: 4
column: 5
end_location:
row: 4
column: 12
fix:
content: set()
location:
row: 4
column: 5
end_location:
row: 4
column: 12
parent: ~
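For reference, a guessed reconstruction (not taken from the diff) of fixture lines that would produce the four UnnecessaryLiteralSet diagnostics and fixes recorded above:

s1 = set([1, 2])   # C405, fix rewrites to {1, 2}
s2 = set((1, 2))   # C405, fix rewrites to {1, 2}
s3 = set([])       # C405, fix rewrites to set()
s4 = set(())       # C405, fix rewrites to set()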


@@ -1,71 +0,0 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, Stmt};
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path};
use crate::ast::types::Range;
use crate::flake8_debugger::types::DebuggerUsingType;
use crate::registry::Diagnostic;
use crate::violations;
const DEBUGGERS: &[(&str, &str)] = &[
("pdb", "set_trace"),
("pudb", "set_trace"),
("ipdb", "set_trace"),
("ipdb", "sset_trace"),
("IPython.terminal.embed", "InteractiveShellEmbed"),
("IPython.frontend.terminal.embed", "InteractiveShellEmbed"),
("celery.contrib.rdb", "set_trace"),
("builtins", "breakpoint"),
("", "breakpoint"),
];
/// Checks for the presence of a debugger call.
pub fn debugger_call(
expr: &Expr,
func: &Expr,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
if DEBUGGERS
.iter()
.any(|(module, member)| match_call_path(&call_path, module, member, from_imports))
{
Some(Diagnostic::new(
violations::Debugger(DebuggerUsingType::Call(call_path.join("."))),
Range::from_located(expr),
))
} else {
None
}
}
/// Checks for the presence of a debugger import.
pub fn debugger_import(stmt: &Stmt, module: Option<&str>, name: &str) -> Option<Diagnostic> {
// Special-case: allow `import builtins`, which is far more general than (e.g.)
// `import celery.contrib.rdb`.
if module.is_none() && name == "builtins" {
return None;
}
if let Some(module) = module {
if let Some((module_name, member)) = DEBUGGERS
.iter()
.find(|(module_name, member)| module_name == &module && member == &name)
{
return Some(Diagnostic::new(
violations::Debugger(DebuggerUsingType::Import(format!("{module_name}.{member}"))),
Range::from_located(stmt),
));
}
} else if DEBUGGERS
.iter()
.any(|(module_name, ..)| module_name == &name)
{
return Some(Diagnostic::new(
violations::Debugger(DebuggerUsingType::Import(name.to_string())),
Range::from_located(stmt),
));
}
None
}
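For illustration, a minimal Python sketch of code these debugger checks would report, based on the DEBUGGERS table above (third-party modules such as ipdb are only needed for the example to actually run):

import pdb                   # flagged: debugger import
from ipdb import set_trace   # flagged: debugger member import
import celery.contrib.rdb    # flagged: debugger import

pdb.set_trace()              # flagged: debugger call
breakpoint()                 # flagged: builtin breakpoint call

import builtins              # not flagged: special-cased above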


@@ -1,6 +0,0 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
---
[]


@@ -1,6 +0,0 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
---
[]


@@ -1,6 +0,0 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
---
[]


@@ -1,6 +0,0 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
---
[]


@@ -1,22 +0,0 @@
use rustpython_ast::{Stmt, StmtKind};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
use crate::violations;
/// SIM117
pub fn multiple_with_statements(checker: &mut Checker, stmt: &Stmt) {
let StmtKind::With { body, .. } = &stmt.node else {
return;
};
if body.len() != 1 {
return;
}
if matches!(body[0].node, StmtKind::With { .. }) {
checker.diagnostics.push(Diagnostic::new(
violations::MultipleWithStatements,
Range::from_located(stmt),
));
}
}
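For illustration, the nesting pattern this SIM117 check reports (an outer `with` whose entire body is a single nested `with`); since the match is on plain `With` nodes only, `async with` blocks are not reported:

with open("a.txt") as a:
    with open("b.txt") as b:        # SIM117: combine into a single `with` statement
        print(a.read(), b.read())

with open("a.txt") as a:
    data = a.read()
    with open("b.txt") as b:        # not flagged: the outer body has more than one statement
        print(data, b.read())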


@@ -1,15 +0,0 @@
---
source: src/flake8_simplify/mod.rs
expression: checks
---
- kind:
MultipleWithStatements: ~
location:
row: 1
column: 0
end_location:
row: 3
column: 22
fix: ~
parent: ~


@@ -1,41 +0,0 @@
---
source: src/flake8_tidy_imports/mod.rs
expression: checks
---
- kind:
BannedApi:
name: typing.TypedDict
message: Use typing_extensions.TypedDict instead.
location:
row: 2
column: 7
end_location:
row: 2
column: 23
fix: ~
parent: ~
- kind:
BannedApi:
name: typing.TypedDict
message: Use typing_extensions.TypedDict instead.
location:
row: 7
column: 0
end_location:
row: 7
column: 16
fix: ~
parent: ~
- kind:
BannedApi:
name: typing.TypedDict
message: Use typing_extensions.TypedDict instead.
location:
row: 11
column: 4
end_location:
row: 11
column: 20
fix: ~
parent: ~
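For illustration, a hypothetical setup (not taken from the diff) in which `typing.TypedDict` is configured as a banned API for flake8-tidy-imports; references like these would yield diagnostics of the kind recorded above:

from typing import TypedDict     # flagged at the import

import typing

typing.TypedDict                 # flagged at the attribute access

def f(x: typing.TypedDict): ...  # flagged inside a function body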


@@ -3,9 +3,10 @@
use std::path::Path;
use anyhow::Result;
use ruff::settings::types::PythonVersion;
use serde::{Deserialize, Serialize};
use crate::settings::types::PythonVersion;
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Default)]
pub struct Black {
#[serde(alias = "line-length", alias = "line_length")]


@@ -2,23 +2,24 @@ use std::collections::{BTreeSet, HashMap};
use anyhow::Result;
use colored::Colorize;
use ruff::flake8_pytest_style::types::{
use super::black::Black;
use super::plugin::Plugin;
use super::{parser, plugin};
use crate::registry::RuleCodePrefix;
use crate::rules::flake8_pytest_style::types::{
ParametrizeNameType, ParametrizeValuesRowType, ParametrizeValuesType,
};
use ruff::flake8_quotes::settings::Quote;
use ruff::flake8_tidy_imports::settings::Strictness;
use ruff::pydocstyle::settings::Convention;
use ruff::registry::RuleCodePrefix;
use ruff::settings::options::Options;
use ruff::settings::pyproject::Pyproject;
use ruff::{
use crate::rules::flake8_quotes::settings::Quote;
use crate::rules::flake8_tidy_imports::settings::Strictness;
use crate::rules::pydocstyle::settings::Convention;
use crate::rules::{
flake8_annotations, flake8_bugbear, flake8_errmsg, flake8_pytest_style, flake8_quotes,
flake8_tidy_imports, mccabe, pep8_naming, pydocstyle, warn_user,
flake8_tidy_imports, mccabe, pep8_naming, pydocstyle,
};
use crate::black::Black;
use crate::plugin::Plugin;
use crate::{parser, plugin};
use crate::settings::options::Options;
use crate::settings::pyproject::Pyproject;
use crate::warn_user;
pub fn convert(
config: &HashMap<String, HashMap<String, Option<String>>>,
@@ -270,7 +271,7 @@ pub fn convert(
match value.trim() {
"csv" => {
flake8_pytest_style.parametrize_names_type =
Some(ParametrizeNameType::CSV);
Some(ParametrizeNameType::Csv);
}
"tuple" => {
flake8_pytest_style.parametrize_names_type =
@@ -388,14 +389,14 @@ mod tests {
use std::collections::HashMap;
use anyhow::Result;
use ruff::pydocstyle::settings::Convention;
use ruff::registry::RuleCodePrefix;
use ruff::settings::options::Options;
use ruff::settings::pyproject::Pyproject;
use ruff::{flake8_quotes, pydocstyle};
use crate::converter::convert;
use crate::plugin::Plugin;
use super::super::plugin::Plugin;
use super::convert;
use crate::registry::RuleCodePrefix;
use crate::rules::pydocstyle::settings::Convention;
use crate::rules::{flake8_quotes, pydocstyle};
use crate::settings::options::Options;
use crate::settings::pyproject::Pyproject;
#[test]
fn it_converts_empty() -> Result<()> {
@@ -423,6 +424,7 @@ mod tests {
ignore: Some(vec![]),
ignore_init_module_imports: None,
line_length: None,
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,
@@ -488,6 +490,7 @@ mod tests {
ignore: Some(vec![]),
ignore_init_module_imports: None,
line_length: Some(100),
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,
@@ -553,6 +556,7 @@ mod tests {
ignore: Some(vec![]),
ignore_init_module_imports: None,
line_length: Some(100),
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,
@@ -618,6 +622,7 @@ mod tests {
ignore: Some(vec![]),
ignore_init_module_imports: None,
line_length: None,
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,
@@ -683,6 +688,7 @@ mod tests {
ignore: Some(vec![]),
ignore_init_module_imports: None,
line_length: None,
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,
@@ -756,6 +762,7 @@ mod tests {
ignore: Some(vec![]),
ignore_init_module_imports: None,
line_length: None,
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,
@@ -824,6 +831,7 @@ mod tests {
ignore: Some(vec![]),
ignore_init_module_imports: None,
line_length: None,
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,


@@ -0,0 +1,8 @@
mod black;
mod converter;
mod parser;
mod plugin;
pub use black::parse_black_options;
pub use converter::convert;
pub use plugin::Plugin;


@@ -4,11 +4,12 @@ use anyhow::{bail, Result};
use colored::Colorize;
use once_cell::sync::Lazy;
use regex::Regex;
use ruff::registry::{RuleCodePrefix, PREFIX_REDIRECTS};
use ruff::settings::types::PatternPrefixPair;
use ruff::warn_user;
use rustc_hash::FxHashMap;
use crate::registry::{RuleCodePrefix, PREFIX_REDIRECTS};
use crate::settings::types::PatternPrefixPair;
use crate::warn_user;
static COMMA_SEPARATED_LIST_RE: Lazy<Regex> = Lazy::new(|| Regex::new(r"[,\s]").unwrap());
/// Parse a comma-separated list of `RuleCodePrefix` values (e.g.,
@@ -203,10 +204,10 @@ pub fn collect_per_file_ignores(
#[cfg(test)]
mod tests {
use anyhow::Result;
use ruff::registry::RuleCodePrefix;
use ruff::settings::types::PatternPrefixPair;
use crate::parser::{parse_files_to_codes_mapping, parse_prefix_codes, parse_strings};
use super::{parse_files_to_codes_mapping, parse_prefix_codes, parse_strings};
use crate::registry::RuleCodePrefix;
use crate::settings::types::PatternPrefixPair;
#[test]
fn it_parses_prefix_codes() {


@@ -3,7 +3,8 @@ use std::fmt;
use std::str::FromStr;
use anyhow::anyhow;
use ruff::registry::RuleCodePrefix;
use crate::registry::RuleCodePrefix;
#[derive(Clone, Ord, PartialOrd, Eq, PartialEq)]
pub enum Plugin {
@@ -298,7 +299,7 @@ pub fn resolve_select(plugins: &[Plugin]) -> BTreeSet<RuleCodePrefix> {
mod tests {
use std::collections::HashMap;
use crate::plugin::{infer_plugins_from_options, Plugin};
use super::{infer_plugins_from_options, Plugin};
#[test]
fn it_infers_plugins() {


@@ -66,7 +66,7 @@ pub fn relativize_path(path: &Path) -> Cow<str> {
}
/// Read a file's contents from disk.
pub(crate) fn read_file<P: AsRef<Path>>(path: P) -> Result<String> {
pub fn read_file<P: AsRef<Path>>(path: P) -> Result<String> {
let file = File::open(path)?;
let mut buf_reader = BufReader::new(file);
let mut contents = String::new();


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: checks
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: diagnostics
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: diagnostics
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: diagnostics
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: checks
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: checks
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: diagnostics
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: diagnostics
---
[]


@@ -1,6 +0,0 @@
---
source: src/isort/mod.rs
expression: checks
---
[]


@@ -43,7 +43,10 @@ impl StateMachine {
}
pub fn consume(&mut self, tok: &Tok) -> bool {
if matches!(tok, Tok::Newline | Tok::Indent | Tok::Dedent) {
if matches!(
tok,
Tok::NonLogicalNewline | Tok::Newline | Tok::Indent | Tok::Dedent | Tok::Comment(..)
) {
return false;
}


@@ -1,3 +1,9 @@
//! This is the library for the [Ruff] Python linter.
//!
//! **The API is currently completely unstable**
//! and subject to change drastically.
//!
//! [Ruff]: https://github.com/charliermarsh/ruff
#![allow(
clippy::collapsible_else_if,
clippy::collapsible_if,
@@ -12,61 +18,28 @@
)]
#![forbid(unsafe_code)]
extern crate core;
mod ast;
mod autofix;
mod cache;
pub mod cache;
mod checkers;
pub mod cli;
mod cst;
mod diagnostics;
mod directives;
mod doc_lines;
mod docstrings;
mod eradicate;
pub mod fix;
mod flake8_2020;
pub mod flake8_annotations;
pub mod flake8_bandit;
mod flake8_blind_except;
pub mod flake8_boolean_trap;
pub mod flake8_bugbear;
mod flake8_builtins;
mod flake8_comprehensions;
mod flake8_datetimez;
mod flake8_debugger;
pub mod flake8_errmsg;
mod flake8_implicit_str_concat;
mod flake8_import_conventions;
pub mod flake8_pie;
mod flake8_print;
pub mod flake8_pytest_style;
pub mod flake8_quotes;
mod flake8_return;
mod flake8_simplify;
pub mod flake8_tidy_imports;
mod flake8_unused_arguments;
pub mod flake8_to_ruff;
pub mod fs;
mod isort;
pub mod iterators;
mod lex;
pub mod linter;
pub mod logging;
pub mod mccabe;
pub mod message;
mod noqa;
mod pandas_vet;
pub mod pep8_naming;
pub mod printer;
mod pycodestyle;
pub mod pydocstyle;
mod pyflakes;
mod pygrep_hooks;
mod pylint;
mod python;
mod pyupgrade;
pub mod registry;
pub mod resolver;
mod ruff;
mod rules;
mod rustpython_helpers;
pub mod settings;
pub mod source_code;
@@ -76,14 +49,12 @@ mod violations;
mod visibility;
use cfg_if::cfg_if;
pub use violations::IOError;
cfg_if! {
if #[cfg(not(target_family = "wasm"))] {
pub mod commands;
mod packaging;
pub mod packaging;
#[cfg(all(feature = "update-informer"))]
pub mod updates;
mod lib_native;
pub use lib_native::check;


@@ -10,17 +10,17 @@ use crate::resolver::Relativity;
use crate::rustpython_helpers::tokenize;
use crate::settings::configuration::Configuration;
use crate::settings::{flags, pyproject, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::{directives, packaging, resolver};
/// Load the relevant `Settings` for a given `Path`.
fn resolve(path: &Path) -> Result<Settings> {
if let Some(pyproject) = pyproject::find_settings_toml(path)? {
// First priority: `pyproject.toml` in the current `Path`.
resolver::resolve_settings(&pyproject, &Relativity::Parent)
Ok(resolver::resolve_settings(&pyproject, &Relativity::Parent)?.lib)
} else if let Some(pyproject) = pyproject::find_user_settings_toml() {
// Second priority: user-specific `pyproject.toml`.
resolver::resolve_settings(&pyproject, &Relativity::Cwd)
Ok(resolver::resolve_settings(&pyproject, &Relativity::Cwd)?.lib)
} else {
// Fallback: default settings.
Settings::from_configuration(Configuration::default(), &path_dedot::CWD)
@@ -44,6 +44,9 @@ pub fn check(path: &Path, contents: &str, autofix: bool) -> Result<Vec<Diagnosti
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(&settings));
@@ -51,11 +54,12 @@ pub fn check(path: &Path, contents: &str, autofix: bool) -> Result<Vec<Diagnosti
// Generate diagnostics.
let diagnostics = check_path(
path,
packaging::detect_package_root(path),
packaging::detect_package_root(path, &settings.namespace_packages),
contents,
tokens,
&locator,
&stylist,
&indexer,
&directives,
&settings,
autofix.into(),


@@ -5,19 +5,20 @@ use rustpython_parser::lexer::LexResult;
use serde::Serialize;
use wasm_bindgen::prelude::*;
use crate::directives;
use crate::linter::check_path;
use crate::registry::{RuleCode, RuleCodePrefix};
use crate::rules::{
flake8_annotations, flake8_bandit, flake8_bugbear, flake8_errmsg, flake8_import_conventions,
flake8_pytest_style, flake8_quotes, flake8_tidy_imports, flake8_unused_arguments, isort,
mccabe, pep8_naming, pycodestyle, pydocstyle, pyupgrade,
};
use crate::rustpython_helpers::tokenize;
use crate::settings::configuration::Configuration;
use crate::settings::options::Options;
use crate::settings::types::PythonVersion;
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::{
directives, flake8_annotations, flake8_bandit, flake8_bugbear, flake8_errmsg,
flake8_import_conventions, flake8_pytest_style, flake8_quotes, flake8_tidy_imports,
flake8_unused_arguments, isort, mccabe, pep8_naming, pycodestyle, pydocstyle, pyupgrade,
};
use crate::source_code::{Indexer, Locator, Stylist};
const VERSION: &str = env!("CARGO_PKG_VERSION");
@@ -105,14 +106,15 @@ pub fn defaultSettings() -> Result<JsValue, JsValue> {
force_exclude: None,
format: None,
ignore_init_module_imports: None,
namespace_packages: None,
per_file_ignores: None,
required_version: None,
respect_gitignore: None,
show_source: None,
src: None,
unfixable: None,
typing_modules: None,
task_tags: None,
typing_modules: None,
unfixable: None,
update_check: None,
// Use default options for all plugins.
flake8_annotations: Some(flake8_annotations::settings::Settings::default().into()),
@@ -155,6 +157,9 @@ pub fn check(contents: &str, options: JsValue) -> Result<JsValue, JsValue> {
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives = directives::extract_directives(&tokens, directives::Flags::empty());
@@ -166,6 +171,7 @@ pub fn check(contents: &str, options: JsValue) -> Result<JsValue, JsValue> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
&settings,
flags::Autofix::Enabled,


@@ -17,7 +17,7 @@ use crate::message::{Message, Source};
use crate::noqa::add_noqa;
use crate::registry::{Diagnostic, LintSource, RuleCode};
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::{directives, fs, rustpython_helpers, violations};
const CARGO_PKG_NAME: &str = env!("CARGO_PKG_NAME");
@@ -26,13 +26,14 @@ const CARGO_PKG_REPOSITORY: &str = env!("CARGO_PKG_REPOSITORY");
/// Generate `Diagnostic`s from the source code contents at the
/// given `Path`.
#[allow(clippy::too_many_arguments)]
pub(crate) fn check_path(
pub fn check_path(
path: &Path,
package: Option<&Path>,
contents: &str,
tokens: Vec<LexResult>,
locator: &Locator,
stylist: &Stylist,
indexer: &Indexer,
directives: &Directives,
settings: &Settings,
autofix: flags::Autofix,
@@ -65,7 +66,7 @@ pub(crate) fn check_path(
let use_ast = settings
.enabled
.iter()
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::AST));
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::Ast));
let use_imports = !directives.isort.skip_file
&& settings
.enabled
@@ -79,6 +80,7 @@ pub(crate) fn check_path(
&python_ast,
locator,
stylist,
indexer,
&directives.noqa_line_for,
settings,
autofix,
@@ -90,6 +92,7 @@ pub(crate) fn check_path(
diagnostics.extend(check_imports(
&python_ast,
locator,
indexer,
&directives.isort,
settings,
stylist,
@@ -127,7 +130,7 @@ pub(crate) fn check_path(
{
diagnostics.extend(check_lines(
contents,
&directives.commented_lines,
indexer.commented_lines(),
&doc_lines,
settings,
autofix,
@@ -135,16 +138,16 @@ pub(crate) fn check_path(
}
// Enforce `noqa` directives.
if matches!(noqa, flags::Noqa::Enabled)
if (matches!(noqa, flags::Noqa::Enabled) && !diagnostics.is_empty())
|| settings
.enabled
.iter()
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::NoQA))
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::NoQa))
{
check_noqa(
&mut diagnostics,
contents,
&directives.commented_lines,
indexer.commented_lines(),
&directives.noqa_line_for,
settings,
autofix,
@@ -184,6 +187,9 @@ pub fn add_noqa_to_path(path: &Path, settings: &Settings) -> Result<usize> {
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(&contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
@@ -196,6 +202,7 @@ pub fn add_noqa_to_path(path: &Path, settings: &Settings) -> Result<usize> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Disabled,
@@ -230,6 +237,9 @@ pub fn lint_only(
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
@@ -242,6 +252,7 @@ pub fn lint_only(
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
autofix,
@@ -290,6 +301,9 @@ pub fn lint_fix(
// Detect the current code style (lazily).
let stylist = Stylist::from_contents(&contents, &locator);
// Extra indices from the code.
let indexer: Indexer = tokens.as_slice().into();
// Extract the `# noqa` and `# isort: skip` directives from the source.
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
@@ -302,6 +316,7 @@ pub fn lint_fix(
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Enabled,
@@ -366,6 +381,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let indexer: Indexer = tokens.as_slice().into();
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
let mut diagnostics = check_path(
@@ -375,6 +391,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Enabled,
@@ -395,6 +412,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let indexer: Indexer = tokens.as_slice().into();
let directives =
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
let diagnostics = check_path(
@@ -404,6 +422,7 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
tokens,
&locator,
&stylist,
&indexer,
&directives,
settings,
flags::Autofix::Enabled,

Some files were not shown because too many files have changed in this diff.