Compare commits

...

30 Commits

Author SHA1 Message Date
Charlie Marsh
23b622943e Bump version to 0.0.230 2023-01-22 13:58:41 -05:00
Harutaka Kawamura
a7ce8621a9 Update RustPython to fix Dict.keys type (#2086)
This PR upgrades RustPython to fix the type of `Dict.keys` to `Vec<Option<Expr>>` (see https://github.com/RustPython/RustPython/pull/4449 for why this change was needed) and unblock #1884.
2023-01-22 13:24:00 -05:00
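For context, CPython's own `ast` module models `**` dict unpacking the same way: a spread entry has no key, so the `keys` list holds `None` at that position, which is exactly what the `Vec<Option<Expr>>` element type captures. A minimal Python illustration:

```python
import ast

# A dict literal mixing a `**` spread with a regular key/value pair.
tree = ast.parse("{**defaults, 'k': 1}", mode="eval")
d = tree.body  # the ast.Dict node

# The spread entry has no key, so keys[0] is None; values[0] is the
# expression being unpacked (`defaults`).
print(d.keys[0] is None)                    # True
print(isinstance(d.keys[1], ast.Constant))  # True
```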
Shannon Rothe
36fb8f7a63 flake8_to_ruff: support isort options (#2082)
See: https://github.com/charliermarsh/ruff/issues/1749.
2023-01-22 13:18:01 -05:00
alm
e11cf1bf65 Update linters PyPI links to latest version (#2062) 2023-01-22 13:10:22 -05:00
Maksudul Haque
75e16c0ce5 [pep8-naming][N806] Don't mark TypeVar & NewType Assignment as Errors (#2085)
closes https://github.com/charliermarsh/ruff/issues/1985
2023-01-22 12:54:13 -05:00
Martin Fischer
1beedf20f9 fix: add_rule.py for --linter ruff 2023-01-22 11:51:29 -05:00
Martin Fischer
4758ee6ac4 refactor: Generate Linter -> RuleSelector mapping via macro
To enable ruff_dev to automatically generate the rule Markdown tables in
the README, the ruff library contained the following function:

    Linter::codes() -> Vec<RuleSelector>

which was slightly changed to `fn prefixes(&self) -> Prefixes` in
9dc66b5a65 to enable ruff_dev to split
up the Markdown tables for linters that have multiple prefixes
(pycodestyle has E & W, Pylint has PLC, PLE, PLR & PLW).

The definition of this method was, however, largely redundant with the
#[prefix] macro attributes in the Linter enum, which are used to
derive the Linter::parse_code function, used by the --explain command.

This commit removes the redundant Linter::prefixes by introducing a
same-named method with a different signature to the RuleNamespace trait:

     fn prefixes(&self) -> &'static [&'static str];

As well as implementing IntoIterator<Rule> for &Linter. We extend the
existing RuleNamespace proc macro to automatically derive both
implementations from the Linter enum definition.

To support the previously mentioned Markdown table splitting we
introduce a very simple hand-written method to the Linter impl:

    fn categories(&self) -> Option<&'static [LinterCategory]>;
2023-01-22 11:51:29 -05:00
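The longest-prefix matching that `Linter::parse_code` performs can be illustrated with a small Python sketch; the hand-written table here is hypothetical, standing in for what the proc macro derives from the `#[prefix]` attributes:

```python
# Hypothetical stand-in for the mapping the derive macro generates
# from the #[prefix] attributes on the Linter enum.
PREFIXES = {
    "pycodestyle": ("E", "W"),
    "pylint": ("PLC", "PLE", "PLR", "PLW"),
}

def parse_code(code):
    """Split a rule code into (linter, suffix) by longest matching prefix."""
    best = None
    for linter, prefixes in PREFIXES.items():
        for prefix in prefixes:
            if code.startswith(prefix) and (best is None or len(prefix) > len(best[1])):
                best = (linter, prefix)
    if best is None:
        return None
    linter, prefix = best
    return linter, code[len(prefix):]

print(parse_code("PLC0414"))  # ('pylint', '0414')
print(parse_code("W605"))     # ('pycodestyle', '605')
```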
Martin Fischer
c3dd1b0e3c refactor: Rename ParseCode trait to RuleNamespace
ParseCode was a fitting name while the trait contained only a single
parse_code method. Now that we want to introduce an additional
`prefixes` method, RuleNamespace is the more fitting name.
2023-01-22 11:51:29 -05:00
Martin Fischer
87443e6301 Support prefix "PL" to select all of Pylint 2023-01-22 11:51:29 -05:00
Martin Fischer
16d2ceba79 refactor: Avoid unnecessary Map indexing 2023-01-22 11:51:29 -05:00
Martin Fischer
aedee7294e refactor: Stop using Ident as BTreeMap key
Using Ident as the key type is inconvenient since creating an Ident
requires the specification of a Span, which isn't actually used by
the Hash implementation of Ident.
2023-01-22 11:51:29 -05:00
Martin Fischer
4f12b31dc8 refactor: Drop RuleSelector::codes in favor of IntoIterator impl 2023-01-22 11:51:29 -05:00
Martin Fischer
9f14e7c830 refactor: Update some variable/field/method names 2023-01-22 11:51:29 -05:00
Martin Fischer
4cc492a17a refactor: Encapsulate PerFileIgnore impl details 2023-01-22 11:51:29 -05:00
Martin Fischer
028436af81 refactor: Group Configuration struct fields 2023-01-22 11:51:29 -05:00
Martin Fischer
da4994aa73 refactor: impl From<&Configuration> for RuleTable 2023-01-22 11:51:29 -05:00
Martin Fischer
4dcb491bec refactor: Avoid some unnecessary allocations 2023-01-22 11:51:29 -05:00
Simon Brugman
6fc6bf0648 feat: enable autofix for TRY004 (#2084)
The functionality was already implemented; only the trait needed to be added.
2023-01-22 07:18:56 -05:00
Charlie Marsh
c1cb4796f8 Support decorators in source code generator (#2081) 2023-01-21 23:26:32 -05:00
Charlie Marsh
d81620397e Improve generator precedence operations (#2080) 2023-01-21 23:21:37 -05:00
Charlie Marsh
84b1490d03 Base INP check on package inference (#2079)
If a file doesn't have a `package`, then it must be in a directory that both lacks an `__init__.py` and _isn't_ marked as a namespace package.

Closes #2075.
2023-01-21 19:49:56 -05:00
Simon Brugman
28f05aa6e7 feat: update scripts to new rules structure (#2078)
- optional `prefix` argument for `add_plugin.py`
- rules directory instead of `rules.rs`
- pathlib syntax
- fix test case where code was added instead of name

Example:
```
python scripts/add_plugin.py --url https://pypi.org/project/example/1.0.0/ example --prefix EXA
python scripts/add_rule.py --name SecondRule --code EXA002 --linter example
python scripts/add_rule.py --name FirstRule --code EXA001 --linter example
python scripts/add_rule.py --name ThirdRule --code EXA003 --linter example
```

Note that it breaks compatibility with 'old style' plugins (generation works fine, but namespaces need to be changed):
```
python scripts/add_rule.py --name DoTheThing --code PLC999 --linter pylint
```
2023-01-21 19:19:58 -05:00
Charlie Marsh
325faa8e18 Include package path in cache key (#2077)
Closes #2075.
2023-01-21 18:33:35 -05:00
Charlie Marsh
6bfa1804de Remove remaining ropey usages (#2076) 2023-01-21 18:24:10 -05:00
Charlie Marsh
4dcf284a04 Index source code upfront to power (row, column) lookups (#1990)
## Summary

The problem: given a (row, column) number (e.g., for a token in the AST), we need to be able to map it to a precise byte index in the source code. A while ago, we moved to `ropey` for this, since it was faster in practice (mostly, I think, because it's able to defer indexing). However, at some threshold of accesses, it becomes faster to index the string in advance, as we're doing here.
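The upfront-indexing strategy can be sketched in a few lines (a Python illustration of the idea, not the actual implementation): compute the offset of each line start once per file, after which every (row, column) lookup is constant-time arithmetic.

```python
def line_starts(contents):
    """Offsets at which each line begins, computed once per file."""
    starts = [0]
    for i, ch in enumerate(contents):
        if ch == "\n":
            starts.append(i + 1)
    return starts

def offset(starts, row, column):
    """Map a 1-based row and 0-based column to an absolute offset."""
    return starts[row - 1] + column

source = "import os\nprint(os.name)\n"
starts = line_starts(source)
print(source[offset(starts, 2, 6)])  # 'o', the start of `os` on line 2
```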

## Benchmark

It looks like this is ~3.6% slower for the default rule set, but ~9.3% faster for `--select ALL`.

**I suspect there's a strategy that would be strictly faster in both cases**, based on deferring even more computation (right now, we lazily compute these offsets, but we do it for the entire file at once, even if we only need some slice at the top), or caching the `ropey` lookups in some way.

Before:

![main](https://user-images.githubusercontent.com/1309177/213883581-8f73c61d-2979-4171-88a6-a88d7ff07e40.png)

After:

![48 all](https://user-images.githubusercontent.com/1309177/213883586-3e049680-9ef9-49e2-8f04-fd6ff402eba7.png)

## Alternatives

I tried tweaking the `Vec::with_capacity` hints, and even trying `Vec::with_capacity(str_indices::lines_crlf::count_breaks(contents))` to do a quick scan of the number of lines, but that turned out to be slower.
2023-01-21 17:56:11 -05:00
Zeddicus414
08fc9b8095 ICN001 check from imports that have no alias (#2072)
Add tests.

Ensure that these cases are caught by ICN001:
```python
from xml.dom import minidom
from xml.dom.minidom import parseString
```

with config:
```toml
[tool.ruff.flake8-import-conventions.extend-aliases]
"dask.dataframe" = "dd"
"xml.dom.minidom" = "md"
"xml.dom.minidom.parseString" = "pstr"
```
2023-01-21 17:47:08 -05:00
Cosmo
39aed6f11d Update link to Pylint parity tracking issue (#2074) 2023-01-21 17:46:55 -05:00
Zeddicus414
5726118cfe ICN001 import-alias-is-not-conventional should check "from" imports (#2070)
Closes https://github.com/charliermarsh/ruff/issues/2047.
2023-01-21 15:43:51 -05:00
Simon Brugman
67de8ac85e feat: implementation for TRY004 (#2066)
See: #2056.
2023-01-21 14:58:59 -05:00
figsoda
b1bda0de82 fix: pin rustpython to the same revision to fix cargo vendor (#2069)
I was trying to update ruff in nixpkgs and ran into this error when it was running `cargo vendor`:
```
error: failed to sync

Caused by:
  found duplicate version of package `rustpython-ast v0.2.0` vendored from two sources:

        source 1: https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c#62aa942b
        source 2: https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52
```
2023-01-21 14:40:00 -05:00
138 changed files with 2410 additions and 875 deletions


@@ -96,7 +96,9 @@ jobs:
- uses: Swatinem/rust-cache@v1
- run: ./scripts/add_rule.py --name DoTheThing --code PLC999 --linter pylint
- run: cargo check
- run: ./scripts/add_plugin.py test --url https://pypi.org/project/-test/0.1.0/
- run: |
./scripts/add_plugin.py test --url https://pypi.org/project/-test/0.1.0/ --prefix TST
./scripts/add_rule.py --name FirstRule --code TST001 --linter test
- run: cargo check
# TODO(charlie): Re-enable the `wasm-pack` tests.


@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.229
rev: v0.0.230
hooks:
- id: ruff

Cargo.lock generated

@@ -719,7 +719,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.229"
version = "0.0.230"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1826,19 +1826,9 @@ dependencies = [
"winapi",
]
[[package]]
name = "ropey"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbd22239fafefc42138ca5da064f3c17726a80d2379d817a3521240e78dd0064"
dependencies = [
"smallvec",
"str_indices",
]
[[package]]
name = "ruff"
version = "0.0.229"
version = "0.0.230"
dependencies = [
"anyhow",
"bitflags",
@@ -1868,12 +1858,11 @@ dependencies = [
"once_cell",
"path-absolutize",
"regex",
"ropey",
"ruff_macros",
"rustc-hash",
"rustpython-ast 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c)",
"rustpython-common 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c)",
"rustpython-parser 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c)",
"rustpython-ast",
"rustpython-common",
"rustpython-parser",
"schemars",
"semver",
"serde",
@@ -1893,7 +1882,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.229"
version = "0.0.230"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1930,7 +1919,7 @@ dependencies = [
[[package]]
name = "ruff_dev"
version = "0.0.229"
version = "0.0.230"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1939,9 +1928,9 @@ dependencies = [
"once_cell",
"ruff",
"ruff_cli",
"rustpython-ast 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa)",
"rustpython-common 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa)",
"rustpython-parser 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa)",
"rustpython-ast",
"rustpython-common",
"rustpython-parser",
"schemars",
"serde_json",
"strum",
@@ -1951,7 +1940,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
version = "0.0.229"
version = "0.0.230"
dependencies = [
"once_cell",
"proc-macro2",
@@ -2005,52 +1994,17 @@ dependencies = [
[[package]]
name = "rustpython-ast"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c#62aa942bf506ea3d41ed0503b947b84141fdaa3c"
source = "git+https://github.com/RustPython/RustPython.git?rev=4f38cb68e4a97aeea9eb19673803a0bd5f655383#4f38cb68e4a97aeea9eb19673803a0bd5f655383"
dependencies = [
"num-bigint",
"rustpython-common 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c)",
"rustpython-compiler-core 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c)",
]
[[package]]
name = "rustpython-ast"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
dependencies = [
"num-bigint",
"rustpython-common 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa)",
"rustpython-compiler-core 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa)",
"rustpython-common",
"rustpython-compiler-core",
]
[[package]]
name = "rustpython-common"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c#62aa942bf506ea3d41ed0503b947b84141fdaa3c"
dependencies = [
"ascii",
"bitflags",
"cfg-if",
"hexf-parse",
"itertools",
"lexical-parse-float",
"libc",
"lock_api",
"num-bigint",
"num-complex",
"num-traits",
"once_cell",
"radium",
"rand",
"siphasher",
"unic-ucd-category",
"volatile",
"widestring",
]
[[package]]
name = "rustpython-common"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
source = "git+https://github.com/RustPython/RustPython.git?rev=4f38cb68e4a97aeea9eb19673803a0bd5f655383#4f38cb68e4a97aeea9eb19673803a0bd5f655383"
dependencies = [
"ascii",
"bitflags",
@@ -2075,24 +2029,7 @@ dependencies = [
[[package]]
name = "rustpython-compiler-core"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c#62aa942bf506ea3d41ed0503b947b84141fdaa3c"
dependencies = [
"bincode",
"bitflags",
"bstr 0.2.17",
"itertools",
"lz4_flex",
"num-bigint",
"num-complex",
"num_enum",
"serde",
"thiserror",
]
[[package]]
name = "rustpython-compiler-core"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
source = "git+https://github.com/RustPython/RustPython.git?rev=4f38cb68e4a97aeea9eb19673803a0bd5f655383#4f38cb68e4a97aeea9eb19673803a0bd5f655383"
dependencies = [
"bincode",
"bitflags",
@@ -2109,7 +2046,7 @@ dependencies = [
[[package]]
name = "rustpython-parser"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c#62aa942bf506ea3d41ed0503b947b84141fdaa3c"
source = "git+https://github.com/RustPython/RustPython.git?rev=4f38cb68e4a97aeea9eb19673803a0bd5f655383#4f38cb68e4a97aeea9eb19673803a0bd5f655383"
dependencies = [
"ahash",
"anyhow",
@@ -2122,33 +2059,8 @@ dependencies = [
"phf 0.10.1",
"phf_codegen 0.10.0",
"rustc-hash",
"rustpython-ast 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c)",
"rustpython-compiler-core 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=62aa942bf506ea3d41ed0503b947b84141fdaa3c)",
"thiserror",
"tiny-keccak",
"unic-emoji-char",
"unic-ucd-ident",
"unicode_names2",
]
[[package]]
name = "rustpython-parser"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
dependencies = [
"ahash",
"anyhow",
"itertools",
"lalrpop",
"lalrpop-util",
"log",
"num-bigint",
"num-traits",
"phf 0.10.1",
"phf_codegen 0.10.0",
"rustc-hash",
"rustpython-ast 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa)",
"rustpython-compiler-core 0.2.0 (git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa)",
"rustpython-ast",
"rustpython-compiler-core",
"thiserror",
"tiny-keccak",
"unic-emoji-char",
@@ -2333,12 +2245,6 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "str_indices"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f026164926842ec52deb1938fae44f83dfdb82d0a5b0270c5bd5935ab74d6dd"
[[package]]
name = "string_cache"
version = "0.8.4"


@@ -8,7 +8,7 @@ default-members = [".", "ruff_cli"]
[package]
name = "ruff"
version = "0.0.229"
version = "0.0.230"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -46,12 +46,11 @@ num-traits = "0.2.15"
once_cell = { version = "1.16.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix_paths_on_wasm"] }
regex = { version = "1.6.0" }
ropey = { version = "1.5.0", features = ["cr_lines", "simd"], default-features = false }
ruff_macros = { version = "0.0.229", path = "ruff_macros" }
ruff_macros = { version = "0.0.230", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "62aa942bf506ea3d41ed0503b947b84141fdaa3c" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "62aa942bf506ea3d41ed0503b947b84141fdaa3c" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "62aa942bf506ea3d41ed0503b947b84141fdaa3c" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
schemars = { version = "0.8.11" }
semver = { version = "1.0.16" }
serde = { version = "1.0.147", features = ["derive"] }
@@ -98,7 +97,3 @@ opt-level = 3
# https://github.com/bytecodealliance/wasm-tools/blob/b5c3d98e40590512a3b12470ef358d5c7b983b15/crates/wasmparser/src/limits.rs#L29
[profile.dev.package.rustpython-parser]
opt-level = 1
[[bench]]
name = "source_code_locator"
harness = false


@@ -134,7 +134,7 @@ developer of [Zulip](https://github.com/zulip/zulip):
1. [eradicate (ERA)](#eradicate-era)
1. [pandas-vet (PD)](#pandas-vet-pd)
1. [pygrep-hooks (PGH)](#pygrep-hooks-pgh)
1. [Pylint (PLC, PLE, PLR, PLW)](#pylint-plc-ple-plr-plw)
1. [Pylint (PL)](#pylint-pl)
1. [flake8-pie (PIE)](#flake8-pie-pie)
1. [flake8-commas (COM)](#flake8-commas-com)
1. [flake8-no-pep420 (INP)](#flake8-no-pep420-inp)
@@ -201,7 +201,7 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.229'
rev: 'v0.0.230'
hooks:
- id: ruff
```
@@ -544,7 +544,7 @@ The 🛠 emoji indicates that a rule is automatically fixable by the `--fix` com
<!-- Begin auto-generated sections. -->
### Pyflakes (F)
For more, see [Pyflakes](https://pypi.org/project/pyflakes/2.5.0/) on PyPI.
For more, see [Pyflakes](https://pypi.org/project/pyflakes/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -594,7 +594,7 @@ For more, see [Pyflakes](https://pypi.org/project/pyflakes/2.5.0/) on PyPI.
### pycodestyle (E, W)
For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI.
For more, see [pycodestyle](https://pypi.org/project/pycodestyle/) on PyPI.
#### Error (E)
| Code | Name | Message | Fix |
@@ -625,7 +625,7 @@ For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI
### mccabe (C90)
For more, see [mccabe](https://pypi.org/project/mccabe/0.7.0/) on PyPI.
For more, see [mccabe](https://pypi.org/project/mccabe/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -633,7 +633,7 @@ For more, see [mccabe](https://pypi.org/project/mccabe/0.7.0/) on PyPI.
### isort (I)
For more, see [isort](https://pypi.org/project/isort/5.10.1/) on PyPI.
For more, see [isort](https://pypi.org/project/isort/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -642,7 +642,7 @@ For more, see [isort](https://pypi.org/project/isort/5.10.1/) on PyPI.
### pydocstyle (D)
For more, see [pydocstyle](https://pypi.org/project/pydocstyle/6.1.1/) on PyPI.
For more, see [pydocstyle](https://pypi.org/project/pydocstyle/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -695,7 +695,7 @@ For more, see [pydocstyle](https://pypi.org/project/pydocstyle/6.1.1/) on PyPI.
### pyupgrade (UP)
For more, see [pyupgrade](https://pypi.org/project/pyupgrade/3.2.0/) on PyPI.
For more, see [pyupgrade](https://pypi.org/project/pyupgrade/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -735,7 +735,7 @@ For more, see [pyupgrade](https://pypi.org/project/pyupgrade/3.2.0/) on PyPI.
### pep8-naming (N)
For more, see [pep8-naming](https://pypi.org/project/pep8-naming/0.13.2/) on PyPI.
For more, see [pep8-naming](https://pypi.org/project/pep8-naming/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -757,7 +757,7 @@ For more, see [pep8-naming](https://pypi.org/project/pep8-naming/0.13.2/) on PyP
### flake8-2020 (YTT)
For more, see [flake8-2020](https://pypi.org/project/flake8-2020/1.7.0/) on PyPI.
For more, see [flake8-2020](https://pypi.org/project/flake8-2020/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -774,7 +774,7 @@ For more, see [flake8-2020](https://pypi.org/project/flake8-2020/1.7.0/) on PyPI
### flake8-annotations (ANN)
For more, see [flake8-annotations](https://pypi.org/project/flake8-annotations/2.9.1/) on PyPI.
For more, see [flake8-annotations](https://pypi.org/project/flake8-annotations/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -792,7 +792,7 @@ For more, see [flake8-annotations](https://pypi.org/project/flake8-annotations/2
### flake8-bandit (S)
For more, see [flake8-bandit](https://pypi.org/project/flake8-bandit/4.1.1/) on PyPI.
For more, see [flake8-bandit](https://pypi.org/project/flake8-bandit/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -814,7 +814,7 @@ For more, see [flake8-bandit](https://pypi.org/project/flake8-bandit/4.1.1/) on
### flake8-blind-except (BLE)
For more, see [flake8-blind-except](https://pypi.org/project/flake8-blind-except/0.2.1/) on PyPI.
For more, see [flake8-blind-except](https://pypi.org/project/flake8-blind-except/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -822,7 +822,7 @@ For more, see [flake8-blind-except](https://pypi.org/project/flake8-blind-except
### flake8-boolean-trap (FBT)
For more, see [flake8-boolean-trap](https://pypi.org/project/flake8-boolean-trap/0.1.0/) on PyPI.
For more, see [flake8-boolean-trap](https://pypi.org/project/flake8-boolean-trap/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -832,7 +832,7 @@ For more, see [flake8-boolean-trap](https://pypi.org/project/flake8-boolean-trap
### flake8-bugbear (B)
For more, see [flake8-bugbear](https://pypi.org/project/flake8-bugbear/22.10.27/) on PyPI.
For more, see [flake8-bugbear](https://pypi.org/project/flake8-bugbear/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -867,7 +867,7 @@ For more, see [flake8-bugbear](https://pypi.org/project/flake8-bugbear/22.10.27/
### flake8-builtins (A)
For more, see [flake8-builtins](https://pypi.org/project/flake8-builtins/2.0.1/) on PyPI.
For more, see [flake8-builtins](https://pypi.org/project/flake8-builtins/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -877,7 +877,7 @@ For more, see [flake8-builtins](https://pypi.org/project/flake8-builtins/2.0.1/)
### flake8-comprehensions (C4)
For more, see [flake8-comprehensions](https://pypi.org/project/flake8-comprehensions/3.10.1/) on PyPI.
For more, see [flake8-comprehensions](https://pypi.org/project/flake8-comprehensions/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -900,7 +900,7 @@ For more, see [flake8-comprehensions](https://pypi.org/project/flake8-comprehens
### flake8-debugger (T10)
For more, see [flake8-debugger](https://pypi.org/project/flake8-debugger/4.1.2/) on PyPI.
For more, see [flake8-debugger](https://pypi.org/project/flake8-debugger/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -908,7 +908,7 @@ For more, see [flake8-debugger](https://pypi.org/project/flake8-debugger/4.1.2/)
### flake8-errmsg (EM)
For more, see [flake8-errmsg](https://pypi.org/project/flake8-errmsg/0.4.0/) on PyPI.
For more, see [flake8-errmsg](https://pypi.org/project/flake8-errmsg/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -918,7 +918,7 @@ For more, see [flake8-errmsg](https://pypi.org/project/flake8-errmsg/0.4.0/) on
### flake8-implicit-str-concat (ISC)
For more, see [flake8-implicit-str-concat](https://pypi.org/project/flake8-implicit-str-concat/0.3.0/) on PyPI.
For more, see [flake8-implicit-str-concat](https://pypi.org/project/flake8-implicit-str-concat/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -936,7 +936,7 @@ For more, see [flake8-import-conventions](https://github.com/joaopalmeiro/flake8
### flake8-print (T20)
For more, see [flake8-print](https://pypi.org/project/flake8-print/5.0.0/) on PyPI.
For more, see [flake8-print](https://pypi.org/project/flake8-print/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -945,7 +945,7 @@ For more, see [flake8-print](https://pypi.org/project/flake8-print/5.0.0/) on Py
### flake8-pytest-style (PT)
For more, see [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style/1.6.0/) on PyPI.
For more, see [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -977,7 +977,7 @@ For more, see [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style
### flake8-quotes (Q)
For more, see [flake8-quotes](https://pypi.org/project/flake8-quotes/3.3.1/) on PyPI.
For more, see [flake8-quotes](https://pypi.org/project/flake8-quotes/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -988,7 +988,7 @@ For more, see [flake8-quotes](https://pypi.org/project/flake8-quotes/3.3.1/) on
### flake8-return (RET)
For more, see [flake8-return](https://pypi.org/project/flake8-return/1.2.0/) on PyPI.
For more, see [flake8-return](https://pypi.org/project/flake8-return/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1003,7 +1003,7 @@ For more, see [flake8-return](https://pypi.org/project/flake8-return/1.2.0/) on
### flake8-simplify (SIM)
For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/) on PyPI.
For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1035,7 +1035,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
### flake8-tidy-imports (TID)
For more, see [flake8-tidy-imports](https://pypi.org/project/flake8-tidy-imports/4.8.0/) on PyPI.
For more, see [flake8-tidy-imports](https://pypi.org/project/flake8-tidy-imports/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1044,7 +1044,7 @@ For more, see [flake8-tidy-imports](https://pypi.org/project/flake8-tidy-imports
### flake8-unused-arguments (ARG)
For more, see [flake8-unused-arguments](https://pypi.org/project/flake8-unused-arguments/0.0.12/) on PyPI.
For more, see [flake8-unused-arguments](https://pypi.org/project/flake8-unused-arguments/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1056,7 +1056,7 @@ For more, see [flake8-unused-arguments](https://pypi.org/project/flake8-unused-a
### flake8-datetimez (DTZ)
For more, see [flake8-datetimez](https://pypi.org/project/flake8-datetimez/20.10.0/) on PyPI.
For more, see [flake8-datetimez](https://pypi.org/project/flake8-datetimez/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1072,7 +1072,7 @@ For more, see [flake8-datetimez](https://pypi.org/project/flake8-datetimez/20.10
### eradicate (ERA)
For more, see [eradicate](https://pypi.org/project/eradicate/2.1.0/) on PyPI.
For more, see [eradicate](https://pypi.org/project/eradicate/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1080,7 +1080,7 @@ For more, see [eradicate](https://pypi.org/project/eradicate/2.1.0/) on PyPI.
### pandas-vet (PD)
For more, see [pandas-vet](https://pypi.org/project/pandas-vet/0.2.3/) on PyPI.
For more, see [pandas-vet](https://pypi.org/project/pandas-vet/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1108,9 +1108,9 @@ For more, see [pygrep-hooks](https://github.com/pre-commit/pygrep-hooks) on GitH
| PGH003 | blanket-type-ignore | Use specific rule codes when ignoring type issues | |
| PGH004 | blanket-noqa | Use specific rule codes when using `noqa` | |
### Pylint (PLC, PLE, PLR, PLW)
### Pylint (PL)
For more, see [Pylint](https://pypi.org/project/pylint/2.15.7/) on PyPI.
For more, see [Pylint](https://pypi.org/project/pylint/) on PyPI.
#### Convention (PLC)
| Code | Name | Message | Fix |
@@ -1143,7 +1143,7 @@ For more, see [Pylint](https://pypi.org/project/pylint/2.15.7/) on PyPI.
### flake8-pie (PIE)
For more, see [flake8-pie](https://pypi.org/project/flake8-pie/0.16.0/) on PyPI.
For more, see [flake8-pie](https://pypi.org/project/flake8-pie/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1154,7 +1154,7 @@ For more, see [flake8-pie](https://pypi.org/project/flake8-pie/0.16.0/) on PyPI.
### flake8-commas (COM)
For more, see [flake8-commas](https://pypi.org/project/flake8-commas/2.1.0/) on PyPI.
For more, see [flake8-commas](https://pypi.org/project/flake8-commas/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1164,7 +1164,7 @@ For more, see [flake8-commas](https://pypi.org/project/flake8-commas/2.1.0/) on
### flake8-no-pep420 (INP)
For more, see [flake8-no-pep420](https://pypi.org/project/flake8-no-pep420/2.3.0/) on PyPI.
For more, see [flake8-no-pep420](https://pypi.org/project/flake8-no-pep420/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1172,7 +1172,7 @@ For more, see [flake8-no-pep420](https://pypi.org/project/flake8-no-pep420/2.3.0
### flake8-executable (EXE)
For more, see [flake8-executable](https://pypi.org/project/flake8-executable/2.1.1/) on PyPI.
For more, see [flake8-executable](https://pypi.org/project/flake8-executable/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1182,7 +1182,7 @@ For more, see [flake8-executable](https://pypi.org/project/flake8-executable/2.1
### flake8-type-checking (TYP)
For more, see [flake8-type-checking](https://pypi.org/project/flake8-type-checking/2.3.0/) on PyPI.
For more, see [flake8-type-checking](https://pypi.org/project/flake8-type-checking/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
@@ -1194,6 +1194,7 @@ For more, see [tryceratops](https://pypi.org/project/tryceratops/1.1.0/) on PyPI
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| TRY004 | prefer-type-error | Prefer `TypeError` exception for invalid type | 🛠 |
| TRY300 | try-consider-else | Consider `else` block | |
### Ruff-specific rules (RUF)
@@ -1538,7 +1539,7 @@ Like Flake8, Pylint supports plugins (called "checkers"), while Ruff implements
Unlike Pylint, Ruff is capable of automatically fixing its own lint violations.
Pylint parity is being tracked in [#689](https://github.com/charliermarsh/ruff/issues/689).
Pylint parity is being tracked in [#970](https://github.com/charliermarsh/ruff/issues/970).
### Which tools does Ruff replace?


@@ -1,18 +0,0 @@
use std::fs;
use std::path::Path;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use ropey::Rope;
fn criterion_benchmark(c: &mut Criterion) {
let contents = fs::read_to_string(Path::new("resources/test/fixtures/D.py")).unwrap();
c.bench_function("rope", |b| {
b.iter(|| {
let rope = Rope::from_str(black_box(&contents));
rope.line_to_char(black_box(4));
});
});
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);


@@ -12,7 +12,7 @@ const RULES_SUBMODULE_DOC_PREFIX: &str = "//! Rules from ";
/// The `src/rules/*/mod.rs` files are expected to have a first line such as the
/// following:
///
/// //! Rules from [Pyflakes](https://pypi.org/project/pyflakes/2.5.0/).
/// //! Rules from [Pyflakes](https://pypi.org/project/pyflakes/).
///
/// This function extracts the link label and url from these comments and
/// generates the `name` and `url` functions for the `Linter` enum

View File

@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
version = "0.0.229"
version = "0.0.230"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.229"
version = "0.0.230"
dependencies = [
"anyhow",
"bincode",

View File

@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.229"
version = "0.0.230"
edition = "2021"
[dependencies]

View File

@@ -18,7 +18,7 @@ use std::path::PathBuf;
use anyhow::Result;
use clap::Parser;
use configparser::ini::Ini;
use ruff::flake8_to_ruff;
use ruff::flake8_to_ruff::{self, ExternalConfig};
#[derive(Parser)]
#[command(
@@ -48,14 +48,18 @@ fn main() -> Result<()> {
let config = ini.load(cli.file).map_err(|msg| anyhow::anyhow!(msg))?;
// Read the pyproject.toml file.
let black = cli
.pyproject
.map(flake8_to_ruff::parse_black_options)
.transpose()?
.flatten();
let pyproject = cli.pyproject.map(flake8_to_ruff::parse).transpose()?;
let external_config = pyproject
.as_ref()
.and_then(|pyproject| pyproject.tool.as_ref())
.map(|tool| ExternalConfig {
black: tool.black.as_ref(),
isort: tool.isort.as_ref(),
})
.unwrap_or_default();
// Create Ruff's pyproject.toml section.
let pyproject = flake8_to_ruff::convert(&config, black.as_ref(), cli.plugin)?;
let pyproject = flake8_to_ruff::convert(&config, &external_config, cli.plugin)?;
println!("{}", toml_edit::easy::to_string_pretty(&pyproject)?);
Ok(())
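The hunk above replaces the single black-options lookup with an `ExternalConfig` that gathers both `[tool.black]` and `[tool.isort]` from the parsed pyproject.toml. A minimal Python sketch of that gathering step (function and field names here are illustrative, not flake8_to_ruff's API):

```python
def external_config(pyproject: dict) -> dict:
    """Collect the [tool.black] and [tool.isort] tables from a parsed
    pyproject.toml, defaulting to None when a table is absent."""
    tool = pyproject.get("tool") or {}
    return {"black": tool.get("black"), "isort": tool.get("isort")}
```

Either table may be missing; the converter only consults the entries that are present.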

View File

@@ -7,7 +7,7 @@ build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.229"
version = "0.0.230"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },

View File

@@ -0,0 +1,21 @@
# Test absolute imports
# Violation cases
import xml.dom.minidom
import xml.dom.minidom as wrong
from xml.dom import minidom as wrong
from xml.dom import minidom
from xml.dom.minidom import parseString as wrong # Ensure ICN001 throws on function import.
from xml.dom.minidom import parseString
from xml.dom.minidom import parse, parseString
from xml.dom.minidom import parse as ps, parseString as wrong
# No ICN001 violations
import xml.dom.minidom as md
from xml.dom import minidom as md
from xml.dom.minidom import parseString as pstr
from xml.dom.minidom import parse, parseString as pstr
from xml.dom.minidom import parse as ps, parseString as pstr
# Test relative imports
from ..xml.dom import minidom as okay # Ensure config "xml.dom.minidom" doesn't catch relative imports
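The fixture above exercises an import-conventions check (ICN001): a module with a configured required alias must be imported under exactly that alias. A hedged sketch of the core predicate — the convention map is an assumed configuration for illustration, not ruff's defaults:

```python
# Assumed configuration: "xml.dom.minidom" must be aliased as "md".
CONVENTIONS = {"xml.dom.minidom": "md"}

def violates_icn001(module: str, alias) -> bool:
    """True when `module` has a required alias and `alias` doesn't match.
    An alias of None models a plain `import module` / `from m import name`."""
    expected = CONVENTIONS.get(module)
    if expected is None:
        return False  # no convention configured for this module
    return alias != expected
```

Relative imports (the last fixture line) never hit the map, since their resolved module name differs from the configured absolute one.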

View File

@@ -1 +0,0 @@
from long_module_name import member_one, member_two, member_three, member_four, member_five

View File

@@ -1,5 +1,7 @@
import collections
from collections import namedtuple
from typing import TypeVar
from typing import NewType
GLOBAL: str = "foo"
@@ -11,5 +13,9 @@ def f():
Camel = 0
CONSTANT = 0
_ = 0
MyObj1 = collections.namedtuple("MyObj1", ["a", "b"])
MyObj2 = namedtuple("MyObj12", ["a", "b"])
T = TypeVar("T")
UserId = NewType('UserId', int)
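The fixture change above reflects the N806 fix: `TypeVar` and `NewType` assignments conventionally carry non-lowercase names, so the lowercase-variable rule must exempt them just as it already exempts `namedtuple`. A rough AST-level sketch of that exemption (the exemption set and helper are assumptions for illustration):

```python
import ast

# Assumed exemption set; pep8-naming's real list may differ.
EXEMPT_CALLS = {"TypeVar", "NewType", "namedtuple"}

def n806_violations(func_source: str) -> list[str]:
    """Names of non-lowercase assignments in a function body, minus exemptions."""
    func = ast.parse(func_source).body[0]
    flagged = []
    for stmt in func.body:
        if not isinstance(stmt, ast.Assign):
            continue
        call_name = ""
        if isinstance(stmt.value, ast.Call):
            f = stmt.value.func
            # Handle both `TypeVar(...)` and `typing.TypeVar(...)`.
            call_name = f.attr if isinstance(f, ast.Attribute) else getattr(f, "id", "")
        for target in stmt.targets:
            if isinstance(target, ast.Name) and not target.id.islower():
                if call_name not in EXEMPT_CALLS:
                    flagged.append(target.id)
    return flagged
```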

View File

@@ -31,3 +31,7 @@ MyType10 = TypedDict("MyType10", {"key": Literal["value"]})
# using namespace TypedDict
MyType11 = typing.TypedDict("MyType11", {"key": int})
# unpacking
c = {"c": float}
MyType12 = TypedDict("MyType1", {"a": int, "b": str, **c})
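The new unpacking case above is exactly why the RustPython upgrade changed `Dict.keys` to `Vec<Option<Expr>>`: a `**`-unpacked entry has no key. Python's own `ast` mirrors this with a `None` key, and the functional `TypedDict` form still works at runtime because the dict is merged before `TypedDict` ever sees it:

```python
import ast
from typing import TypedDict

# The **-unpacked entry appears as None in Dict.keys.
node = ast.parse('{"a": int, "b": str, **c}', mode="eval").body
unpacked = [k is None for k in node.keys]

# At runtime the merge happens first, so the functional form is unaffected.
c = {"c": float}
MyType12 = TypedDict("MyType12", {"a": int, "b": str, **c})
```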

View File

@@ -62,6 +62,12 @@ print(
"bar %(bar)s" % {"foo": x, "bar": y}
)
bar = {"bar": y}
print(
"foo %(foo)s "
"bar %(bar)s" % {"foo": x, **bar}
)
print("%s \N{snowman}" % (a,))
print("%(foo)s \N{snowman}" % {"foo": 1})
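The new fixture lines exercise `**` unpacking on the right-hand side of a `%`-format. Both the old and the modernized spelling produce the same string, which is what the rewrite rule has to preserve:

```python
x, y = 1, 2
bar = {"bar": y}

# The rewrite must cope with ** unpacking in the format dict; the merged
# mapping is identical either way.
old_style = "foo %(foo)s bar %(bar)s" % {"foo": x, **bar}
new_style = "foo {foo} bar {bar}".format(foo=x, **bar)
```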

View File

@@ -0,0 +1,296 @@
"""
Violation:
Prefer TypeError when relevant.
"""
def incorrect_basic(some_arg):
if isinstance(some_arg, int):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_multiple_type_check(some_arg):
if isinstance(some_arg, (int, str)):
pass
else:
raise Exception("...") # should be typeerror
class MyClass:
pass
def incorrect_with_issubclass(some_arg):
if issubclass(some_arg, MyClass):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_with_callable(some_arg):
if callable(some_arg):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_ArithmeticError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise ArithmeticError("...") # should be typeerror
def incorrect_AssertionError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise AssertionError("...") # should be typeerror
def incorrect_AttributeError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise AttributeError("...") # should be typeerror
def incorrect_BufferError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise BufferError # should be typeerror
def incorrect_EOFError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise EOFError("...") # should be typeerror
def incorrect_ImportError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise ImportError("...") # should be typeerror
def incorrect_LookupError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise LookupError("...") # should be typeerror
def incorrect_MemoryError(some_arg):
if isinstance(some_arg, int):
pass
else:
# should be typeerror
# not multiline is on purpose for fix
raise MemoryError(
"..."
)
def incorrect_NameError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise NameError("...") # should be typeerror
def incorrect_ReferenceError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise ReferenceError("...") # should be typeerror
def incorrect_RuntimeError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise RuntimeError("...") # should be typeerror
def incorrect_SyntaxError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise SyntaxError("...") # should be typeerror
def incorrect_SystemError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise SystemError("...") # should be typeerror
def incorrect_ValueError(some_arg):
if isinstance(some_arg, int):
pass
else:
raise ValueError("...") # should be typeerror
def incorrect_not_operator_isinstance(some_arg):
if not isinstance(some_arg):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_and_operator_isinstance(arg1, arg2):
if isinstance(some_arg) and isinstance(arg2):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_or_operator_isinstance(arg1, arg2):
if isinstance(some_arg) or isinstance(arg2):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_multiple_operators_isinstance(arg1, arg2, arg3):
if not isinstance(arg1) and isinstance(arg2) or isinstance(arg3):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_not_operator_callable(some_arg):
if not callable(some_arg):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_and_operator_callable(arg1, arg2):
if callable(some_arg) and callable(arg2):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_or_operator_callable(arg1, arg2):
if callable(some_arg) or callable(arg2):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_multiple_operators_callable(arg1, arg2, arg3):
if not callable(arg1) and callable(arg2) or callable(arg3):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_not_operator_issubclass(some_arg):
if not issubclass(some_arg):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_and_operator_issubclass(arg1, arg2):
if issubclass(some_arg) and issubclass(arg2):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_or_operator_issubclass(arg1, arg2):
if issubclass(some_arg) or issubclass(arg2):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_multiple_operators_issubclass(arg1, arg2, arg3):
if not issubclass(arg1) and issubclass(arg2) or issubclass(arg3):
pass
else:
raise Exception("...") # should be typeerror
def incorrect_multi_conditional(arg1, arg2):
if isinstance(arg1, int):
pass
elif isinstance(arg2, int):
raise Exception("...") # should be typeerror
class MyCustomTypeValidation(Exception):
pass
def correct_custom_exception(some_arg):
if isinstance(some_arg, int):
pass
else:
raise MyCustomTypeValidation("...") # that's correct, because it's not vanilla
def correct_complex_conditional(val):
if val is not None and (not isinstance(val, int) or val < 0):
raise ValueError(...) # fine if this is not a TypeError
def correct_multi_conditional(some_arg):
if some_arg == 3:
pass
elif isinstance(some_arg, int):
pass
else:
raise Exception("...") # fine if this is not a TypeError
def correct_should_ignore(some_arg):
if isinstance(some_arg, int):
pass
else:
raise TypeError("...")
def check_body(some_args):
if isinstance(some_args, int):
raise ValueError("...") # should be typeerror
def check_body_correct(some_args):
if isinstance(some_args, int):
raise TypeError("...") # correct
def multiple_elifs(some_args):
if not isinstance(some_args, int):
raise ValueError("...") # should be typeerror
elif some_args < 3:
raise ValueError("...") # this is ok
elif some_args > 10:
raise ValueError("...") # this is ok if we don't simplify
else:
pass
def multiple_ifs(some_args):
if not isinstance(some_args, int):
raise ValueError("...") # should be typeerror
else:
if some_args < 3:
raise ValueError("...") # this is ok
else:
if some_args > 10:
raise ValueError("...") # this is ok if we don't simplify
else:
pass
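The fixture above drives TRY004 (prefer-type-error). The compliant pattern it checks for can be summarized in one small function (an illustrative example, not part of the fixture): when an `isinstance`/`issubclass`/`callable` check fails, the raised exception should be `TypeError`, because the caller got the *type* wrong; a bad *value* of the right type remains a `ValueError`.

```python
def validate_timeout(seconds):
    # TRY004-compliant: failed isinstance check -> TypeError, not Exception.
    if not isinstance(seconds, (int, float)):
        raise TypeError(f"seconds must be a number, got {type(seconds).__name__}")
    if seconds < 0:
        # Correct type, bad value: ValueError is appropriate here.
        raise ValueError("seconds must be non-negative")
    return seconds
```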

View File

@@ -1554,6 +1554,7 @@
"PIE8",
"PIE80",
"PIE807",
"PL",
"PLC",
"PLC0",
"PLC04",
@@ -1748,6 +1749,9 @@
"TID251",
"TID252",
"TRY",
"TRY0",
"TRY00",
"TRY004",
"TRY3",
"TRY30",
"TRY300",

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.229"
version = "0.0.230"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"

View File

@@ -35,10 +35,19 @@ fn content_dir() -> &'static Path {
Path::new("content")
}
fn cache_key<P: AsRef<Path>>(path: P, settings: &Settings, autofix: flags::Autofix) -> u64 {
fn cache_key<P: AsRef<Path>>(
path: P,
package: Option<&P>,
settings: &Settings,
autofix: flags::Autofix,
) -> u64 {
let mut hasher = DefaultHasher::new();
CARGO_PKG_VERSION.hash(&mut hasher);
path.as_ref().absolutize().unwrap().hash(&mut hasher);
package
.as_ref()
.map(|path| path.as_ref().absolutize().unwrap())
.hash(&mut hasher);
settings.hash(&mut hasher);
autofix.hash(&mut hasher);
hasher.finish()
@@ -79,13 +88,14 @@ fn read_sync(cache_dir: &Path, key: u64) -> Result<Vec<u8>, std::io::Error> {
/// Get a value from the cache.
pub fn get<P: AsRef<Path>>(
path: P,
package: Option<&P>,
metadata: &fs::Metadata,
settings: &AllSettings,
autofix: flags::Autofix,
) -> Option<Vec<Message>> {
let encoded = read_sync(
&settings.cli.cache_dir,
cache_key(path, &settings.lib, autofix),
cache_key(path, package, &settings.lib, autofix),
)
.ok()?;
let (mtime, messages) = match bincode::deserialize::<CheckResult>(&encoded[..]) {
@@ -107,6 +117,7 @@ pub fn get<P: AsRef<Path>>(
/// Set a value in the cache.
pub fn set<P: AsRef<Path>>(
path: P,
package: Option<&P>,
metadata: &fs::Metadata,
settings: &AllSettings,
autofix: flags::Autofix,
@@ -120,7 +131,7 @@ pub fn set<P: AsRef<Path>>(
};
if let Err(e) = write_sync(
&settings.cli.cache_dir,
cache_key(path, &settings.lib, autofix),
cache_key(path, package, &settings.lib, autofix),
&bincode::serialize(&check_result).unwrap(),
) {
error!("Failed to write to cache: {e:?}");
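The `cache_key` change above adds the package root to the hash inputs, so a file's cached diagnostics are invalidated if it moves between packages. A Python sketch of the same idea (hash function and version constant are stand-ins, not ruff's actual scheme):

```python
import hashlib
from pathlib import Path

RUFF_VERSION = "0.0.230"  # stand-in for CARGO_PKG_VERSION

def cache_key(path, package, settings: str, autofix: bool) -> str:
    """Hash version, absolute file path, package root, settings, and
    autofix mode; any change in these yields a different cache entry."""
    h = hashlib.sha256()
    h.update(RUFF_VERSION.encode())
    h.update(str(Path(path).resolve()).encode())
    h.update(str(Path(package).resolve()).encode() if package else b"\x00")
    h.update(settings.encode())
    h.update(b"1" if autofix else b"0")
    return h.hexdigest()
```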

View File

@@ -1,8 +1,7 @@
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use clap::{command, Parser};
use regex::Regex;
use ruff::fs;
use ruff::logging::LogLevel;
use ruff::registry::{Rule, RuleSelector};
use ruff::resolver::ConfigProcessor;
@@ -444,9 +443,6 @@ pub fn collect_per_file_ignores(pairs: Vec<PatternPrefixPair>) -> Vec<PerFileIgn
}
per_file_ignores
.into_iter()
.map(|(pattern, prefixes)| {
let absolute = fs::normalize_path(Path::new(&pattern));
PerFileIgnore::new(pattern, absolute, &prefixes)
})
.map(|(pattern, prefixes)| PerFileIgnore::new(pattern, &prefixes, None))
.collect()
}

View File

@@ -15,7 +15,7 @@ use ruff::cache::CACHE_DIR_NAME;
use ruff::linter::add_noqa_to_path;
use ruff::logging::LogLevel;
use ruff::message::{Location, Message};
use ruff::registry::{Linter, ParseCode, Rule};
use ruff::registry::{Linter, Rule, RuleNamespace};
use ruff::resolver::{FileDiscovery, PyprojectDiscovery};
use ruff::settings::flags;
use ruff::settings::types::SerializationFormat;

View File

@@ -55,7 +55,9 @@ pub fn lint_path(
&& matches!(autofix, fix::FixMode::None | fix::FixMode::Generate)
{
let metadata = path.metadata()?;
if let Some(messages) = cache::get(path, &metadata, settings, autofix.into()) {
if let Some(messages) =
cache::get(path, package.as_ref(), &metadata, settings, autofix.into())
{
debug!("Cache hit for: {}", path.to_string_lossy());
return Ok(Diagnostics::new(messages));
}
@@ -92,7 +94,14 @@ pub fn lint_path(
// Re-populate the cache.
if let Some(metadata) = metadata {
cache::set(path, &metadata, settings, autofix.into(), &messages);
cache::set(
path,
package.as_ref(),
&metadata,
settings,
autofix.into(),
&messages,
);
}
Ok(Diagnostics { messages, fixed })

View File

@@ -13,7 +13,7 @@ use assert_cmd::Command;
use itertools::Itertools;
use log::info;
use ruff::logging::{set_up_logging, LogLevel};
use ruff::registry::Linter;
use ruff::registry::{Linter, RuleNamespace};
use strum::IntoEnumIterator;
use walkdir::WalkDir;
@@ -180,7 +180,7 @@ fn test_ruff_black_compatibility() -> Result<()> {
// problem. Ruff would add a `# noqa: W292` after the first run, black introduces a
// newline, and ruff removes the `# noqa: W292` again.
.filter(|linter| *linter != Linter::Ruff)
.map(|linter| linter.prefixes().as_list(","))
.map(|linter| linter.prefixes().join(","))
.join(",");
let ruff_args = [
"-",

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_dev"
version = "0.0.229"
version = "0.0.230"
edition = "2021"
[dependencies]
@@ -11,9 +11,9 @@ libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "f2f0b7a487a87
once_cell = { version = "1.16.0" }
ruff = { path = ".." }
ruff_cli = { path = "../ruff_cli" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
schemars = { version = "0.8.11" }
serde_json = {version="1.0.91"}
strum = { version = "0.24.1", features = ["strum_macros"] }

View File

@@ -2,7 +2,7 @@
use anyhow::Result;
use clap::Args;
use ruff::registry::{Linter, Prefixes, RuleSelector};
use ruff::registry::{Linter, LinterCategory, Rule, RuleNamespace};
use strum::IntoEnumIterator;
use crate::utils::replace_readme_section;
@@ -20,12 +20,12 @@ pub struct Cli {
pub(crate) dry_run: bool,
}
fn generate_table(table_out: &mut String, prefix: &RuleSelector) {
fn generate_table(table_out: &mut String, rules: impl IntoIterator<Item = Rule>) {
table_out.push_str("| Code | Name | Message | Fix |");
table_out.push('\n');
table_out.push_str("| ---- | ---- | ------- | --- |");
table_out.push('\n');
for rule in prefix.codes() {
for rule in rules {
let fix_token = match rule.autofixable() {
None => "",
Some(_) => "🛠",
@@ -48,8 +48,7 @@ pub fn main(cli: &Cli) -> Result<()> {
let mut table_out = String::new();
let mut toc_out = String::new();
for linter in Linter::iter() {
let prefixes = linter.prefixes();
let codes_csv: String = prefixes.as_list(", ");
let codes_csv: String = linter.prefixes().join(", ");
table_out.push_str(&format!("### {} ({codes_csv})", linter.name()));
table_out.push('\n');
table_out.push('\n');
@@ -86,15 +85,14 @@ pub fn main(cli: &Cli) -> Result<()> {
table_out.push('\n');
}
match prefixes {
Prefixes::Single(prefix) => generate_table(&mut table_out, &prefix),
Prefixes::Multiple(entries) => {
for (prefix, category) in entries {
table_out.push_str(&format!("#### {category} ({})", prefix.as_ref()));
table_out.push('\n');
generate_table(&mut table_out, &prefix);
}
if let Some(categories) = linter.categories() {
for LinterCategory(prefix, name, selector) in categories {
table_out.push_str(&format!("#### {name} ({prefix})"));
table_out.push('\n');
generate_table(&mut table_out, selector);
}
} else {
generate_table(&mut table_out, &linter);
}
}

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_macros"
version = "0.0.229"
version = "0.0.230"
edition = "2021"
[lib]

View File

@@ -19,8 +19,8 @@ use syn::{parse_macro_input, DeriveInput, ItemFn};
mod config;
mod define_rule_mapping;
mod derive_message_formats;
mod parse_code;
mod rule_code_prefix;
mod rule_namespace;
#[proc_macro_derive(ConfigurationOptions, attributes(option, doc, option_group))]
pub fn derive_config(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
@@ -37,11 +37,11 @@ pub fn define_rule_mapping(item: proc_macro::TokenStream) -> proc_macro::TokenSt
define_rule_mapping::define_rule_mapping(&mapping).into()
}
#[proc_macro_derive(ParseCode, attributes(prefix))]
pub fn derive_rule_code_prefix(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
#[proc_macro_derive(RuleNamespace, attributes(prefix))]
pub fn derive_rule_namespace(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
let input = parse_macro_input!(input as DeriveInput);
parse_code::derive_impl(input)
rule_namespace::derive_impl(input)
.unwrap_or_else(syn::Error::into_compile_error)
.into()
}

View File

@@ -91,9 +91,12 @@ pub fn expand<'a>(
variant_name: impl Fn(&str) -> &'a Ident,
) -> proc_macro2::TokenStream {
// Build up a map from prefix to matching RuleCodes.
let mut prefix_to_codes: BTreeMap<Ident, BTreeSet<String>> = BTreeMap::default();
let mut prefix_to_codes: BTreeMap<String, BTreeSet<String>> = BTreeMap::default();
let mut all_codes = BTreeSet::new();
let mut pl_codes = BTreeSet::new();
for variant in variants {
let span = variant.span();
let code_str = variant.to_string();
let code_prefix_len = code_str
.chars()
@@ -103,28 +106,32 @@ pub fn expand<'a>(
for i in 0..=code_suffix_len {
let prefix = code_str[..code_prefix_len + i].to_string();
prefix_to_codes
.entry(Ident::new(&prefix, span))
.entry(prefix)
.or_default()
.insert(code_str.clone());
}
prefix_to_codes
.entry(Ident::new(ALL, span))
.or_default()
.insert(code_str.clone());
if code_str.starts_with("PL") {
pl_codes.insert(code_str.to_string());
}
all_codes.insert(code_str);
}
prefix_to_codes.insert(ALL.to_string(), all_codes);
prefix_to_codes.insert("PL".to_string(), pl_codes);
// Add any prefix aliases (e.g., "U" to "UP").
for (alias, rule_code) in PREFIX_REDIRECTS.iter() {
prefix_to_codes.insert(
Ident::new(alias, Span::call_site()),
(*alias).to_string(),
prefix_to_codes
.get(&Ident::new(rule_code, Span::call_site()))
.get(*rule_code)
.unwrap_or_else(|| panic!("Unknown RuleCode: {alias:?}"))
.clone(),
);
}
let prefix_variants = prefix_to_codes.keys().map(|prefix| {
let prefix = Ident::new(prefix, Span::call_site());
quote! {
#prefix
}
@@ -148,6 +155,7 @@ pub fn expand<'a>(
Two,
Three,
Four,
Five,
}
#[derive(
@@ -181,50 +189,51 @@ pub fn expand<'a>(
fn generate_impls<'a>(
rule_type: &Ident,
prefix_ident: &Ident,
prefix_to_codes: &BTreeMap<Ident, BTreeSet<String>>,
prefix_to_codes: &BTreeMap<String, BTreeSet<String>>,
variant_name: impl Fn(&str) -> &'a Ident,
) -> proc_macro2::TokenStream {
let codes_match_arms = prefix_to_codes.iter().map(|(prefix, codes)| {
let into_iter_match_arms = prefix_to_codes.iter().map(|(prefix_str, codes)| {
let codes = codes.iter().map(|code| {
let rule_variant = variant_name(code);
quote! {
#rule_type::#rule_variant
}
});
let prefix_str = prefix.to_string();
let prefix = Ident::new(prefix_str, Span::call_site());
if let Some(target) = PREFIX_REDIRECTS.get(prefix_str.as_str()) {
quote! {
#prefix_ident::#prefix => {
crate::warn_user_once!(
"`{}` has been remapped to `{}`", #prefix_str, #target
);
vec![#(#codes),*]
vec![#(#codes),*].into_iter()
}
}
} else {
quote! {
#prefix_ident::#prefix => vec![#(#codes),*],
#prefix_ident::#prefix => vec![#(#codes),*].into_iter(),
}
}
});
let specificity_match_arms = prefix_to_codes.keys().map(|prefix| {
if *prefix == ALL {
let specificity_match_arms = prefix_to_codes.keys().map(|prefix_str| {
let prefix = Ident::new(prefix_str, Span::call_site());
if prefix_str == ALL {
quote! {
#prefix_ident::#prefix => SuffixLength::None,
}
} else {
let num_numeric = prefix
.to_string()
.chars()
.filter(|char| char.is_numeric())
.count();
let mut num_numeric = prefix_str.chars().filter(|char| char.is_numeric()).count();
if prefix_str != "PL" && prefix_str.starts_with("PL") {
num_numeric += 1;
}
let suffix_len = match num_numeric {
0 => quote! { SuffixLength::Zero },
1 => quote! { SuffixLength::One },
2 => quote! { SuffixLength::Two },
3 => quote! { SuffixLength::Three },
4 => quote! { SuffixLength::Four },
5 => quote! { SuffixLength::Five },
_ => panic!("Invalid prefix: {prefix}"),
};
quote! {
@@ -233,11 +242,11 @@ fn generate_impls<'a>(
}
});
let categories = prefix_to_codes.keys().map(|prefix| {
let prefix_str = prefix.to_string();
let categories = prefix_to_codes.keys().map(|prefix_str| {
if prefix_str.chars().all(char::is_alphabetic)
&& !PREFIX_REDIRECTS.contains_key(&prefix_str.as_str())
{
let prefix = Ident::new(prefix_str, Span::call_site());
quote! {
#prefix_ident::#prefix,
}
@@ -248,15 +257,6 @@ fn generate_impls<'a>(
quote! {
impl #prefix_ident {
pub fn codes(&self) -> Vec<#rule_type> {
use colored::Colorize;
#[allow(clippy::match_same_arms)]
match self {
#(#codes_match_arms)*
}
}
pub fn specificity(&self) -> SuffixLength {
#[allow(clippy::match_same_arms)]
match self {
@@ -265,6 +265,20 @@ fn generate_impls<'a>(
}
}
impl IntoIterator for &#prefix_ident {
type Item = #rule_type;
type IntoIter = ::std::vec::IntoIter<Self::Item>;
fn into_iter(self) -> Self::IntoIter {
use colored::Colorize;
#[allow(clippy::match_same_arms)]
match self {
#(#into_iter_match_arms)*
}
}
}
pub const CATEGORIES: &[#prefix_ident] = &[#(#categories)*];
}
}
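The specificity changes above give the new umbrella `PL` selector a lower specificity than its sub-prefixes (`PLC`, `PLE`, ...), which otherwise contain no extra digits. A hedged sketch of that computation, with `-1` standing in for `SuffixLength::None`:

```python
def specificity(prefix: str) -> int:
    """Count digits in a rule-selector prefix, bumping Pylint sub-prefixes
    one level above the bare 'PL' umbrella selector."""
    if prefix == "ALL":
        return -1  # SuffixLength::None
    n = sum(ch.isdigit() for ch in prefix)
    if prefix != "PL" and prefix.startswith("PL"):
        n += 1  # PLC/PLE/PLR/PLW are more specific than PL itself
    return n
```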

View File

@@ -1,3 +1,4 @@
use proc_macro2::{Ident, Span};
use quote::quote;
use syn::spanned::Spanned;
use syn::{Data, DataEnum, DeriveInput, Error, Lit, Meta, MetaNameValue};
@@ -11,6 +12,8 @@ pub fn derive_impl(input: DeriveInput) -> syn::Result<proc_macro2::TokenStream>
let mut parsed = Vec::new();
let mut prefix_match_arms = quote!();
for variant in variants {
let prefix_attrs: Vec<_> = variant
.attrs
@@ -25,30 +28,73 @@ pub fn derive_impl(input: DeriveInput) -> syn::Result<proc_macro2::TokenStream>
));
}
let mut prefix_literals = Vec::new();
for attr in prefix_attrs {
let Ok(Meta::NameValue(MetaNameValue{lit: Lit::Str(lit), ..})) = attr.parse_meta() else {
return Err(Error::new(attr.span(), r#"expected attribute to be in the form of [#prefix = "..."]"#))
};
parsed.push((lit, variant.ident.clone()));
parsed.push((lit.clone(), variant.ident.clone()));
prefix_literals.push(lit);
}
let variant_ident = variant.ident;
prefix_match_arms.extend(quote! {
Self::#variant_ident => &[#(#prefix_literals),*],
});
}
parsed.sort_by_key(|(prefix, _)| prefix.value().len());
let mut if_statements = quote!();
let mut into_iter_match_arms = quote!();
for (prefix, field) in parsed {
if_statements.extend(quote! {if let Some(rest) = code.strip_prefix(#prefix) {
return Some((#ident::#field, rest));
}});
let prefix_ident = Ident::new(&prefix.value(), Span::call_site());
if field != "Pycodestyle" {
into_iter_match_arms.extend(quote! {
#ident::#field => RuleSelector::#prefix_ident.into_iter(),
});
}
}
into_iter_match_arms.extend(quote! {
#ident::Pycodestyle => {
let rules: Vec<_> = (&RuleSelector::E).into_iter().chain(&RuleSelector::W).collect();
rules.into_iter()
}
});
Ok(quote! {
impl crate::registry::ParseCode for #ident {
impl crate::registry::RuleNamespace for #ident {
fn parse_code(code: &str) -> Option<(Self, &str)> {
#if_statements
None
}
fn prefixes(&self) -> &'static [&'static str] {
match self { #prefix_match_arms }
}
}
impl IntoIterator for &#ident {
type Item = Rule;
type IntoIter = ::std::vec::IntoIter<Self::Item>;
fn into_iter(self) -> Self::IntoIter {
use colored::Colorize;
match self {
#into_iter_match_arms
}
}
}
})
}
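The derived `RuleNamespace` above exposes `parse_code` (split a full code into linter and suffix) and `prefixes`. A small Python sketch of the parsing side, using an illustrative subset of the real prefix table and resolving overlaps by preferring the longest match (e.g. `EXE` over pycodestyle's `E`):

```python
# Illustrative subset; the real table lives in src/registry.rs.
PREFIXES = {
    "Pyflakes": ("F",),
    "pycodestyle": ("E", "W"),
    "Pylint": ("PLC", "PLE", "PLR", "PLW"),
    "flake8-executable": ("EXE",),
}

def parse_code(code: str):
    """Split a rule code like 'EXE001' into (linter, suffix)."""
    best = None
    for linter, prefixes in PREFIXES.items():
        for prefix in prefixes:
            if code.startswith(prefix) and (best is None or len(prefix) > len(best[1])):
                best = (linter, prefix)
    if best is None:
        return None
    linter, prefix = best
    return linter, code.removeprefix(prefix)
```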

View File

@@ -5,7 +5,8 @@ Example usage:
python scripts/add_plugin.py \
flake8-pie \
--url https://pypi.org/project/flake8-pie/0.16.0/
--url https://pypi.org/project/flake8-pie/
--prefix PIE
"""
import argparse
@@ -14,19 +15,18 @@ import os
from _utils import ROOT_DIR, dir_name, get_indent, pascal_case
def main(*, plugin: str, url: str) -> None:
def main(*, plugin: str, url: str, prefix_code: str) -> None:
# Create the test fixture folder.
os.makedirs(
ROOT_DIR / "resources/test/fixtures" / dir_name(plugin),
exist_ok=True,
)
# Create the Rust module.
rust_module = ROOT_DIR / "src/rules" / dir_name(plugin)
os.makedirs(rust_module, exist_ok=True)
with open(rust_module / "rules.rs", "w+") as fp:
fp.write("use crate::checkers::ast::Checker;\n")
with open(rust_module / "mod.rs", "w+") as fp:
# Create the Plugin rules module.
plugin_dir = ROOT_DIR / "src/rules" / dir_name(plugin)
plugin_dir.mkdir(exist_ok=True)
with (plugin_dir / "mod.rs").open("w+") as fp:
fp.write(f"//! Rules from [{plugin}]({url}).\n")
fp.write("pub(crate) mod rules;\n")
fp.write("\n")
@@ -59,46 +59,36 @@ mod tests {
% dir_name(plugin)
)
# Create a subdirectory for rules and create a `mod.rs` placeholder
rules_dir = plugin_dir / "rules"
rules_dir.mkdir(exist_ok=True)
with (rules_dir / "mod.rs").open("w+") as fp:
fp.write("\n\n")
# Create the snapshots subdirectory
(plugin_dir / "snapshots").mkdir(exist_ok=True)
# Add the plugin to `rules/mod.rs`.
with open(ROOT_DIR / "src/rules/mod.rs", "a") as fp:
with (ROOT_DIR / "src/rules/mod.rs").open("a") as fp:
fp.write(f"pub mod {dir_name(plugin)};")
# Add the relevant sections to `src/registry.rs`.
content = (ROOT_DIR / "src/registry.rs").read_text()
with open(ROOT_DIR / "src/registry.rs", "w") as fp:
with (ROOT_DIR / "src/registry.rs").open("w") as fp:
for line in content.splitlines():
indent = get_indent(line)
if line.strip() == "// Ruff":
if line.strip() == "// ruff":
fp.write(f"{indent}// {plugin}")
fp.write("\n")
elif line.strip() == '#[prefix = "RUF"]':
fp.write(f'{indent}#[prefix = "TODO"]\n')
fp.write(f'{indent}#[prefix = "{prefix_code}"]\n')
fp.write(f"{indent}{pascal_case(plugin)},")
fp.write("\n")
elif line.strip() == "Linter::Ruff => Prefixes::Single(RuleSelector::RUF),":
prefix = 'todo!("Fill-in prefix after generating codes")'
fp.write(
f"{indent}Linter::{pascal_case(plugin)} => Prefixes::Single({prefix}),"
)
fp.write("\n")
fp.write(line)
fp.write("\n")
# Add the relevant section to `src/violations.rs`.
content = (ROOT_DIR / "src/violations.rs").read_text()
with open(ROOT_DIR / "src/violations.rs", "w") as fp:
for line in content.splitlines():
if line.strip() == "// Ruff":
indent = get_indent(line)
fp.write(f"{indent}// {plugin}")
fp.write("\n")
fp.write(line)
fp.write("\n")
@@ -108,7 +98,7 @@ if __name__ == "__main__":
description="Generate boilerplate for a new Flake8 plugin.",
epilog=(
"Example usage: python scripts/add_plugin.py flake8-pie "
"--url https://pypi.org/project/flake8-pie/0.16.0/"
"--url https://pypi.org/project/flake8-pie/"
),
)
parser.add_argument(
@@ -122,6 +112,13 @@ if __name__ == "__main__":
type=str,
help="The URL of the latest release in PyPI.",
)
parser.add_argument(
"--prefix",
required=False,
default="TODO",
type=str,
help="Prefix code for the plugin. Leave empty to manually fill.",
)
args = parser.parse_args()
main(plugin=args.plugin, url=args.url)
main(plugin=args.plugin, url=args.url, prefix_code=args.prefix)

View File

@@ -21,47 +21,58 @@ def snake_case(name: str) -> str:
def main(*, name: str, code: str, linter: str) -> None:
# Create a test fixture.
with open(
ROOT_DIR / "resources/test/fixtures" / dir_name(linter) / f"{code}.py",
"a",
):
with (ROOT_DIR / "resources/test/fixtures" / dir_name(linter) / f"{code}.py").open("a"):
pass
plugin_module = ROOT_DIR / "src/rules" / dir_name(linter)
rule_name_snake = snake_case(name)
# Add the relevant `#testcase` macro.
mod_rs = ROOT_DIR / "src/rules" / dir_name(linter) / "mod.rs"
mod_rs = plugin_module / "mod.rs"
content = mod_rs.read_text()
with open(mod_rs, "w") as fp:
with mod_rs.open("w") as fp:
for line in content.splitlines():
if line.strip() == "fn rules(rule_code: Rule, path: &Path) -> Result<()> {":
indent = get_indent(line)
fp.write(f'{indent}#[test_case(Rule::{code}, Path::new("{code}.py"); "{code}")]')
fp.write(f'{indent}#[test_case(Rule::{name}, Path::new("{code}.py"); "{code}")]')
fp.write("\n")
fp.write(line)
fp.write("\n")
# Add the relevant rule function.
with open(ROOT_DIR / "src/rules" / dir_name(linter) / (snake_case(name) + ".rs"), "w") as fp:
fp.write(
f"""
/// {code}
pub fn {snake_case(name)}(checker: &mut Checker) {{}}
"""
)
fp.write("\n")
# Add the exports
rules_dir = plugin_module / "rules"
rules_mod = rules_dir / "mod.rs"
# Add the relevant struct to `src/violations.rs`.
content = (ROOT_DIR / "src/violations.rs").read_text()
with open(ROOT_DIR / "src/violations.rs", "w") as fp:
for line in content.splitlines():
fp.write(line)
contents = rules_mod.read_text()
parts = contents.split("\n\n")
if len(parts) == 2:
new_contents = parts[0] + "\n"
new_contents += f"pub use {rule_name_snake}::{{{rule_name_snake}, {name}}};"
new_contents += "\n"
new_contents += "\n"
new_contents += parts[1]
new_contents += f"mod {rule_name_snake};"
new_contents += "\n"
rules_mod.write_text(new_contents)
else:
with rules_mod.open("a") as fp:
fp.write(f"pub use {rule_name_snake}::{{{rule_name_snake}, {name}}};")
fp.write("\n")
fp.write(f"mod {rule_name_snake};")
fp.write("\n")
if line.startswith(f"// {linter}"):
fp.write(
"""define_violation!(
# Add the relevant rule function.
with (rules_dir / f"{rule_name_snake}.rs").open("w") as fp:
fp.write(
"""use ruff_macros::derive_message_formats;
use crate::define_violation;
use crate::violation::Violation;
use crate::checkers::ast::Checker;
define_violation!(
pub struct %s;
);
impl Violation for %s {
@@ -72,16 +83,23 @@ impl Violation for %s {
}
}
"""
% (name, name)
)
fp.write("\n")
% (name, name)
)
fp.write("\n")
fp.write(
f"""
/// {code}
pub fn {rule_name_snake}(checker: &mut Checker) {{}}
"""
)
fp.write("\n")
# Add the relevant code-to-violation pair to `src/registry.rs`.
content = (ROOT_DIR / "src/registry.rs").read_text()
seen_macro = False
has_written = False
with open(ROOT_DIR / "src/registry.rs", "w") as fp:
with (ROOT_DIR / "src/registry.rs").open("w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")
@@ -98,7 +116,7 @@ impl Violation for %s {
if line.strip() == f"// {linter}":
indent = get_indent(line)
fp.write(f"{indent}{code} => violations::{name},")
fp.write(f"{indent}{code} => rules::{linter}::rules::{name},")
fp.write("\n")
has_written = True

View File

@@ -295,7 +295,7 @@ pub enum ComparableExpr<'a> {
orelse: Box<ComparableExpr<'a>>,
},
Dict {
keys: Vec<ComparableExpr<'a>>,
keys: Vec<Option<ComparableExpr<'a>>>,
values: Vec<ComparableExpr<'a>>,
},
Set {
@@ -424,7 +424,10 @@ impl<'a> From<&'a Expr> for ComparableExpr<'a> {
orelse: orelse.into(),
},
ExprKind::Dict { keys, values } => Self::Dict {
keys: keys.iter().map(std::convert::Into::into).collect(),
keys: keys
.iter()
.map(|expr| expr.as_ref().map(std::convert::Into::into))
.collect(),
values: values.iter().map(std::convert::Into::into).collect(),
},
ExprKind::Set { elts } => Self::Set {
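The `Vec<Option<ComparableExpr>>` change mirrors Python's own AST, where a `**`-unpacking entry in a dict literal contributes a value but no key. A quick check with CPython's `ast` module (an illustrative example, not part of this diff):

```python
import ast

# In `{**base, "k": 1}` the `**base` entry has no key, so `ast.Dict.keys`
# holds `None` in that slot -- the same shape `Vec<Option<Expr>>` now
# models on the Rust side.
node = ast.parse('{**base, "k": 1}', mode="eval").body
assert isinstance(node, ast.Dict)
keys = node.keys  # first slot is None (the `**base` unpacking)
```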

@@ -161,7 +161,7 @@ where
}
ExprKind::Dict { keys, values } => values
.iter()
.chain(keys.iter())
.chain(keys.iter().flatten())
.any(|expr| any_over_expr(expr, func)),
ExprKind::Set { elts } | ExprKind::List { elts, .. } | ExprKind::Tuple { elts, .. } => {
elts.iter().any(|expr| any_over_expr(expr, func))
@@ -366,7 +366,7 @@ pub fn collect_arg_names<'a>(arguments: &'a Arguments) -> FxHashSet<&'a str> {
/// Returns `true` if a statement or expression includes at least one comment.
pub fn has_comments_in(range: Range, locator: &Locator) -> bool {
lexer::make_tokenizer(&locator.slice_source_code_range(&range))
lexer::make_tokenizer(locator.slice_source_code_range(&range))
.any(|result| result.map_or(false, |(_, tok, _)| matches!(tok, Tok::Comment(..))))
}
@@ -486,7 +486,7 @@ pub fn identifier_range(stmt: &Stmt, locator: &Locator) -> Range {
| StmtKind::AsyncFunctionDef { .. }
) {
let contents = locator.slice_source_code_range(&Range::from_located(stmt));
for (start, tok, end) in lexer::make_tokenizer_located(&contents, stmt.location).flatten() {
for (start, tok, end) in lexer::make_tokenizer_located(contents, stmt.location).flatten() {
if matches!(tok, Tok::Name { .. }) {
return Range::new(start, end);
}
@@ -515,7 +515,7 @@ pub fn binding_range(binding: &Binding, locator: &Locator) -> Range {
// Return the ranges of `Name` tokens within a specified node.
pub fn find_names<T>(located: &Located<T>, locator: &Locator) -> Vec<Range> {
let contents = locator.slice_source_code_range(&Range::from_located(located));
lexer::make_tokenizer_located(&contents, located.location)
lexer::make_tokenizer_located(contents, located.location)
.flatten()
.filter(|(_, tok, _)| matches!(tok, Tok::Name { .. }))
.map(|(start, _, end)| Range {
@@ -535,7 +535,7 @@ pub fn excepthandler_name_range(handler: &Excepthandler, locator: &Locator) -> O
let type_end_location = type_.end_location.unwrap();
let contents =
locator.slice_source_code_range(&Range::new(type_end_location, body[0].location));
let range = lexer::make_tokenizer_located(&contents, type_end_location)
let range = lexer::make_tokenizer_located(contents, type_end_location)
.flatten()
.tuple_windows()
.find(|(tok, next_tok)| {
@@ -562,7 +562,7 @@ pub fn except_range(handler: &Excepthandler, locator: &Locator) -> Range {
location: handler.location,
end_location: end,
});
let range = lexer::make_tokenizer_located(&contents, handler.location)
let range = lexer::make_tokenizer_located(contents, handler.location)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Except { .. }))
.map(|(location, _, end_location)| Range {
@@ -576,7 +576,7 @@ pub fn except_range(handler: &Excepthandler, locator: &Locator) -> Range {
/// Find f-strings that don't contain any formatted values in a `JoinedStr`.
pub fn find_useless_f_strings(expr: &Expr, locator: &Locator) -> Vec<(Range, Range)> {
let contents = locator.slice_source_code_range(&Range::from_located(expr));
lexer::make_tokenizer_located(&contents, expr.location)
lexer::make_tokenizer_located(contents, expr.location)
.flatten()
.filter_map(|(location, tok, end_location)| match tok {
Tok::String {
@@ -630,7 +630,7 @@ pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
.expect("Expected orelse to be non-empty")
.location,
});
let range = lexer::make_tokenizer_located(&contents, body_end)
let range = lexer::make_tokenizer_located(contents, body_end)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Else))
.map(|(location, _, end_location)| Range {
@@ -646,7 +646,7 @@ pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
/// Return the `Range` of the first `Tok::Colon` token in a `Range`.
pub fn first_colon_range(range: Range, locator: &Locator) -> Option<Range> {
let contents = locator.slice_source_code_range(&range);
let range = lexer::make_tokenizer_located(&contents, range.location)
let range = lexer::make_tokenizer_located(contents, range.location)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Colon))
.map(|(location, _, end_location)| Range {
@@ -676,7 +676,7 @@ pub fn elif_else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
_ => return None,
};
let contents = locator.slice_source_code_range(&Range::new(start, end));
let range = lexer::make_tokenizer_located(&contents, start)
let range = lexer::make_tokenizer_located(contents, start)
.flatten()
.find(|(_, kind, _)| matches!(kind, Tok::Elif | Tok::Else))
.map(|(location, _, end_location)| Range {
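The tokenizer-backed helpers above (`has_comments_in`, `identifier_range`, `first_colon_range`, etc.) all follow the same pattern: lex a source slice and scan for a particular token kind. A rough stdlib analog of `has_comments_in` in Python (a sketch, not ruff's implementation):

```python
import io
import tokenize

def has_comments_in(source: str) -> bool:
    """Return True if the source slice contains at least one comment token."""
    try:
        return any(
            tok.type == tokenize.COMMENT
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        )
    except (tokenize.TokenError, IndentationError):
        # A partial slice may not tokenize cleanly; treat that as "no comment",
        # matching the Rust helper's map_or(false, ...) behavior.
        return False
```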

@@ -39,7 +39,7 @@ pub fn relocate_expr(expr: &mut Expr, location: Range) {
relocate_expr(orelse, location);
}
ExprKind::Dict { keys, values } => {
for expr in keys {
for expr in keys.iter_mut().flatten() {
relocate_expr(expr, location);
}
for expr in values {

@@ -288,7 +288,7 @@ pub fn walk_expr<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, expr: &'a Expr) {
visitor.visit_expr(orelse);
}
ExprKind::Dict { keys, values } => {
for expr in keys {
for expr in keys.iter().flatten() {
visitor.visit_expr(expr);
}
for expr in values {

@@ -1,4 +1,3 @@
use std::borrow::Cow;
use std::str::Lines;
use rustpython_ast::{Located, Location};
@@ -7,7 +6,7 @@ use crate::ast::types::Range;
use crate::source_code::Locator;
/// Extract the leading indentation from a line.
pub fn indentation<'a, T>(locator: &'a Locator, located: &'a Located<T>) -> Option<Cow<'a, str>> {
pub fn indentation<'a, T>(locator: &'a Locator, located: &'a Located<T>) -> Option<&'a str> {
let range = Range::from_located(located);
let indentation = locator.slice_source_code_range(&Range::new(
Location::new(range.location.row(), 0),

@@ -80,7 +80,7 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
/// of a multi-statement line.
fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
let contents = locator.slice_source_code_at(stmt.end_location.unwrap());
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
for (row, line) in LinesWithTrailingNewline::from(contents).enumerate() {
let trimmed = line.trim();
if trimmed.starts_with(';') {
let column = line
@@ -103,7 +103,7 @@ fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
fn next_stmt_break(semicolon: Location, locator: &Locator) -> Location {
let start_location = Location::new(semicolon.row(), semicolon.column() + 1);
let contents = locator.slice_source_code_at(start_location);
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
for (row, line) in LinesWithTrailingNewline::from(contents).enumerate() {
let trimmed = line.trim();
// Skip past any continuations.
if trimmed.starts_with('\\') {
@@ -202,7 +202,7 @@ pub fn remove_unused_imports<'a>(
indexer: &Indexer,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(stmt));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let Some(Statement::Simple(body)) = tree.body.first_mut() else {
bail!("Expected Statement::Simple");

@@ -1,8 +1,6 @@
use std::borrow::Cow;
use std::collections::BTreeSet;
use itertools::Itertools;
use ropey::RopeBuilder;
use rustpython_ast::Location;
use crate::ast::types::Range;
@@ -14,10 +12,7 @@ pub mod fixer;
pub mod helpers;
/// Auto-fix errors in a file, and write the fixed source code to disk.
pub fn fix_file<'a>(
diagnostics: &'a [Diagnostic],
locator: &'a Locator<'a>,
) -> Option<(Cow<'a, str>, usize)> {
pub fn fix_file(diagnostics: &[Diagnostic], locator: &Locator) -> Option<(String, usize)> {
if diagnostics.iter().all(|check| check.fix.is_none()) {
return None;
}
@@ -32,8 +27,8 @@ pub fn fix_file<'a>(
fn apply_fixes<'a>(
fixes: impl Iterator<Item = &'a Fix>,
locator: &'a Locator<'a>,
) -> (Cow<'a, str>, usize) {
let mut output = RopeBuilder::new();
) -> (String, usize) {
let mut output = String::new();
let mut last_pos: Location = Location::new(1, 0);
let mut applied: BTreeSet<&Fix> = BTreeSet::default();
let mut num_fixed: usize = 0;
@@ -54,10 +49,10 @@ fn apply_fixes<'a>(
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice_source_code_range(&Range::new(last_pos, fix.location));
output.append(&slice);
output.push_str(slice);
// Add the patch itself.
output.append(&fix.content);
output.push_str(&fix.content);
// Track that the fix was applied.
last_pos = fix.end_location;
@@ -67,9 +62,9 @@ fn apply_fixes<'a>(
// Add the remaining content.
let slice = locator.slice_source_code_at(last_pos);
output.append(&slice);
output.push_str(slice);
(Cow::from(output.finish()), num_fixed)
(output, num_fixed)
}
#[cfg(test)]
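With `RopeBuilder` dropped in favor of a plain `String`, `apply_fixes` stitches the output together by copying the unchanged slice before each fix, then the fix's replacement text, then the remainder. The same slice-stitching in Python (a sketch of the algorithm only; flat string offsets stand in for ruff's row/column `Location`s):

```python
def apply_fixes(source: str, fixes):
    """Apply non-overlapping (start, end, replacement) edits in order."""
    output = []
    last_pos = 0
    num_fixed = 0
    for start, end, replacement in sorted(fixes):
        if start < last_pos:
            # Overlaps an already-applied fix; skip it.
            continue
        output.append(source[last_pos:start])  # unchanged content before the fix
        output.append(replacement)             # the patch itself
        last_pos = end
        num_fixed += 1
    output.append(source[last_pos:])           # remaining content
    return "".join(output), num_fixed
```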

@@ -273,7 +273,7 @@ impl<'a> Checker<'a> {
Location::new(*noqa_lineno, 0),
Location::new(noqa_lineno + 1, 0),
));
match noqa::extract_noqa_directive(&line) {
match noqa::extract_noqa_directive(line) {
Directive::None => false,
Directive::All(..) => true,
Directive::Codes(.., codes) => noqa::includes(code, &codes),
@@ -1230,6 +1230,28 @@ where
}
}
if self
.settings
.rules
.enabled(&Rule::ImportAliasIsNotConventional)
{
let full_name = helpers::format_import_from_member(
level.as_ref(),
module.as_deref(),
&alias.node.name,
);
if let Some(diagnostic) =
flake8_import_conventions::rules::check_conventional_import(
stmt,
&full_name,
alias.node.asname.as_deref(),
&self.settings.flake8_import_conventions.aliases,
)
{
self.diagnostics.push(diagnostic);
}
}
if let Some(asname) = &alias.node.asname {
if self
.settings
@@ -1391,6 +1413,15 @@ where
self.current_stmt_parent().map(std::convert::Into::into),
);
}
if self.settings.rules.enabled(&Rule::PreferTypeError) {
tryceratops::rules::prefer_type_error(
self,
body,
test,
orelse,
self.current_stmt_parent().map(Into::into),
);
}
}
StmtKind::Assert { test, msg } => {
if self.settings.rules.enabled(&Rule::AssertTuple) {
@@ -3124,7 +3155,7 @@ where
// Ex) TypedDict("a", {"a": int})
if args.len() > 1 {
if let ExprKind::Dict { keys, values } = &args[1].node {
for key in keys {
for key in keys.iter().flatten() {
self.in_type_definition = false;
self.visit_expr(key);
self.in_type_definition = prev_in_type_definition;
@@ -4579,12 +4610,13 @@ impl<'a> Checker<'a> {
Location::new(expr.location.row(), 0),
Location::new(expr.location.row(), expr.location.column()),
));
let body = pydocstyle::helpers::raw_contents(&contents);
let body = pydocstyle::helpers::raw_contents(contents);
let docstring = Docstring {
kind: definition.kind,
expr,
contents: &contents,
indentation: &indentation,
contents,
indentation,
body,
};
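The new `ImportAliasIsNotConventional` hook builds the full dotted name for a `from`-import and checks it against the configured alias table. The core check, sketched in Python (the alias table below is illustrative, in the spirit of flake8-import-conventions' defaults):

```python
# Illustrative alias table; the real one comes from
# settings.flake8_import_conventions.aliases.
CONVENTIONAL_ALIASES = {"numpy": "np", "pandas": "pd", "matplotlib.pyplot": "plt"}

def check_conventional_import(full_name, asname, aliases=CONVENTIONAL_ALIASES):
    """Return an error message if `full_name` is bound under a
    non-conventional alias, else None."""
    expected = aliases.get(full_name)
    if expected is None or asname == expected:
        return None
    return f"`{full_name}` should be imported as `{expected}`"
```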

@@ -4,12 +4,16 @@ use crate::registry::{Diagnostic, Rule};
use crate::rules::flake8_no_pep420::rules::implicit_namespace_package;
use crate::settings::Settings;
pub fn check_file_path(path: &Path, settings: &Settings) -> Vec<Diagnostic> {
pub fn check_file_path(
path: &Path,
package: Option<&Path>,
settings: &Settings,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
// flake8-no-pep420
if settings.rules.enabled(&Rule::ImplicitNamespacePackage) {
if let Some(diagnostic) = implicit_namespace_package(path) {
if let Some(diagnostic) = implicit_namespace_package(path, package) {
diagnostics.push(diagnostic);
}
}

@@ -1,5 +1,3 @@
use std::borrow::Cow;
use rustpython_ast::{Expr, Stmt};
#[derive(Debug, Clone)]
@@ -23,9 +21,9 @@ pub struct Definition<'a> {
pub struct Docstring<'a> {
pub kind: DefinitionKind<'a>,
pub expr: &'a Expr,
pub contents: &'a Cow<'a, str>,
pub contents: &'a str,
pub body: &'a str,
pub indentation: &'a Cow<'a, str>,
pub indentation: &'a str,
}
pub enum Documentable {

@@ -1,8 +1,5 @@
//! Extract Black configuration settings from a pyproject.toml.
use std::path::Path;
use anyhow::Result;
use serde::{Deserialize, Serialize};
use crate::settings::types::PythonVersion;
@@ -14,20 +11,3 @@ pub struct Black {
#[serde(alias = "target-version", alias = "target_version")]
pub target_version: Option<Vec<PythonVersion>>,
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
struct Tools {
black: Option<Black>,
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
struct Pyproject {
tool: Option<Tools>,
}
pub fn parse_black_options<P: AsRef<Path>>(path: P) -> Result<Option<Black>> {
let contents = std::fs::read_to_string(path)?;
Ok(toml_edit::easy::from_str::<Pyproject>(&contents)?
.tool
.and_then(|tool| tool.black))
}

@@ -3,7 +3,7 @@ use std::collections::{BTreeSet, HashMap};
use anyhow::Result;
use colored::Colorize;
use super::black::Black;
use super::external_config::ExternalConfig;
use super::plugin::Plugin;
use super::{parser, plugin};
use crate::registry::RuleSelector;
@@ -23,7 +23,7 @@ use crate::warn_user;
pub fn convert(
config: &HashMap<String, HashMap<String, Option<String>>>,
black: Option<&Black>,
external_config: &ExternalConfig,
plugins: Option<Vec<Plugin>>,
) -> Result<Pyproject> {
// Extract the Flake8 section.
@@ -377,7 +377,7 @@ pub fn convert(
}
// Extract any settings from the existing `pyproject.toml`.
if let Some(black) = black {
if let Some(black) = &external_config.black {
if let Some(line_length) = &black.line_length {
options.line_length = Some(*line_length);
}
@@ -389,6 +389,19 @@ pub fn convert(
}
}
if let Some(isort) = &external_config.isort {
if let Some(src_paths) = &isort.src_paths {
match options.src.as_mut() {
Some(src) => {
src.extend(src_paths.clone());
}
None => {
options.src = Some(src_paths.clone());
}
}
}
}
// Create the pyproject.toml.
Ok(Pyproject::new(options))
}
@@ -401,6 +414,7 @@ mod tests {
use super::super::plugin::Plugin;
use super::convert;
use crate::flake8_to_ruff::ExternalConfig;
use crate::registry::RuleSelector;
use crate::rules::pydocstyle::settings::Convention;
use crate::rules::{flake8_quotes, pydocstyle};
@@ -411,7 +425,7 @@ mod tests {
fn it_converts_empty() -> Result<()> {
let actual = convert(
&HashMap::from([("flake8".to_string(), HashMap::default())]),
None,
&ExternalConfig::default(),
None,
)?;
let expected = Pyproject::new(Options {
@@ -475,7 +489,7 @@ mod tests {
"flake8".to_string(),
HashMap::from([("max-line-length".to_string(), Some("100".to_string()))]),
)]),
None,
&ExternalConfig::default(),
Some(vec![]),
)?;
let expected = Pyproject::new(Options {
@@ -539,7 +553,7 @@ mod tests {
"flake8".to_string(),
HashMap::from([("max_line_length".to_string(), Some("100".to_string()))]),
)]),
None,
&ExternalConfig::default(),
Some(vec![]),
)?;
let expected = Pyproject::new(Options {
@@ -603,7 +617,7 @@ mod tests {
"flake8".to_string(),
HashMap::from([("max_line_length".to_string(), Some("abc".to_string()))]),
)]),
None,
&ExternalConfig::default(),
Some(vec![]),
)?;
let expected = Pyproject::new(Options {
@@ -667,7 +681,7 @@ mod tests {
"flake8".to_string(),
HashMap::from([("inline-quotes".to_string(), Some("single".to_string()))]),
)]),
None,
&ExternalConfig::default(),
Some(vec![]),
)?;
let expected = Pyproject::new(Options {
@@ -739,7 +753,7 @@ mod tests {
Some("numpy".to_string()),
)]),
)]),
None,
&ExternalConfig::default(),
Some(vec![Plugin::Flake8Docstrings]),
)?;
let expected = Pyproject::new(Options {
@@ -810,7 +824,7 @@ mod tests {
"flake8".to_string(),
HashMap::from([("inline-quotes".to_string(), Some("single".to_string()))]),
)]),
None,
&ExternalConfig::default(),
None,
)?;
let expected = Pyproject::new(Options {
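The converter now folds isort's `src_paths` into any `src` list the conversion already produced, extending rather than overwriting. The merge rule from the `match options.src.as_mut()` arm above, in isolation (a Python sketch):

```python
def merge_src(existing, src_paths):
    """Fold isort's optional `src_paths` into an existing optional `src` list."""
    if not src_paths:
        return existing            # nothing from isort; keep what we had
    if existing is None:
        return list(src_paths)     # no prior value; take isort's paths
    return existing + list(src_paths)  # extend the existing list
```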

@@ -0,0 +1,8 @@
use super::black::Black;
use super::isort::Isort;
#[derive(Default)]
pub struct ExternalConfig<'a> {
pub black: Option<&'a Black>,
pub isort: Option<&'a Isort>,
}

@@ -0,0 +1,10 @@
//! Extract isort configuration settings from a pyproject.toml.
use serde::{Deserialize, Serialize};
/// The [isort configuration](https://pycqa.github.io/isort/docs/configuration/config_files.html).
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Default)]
pub struct Isort {
#[serde(alias = "src-paths", alias = "src_paths")]
pub src_paths: Option<Vec<String>>,
}
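With the `serde` aliases above, both spellings of the key are accepted, so `flake8_to_ruff` will pick up a `pyproject.toml` section like this (values here are hypothetical):

```toml
[tool.isort]
src-paths = ["src", "tests"]  # "src_paths" also works, via the serde alias
```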

@@ -1,8 +1,12 @@
mod black;
mod converter;
mod external_config;
mod isort;
mod parser;
mod plugin;
mod pyproject;
pub use black::parse_black_options;
pub use converter::convert;
pub use external_config::ExternalConfig;
pub use plugin::Plugin;
pub use pyproject::parse;

@@ -98,7 +98,7 @@ impl fmt::Debug for Plugin {
}
impl Plugin {
pub fn prefix(&self) -> RuleSelector {
pub fn selector(&self) -> RuleSelector {
match self {
Plugin::Flake8Annotations => RuleSelector::ANN,
Plugin::Flake8Bandit => RuleSelector::S,
@@ -249,7 +249,7 @@ pub fn infer_plugins_from_options(flake8: &HashMap<String, Option<String>>) -> V
///
/// For example, if the user ignores `ANN101`, we should infer that
/// `flake8-annotations` is active.
pub fn infer_plugins_from_codes(codes: &BTreeSet<RuleSelector>) -> Vec<Plugin> {
pub fn infer_plugins_from_codes(selectors: &BTreeSet<RuleSelector>) -> Vec<Plugin> {
[
Plugin::Flake8Annotations,
Plugin::Flake8Bandit,
@@ -273,11 +273,10 @@ pub fn infer_plugins_from_codes(codes: &BTreeSet<RuleSelector>) -> Vec<Plugin> {
]
.into_iter()
.filter(|plugin| {
for prefix in codes {
if prefix
.codes()
.iter()
.any(|code| plugin.prefix().codes().contains(code))
for selector in selectors {
if selector
.into_iter()
.any(|rule| plugin.selector().into_iter().any(|r| r == rule))
{
return true;
}
@@ -291,7 +290,7 @@ pub fn infer_plugins_from_codes(codes: &BTreeSet<RuleSelector>) -> Vec<Plugin> {
/// plugins.
pub fn resolve_select(plugins: &[Plugin]) -> BTreeSet<RuleSelector> {
let mut select = BTreeSet::from([RuleSelector::F, RuleSelector::E, RuleSelector::W]);
select.extend(plugins.iter().map(Plugin::prefix));
select.extend(plugins.iter().map(Plugin::selector));
select
}

@@ -0,0 +1,24 @@
use std::path::Path;
use anyhow::Result;
use serde::{Deserialize, Serialize};
use super::black::Black;
use super::isort::Isort;
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct Tools {
pub black: Option<Black>,
pub isort: Option<Isort>,
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct Pyproject {
pub tool: Option<Tools>,
}
pub fn parse<P: AsRef<Path>>(path: P) -> Result<Pyproject> {
let contents = std::fs::read_to_string(path)?;
let pyproject = toml_edit::easy::from_str::<Pyproject>(&contents)?;
Ok(pyproject)
}

@@ -16,6 +16,8 @@ use crate::directives::Directives;
use crate::doc_lines::{doc_lines_from_ast, doc_lines_from_tokens};
use crate::message::{Message, Source};
use crate::noqa::add_noqa;
#[cfg(test)]
use crate::packaging::detect_package_root;
use crate::registry::{Diagnostic, LintSource, Rule};
use crate::settings::{flags, Settings};
use crate::source_code::{Indexer, Locator, Stylist};
@@ -69,7 +71,7 @@ pub fn check_path(
.iter_enabled()
.any(|rule_code| matches!(rule_code.lint_source(), LintSource::Filesystem))
{
diagnostics.extend(check_file_path(path, settings));
diagnostics.extend(check_file_path(path, package, settings));
}
// Run the AST-based rules.
@@ -395,7 +397,8 @@ pub fn test_path(path: &Path, settings: &Settings) -> Result<Vec<Diagnostic>> {
directives::extract_directives(&tokens, directives::Flags::from_settings(settings));
let mut diagnostics = check_path(
path,
None,
path.parent()
.and_then(|parent| detect_package_root(parent, &settings.namespace_packages)),
&contents,
tokens,
&locator,
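`test_path` now derives the package by walking up from the file's parent directory. A plausible sketch of the idea in Python (assumption: the package root is the topmost ancestor that still contains an `__init__.py`; ruff's actual `detect_package_root` also consults `settings.namespace_packages`):

```python
from pathlib import Path
import tempfile

def detect_package_root(path: Path):
    """Walk upward while each directory looks like a regular package."""
    root = None
    current = Path(path)
    while (current / "__init__.py").is_file():
        root = current
        current = current.parent
    return root

# Build pkg/sub/ with __init__.py in both, to exercise the walk.
tmp = Path(tempfile.mkdtemp())
sub = tmp / "pkg" / "sub"
sub.mkdir(parents=True)
(tmp / "pkg" / "__init__.py").touch()
(sub / "__init__.py").touch()
```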

@@ -63,9 +63,9 @@ pub struct Source {
impl Source {
pub fn from_diagnostic(diagnostic: &Diagnostic, locator: &Locator) -> Self {
let location = Location::new(diagnostic.location.row(), 0);
// Diagnostics can already extend one-past-the-end per Ropey's semantics. If
// they do, though, then they'll end at the start of a line. We need to
// avoid extending by yet another line past-the-end.
// Diagnostics can already extend one-past-the-end. If they do, though, then
// they'll end at the start of a line. We need to avoid extending by yet another
// line past-the-end.
let end_location = if diagnostic.end_location.column() == 0 {
diagnostic.end_location
} else {

@@ -1,8 +1,7 @@
//! Registry of [`Rule`] to [`DiagnosticKind`] mappings.
use itertools::Itertools;
use once_cell::sync::Lazy;
use ruff_macros::ParseCode;
use ruff_macros::RuleNamespace;
use rustc_hash::FxHashMap;
use rustpython_parser::ast::Location;
use serde::{Deserialize, Serialize};
@@ -428,8 +427,9 @@ ruff_macros::define_rule_mapping!(
// flake8-type-checking
TYP005 => rules::flake8_type_checking::rules::EmptyTypeCheckingBlock,
// tryceratops
TRY004 => rules::tryceratops::rules::PreferTypeError,
TRY300 => rules::tryceratops::rules::TryConsiderElse,
// Ruff
// ruff
RUF001 => violations::AmbiguousUnicodeCharacterString,
RUF002 => violations::AmbiguousUnicodeCharacterDocstring,
RUF003 => violations::AmbiguousUnicodeCharacterComment,
@@ -438,14 +438,14 @@ ruff_macros::define_rule_mapping!(
RUF100 => violations::UnusedNOQA,
);
#[derive(EnumIter, Debug, PartialEq, Eq, ParseCode)]
#[derive(EnumIter, Debug, PartialEq, Eq, RuleNamespace)]
pub enum Linter {
#[prefix = "F"]
Pyflakes,
#[prefix = "E"]
#[prefix = "W"]
Pycodestyle,
#[prefix = "C9"]
#[prefix = "C90"]
McCabe,
#[prefix = "I"]
Isort,
@@ -519,78 +519,31 @@ pub enum Linter {
Ruff,
}
pub trait ParseCode: Sized {
pub trait RuleNamespace: Sized {
fn parse_code(code: &str) -> Option<(Self, &str)>;
}
pub enum Prefixes {
Single(RuleSelector),
Multiple(Vec<(RuleSelector, &'static str)>),
}
impl Prefixes {
pub fn as_list(&self, separator: &str) -> String {
match self {
Prefixes::Single(prefix) => prefix.as_ref().to_string(),
Prefixes::Multiple(entries) => entries
.iter()
.map(|(prefix, _)| prefix.as_ref())
.join(separator),
}
}
fn prefixes(&self) -> &'static [&'static str];
}
include!(concat!(env!("OUT_DIR"), "/linter.rs"));
/// The prefix, name and selector for an upstream linter category.
pub struct LinterCategory(pub &'static str, pub &'static str, pub RuleSelector);
impl Linter {
pub fn prefixes(&self) -> Prefixes {
pub fn categories(&self) -> Option<&'static [LinterCategory]> {
match self {
Linter::Eradicate => Prefixes::Single(RuleSelector::ERA),
Linter::Flake82020 => Prefixes::Single(RuleSelector::YTT),
Linter::Flake8Annotations => Prefixes::Single(RuleSelector::ANN),
Linter::Flake8Bandit => Prefixes::Single(RuleSelector::S),
Linter::Flake8BlindExcept => Prefixes::Single(RuleSelector::BLE),
Linter::Flake8BooleanTrap => Prefixes::Single(RuleSelector::FBT),
Linter::Flake8Bugbear => Prefixes::Single(RuleSelector::B),
Linter::Flake8Builtins => Prefixes::Single(RuleSelector::A),
Linter::Flake8Comprehensions => Prefixes::Single(RuleSelector::C4),
Linter::Flake8Datetimez => Prefixes::Single(RuleSelector::DTZ),
Linter::Flake8Debugger => Prefixes::Single(RuleSelector::T10),
Linter::Flake8ErrMsg => Prefixes::Single(RuleSelector::EM),
Linter::Flake8ImplicitStrConcat => Prefixes::Single(RuleSelector::ISC),
Linter::Flake8ImportConventions => Prefixes::Single(RuleSelector::ICN),
Linter::Flake8Print => Prefixes::Single(RuleSelector::T20),
Linter::Flake8PytestStyle => Prefixes::Single(RuleSelector::PT),
Linter::Flake8Quotes => Prefixes::Single(RuleSelector::Q),
Linter::Flake8Return => Prefixes::Single(RuleSelector::RET),
Linter::Flake8Simplify => Prefixes::Single(RuleSelector::SIM),
Linter::Flake8TidyImports => Prefixes::Single(RuleSelector::TID),
Linter::Flake8UnusedArguments => Prefixes::Single(RuleSelector::ARG),
Linter::Isort => Prefixes::Single(RuleSelector::I),
Linter::McCabe => Prefixes::Single(RuleSelector::C90),
Linter::PEP8Naming => Prefixes::Single(RuleSelector::N),
Linter::PandasVet => Prefixes::Single(RuleSelector::PD),
Linter::Pycodestyle => Prefixes::Multiple(vec![
(RuleSelector::E, "Error"),
(RuleSelector::W, "Warning"),
Linter::Pycodestyle => Some(&[
LinterCategory("E", "Error", RuleSelector::E),
LinterCategory("W", "Warning", RuleSelector::W),
]),
Linter::Pydocstyle => Prefixes::Single(RuleSelector::D),
Linter::Pyflakes => Prefixes::Single(RuleSelector::F),
Linter::PygrepHooks => Prefixes::Single(RuleSelector::PGH),
Linter::Pylint => Prefixes::Multiple(vec![
(RuleSelector::PLC, "Convention"),
(RuleSelector::PLE, "Error"),
(RuleSelector::PLR, "Refactor"),
(RuleSelector::PLW, "Warning"),
Linter::Pylint => Some(&[
LinterCategory("PLC", "Convention", RuleSelector::PLC),
LinterCategory("PLE", "Error", RuleSelector::PLE),
LinterCategory("PLR", "Refactor", RuleSelector::PLR),
LinterCategory("PLW", "Warning", RuleSelector::PLW),
]),
Linter::Pyupgrade => Prefixes::Single(RuleSelector::UP),
Linter::Flake8Pie => Prefixes::Single(RuleSelector::PIE),
Linter::Flake8Commas => Prefixes::Single(RuleSelector::COM),
Linter::Flake8NoPep420 => Prefixes::Single(RuleSelector::INP),
Linter::Flake8Executable => Prefixes::Single(RuleSelector::EXE),
Linter::Flake8TypeChecking => Prefixes::Single(RuleSelector::TYP),
Linter::Tryceratops => Prefixes::Single(RuleSelector::TRY),
Linter::Ruff => Prefixes::Single(RuleSelector::RUF),
_ => None,
}
}
}
@@ -741,7 +694,7 @@ pub static CODE_REDIRECTS: Lazy<FxHashMap<&'static str, Rule>> = Lazy::new(|| {
mod tests {
use strum::IntoEnumIterator;
use super::{Linter, ParseCode, Rule};
use super::{Linter, Rule, RuleNamespace};
#[test]
fn check_code_serialization() {
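The derived `RuleNamespace` impl has to map a code like `PLR0913` back to its linter, which is why pycodestyle keeps two prefixes and McCabe's prefix is corrected from `C9` to `C90` (so it cannot shadow, e.g., flake8-comprehensions' `C4`). The lookup amounts to a longest-prefix match over the `#[prefix]` attributes; sketched in Python with a trimmed prefix table (illustrative subset, not the generated code):

```python
# A subset of the Linter -> prefixes mapping from the #[prefix] attributes.
PREFIXES = {
    "Pyflakes": ["F"],
    "Pycodestyle": ["E", "W"],
    "McCabe": ["C90"],
    "Flake8Comprehensions": ["C4"],
    "Pylint": ["PLC", "PLE", "PLR", "PLW"],
}

def parse_code(code):
    """Return (linter, rest) for the longest matching prefix, or None."""
    best = None
    for linter, prefixes in PREFIXES.items():
        for prefix in prefixes:
            if code.startswith(prefix) and (best is None or len(prefix) > len(best[1])):
                best = (linter, prefix)
    if best is None:
        return None
    linter, prefix = best
    return linter, code[len(prefix):]
```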

@@ -1,4 +1,4 @@
//! Rules from [eradicate](https://pypi.org/project/eradicate/2.1.0/).
//! Rules from [eradicate](https://pypi.org/project/eradicate/).
pub(crate) mod detection;
pub(crate) mod rules;

@@ -32,7 +32,7 @@ pub fn commented_out_code(
let line = locator.slice_source_code_range(&Range::new(location, end_location));
// Verify that the comment is on its own line, and that it contains code.
if is_standalone_comment(&line) && comment_contains_code(&line, &settings.task_tags[..]) {
if is_standalone_comment(line) && comment_contains_code(line, &settings.task_tags[..]) {
let mut diagnostic = Diagnostic::new(violations::CommentedOutCode, Range::new(start, end));
if matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(&Rule::CommentedOutCode)

@@ -1,4 +1,4 @@
//! Rules from [flake8-2020](https://pypi.org/project/flake8-2020/1.7.0/).
//! Rules from [flake8-2020](https://pypi.org/project/flake8-2020/).
pub(crate) mod rules;
#[cfg(test)]

@@ -16,7 +16,7 @@ pub fn add_return_none_annotation(locator: &Locator, stmt: &Stmt) -> Result<Fix>
let mut seen_lpar = false;
let mut seen_rpar = false;
let mut count: usize = 0;
for (start, tok, ..) in lexer::make_tokenizer_located(&contents, range.location).flatten() {
for (start, tok, ..) in lexer::make_tokenizer_located(contents, range.location).flatten() {
if seen_lpar && seen_rpar {
if matches!(tok, Tok::Colon) {
return Ok(Fix::insertion(" -> None".to_string(), start));

@@ -1,4 +1,4 @@
//! Rules from [flake8-annotations](https://pypi.org/project/flake8-annotations/2.9.1/).
//! Rules from [flake8-annotations](https://pypi.org/project/flake8-annotations/).
mod fixes;
pub(crate) mod helpers;
pub(crate) mod rules;

@@ -1,4 +1,4 @@
//! Rules from [flake8-bandit](https://pypi.org/project/flake8-bandit/4.1.1/).
//! Rules from [flake8-bandit](https://pypi.org/project/flake8-bandit/).
mod helpers;
pub(crate) mod rules;
pub mod settings;

View File

@@ -1,4 +1,4 @@
//! Rules from [flake8-blind-except](https://pypi.org/project/flake8-blind-except/0.2.1/).
//! Rules from [flake8-blind-except](https://pypi.org/project/flake8-blind-except/).
pub(crate) mod rules;
#[cfg(test)]

@@ -1,4 +1,4 @@
//! Rules from [flake8-boolean-trap](https://pypi.org/project/flake8-boolean-trap/0.1.0/).
//! Rules from [flake8-boolean-trap](https://pypi.org/project/flake8-boolean-trap/).
pub(crate) mod rules;
#[cfg(test)]

@@ -1,4 +1,4 @@
//! Rules from [flake8-bugbear](https://pypi.org/project/flake8-bugbear/22.10.27/).
//! Rules from [flake8-bugbear](https://pypi.org/project/flake8-bugbear/).
pub(crate) mod rules;
pub mod settings;

@@ -1,4 +1,4 @@
//! Rules from [flake8-builtins](https://pypi.org/project/flake8-builtins/2.0.1/).
//! Rules from [flake8-builtins](https://pypi.org/project/flake8-builtins/).
pub(crate) mod rules;
pub mod settings;
pub(crate) mod types;

@@ -1,4 +1,4 @@
//! Rules from [flake8-commas](https://pypi.org/project/flake8-commas/2.1.0/).
//! Rules from [flake8-commas](https://pypi.org/project/flake8-commas/).
pub(crate) mod rules;
#[cfg(test)]

@@ -34,7 +34,7 @@ pub fn fix_unnecessary_generator_list(
) -> Result<Fix> {
// Expr(Call(GeneratorExp)))) -> Expr(ListComp)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -75,7 +75,7 @@ pub fn fix_unnecessary_generator_set(
) -> Result<Fix> {
// Expr(Call(GeneratorExp)))) -> Expr(SetComp)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -116,7 +116,7 @@ pub fn fix_unnecessary_generator_dict(
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -175,7 +175,7 @@ pub fn fix_unnecessary_list_comprehension_set(
// Expr(Call(ListComp)))) ->
// Expr(SetComp)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -214,7 +214,7 @@ pub fn fix_unnecessary_list_comprehension_dict(
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -305,7 +305,7 @@ fn drop_trailing_comma<'a>(
pub fn fix_unnecessary_literal_set(locator: &Locator, expr: &rustpython_ast::Expr) -> Result<Fix> {
// Expr(Call(List|Tuple)))) -> Expr(Set)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let mut call = match_call(body)?;
let arg = match_arg(call)?;
@@ -348,7 +348,7 @@ pub fn fix_unnecessary_literal_set(locator: &Locator, expr: &rustpython_ast::Exp
pub fn fix_unnecessary_literal_dict(locator: &Locator, expr: &rustpython_ast::Expr) -> Result<Fix> {
// Expr(Call(List|Tuple)))) -> Expr(Dict)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -416,7 +416,7 @@ pub fn fix_unnecessary_collection_call(
) -> Result<Fix> {
// Expr(Call("list" | "tuple" | "dict")))) -> Expr(List|Tuple|Dict)
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let Expression::Name(name) = &call.func.as_ref() else {
@@ -524,7 +524,7 @@ pub fn fix_unnecessary_literal_within_tuple_call(
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -578,7 +578,7 @@ pub fn fix_unnecessary_literal_within_list_call(
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
- let mut tree = match_module(&module_text)?;
+ let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -632,7 +632,7 @@ pub fn fix_unnecessary_literal_within_list_call(
pub fn fix_unnecessary_list_call(locator: &Locator, expr: &rustpython_ast::Expr) -> Result<Fix> {
// Expr(Call(List|Tuple)))) -> Expr(List|Tuple)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
- let mut tree = match_module(&module_text)?;
+ let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let call = match_call(body)?;
let arg = match_arg(call)?;
@@ -657,7 +657,7 @@ pub fn fix_unnecessary_call_around_sorted(
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
- let mut tree = match_module(&module_text)?;
+ let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
let outer_call = match_call(body)?;
let inner_call = match &outer_call.args[..] {
@@ -739,7 +739,7 @@ pub fn fix_unnecessary_comprehension(
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
- let mut tree = match_module(&module_text)?;
+ let mut tree = match_module(module_text)?;
let mut body = match_expr(&mut tree)?;
match &body.value {
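The recurring one-character change in the hunks above drops a `&` because `Locator::slice_source_code_range` now hands back a borrowed `&str` directly. A minimal standalone sketch of why the callers no longer need the extra reference (the `Locator`/`slice`/`match_module` shapes here are simplified, hypothetical stand-ins for the real API):

```rust
// Hypothetical, simplified Locator: once `slice` returns a borrowed
// &str, callers can pass the slice along as-is (`&module_text` -> `module_text`).
struct Locator<'a> {
    contents: &'a str,
}

impl<'a> Locator<'a> {
    // Return a borrowed slice of the underlying source, tied to its lifetime.
    fn slice(&self, start: usize, end: usize) -> &'a str {
        &self.contents[start..end]
    }
}

// Stand-in for CST parsing: just report the slice length.
fn match_module(module_text: &str) -> Result<usize, String> {
    Ok(module_text.len())
}

fn main() {
    let locator = Locator { contents: "set([1, 2, 3])" };
    let module_text = locator.slice(0, 3); // "set"
    // No `&module_text` needed: the slice is already a &str.
    let parsed = match_module(module_text).unwrap();
    println!("{parsed}");
}
```

Returning `&'a str` tied to the source's lifetime also avoids the allocation that a `String` (or `Cow::Owned`) return would imply on every call.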


@@ -1,4 +1,4 @@
- //! Rules from [flake8-comprehensions](https://pypi.org/project/flake8-comprehensions/3.10.1/).
+ //! Rules from [flake8-comprehensions](https://pypi.org/project/flake8-comprehensions/).
mod fixes;
pub(crate) mod rules;


@@ -1,4 +1,4 @@
- //! Rules from [flake8-datetimez](https://pypi.org/project/flake8-datetimez/20.10.0/).
+ //! Rules from [flake8-datetimez](https://pypi.org/project/flake8-datetimez/).
pub(crate) mod rules;
#[cfg(test)]


@@ -1,4 +1,4 @@
- //! Rules from [flake8-debugger](https://pypi.org/project/flake8-debugger/4.1.2/).
+ //! Rules from [flake8-debugger](https://pypi.org/project/flake8-debugger/).
pub(crate) mod rules;
pub(crate) mod types;


@@ -1,4 +1,4 @@
- //! Rules from [flake8-errmsg](https://pypi.org/project/flake8-errmsg/0.4.0/).
+ //! Rules from [flake8-errmsg](https://pypi.org/project/flake8-errmsg/).
pub(crate) mod rules;
pub mod settings;


@@ -1,4 +1,4 @@
- //! Rules from [flake8-executable](https://pypi.org/project/flake8-executable/2.1.1/).
+ //! Rules from [flake8-executable](https://pypi.org/project/flake8-executable/).
pub(crate) mod helpers;
pub(crate) mod rules;


@@ -1,4 +1,4 @@
- //! Rules from [flake8-implicit-str-concat](https://pypi.org/project/flake8-implicit-str-concat/0.3.0/).
+ //! Rules from [flake8-implicit-str-concat](https://pypi.org/project/flake8-implicit-str-concat/).
pub(crate) mod rules;
#[cfg(test)]


@@ -84,4 +84,27 @@ mod tests {
insta::assert_yaml_snapshot!("override_default", diagnostics);
Ok(())
}
+     #[test]
+     fn from_imports() -> Result<()> {
+         let diagnostics = test_path(
+             Path::new("./resources/test/fixtures/flake8_import_conventions/from_imports.py"),
+             &Settings {
+                 flake8_import_conventions: super::settings::Options {
+                     aliases: None,
+                     extend_aliases: Some(FxHashMap::from_iter([
+                         ("xml.dom.minidom".to_string(), "md".to_string()),
+                         (
+                             "xml.dom.minidom.parseString".to_string(),
+                             "pstr".to_string(),
+                         ),
+                     ])),
+                 }
+                 .into(),
+                 ..Settings::for_rule(Rule::ImportAliasIsNotConventional)
+             },
+         )?;
+         insta::assert_yaml_snapshot!("from_imports", diagnostics);
+         Ok(())
+     }
}


@@ -0,0 +1,101 @@
---
source: src/rules/flake8_import_conventions/mod.rs
expression: diagnostics
---
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom
- md
location:
row: 3
column: 0
end_location:
row: 3
column: 22
fix: ~
parent: ~
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom
- md
location:
row: 4
column: 0
end_location:
row: 4
column: 31
fix: ~
parent: ~
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom
- md
location:
row: 5
column: 0
end_location:
row: 5
column: 36
fix: ~
parent: ~
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom
- md
location:
row: 6
column: 0
end_location:
row: 6
column: 27
fix: ~
parent: ~
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom.parseString
- pstr
location:
row: 7
column: 0
end_location:
row: 7
column: 48
fix: ~
parent: ~
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom.parseString
- pstr
location:
row: 8
column: 0
end_location:
row: 8
column: 39
fix: ~
parent: ~
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom.parseString
- pstr
location:
row: 9
column: 0
end_location:
row: 9
column: 46
fix: ~
parent: ~
- kind:
ImportAliasIsNotConventional:
- xml.dom.minidom.parseString
- pstr
location:
row: 10
column: 0
end_location:
row: 10
column: 61
fix: ~
parent: ~


@@ -1,9 +1,9 @@
- //! Rules from [flake8-no-pep420](https://pypi.org/project/flake8-no-pep420/2.3.0/).
+ //! Rules from [flake8-no-pep420](https://pypi.org/project/flake8-no-pep420/).
pub(crate) mod rules;
#[cfg(test)]
mod tests {
- use std::path::Path;
+ use std::path::{Path, PathBuf};
use anyhow::Result;
use test_case::test_case;
@@ -12,11 +12,12 @@ mod tests {
use crate::registry::Rule;
use crate::settings::Settings;
- #[test_case(Path::new("test_pass"); "INP001_0")]
+ #[test_case(Path::new("test_pass_init"); "INP001_0")]
#[test_case(Path::new("test_fail_empty"); "INP001_1")]
#[test_case(Path::new("test_fail_nonempty"); "INP001_2")]
#[test_case(Path::new("test_fail_shebang"); "INP001_3")]
#[test_case(Path::new("test_ignored"); "INP001_4")]
+ #[test_case(Path::new("test_pass_namespace_package"); "INP001_5")]
fn test_flake8_no_pep420(path: &Path) -> Result<()> {
let snapshot = format!("{}", path.to_string_lossy());
let diagnostics = test_path(
@@ -24,7 +25,12 @@ mod tests {
.join(path)
.join("example.py")
.as_path(),
- &Settings::for_rule(Rule::ImplicitNamespacePackage),
+ &Settings {
+     namespace_packages: vec![PathBuf::from(
+         "./resources/test/fixtures/flake8_no_pep420/test_pass_namespace_package",
+     )],
+     ..Settings::for_rule(Rule::ImplicitNamespacePackage)
+ },
)?;
insta::assert_yaml_snapshot!(snapshot, diagnostics);
Ok(())


@@ -5,14 +5,13 @@ use crate::registry::Diagnostic;
use crate::{fs, violations};
/// INP001
- pub fn implicit_namespace_package(path: &Path) -> Option<Diagnostic> {
-     if let Some(parent) = path.parent() {
-         if !parent.join("__init__.py").as_path().exists() {
-             return Some(Diagnostic::new(
-                 violations::ImplicitNamespacePackage(fs::relativize_path(path).to_string()),
-                 Range::default(),
-             ));
-         }
-     }
-     None
+ pub fn implicit_namespace_package(path: &Path, package: Option<&Path>) -> Option<Diagnostic> {
+     if package.is_none() {
+         Some(Diagnostic::new(
+             violations::ImplicitNamespacePackage(fs::relativize_path(path).to_string()),
+             Range::default(),
+         ))
+     } else {
+         None
+     }
}
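Reduced to a standalone sketch, the new package-based check no longer probes the filesystem for `__init__.py`; the caller resolves package membership up front and the rule only inspects the resulting `Option`. Here `Diagnostic` is replaced by a plain `String` purely for illustration:

```rust
use std::path::Path;

// Sketch of the reworked INP001 check: flag the file only when the caller
// determined it belongs to no package (assumes the caller already treats
// configured namespace packages as packages).
fn implicit_namespace_package(path: &Path, package: Option<&Path>) -> Option<String> {
    if package.is_none() {
        // No enclosing package: report the file.
        Some(format!("{} is part of an implicit namespace package", path.display()))
    } else {
        None
    }
}

fn main() {
    let file = Path::new("pkg/example.py");
    assert!(implicit_namespace_package(file, None).is_some());
    assert!(implicit_namespace_package(file, Some(Path::new("pkg"))).is_none());
    println!("ok");
}
```

Moving the filesystem lookup out of the rule is what makes the new `namespace_packages` setting possible: the caller can mark a directory as a package even without an `__init__.py`.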


@@ -0,0 +1,6 @@
---
source: src/rules/flake8_no_pep420/mod.rs
expression: diagnostics
---
[]


@@ -1,4 +1,4 @@
- //! Rules from [flake8-pie](https://pypi.org/project/flake8-pie/0.16.0/).
+ //! Rules from [flake8-pie](https://pypi.org/project/flake8-pie/).
pub(crate) mod rules;
#[cfg(test)]


@@ -1,4 +1,4 @@
- //! Rules from [flake8-print](https://pypi.org/project/flake8-print/5.0.0/).
+ //! Rules from [flake8-print](https://pypi.org/project/flake8-print/).
pub(crate) mod rules;
#[cfg(test)]


@@ -1,4 +1,4 @@
- //! Rules from [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style/1.6.0/).
+ //! Rules from [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style/).
pub(crate) mod rules;
pub mod settings;
pub mod types;


@@ -1,4 +1,4 @@
- //! Rules from [flake8-quotes](https://pypi.org/project/flake8-quotes/3.3.1/).
+ //! Rules from [flake8-quotes](https://pypi.org/project/flake8-quotes/).
pub(crate) mod rules;
pub mod settings;


@@ -1,4 +1,4 @@
- //! Rules from [flake8-return](https://pypi.org/project/flake8-return/1.2.0/).
+ //! Rules from [flake8-return](https://pypi.org/project/flake8-return/).
mod helpers;
pub(crate) mod rules;
mod visitor;


@@ -109,7 +109,7 @@ fn implicit_return(checker: &mut Checker, last_stmt: &Stmt) {
if checker.patch(&Rule::ImplicitReturn) {
if let Some(indent) = indentation(checker.locator, last_stmt) {
let mut content = String::new();
- content.push_str(&indent);
+ content.push_str(indent);
content.push_str("return None");
content.push('\n');
diagnostic.amend(Fix::insertion(


@@ -1,4 +1,4 @@
- //! Rules from [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/).
+ //! Rules from [flake8-simplify](https://pypi.org/project/flake8-simplify/).
pub(crate) mod rules;
#[cfg(test)]


@@ -1,5 +1,3 @@
- use std::borrow::Cow;
use anyhow::{bail, Result};
use libcst_native::{
BooleanOp, BooleanOperation, Codegen, CodegenState, CompoundStatement, Expression, If,
@@ -52,9 +50,9 @@ pub(crate) fn fix_nested_if_statements(
// If this is an `elif`, we have to remove the `elif` keyword for now. (We'll
// restore the `el` later on.)
let module_text = if is_elif {
- Cow::Owned(contents.replacen("elif", "if", 1))
+ contents.replacen("elif", "if", 1)
} else {
- contents
+ contents.to_string()
};
// If the block is indented, "embed" it in a function definition, to preserve
@@ -63,10 +61,7 @@ pub(crate) fn fix_nested_if_statements(
let module_text = if outer_indent.is_empty() {
module_text
} else {
- Cow::Owned(format!(
-     "def f():{}{module_text}",
-     stylist.line_ending().as_str()
- ))
+ format!("def f():{}{module_text}", stylist.line_ending().as_str())
};
// Parse the CST.
@@ -82,7 +77,7 @@ pub(crate) fn fix_nested_if_statements(
let Suite::IndentedBlock(indented_block) = &mut embedding.body else {
bail!("Expected indented block")
};
- indented_block.indent = Some(&outer_indent);
+ indented_block.indent = Some(outer_indent);
&mut *indented_block.body
};
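The `Cow` removal in this hunk can be sketched in isolation. Since both branches may end up needing an owned value anyway, returning a plain `String` (the helper name below is hypothetical) keeps the code simpler at the cost of one allocation in the unchanged-path case:

```rust
// Sketch of the Cow<'_, str> -> String simplification: embed an indented
// block in a dummy function definition so it parses as valid source.
fn embed_in_function(contents: &str, outer_indent: &str, line_ending: &str) -> String {
    if outer_indent.is_empty() {
        // Previously `Cow::Borrowed(contents)`; now an owned copy.
        contents.to_string()
    } else {
        // Previously `Cow::Owned(format!(...))`; the format! already allocates.
        format!("def f():{line_ending}{contents}")
    }
}

fn main() {
    assert_eq!(embed_in_function("if x:\n    pass\n", "", "\n"), "if x:\n    pass\n");
    assert_eq!(
        embed_in_function("    if x:\n        pass\n", "    ", "\n"),
        "def f():\n    if x:\n        pass\n"
    );
    println!("ok");
}
```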


@@ -1,5 +1,3 @@
- use std::borrow::Cow;
use anyhow::{bail, Result};
use libcst_native::{Codegen, CodegenState, CompoundStatement, Statement, Suite, With};
use rustpython_ast::Location;
@@ -30,9 +28,9 @@ pub(crate) fn fix_multiple_with_statements(
// indentation while retaining valid source code. (We'll strip the prefix later
// on.)
let module_text = if outer_indent.is_empty() {
- contents
+ contents.to_string()
} else {
- Cow::Owned(format!("def f():\n{contents}"))
+ format!("def f():\n{contents}")
};
// Parse the CST.
@@ -48,7 +46,7 @@ pub(crate) fn fix_multiple_with_statements(
let Suite::IndentedBlock(indented_block) = &mut embedding.body else {
bail!("Expected indented block")
};
- indented_block.indent = Some(&outer_indent);
+ indented_block.indent = Some(outer_indent);
&mut *indented_block.body
};


@@ -1,4 +1,4 @@
- //! Rules from [flake8-tidy-imports](https://pypi.org/project/flake8-tidy-imports/4.8.0/).
+ //! Rules from [flake8-tidy-imports](https://pypi.org/project/flake8-tidy-imports/).
pub mod options;
pub mod banned_api;


@@ -1,4 +1,4 @@
- //! Rules from [flake8-type-checking](https://pypi.org/project/flake8-type-checking/2.3.0/).
+ //! Rules from [flake8-type-checking](https://pypi.org/project/flake8-type-checking/).
pub(crate) mod rules;
#[cfg(test)]


@@ -1,4 +1,4 @@
- //! Rules from [flake8-unused-arguments](https://pypi.org/project/flake8-unused-arguments/0.0.12/).
+ //! Rules from [flake8-unused-arguments](https://pypi.org/project/flake8-unused-arguments/).
mod helpers;
pub(crate) mod rules;
pub mod settings;


@@ -17,7 +17,7 @@ pub struct Comment<'a> {
/// Collect all comments in an import block.
pub fn collect_comments<'a>(range: &Range, locator: &'a Locator) -> Vec<Comment<'a>> {
let contents = locator.slice_source_code_range(range);
- lexer::make_tokenizer_located(&contents, range.location)
+ lexer::make_tokenizer_located(contents, range.location)
.flatten()
.filter_map(|(start, tok, end)| {
if let Tok::Comment(value) = tok {


@@ -13,7 +13,7 @@ pub fn trailing_comma(stmt: &Stmt, locator: &Locator) -> TrailingComma {
let contents = locator.slice_source_code_range(&Range::from_located(stmt));
let mut count: usize = 0;
let mut trailing_comma = TrailingComma::Absent;
- for (_, tok, _) in lexer::make_tokenizer(&contents).flatten() {
+ for (_, tok, _) in lexer::make_tokenizer(contents).flatten() {
if matches!(tok, Tok::Lpar) {
count += 1;
}
@@ -110,7 +110,7 @@ pub fn find_splice_location(body: &[Stmt], locator: &Locator) -> Location {
// Find the first token that isn't a comment or whitespace.
let contents = locator.slice_source_code_at(splice);
- for (.., tok, end) in lexer::make_tokenizer(&contents).flatten() {
+ for (.., tok, end) in lexer::make_tokenizer(contents).flatten() {
if matches!(tok, Tok::Comment(..) | Tok::Newline) {
splice = end;
} else {


@@ -1,4 +1,4 @@
- //! Rules from [isort](https://pypi.org/project/isort/5.10.1/).
+ //! Rules from [isort](https://pypi.org/project/isort/).
use std::cmp::Ordering;
use std::collections::{BTreeMap, BTreeSet};
use std::path::{Path, PathBuf};
@@ -8,7 +8,6 @@ use comments::Comment;
use helpers::trailing_comma;
use itertools::Either::{Left, Right};
use itertools::Itertools;
- use ropey::RopeBuilder;
use rustc_hash::FxHashMap;
use rustpython_ast::{Stmt, StmtKind};
use settings::RelatveImportsOrder;
@@ -593,7 +592,7 @@ pub fn format_imports(
extra_standard_library,
);
- let mut output = RopeBuilder::new();
+ let mut output = String::new();
// Generate replacement source code.
let mut is_first_block = true;
@@ -630,14 +629,14 @@ pub fn format_imports(
if is_first_block {
is_first_block = false;
} else if !no_lines_before.contains(&import_type) {
- output.append(stylist.line_ending());
+ output.push_str(stylist.line_ending());
}
let mut is_first_statement = true;
for import in imports {
match import {
Import((alias, comments)) => {
- output.append(&format::format_import(
+ output.push_str(&format::format_import(
&alias,
&comments,
is_first_statement,
@@ -645,7 +644,7 @@ pub fn format_imports(
));
}
ImportFrom((import_from, comments, trailing_comma, aliases)) => {
- output.append(&format::format_import_from(
+ output.push_str(&format::format_import_from(
&import_from,
&comments,
&aliases,
@@ -663,14 +662,14 @@ pub fn format_imports(
match trailer {
None => {}
Some(Trailer::Sibling) => {
- output.append(stylist.line_ending());
+ output.push_str(stylist.line_ending());
}
Some(Trailer::FunctionDef | Trailer::ClassDef) => {
- output.append(stylist.line_ending());
- output.append(stylist.line_ending());
+ output.push_str(stylist.line_ending());
+ output.push_str(stylist.line_ending());
}
}
- output.finish().to_string()
+ output
}
#[cfg(test)]
@@ -701,7 +700,6 @@ mod tests {
#[test_case(Path::new("insert_empty_lines.py"))]
#[test_case(Path::new("insert_empty_lines.pyi"))]
#[test_case(Path::new("leading_prefix.py"))]
- #[test_case(Path::new("line_ending_cr.py"))]
#[test_case(Path::new("line_ending_crlf.py"))]
#[test_case(Path::new("line_ending_lf.py"))]
#[test_case(Path::new("magic_trailing_comma.py"))]
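The `RopeBuilder` to `String` swap above can be sketched under assumed shapes: the import formatter only ever appends to the output, so `String::push_str` covers the usage without the `ropey` dependency. The `format_imports` signature below is a hypothetical reduction of the real one:

```rust
// Minimal sketch: build the replacement import block by append-only writes,
// separating blocks with a blank line, exactly what push_str handles.
fn format_imports(blocks: &[&str], line_ending: &str) -> String {
    let mut output = String::new();
    for (i, block) in blocks.iter().enumerate() {
        if i > 0 {
            // Blank line between import blocks (was `output.append(...)` on a rope).
            output.push_str(line_ending);
        }
        output.push_str(block);
    }
    output // was `output.finish().to_string()`
}

fn main() {
    let out = format_imports(&["import os\n", "import foo\n"], "\n");
    assert_eq!(out, "import os\n\nimport foo\n");
    println!("{out}");
}
```

A rope earns its keep for random-access edits on large texts; for a forward-only builder like this, a `String` is simpler and avoids the final rope-to-string conversion.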


@@ -38,7 +38,7 @@ pub fn organize_imports(
package: Option<&Path>,
) -> Option<Diagnostic> {
let indentation = locator.slice_source_code_range(&extract_indentation_range(&block.imports));
- let indentation = leading_space(&indentation);
+ let indentation = leading_space(indentation);
let range = extract_range(&block.imports);
@@ -96,7 +96,7 @@ pub fn organize_imports(
Location::new(range.location.row(), 0),
Location::new(range.end_location.row() + 1 + num_trailing_lines, 0),
);
- let actual = dedent(&locator.slice_source_code_range(&range));
+ let actual = dedent(locator.slice_source_code_range(&range));
if actual == dedent(&expected) {
None
} else {


@@ -1,22 +0,0 @@
---
source: src/rules/isort/mod.rs
expression: diagnostics
---
- kind:
UnsortedImports: ~
location:
row: 1
column: 0
end_location:
row: 2
column: 0
fix:
content: "from long_module_name import (\r member_five,\r member_four,\r member_one,\r member_three,\r member_two,\r)\r"
location:
row: 1
column: 0
end_location:
row: 2
column: 0
parent: ~


@@ -1,4 +1,4 @@
- //! Rules from [mccabe](https://pypi.org/project/mccabe/0.7.0/).
+ //! Rules from [mccabe](https://pypi.org/project/mccabe/).
pub(crate) mod rules;
pub mod settings;


@@ -1,4 +1,4 @@
- //! Rules from [pandas-vet](https://pypi.org/project/pandas-vet/0.2.3/).
+ //! Rules from [pandas-vet](https://pypi.org/project/pandas-vet/).
pub(crate) mod helpers;
pub(crate) mod rules;
@@ -19,7 +19,7 @@ mod tests {
fn rule_code(contents: &str, expected: &[Rule]) -> Result<()> {
let contents = dedent(contents);
- let settings = settings::Settings::for_rules(RuleSelector::PD.codes());
+ let settings = settings::Settings::for_rules(&RuleSelector::PD);
let tokens: Vec<LexResult> = rustpython_helpers::tokenize(&contents);
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
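Per the refactor described in the head commit, `&Linter` (and selectors built on it) now implements `IntoIterator<Item = Rule>`, so `Settings::for_rules` can accept the selector directly instead of a precomputed `codes()` vector. A toy sketch of the pattern, with all names hypothetical:

```rust
// Two stand-in rule codes; the real enum has hundreds.
enum Rule {
    Pd002,
    Pd003,
}

// Stand-in for a rule selector such as `RuleSelector::PD`.
struct Selector;

// The selector itself knows how to iterate its rules, so callers pass
// `&Selector` instead of materializing a Vec of codes up front.
impl<'a> IntoIterator for &'a Selector {
    type Item = Rule;
    type IntoIter = std::vec::IntoIter<Rule>;

    fn into_iter(self) -> Self::IntoIter {
        vec![Rule::Pd002, Rule::Pd003].into_iter()
    }
}

// Stand-in for `Settings::for_rules`: accepts anything iterable over rules.
fn for_rules(rules: impl IntoIterator<Item = Rule>) -> usize {
    rules.into_iter().count()
}

fn main() {
    // `&Selector` works directly; no intermediate `.codes()` Vec at the call site.
    assert_eq!(for_rules(&Selector), 2);
    println!("ok");
}
```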


@@ -31,6 +31,16 @@ pub fn is_namedtuple_assignment(checker: &Checker, stmt: &Stmt) -> bool {
})
}
+ pub fn is_type_var_assignment(checker: &Checker, stmt: &Stmt) -> bool {
+     let StmtKind::Assign { value, .. } = &stmt.node else {
+         return false;
+     };
+     checker.resolve_call_path(value).map_or(false, |call_path| {
+         call_path.as_slice() == ["typing", "TypeVar"]
+             || call_path.as_slice() == ["typing", "NewType"]
+     })
+ }
#[cfg(test)]
mod tests {
use super::{is_acronym, is_camelcase, is_mixed_case};
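The new helper hinges on comparing a resolved call path against the two `typing` constructors. Stripped of the `Checker` machinery, the predicate is just this sketch (taking a pre-resolved path slice instead of an AST node):

```rust
// Sketch of the N806 exemption: an assignment whose value resolves to
// typing.TypeVar or typing.NewType is not a conventional local variable.
fn is_type_var_assignment(call_path: &[&str]) -> bool {
    call_path == ["typing", "TypeVar"] || call_path == ["typing", "NewType"]
}

fn main() {
    assert!(is_type_var_assignment(&["typing", "TypeVar"]));
    assert!(is_type_var_assignment(&["typing", "NewType"]));
    // Other typing constructs are still checked by the rule.
    assert!(!is_type_var_assignment(&["typing", "NamedTuple"]));
    println!("ok");
}
```

This is why `T = TypeVar("T")` inside a function body no longer triggers "variable in function should be lowercase".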


@@ -1,4 +1,4 @@
- //! Rules from [pep8-naming](https://pypi.org/project/pep8-naming/0.13.2/).
+ //! Rules from [pep8-naming](https://pypi.org/project/pep8-naming/).
mod helpers;
pub(crate) mod rules;
pub mod settings;

Some files were not shown because too many files have changed in this diff.