Compare commits

...

45 Commits

Author SHA1 Message Date
Charlie Marsh
9aeb5df5fe Bump version to 0.0.220 2023-01-12 17:57:04 -05:00
Charlie Marsh
7ffba7b552 Use absolute paths for GitHub and Gitlab annotations (#1837)
Note that the _annotation path_ is absolute, while the path encoded in
the message remains relative.

![Screen Shot 2023-01-12 at 5 54 11
PM](https://user-images.githubusercontent.com/1309177/212198531-63f15445-0f6a-471c-a64c-18ad2b6df0c7.png)

Closes #1835.
2023-01-12 17:54:34 -05:00
Charlie Marsh
06473bb1b5 Support for-else loops in SIM110 and SIM111 (#1834)
This PR adds support for `SIM110` and `SIM111` simplifications of the
form:

```py
def f():
    # SIM110
    for x in iterable:
        if check(x):
            return True
    else:
        return False
```
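For reference, the SIM110 fix collapses such a loop into a single `any()` call. A minimal sketch of the before/after (`iterable` and `check` are placeholder names from the example above):

```python
def has_match_loop(iterable, check):
    # Original for/else form now recognized by SIM110.
    for x in iterable:
        if check(x):
            return True
    else:
        return False


def has_match_fixed(iterable, check):
    # Roughly what the autofix produces.
    return any(check(x) for x in iterable)
```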
2023-01-12 17:04:58 -05:00
Ash Berlin-Taylor
bf5c048502 Airflow is now using ruff (#1833)
😀
2023-01-12 16:50:01 -05:00
Charlie Marsh
eaed08ae79 Skip SIM110/SIM111 fixes that create long lines 2023-01-12 16:21:54 -05:00
Charlie Marsh
e0fdc4c5e8 Avoid SIM110/SIM111 errors with else statements (#1832)
Closes #1831.
2023-01-12 16:17:27 -05:00
Charlie Marsh
590bec57f4 Fix typo in relative-imports-order option name 2023-01-12 15:57:58 -05:00
Charlie Marsh
3110d342c7 Implement isort's reverse_relative setting (#1826)
This PR implements isort's `reverse-relative` setting, but renames it to
`relative-imports-order`, with the accepted values `closest-to-furthest`
and `furthest-to-closest` (the latter being the default).

Closes #1813.
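A sketch of opting into the non-default order in `pyproject.toml`, using the setting name and value described above:

```toml
[tool.ruff.isort]
# Default is "furthest-to-closest" (isort's reverse-relative = false);
# this opts into the equivalent of isort's reverse-relative = true.
relative-imports-order = "closest-to-furthest"
```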
2023-01-12 15:48:40 -05:00
nefrob
39aae28eb4 📝 Update readme example for adding isort required imports (#1824)
Updates the example to use the ruff setting name instead of the isort name.
2023-01-12 13:18:06 -05:00
Charlie Marsh
dcccfe2591 Avoid parsing pyproject.toml files when settings are fixed (#1827)
Apart from being wasteful, this can also cause problems (see the linked
issue).

Resolves #1812.
2023-01-12 13:15:44 -05:00
Martin Fischer
38f5e8f423 Decouple linter module from cache module 2023-01-12 13:09:59 -05:00
Martin Fischer
74f14182ea Decouple resolver module from cli::Overrides 2023-01-12 13:09:59 -05:00
Charlie Marsh
bbc1e7804e Don't trigger SIM401 for complex default values (#1825)
Resolves #1809.
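For context, SIM401 targets `if`/`else` blocks that reimplement `dict.get` with a simple default; this change skips the rule when the default is complex. An illustrative sketch of the simple case (names are hypothetical):

```python
config = {"retries": 3}

# Pattern SIM401 flags when the default value is simple:
if "retries" in config:
    retries = config["retries"]
else:
    retries = 0

# The suggested fix:
retries = config.get("retries", 0)
```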
2023-01-12 12:51:23 -05:00
messense
c6320b29e4 Implement autofix for flake8-quotes (#1810)
Resolves #1789
2023-01-12 12:42:28 -05:00
Maksudul Haque
1a90408e8c [flake8-bandit] Add Rule for S701 (jinja2 autoescape false) (#1815)
ref: https://github.com/charliermarsh/ruff/issues/1646

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-12 11:59:20 -05:00
Jeroen Van Goey
07134c50c8 Add usage of ruff in pandas to README (#1811)
pandas now uses ruff for linting, see
https://github.com/pandas-dev/pandas/pull/50160
2023-01-12 10:55:21 -05:00
Charlie Marsh
b36d4a15b0 Modify visibility and shuffle around some modules (#1807) 2023-01-11 23:57:05 -05:00
Charlie Marsh
d8162ce79d Bump version to 0.0.219 2023-01-11 23:46:01 -05:00
Charlie Marsh
e11ef54bda Improve globset documentation and help message (#1808)
Closes #1545.
2023-01-11 23:41:56 -05:00
messense
9a07b0623e Move top level ruff into python folder (#1806)
https://maturin.rs/project_layout.html#mixed-rustpython-project

Resolves #1805
2023-01-11 23:12:55 -05:00
Charlie Marsh
f450e2e79d Implement doc line length enforcement (#1804)
This PR implements `W505` (`DocLineTooLong`), which is similar to `E501`
(`LineTooLong`) but confined to doc lines.

I based the "doc line" definition on pycodestyle, which defines a doc
line as a standalone comment or string statement. Our definition is a
bit more liberal, since we consider any string statement a doc line
(even if it's part of a multi-line statement) -- but that seems fine to
me.

Note that, unusually, this rule requires custom extraction from both the
token stream (to find standalone comments) and the AST (to find string
statements).

Closes #1784.
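A sketch of how the new rule and its limit might be configured, using the setting names added to the README in this diff:

```toml
[tool.ruff]
extend-select = ["W505"]

[tool.ruff.pycodestyle]
max-doc-length = 88
```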
2023-01-11 22:32:14 -05:00
Colin Delahunty
329946f162 Avoid erroneous Q002 error message for single-quote docstrings (#1777)
Fixes #1775. Before implementing your solution I thought of a slightly
simpler one. However, it will let this function pass:
```
def double_inside_single(a):
    'Double inside "single "'
```
If we want this function to pass, my implementation works. But if we do
not, then I can go with the approach you suggested (I left how I would
begin to handle it commented out). The bottom of the flake8-quotes
documentation seems to suggest that this should pass:
https://pypi.org/project/flake8-quotes/

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 20:01:54 -05:00
Charlie Marsh
588399e415 Fix Clippy error 2023-01-11 19:59:00 -05:00
Chammika Mannakkara
4523885268 flake8_simplify: SIM401 (#1778)
Ref #998 

- Implements SIM401 with fix
- Added tests

Notes:
- Only recognizes simple `ExprKind::Name` variables in expression
patterns for now.
- Bug fix relative to the reference implementation: checks that all
three conditions (dict key, target variable, dict name) are equal;
`flake8_simplify` only tests the first two (and only the first in the
second pattern).
Maksudul Haque
de81b0cd38 [flake8-simplify] Add Rule for SIM115 (Use context handler for opening files) (#1782)
ref: https://github.com/charliermarsh/ruff/issues/998

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 19:28:05 -05:00
Charlie Marsh
4fce296e3f Skip SIM108 violations for complex if-statements (#1802)
We now skip SIM108 violations if the resulting statement would exceed
the user-specified line length, or if the `if` statement contains
comments.

Closes #1719.

Closes #1766.
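For context, SIM108 rewrites a simple `if`/`else` assignment as a ternary; the checks above skip the fix when the result would be too long or would drop comments. A minimal sketch of the simple case (values are illustrative):

```python
a, c, d = True, 1, 2

# Flagged by SIM108:
if a:
    b = c
else:
    b = d

# The suggested fix:
b = c if a else d
```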
2023-01-11 19:21:30 -05:00
Charlie Marsh
9d48d7bbd1 Skip unused argument checks for magic methods (#1801)
We still check `__init__`, `__call__`, and `__new__`.

Closes #1796.
2023-01-11 19:02:20 -05:00
Charlie Marsh
c56f263618 Avoid flagging builtins for OSError rewrites (#1800)
Related to (but does not fix) #1790.
2023-01-11 18:49:25 -05:00
Grzegorz Bokota
fb2382fbc3 Update readme to reflect #1763 (#1780)
When checking the changes in the 0.0.218 release, I noticed that
autofixing PT004 and PT005 was disabled, but this change was not
reflected in the README. So I created this small PR to fix that.

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 18:37:41 -05:00
Charlie Marsh
c92a5a8704 Avoid rewriting flake8-comprehensions expressions for builtin overrides (#1799)
Closes #1788.
2023-01-11 18:33:55 -05:00
Charlie Marsh
d7cf3147b7 Refactor flake8-comprehensions rules to take fewer arguments (#1797) 2023-01-11 18:21:18 -05:00
Charlie Marsh
bf4d35c705 Convert flake8-comprehensions checks to Checker style (#1795) 2023-01-11 18:11:20 -05:00
Charlie Marsh
4e97e9c7cf Improve PIE794 autofix behavior (#1794)
We now: (1) trigger PIE794 for objects without bases (not sure why this
was omitted before); and (2) remove the entire line, rather than leaving
behind trailing whitespace.

Resolves #1787.
2023-01-11 18:01:29 -05:00
Charlie Marsh
a3fcc3b28d Disable update check by default (#1786)
This has received enough criticism that I'm comfortable making it
opt-in.
2023-01-11 13:47:40 -05:00
Charlie Marsh
cfbd068dd5 Bump version to 0.0.218 2023-01-10 21:28:23 -05:00
Charlie Marsh
8aed23fe0a Avoid B023 false-positives for some common builtins (#1776)
This is based on the upstream work in
https://github.com/PyCQA/flake8-bugbear/pull/303 and
https://github.com/PyCQA/flake8-bugbear/pull/305/files.

Resolves #1686.
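For context, B023 warns about closures that capture a loop variable, where Python's late binding means every closure sees the variable's final value. A minimal sketch:

```python
# Each lambda closes over x itself, not x's value at creation time.
functions = [lambda: x for x in range(3)]
results = [f() for f in functions]  # every lambda sees the final x

# Binding the value as a default argument captures it eagerly.
fixed = [lambda x=x: x for x in range(3)]
fixed_results = [f() for f in fixed]
```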
2023-01-10 21:23:48 -05:00
Colin Delahunty
c016c41c71 Pyupgrade: Format specifiers (#1594)
A part of #827. Posting this for visibility. Still has some work to be
done.

Things that still need to be done before this is ready:

- [x] Does not work when the item is being assigned to a variable
- [x] Does not work if being used in a function call
- [x] Fix incorrectly removed calls in the function
- [x] Has not been tested with pyupgrade negative test cases

Tests from pyupgrade can be seen here:
https://github.com/asottile/pyupgrade/blob/main/tests/features/format_literals_test.py

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-10 20:21:04 -05:00
Charlie Marsh
f1a5e53f06 Enable isort-style required-imports enforcement (#1762)
In isort, this is called `add-imports`, but I prefer the declarative
name.

The idea is that by adding the following to your `pyproject.toml`, you
can ensure that the import is included in all files:

```toml
[tool.ruff.isort]
required-imports = ["from __future__ import annotations"]
```

I mostly reverse-engineered isort's logic for making decisions, though I
made some slight tweaks that I think are preferable. A few comments:

- Like isort, we don't enforce this on empty files (like empty
`__init__.py`).
- Like isort, we require that the import is at the top-level.
- isort will skip any docstrings, and any comments on the first three
lines (I think, based on testing). Ruff places the import after the last
docstring or comment in the file preamble (that is: after the last
docstring or comment that comes before the _first_ non-docstring and
non-comment).

Resolves #1700.
2023-01-10 18:12:57 -05:00
Charlie Marsh
1e94e0221f Disable doctests (#1772)
We don't have any doctests, but `cargo test --all` spends more than
half its time on doctests? A little confusing, but this brings the test
time from > 4s to < 2s on my machine.
2023-01-10 15:10:16 -05:00
Martin Fischer
543865c96b Generate RuleCode::origin() via macro (#1770) 2023-01-10 13:20:43 -05:00
Maksudul Haque
b8e3f0bc13 [flake8-bandit] Add Rule for S508 (snmp insecure version) & S509 (snmp weak cryptography) (#1771)
ref: https://github.com/charliermarsh/ruff/issues/1646

Co-authored-by: messense <messense@icloud.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-10 13:13:54 -05:00
Charlie Marsh
643cedb200 Move CONTRIBUTING.md to top-level (#1768) 2023-01-10 07:38:12 -05:00
Charlie Marsh
91620c378a Disable release builds on CI (#1761) 2023-01-10 07:33:03 -05:00
Harutaka Kawamura
b732135795 Do not autofix PT004 and PT005 (#1763)
As @edgarrmondragon commented in
https://github.com/charliermarsh/ruff/pull/1740#issuecomment-1376230550,
just renaming the fixture doesn't work.
2023-01-10 07:24:16 -05:00
messense
9384a081f9 Implement flake8-simplify SIM112 (#1764)
Ref #998
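For context, SIM112 flags lowercase environment variable names, which are non-portable. A minimal sketch (the variable name is illustrative):

```python
import os

os.environ["API_TOKEN"] = "secret"

token = os.environ.get("api_token", "")  # SIM112: use the capitalized name
token = os.environ.get("API_TOKEN", "")  # OK
```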
2023-01-10 07:24:01 -05:00
281 changed files with 6025 additions and 2490 deletions

View File

@@ -27,8 +27,8 @@ jobs:
toolchain: nightly-2022-11-01
override: true
- uses: Swatinem/rust-cache@v1
- run: cargo build --all --release
- run: ./target/release/ruff_dev generate-all
- run: cargo build --all
- run: ./target/debug/ruff_dev generate-all
- run: git diff --quiet README.md || echo "::error file=README.md::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --quiet ruff.schema.json || echo "::error file=ruff.schema.json::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --exit-code -- README.md ruff.schema.json

View File

@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.217
rev: v0.0.220
hooks:
- id: ruff

Cargo.lock generated
View File

@@ -735,7 +735,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.217-dev.0"
version = "0.0.220-dev.0"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1874,7 +1874,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.217"
version = "0.0.220"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1942,7 +1942,7 @@ dependencies = [
[[package]]
name = "ruff_dev"
version = "0.0.217"
version = "0.0.220"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1962,7 +1962,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
version = "0.0.217"
version = "0.0.220"
dependencies = [
"once_cell",
"proc-macro2",

View File

@@ -6,7 +6,7 @@ members = [
[package]
name = "ruff"
version = "0.0.217"
version = "0.0.220"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -19,6 +19,7 @@ license = "MIT"
[lib]
name = "ruff"
crate-type = ["cdylib", "rlib"]
doctest = false
[dependencies]
annotate-snippets = { version = "0.9.1", features = ["color"] }
@@ -51,7 +52,7 @@ path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix
quick-junit = { version = "0.3.2" }
regex = { version = "1.6.0" }
ropey = { version = "1.5.0", features = ["cr_lines", "simd"], default-features = false }
ruff_macros = { version = "0.0.217", path = "ruff_macros" }
ruff_macros = { version = "0.0.220", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }

README.md
View File

@@ -46,7 +46,9 @@ imports, and more.
Ruff is extremely actively developed and used in major open-source projects like:
- [pandas](https://github.com/pandas-dev/pandas)
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Apache Airflow](https://github.com/apache/airflow)
- [Bokeh](https://github.com/bokeh/bokeh)
- [Zulip](https://github.com/zulip/zulip)
- [Pydantic](https://github.com/pydantic/pydantic)
@@ -180,7 +182,7 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.217'
rev: 'v0.0.220'
hooks:
- id: ruff
# Respect `exclude` and `extend-exclude` settings.
@@ -343,21 +345,21 @@ Options:
Disable cache reads
--isolated
Ignore all configuration files
--select <SELECT>
--select <RULE_CODE>
Comma-separated list of rule codes to enable (or ALL, to enable all rules)
--extend-select <EXTEND_SELECT>
--extend-select <RULE_CODE>
Like --select, but adds additional rule codes on top of the selected ones
--ignore <IGNORE>
--ignore <RULE_CODE>
Comma-separated list of rule codes to disable
--extend-ignore <EXTEND_IGNORE>
--extend-ignore <RULE_CODE>
Like --ignore, but adds additional rule codes on top of the ignored ones
--exclude <EXCLUDE>
--exclude <FILE_PATTERN>
List of paths, used to omit files and/or directories from analysis
--extend-exclude <EXTEND_EXCLUDE>
--extend-exclude <FILE_PATTERN>
Like --exclude, but adds additional files and directories on top of those already excluded
--fixable <FIXABLE>
--fixable <RULE_CODE>
List of rule codes to treat as eligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--unfixable <UNFIXABLE>
--unfixable <RULE_CODE>
List of rule codes to treat as ineligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--per-file-ignores <PER_FILE_IGNORES>
List of mappings from file pattern to code to exclude
@@ -597,6 +599,7 @@ For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI
| E902 | IOError | IOError: `...` | |
| E999 | SyntaxError | SyntaxError: `...` | |
| W292 | NoNewLineAtEndOfFile | No newline at end of file | 🛠 |
| W505 | DocLineTooLong | Doc line too long (89 > 88 characters) | |
| W605 | InvalidEscapeSequence | Invalid escape sequence: '\c' | 🛠 |
### mccabe (C90)
@@ -614,6 +617,7 @@ For more, see [isort](https://pypi.org/project/isort/5.10.1/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| I001 | UnsortedImports | Import block is un-sorted or un-formatted | 🛠 |
| I002 | MissingRequiredImport | Missing required import: `from __future__ import ...` | 🛠 |
### pydocstyle (D)
@@ -701,6 +705,7 @@ For more, see [pyupgrade](https://pypi.org/project/pyupgrade/3.2.0/) on PyPI.
| UP027 | RewriteListComprehension | Replace unpacked list comprehension with a generator expression | 🛠 |
| UP028 | RewriteYieldFrom | Replace `yield` over `for` loop with `yield from` | 🛠 |
| UP029 | UnnecessaryBuiltinImport | Unnecessary builtin import: `...` | 🛠 |
| UP030 | FormatLiterals | Use implicit references for positional format fields | 🛠 |
### pep8-naming (N)
@@ -777,6 +782,9 @@ For more, see [flake8-bandit](https://pypi.org/project/flake8-bandit/4.1.1/) on
| S324 | HashlibInsecureHashFunction | Probable use of insecure hash functions in `hashlib`: "..." | |
| S501 | RequestWithNoCertValidation | Probable use of `...` call with `verify=False` disabling SSL certificate checks | |
| S506 | UnsafeYAMLLoad | Probable use of unsafe `yaml.load`. Allows instantiation of arbitrary objects. Consider `yaml.safe_load`. | |
| S508 | SnmpInsecureVersion | The use of SNMPv1 and SNMPv2 is insecure. Use SNMPv3 if able. | |
| S509 | SnmpWeakCryptography | You should not use SNMPv3 without encryption. `noAuthNoPriv` & `authNoPriv` is insecure. | |
| S701 | Jinja2AutoescapeFalse | By default, jinja2 sets `autoescape` to `False`. Consider using `autoescape=True` or the `select_autoescape` function to mitigate XSS vulnerabilities. | |
### flake8-blind-except (BLE)
@@ -916,8 +924,8 @@ For more, see [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style
| PT001 | IncorrectFixtureParenthesesStyle | Use `@pytest.fixture()` over `@pytest.fixture` | 🛠 |
| PT002 | FixturePositionalArgs | Configuration for fixture `...` specified via positional args, use kwargs | |
| PT003 | ExtraneousScopeFunction | `scope='function'` is implied in `@pytest.fixture()` | |
| PT004 | MissingFixtureNameUnderscore | Fixture `...` does not return anything, add leading underscore | 🛠 |
| PT005 | IncorrectFixtureNameUnderscore | Fixture `...` returns a value, remove leading underscore | 🛠 |
| PT004 | MissingFixtureNameUnderscore | Fixture `...` does not return anything, add leading underscore | |
| PT005 | IncorrectFixtureNameUnderscore | Fixture `...` returns a value, remove leading underscore | |
| PT006 | ParametrizeNamesWrongType | Wrong name(s) type in `@pytest.mark.parametrize`, expected `tuple` | 🛠 |
| PT007 | ParametrizeValuesWrongType | Wrong values type in `@pytest.mark.parametrize` expected `list` of `tuple` | |
| PT008 | PatchWithLambda | Use `return_value=` instead of patching with `lambda` | |
@@ -945,10 +953,10 @@ For more, see [flake8-quotes](https://pypi.org/project/flake8-quotes/3.3.1/) on
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| Q000 | BadQuotesInlineString | Single quotes found but double quotes preferred | |
| Q001 | BadQuotesMultilineString | Single quote multiline found but double quotes preferred | |
| Q002 | BadQuotesDocstring | Single quote docstring found but double quotes preferred | |
| Q003 | AvoidQuoteEscape | Change outer quotes to avoid escaping inner quotes | |
| Q000 | BadQuotesInlineString | Single quotes found but double quotes preferred | 🛠 |
| Q001 | BadQuotesMultilineString | Single quote multiline found but double quotes preferred | 🛠 |
| Q002 | BadQuotesDocstring | Single quote docstring found but double quotes preferred | 🛠 |
| Q003 | AvoidQuoteEscape | Change outer quotes to avoid escaping inner quotes | 🛠 |
### flake8-return (RET)
@@ -971,6 +979,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| SIM115 | OpenFileWithContextHandler | Use context handler for opening files | |
| SIM101 | DuplicateIsinstanceCall | Multiple `isinstance` calls for `...`, merge into a single call | 🛠 |
| SIM102 | NestedIfStatements | Use a single `if` statement instead of nested `if` statements | |
| SIM103 | ReturnBoolConditionDirectly | Return the condition `...` directly | 🛠 |
@@ -980,6 +989,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| SIM109 | CompareWithTuple | Use `value in (..., ...)` instead of `value == ... or value == ...` | 🛠 |
| SIM110 | ConvertLoopToAny | Use `return any(x for x in y)` instead of `for` loop | 🛠 |
| SIM111 | ConvertLoopToAll | Use `return all(x for x in y)` instead of `for` loop | 🛠 |
| SIM112 | UseCapitalEnvironmentVariables | Use capitalized environment variable `...` instead of `...` | 🛠 |
| SIM117 | MultipleWithStatements | Use a single `with` statement with multiple contexts instead of nested `with` statements | |
| SIM118 | KeyInDict | Use `key in dict` instead of `key in dict.keys()` | 🛠 |
| SIM201 | NegateEqualOp | Use `left != right` instead of `not left == right` | 🛠 |
@@ -993,6 +1003,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| SIM222 | OrTrue | Use `True` instead of `... or True` | 🛠 |
| SIM223 | AndFalse | Use `False` instead of `... and False` | 🛠 |
| SIM300 | YodaConditions | Yoda conditions are discouraged, use `left == right` instead | 🛠 |
| SIM401 | DictGetWithDefault | Use `var = dict.get(key, "default")` instead of an `if` block | 🛠 |
### flake8-tidy-imports (TID)
@@ -1817,6 +1828,8 @@ Exclusions are based on globs, and can be either:
`directory`). Note that these paths are relative to the project root
(e.g., the directory containing your `pyproject.toml`).
For more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
Note that you'll typically want to use
[`extend-exclude`](#extend-exclude) to modify the excluded paths.
@@ -1864,6 +1877,18 @@ line-length = 100
A list of file patterns to omit from linting, in addition to those
specified by `exclude`.
Exclusions are based on globs, and can be either:
- Single-path patterns, like `.mypy_cache` (to exclude any directory
named `.mypy_cache` in the tree), `foo.py` (to exclude any file named
`foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ).
- Relative patterns, like `directory/foo.py` (to exclude that specific
file) or `directory/*.py` (to exclude any Python files in
`directory`). Note that these paths are relative to the project root
(e.g., the directory containing your `pyproject.toml`).
For more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
**Default value**: `[]`
**Type**: `Vec<FilePattern>`
@@ -2335,7 +2360,7 @@ unfixable = ["F401"]
Enable or disable automatic update checks (overridden by the
`--update-check` and `--no-update-check` command-line flags).
**Default value**: `true`
**Default value**: `false`
**Type**: `bool`
@@ -2343,7 +2368,7 @@ Enable or disable automatic update checks (overridden by the
```toml
[tool.ruff]
update-check = false
update-check = true
```
---
@@ -2998,6 +3023,47 @@ order-by-type = true
---
#### [`relative-imports-order`](#relative-imports-order)
Whether to place "closer" imports (fewer `.` characters, most local)
before "further" imports (more `.` characters, least local), or vice
versa.
The default ("furthest-to-closest") is equivalent to isort's
`reverse-relative` default (`reverse-relative = false`); setting
this to "closest-to-furthest" is equivalent to isort's `reverse-relative
= true`.
**Default value**: `furthest-to-closest`
**Type**: `RelativeImportsOrder`
**Example usage**:
```toml
[tool.ruff.isort]
relative-imports-order = "closest-to-furthest"
```
---
#### [`required-imports`](#required-imports)
Add the specified import line to all files.
**Default value**: `[]`
**Type**: `Vec<String>`
**Example usage**:
```toml
[tool.ruff.isort]
required-imports = ["from __future__ import annotations"]
```
---
#### [`single-line-exclusions`](#single-line-exclusions)
One or more modules to exclude from the single line rule.
@@ -3137,6 +3203,24 @@ ignore-overlong-task-comments = true
---
#### [`max-doc-length`](#max-doc-length)
The maximum line length to allow for line-length violations within
documentation (`W505`), including standalone comments.
**Default value**: `None`
**Type**: `usize`
**Example usage**:
```toml
[tool.ruff.pycodestyle]
max-doc-length = 88
```
---
### `pydocstyle`
#### [`convention`](#convention)
@@ -3191,4 +3275,4 @@ MIT
## Contributing
Contributions are welcome and hugely appreciated. To get started, check out the
[contributing guidelines](https://github.com/charliermarsh/ruff/blob/main/.github/CONTRIBUTING.md).
[contributing guidelines](https://github.com/charliermarsh/ruff/blob/main/CONTRIBUTING.md).

View File

@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
version = "0.0.217"
version = "0.0.220"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.217"
version = "0.0.220"
dependencies = [
"anyhow",
"bincode",

View File

@@ -1,10 +1,11 @@
[package]
name = "flake8-to-ruff"
version = "0.0.217-dev.0"
version = "0.0.220-dev.0"
edition = "2021"
[lib]
name = "flake8_to_ruff"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }

View File

@@ -4,7 +4,7 @@ build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.217"
version = "0.0.220"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },
@@ -35,6 +35,7 @@ urls = { repository = "https://github.com/charliermarsh/ruff" }
[tool.maturin]
bindings = "bin"
python-source = "python"
strip = true
[tool.setuptools]

View File

@@ -0,0 +1,6 @@
from pysnmp.hlapi import CommunityData
CommunityData("public", mpModel=0) # S508
CommunityData("public", mpModel=1) # S508
CommunityData("public", mpModel=2) # OK

View File

@@ -0,0 +1,7 @@
from pysnmp.hlapi import UsmUserData
insecure = UsmUserData("securityName") # S509
auth_no_priv = UsmUserData("securityName", "authName") # S509
less_insecure = UsmUserData("securityName", "authName", "privName") # OK

View File

@@ -0,0 +1,29 @@
import jinja2
from jinja2 import Environment, select_autoescape
templateLoader = jinja2.FileSystemLoader( searchpath="/" )
something = ''
Environment(loader=templateLoader, load=templateLoader, autoescape=True)
templateEnv = jinja2.Environment(autoescape=True,
loader=templateLoader )
Environment(loader=templateLoader, load=templateLoader, autoescape=something) # S701
templateEnv = jinja2.Environment(autoescape=False, loader=templateLoader ) # S701
Environment(loader=templateLoader,
load=templateLoader,
autoescape=False) # S701
Environment(loader=templateLoader, # S701
load=templateLoader)
Environment(loader=templateLoader, autoescape=select_autoescape())
Environment(loader=templateLoader,
autoescape=select_autoescape(['html', 'htm', 'xml']))
Environment(loader=templateLoader,
autoescape=jinja2.select_autoescape(['html', 'htm', 'xml']))
def fake_func():
return 'foobar'
Environment(loader=templateLoader, autoescape=fake_func()) # S701

View File

@@ -25,10 +25,10 @@ for x in range(3):
def check_inside_functions_too():
ls = [lambda: x for x in range(2)]
st = {lambda: x for x in range(2)}
gn = (lambda: x for x in range(2))
dt = {x: lambda: x for x in range(2)}
ls = [lambda: x for x in range(2)] # error
st = {lambda: x for x in range(2)} # error
gn = (lambda: x for x in range(2)) # error
dt = {x: lambda: x for x in range(2)} # error
async def pointless_async_iterable():
@@ -37,9 +37,9 @@ async def pointless_async_iterable():
async def container_for_problems():
async for x in pointless_async_iterable():
functions.append(lambda: x)
functions.append(lambda: x) # error
[lambda: x async for x in pointless_async_iterable()]
[lambda: x async for x in pointless_async_iterable()] # error
a = 10
@@ -47,10 +47,10 @@ b = 0
while True:
a = a_ = a - 1
b += 1
functions.append(lambda: a)
functions.append(lambda: a_)
functions.append(lambda: b)
functions.append(lambda: c) # not a name error because of late binding!
functions.append(lambda: a) # error
functions.append(lambda: a_) # error
functions.append(lambda: b) # error
functions.append(lambda: c) # error, but not a name error due to late binding
c: bool = a > 3
if not c:
break
@@ -58,7 +58,7 @@ while True:
# Nested loops should not duplicate reports
for j in range(2):
for k in range(3):
lambda: j * k
lambda: j * k # error
for j, k, l in [(1, 2, 3)]:
@@ -80,3 +80,95 @@ for var in range(2):
for i in range(3):
lambda: f"{i}"
# `query` is defined in the function, so also defining it in the loop should be OK.
for name in ["a", "b"]:
query = name
def myfunc(x):
query = x
query_post = x
_ = query
_ = query_post
query_post = name # in case iteration order matters
# Bug here because two dict comprehensions reference `name`, one of which is inside
# the lambda. This should be totally fine, of course.
_ = {
k: v
for k, v in reduce(
lambda data, event: merge_mappings(
[data, {name: f(caches, data, event) for name, f in xx}]
),
events,
{name: getattr(group, name) for name in yy},
).items()
if k in backfill_fields
}
# OK to define lambdas if they're immediately consumed, typically as the `key=`
# argument or in a consumed `filter()` (even if a comprehension is better style)
for x in range(2):
# It's not a complete get-out-of-linting-free construct - these should fail:
min([None, lambda: x], key=repr)
sorted([None, lambda: x], key=repr)
any(filter(bool, [None, lambda: x]))
list(filter(bool, [None, lambda: x]))
all(reduce(bool, [None, lambda: x]))
# But all these should be OK:
min(range(3), key=lambda y: x * y)
max(range(3), key=lambda y: x * y)
sorted(range(3), key=lambda y: x * y)
any(map(lambda y: x < y, range(3)))
all(map(lambda y: x < y, range(3)))
set(map(lambda y: x < y, range(3)))
list(map(lambda y: x < y, range(3)))
tuple(map(lambda y: x < y, range(3)))
sorted(map(lambda y: x < y, range(3)))
frozenset(map(lambda y: x < y, range(3)))
any(filter(lambda y: x < y, range(3)))
all(filter(lambda y: x < y, range(3)))
set(filter(lambda y: x < y, range(3)))
list(filter(lambda y: x < y, range(3)))
tuple(filter(lambda y: x < y, range(3)))
sorted(filter(lambda y: x < y, range(3)))
frozenset(filter(lambda y: x < y, range(3)))
any(reduce(lambda y: x | y, range(3)))
all(reduce(lambda y: x | y, range(3)))
set(reduce(lambda y: x | y, range(3)))
list(reduce(lambda y: x | y, range(3)))
tuple(reduce(lambda y: x | y, range(3)))
sorted(reduce(lambda y: x | y, range(3)))
frozenset(reduce(lambda y: x | y, range(3)))
import functools
any(functools.reduce(lambda y: x | y, range(3)))
all(functools.reduce(lambda y: x | y, range(3)))
set(functools.reduce(lambda y: x | y, range(3)))
list(functools.reduce(lambda y: x | y, range(3)))
tuple(functools.reduce(lambda y: x | y, range(3)))
sorted(functools.reduce(lambda y: x | y, range(3)))
frozenset(functools.reduce(lambda y: x | y, range(3)))
# OK because the lambda which references a loop variable is defined in a `return`
# statement, and after we return the loop variable can't be redefined.
# In principle we could do something fancy with `break`, but it's not worth it.
def iter_f(names):
for name in names:
if exists(name):
return lambda: name if exists(name) else None
if foo(name):
return [lambda: name] # known false alarm
if False:
return [lambda: i for i in range(3)] # error

View File

@@ -2,3 +2,10 @@ x = list(x for x in range(3))
x = list(
x for x in range(3)
)
def list(*args, **kwargs):
return None
list(x for x in range(3))

View File

@@ -2,3 +2,10 @@ x = set(x for x in range(3))
x = set(
x for x in range(3)
)
def set(*args, **kwargs):
return None
set(x for x in range(3))

View File

@@ -3,3 +3,10 @@ l = list()
d1 = dict()
d2 = dict(a=1)
d3 = dict(**d2)
def list():
return [1, 2, 3]
a = list()

View File

@@ -4,3 +4,10 @@ list(sorted(x))
reversed(sorted(x))
reversed(sorted(x, key=lambda e: e))
reversed(sorted(x, reverse=True))
def reversed(*args, **kwargs):
return None
reversed(sorted(x, reverse=True))

View File

@@ -31,3 +31,10 @@ class User(BaseModel):
@buzz.setter
def buzz(self, value: str | int) -> None:
...
class User:
bar: str = StringField()
foo: bool = BooleanField()
# ...
bar = StringField() # PIE794

View File

@@ -17,6 +17,15 @@ def fun_with_params_no_docstring(a, b="""
""" """docstring"""):
pass
def fun_with_params_no_docstring2(a, b=c[foo():], c=\
""" not a docstring """):
pass
def function_with_single_docstring(a):
"Single line docstring"
def double_inside_single(a):
'Double inside "single "'

View File

@@ -13,11 +13,19 @@ def foo2():
def fun_with_params_no_docstring(a, b='''
not a
not a
''' '''docstring'''):
    pass


def fun_with_params_no_docstring2(a, b=c[foo():], c=\
''' not a docstring '''):
    pass


def function_with_single_docstring(a):
    'Single line docstring'


def double_inside_single(a):
    "Double inside 'single '"

View File

@@ -1,4 +1,5 @@
this_should_raise_Q003 = 'This is a \'string\''
this_should_raise_Q003 = 'This is \\ a \\\'string\''
this_is_fine = '"This" is a \'string\''
this_is_fine = "This is a 'string'"
this_is_fine = "\"This\" is a 'string'"

View File

@@ -1,13 +1,13 @@
# Bad
# SIM108
if a:
    b = c
else:
    b = d

# Good
# OK
b = c if a else d

# https://github.com/MartinThoma/flake8-simplify/issues/115
# OK
if a:
    b = c
elif c:
@@ -15,6 +15,7 @@ elif c:
else:
    b = d

# OK
if True:
    pass
elif a:
@@ -22,6 +23,7 @@ elif a:
else:
    b = 2

# OK (false negative)
if True:
    pass
else:
@@ -30,19 +32,62 @@ else:
    else:
        b = 2

import sys

# OK
if sys.version_info >= (3, 9):
    randbytes = random.randbytes
else:
    randbytes = _get_random_bytes

# OK
if sys.platform == "darwin":
    randbytes = random.randbytes
else:
    randbytes = _get_random_bytes

# OK
if sys.platform.startswith("linux"):
    randbytes = random.randbytes
else:
    randbytes = _get_random_bytes

# OK (includes comments)
if x > 0:
    # test test
    abc = x
else:
    # test test test
    abc = -x

# OK (too long)
if parser.errno == BAD_FIRST_LINE:
    req = wrappers.Request(sock, server=self._server)
else:
    req = wrappers.Request(
        sock,
        parser.get_method(),
        parser.get_scheme() or _scheme,
        parser.get_path(),
        parser.get_version(),
        parser.get_query_string(),
        server=self._server,
    )

# SIM108
if a:
    b = cccccccccccccccccccccccccccccccccccc
else:
    b = ddddddddddddddddddddddddddddddddddddd

# OK (too long)
if True:
    if a:
        b = cccccccccccccccccccccccccccccccccccc
    else:
        b = ddddddddddddddddddddddddddddddddddddd

View File
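SIM108's suggested rewrite collapses a four-line if/else assignment into a single conditional expression. A quick sketch of the equivalence, with illustrative values:

```python
a = True
c, d = "then", "else"

# The flagged form: four lines to bind one name.
if a:
    b = c
else:
    b = d

# The suggested form: one conditional expression.
b_ternary = c if a else d
```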

@@ -1,5 +1,6 @@
def f():
    for x in iterable:  # SIM110
    # SIM110
    for x in iterable:
        if check(x):
            return True
    return False
@@ -20,14 +21,16 @@ def f():
def f():
    for x in iterable:  # SIM111
    # SIM111
    for x in iterable:
        if check(x):
            return False
    return True


def f():
    for x in iterable:  # SIM111
    # SIM111
    for x in iterable:
        if not x.is_empty():
            return False
    return True
@@ -45,3 +48,70 @@ def f():
        if check(x):
            return "foo"
    return "bar"


def f():
    # SIM110
    for x in iterable:
        if check(x):
            return True
    else:
        return False


def f():
    # SIM111
    for x in iterable:
        if check(x):
            return False
    else:
        return True


def f():
    # SIM110
    for x in iterable:
        if check(x):
            return True
    else:
        return False
    return True


def f():
    # SIM111
    for x in iterable:
        if check(x):
            return False
    else:
        return True
    return False


def f():
    for x in iterable:
        if check(x):
            return True
        elif x.is_empty():
            return True
    return False


def f():
    for x in iterable:
        if check(x):
            return True
        else:
            return True
    return False


def f():
    for x in iterable:
        if check(x):
            return True
        elif x.is_empty():
            return True
        else:
            return True
    return False

View File
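The SIM110/SIM111 rewrites replace these search loops with `any()`/`all()`; the for-else variants added by this change are equivalent because the `else` block runs only when the loop completes without returning. A sketch of the equivalence (the helper names are illustrative):

```python
def contains_loop(iterable, check):
    # SIM110-style search loop, written in the for-else form.
    for x in iterable:
        if check(x):
            return True
    else:
        return False

def contains_any(iterable, check):
    # The rewrite SIM110 suggests.
    return any(check(x) for x in iterable)
```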

@@ -1,10 +1,18 @@
def f():
    for x in iterable:  # SIM110
    # SIM110
    for x in iterable:
        if check(x):
            return True
    return False


def f():
    for x in iterable:
        if check(x):
            return True
    return True


def f():
    for el in [1, 2, 3]:
        if is_true(el):
@@ -13,21 +21,97 @@ def f():
def f():
    for x in iterable:  # SIM111
    # SIM111
    for x in iterable:
        if check(x):
            return False
    return True


def f():
    for x in iterable:  # SIM 111
    # SIM111
    for x in iterable:
        if not x.is_empty():
            return False
    return True


def f():
    for x in iterable:
        if check(x):
            return False
    return False


def f():
    for x in iterable:
        if check(x):
            return "foo"
    return "bar"


def f():
    # SIM110
    for x in iterable:
        if check(x):
            return True
    else:
        return False


def f():
    # SIM111
    for x in iterable:
        if check(x):
            return False
    else:
        return True


def f():
    # SIM110
    for x in iterable:
        if check(x):
            return True
    else:
        return False
    return True


def f():
    # SIM111
    for x in iterable:
        if check(x):
            return False
    else:
        return True
    return False


def f():
    for x in iterable:
        if check(x):
            return True
        elif x.is_empty():
            return True
    return False


def f():
    for x in iterable:
        if check(x):
            return True
        else:
            return True
    return False


def f():
    for x in iterable:
        if check(x):
            return True
        elif x.is_empty():
            return True
        else:
            return True
    return False

View File

@@ -0,0 +1,19 @@
import os
# Bad
os.environ['foo']
os.environ.get('foo')
os.environ.get('foo', 'bar')
os.getenv('foo')
# Good
os.environ['FOO']
os.environ.get('FOO')
os.environ.get('FOO', 'bar')
os.getenv('FOO')

View File

@@ -0,0 +1,6 @@
f = open('foo.txt') # SIM115
data = f.read()
f.close()
with open('foo.txt') as f:  # OK
    data = f.read()

View File

@@ -0,0 +1,87 @@
###
# Positive cases
###
# SIM401 (pattern-1)
if key in a_dict:
    var = a_dict[key]
else:
    var = "default1"

# SIM401 (pattern-2)
if key not in a_dict:
    var = "default2"
else:
    var = a_dict[key]

# SIM401 (default with a complex expression)
if key in a_dict:
    var = a_dict[key]
else:
    var = val1 + val2

# SIM401 (complex expression in key)
if keys[idx] in a_dict:
    var = a_dict[keys[idx]]
else:
    var = "default"

# SIM401 (complex expression in dict)
if key in dicts[idx]:
    var = dicts[idx][key]
else:
    var = "default"

# SIM401 (complex expression in var)
if key in a_dict:
    vars[idx] = a_dict[key]
else:
    vars[idx] = "default"

###
# Negative cases
###

# OK (false negative)
if not key in a_dict:
    var = "default"
else:
    var = a_dict[key]

# OK (different dict)
if key in a_dict:
    var = other_dict[key]
else:
    var = "default"

# OK (different key)
if key in a_dict:
    var = a_dict[other_key]
else:
    var = "default"

# OK (different var)
if key in a_dict:
    var = a_dict[key]
else:
    other_var = "default"

# OK (extra vars in body)
if key in a_dict:
    var = a_dict[key]
    var2 = value2
else:
    var = "default"

# OK (extra vars in orelse)
if key in a_dict:
    var = a_dict[key]
else:
    var2 = value2
    var = "default"

# OK (complex default value)
if key in a_dict:
    var = a_dict[key]
else:
    var = foo()

View File
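Both SIM401 patterns reduce to `dict.get` with a default. A sketch of the equivalence for pattern-1 (the values here are illustrative):

```python
a_dict = {"present": 1}

def lookup_if(key):
    # SIM401 pattern-1: membership test, then index or default.
    if key in a_dict:
        var = a_dict[key]
    else:
        var = "default1"
    return var

def lookup_get(key):
    # The rewrite SIM401 suggests.
    return a_dict.get(key, "default1")
```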

@@ -181,3 +181,17 @@ def f(a: int, b: int) -> str:
def f(a, b):
    return f"{a}{b}"


###
# Unused arguments on magic methods.
###


class C:
    def __init__(self, x) -> None:
        print("Hello, world!")

    def __str__(self) -> str:
        return "Hello, world!"

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        print("Hello, world!")

View File

@@ -0,0 +1,3 @@
from ... import a
from .. import b
from . import c

View File

@@ -0,0 +1,3 @@
#!/usr/bin/env python3
x = 1

View File

@@ -0,0 +1,3 @@
"""Hello, world!"""
x = 1

View File

@@ -0,0 +1 @@
"""Hello, world!"""

View File

@@ -0,0 +1,2 @@
"""Hello, world!"""; x = \
1; y = 2

View File

@@ -0,0 +1 @@
"""Hello, world!"""; x = 1

View File

@@ -0,0 +1,2 @@
from __future__ import generator_stop
import os

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env python3
"""Here's a top-level docstring that's over the limit."""
def f():
    """Here's a docstring that's also over the limit."""
x = 1 # Here's a comment that's over the limit, but it's not standalone.
# Here's a standalone comment that's over the limit.
print("Here's a string that's over the limit, but it's not a docstring.")
"This is also considered a docstring, and is over the limit."

View File

@@ -0,0 +1,8 @@
class SocketError(Exception):
    pass


try:
    raise SocketError()
except SocketError:
    pass

View File

@@ -0,0 +1,36 @@
# Invalid calls; errors expected.
"{0}" "{1}" "{2}".format(1, 2, 3)
"a {3} complicated {1} string with {0} {2}".format(
    "first", "second", "third", "fourth"
)
'{0}'.format(1)
'{0:x}'.format(30)
x = '{0}'.format(1)
'''{0}\n{1}\n'''.format(1, 2)
x = "foo {0}" \
"bar {1}".format(1, 2)
("{0}").format(1)
"\N{snowman} {0}".format(1)
'{' '0}'.format(1)
# These will not change because we are waiting for libcst to fix this issue:
# https://github.com/Instagram/LibCST/issues/846
print(
    'foo{0}'
    'bar{1}'.format(1, 2)
)
print(
    'foo{0}'  # ohai\n"
    'bar{1}'.format(1, 2)
)

View File
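UP030 targets explicit positional indices in `str.format`; when the indices are already in order, the `"{0}"` form is interchangeable with the implicit `"{}"` form (and with an f-string). A small sketch:

```python
first, second = 1, 2

explicit = "{0} {1}".format(first, second)  # flagged by UP030
implicit = "{} {}".format(first, second)    # suggested rewrite
fstring = f"{first} {second}"               # equivalent f-string
```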

@@ -0,0 +1,23 @@
# Valid calls; no errors expected.
'{}'.format(1)
x = ('{0} {1}',)
'{0} {0}'.format(1)
'{0:<{1}}'.format(1, 4)
f"{0}".format(a)
f"{0}".format(1)
print(f"{0}".format(1))
# I did not include the following tests because ruff does not seem to work with
# invalid Python syntax (which is a good thing)
# "{0}"format(1)
# '{'.format(1)", "'}'.format(1)
# ("{0}" # {1}\n"{2}").format(1, 2, 3)

View File

@@ -17,7 +17,7 @@ resources/test/project/examples/docs/docs/file.py:8:5: F841 Local variable `x` i
resources/test/project/project/file.py:1:8: F401 `os` imported but unused
resources/test/project/project/import_file.py:1:1: I001 Import block is un-sorted or un-formatted
Found 7 error(s).
6 potentially fixable with the --fix option.
7 potentially fixable with the --fix option.
```
Running from the project directory itself should exhibit the same behavior:
@@ -32,7 +32,7 @@ examples/docs/docs/file.py:8:5: F841 Local variable `x` is assigned to but never
project/file.py:1:8: F401 `os` imported but unused
project/import_file.py:1:1: I001 Import block is un-sorted or un-formatted
Found 7 error(s).
6 potentially fixable with the --fix option.
7 potentially fixable with the --fix option.
```
Running from the sub-package directory should exhibit the same behavior, but omit the top-level
@@ -43,7 +43,7 @@ files:
docs/file.py:1:1: I001 Import block is un-sorted or un-formatted
docs/file.py:8:5: F841 Local variable `x` is assigned to but never used
Found 2 error(s).
1 potentially fixable with the --fix option.
2 potentially fixable with the --fix option.
```
`--config` should force Ruff to use the specified `pyproject.toml` for all files, and resolve
@@ -74,7 +74,7 @@ docs/docs/file.py:1:1: I001 Import block is un-sorted or un-formatted
docs/docs/file.py:8:5: F841 Local variable `x` is assigned to but never used
excluded/script.py:5:5: F841 Local variable `x` is assigned to but never used
Found 4 error(s).
1 potentially fixable with the --fix option.
4 potentially fixable with the --fix option.
```
Passing an excluded directory directly should report errors in the contained files:

View File

@@ -40,7 +40,7 @@
]
},
"exclude": {
"description": "A list of file patterns to exclude from linting.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nNote that you'll typically want to use [`extend-exclude`](#extend-exclude) to modify the excluded paths.",
"description": "A list of file patterns to exclude from linting.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nFor more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).\n\nNote that you'll typically want to use [`extend-exclude`](#extend-exclude) to modify the excluded paths.",
"type": [
"array",
"null"
@@ -57,7 +57,7 @@
]
},
"extend-exclude": {
"description": "A list of file patterns to omit from linting, in addition to those specified by `exclude`.",
"description": "A list of file patterns to omit from linting, in addition to those specified by `exclude`.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nFor more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).",
"type": [
"array",
"null"
@@ -820,6 +820,27 @@
"null"
]
},
"relative-imports-order": {
"description": "Whether to place \"closer\" imports (fewer `.` characters, most local) before \"further\" imports (more `.` characters, least local), or vice versa.\n\nThe default (\"furthest-to-closest\") is equivalent to isort's `reverse-relative` default (`reverse-relative = false`); setting this to \"closest-to-furthest\" is equivalent to isort's `reverse-relative = true`.",
"anyOf": [
{
"$ref": "#/definitions/RelatveImportsOrder"
},
{
"type": "null"
}
]
},
"required-imports": {
"description": "Add the specified import line to all files.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"single-line-exclusions": {
"description": "One or more modules to exclude from the single line rule.",
"type": [
@@ -935,6 +956,15 @@
"boolean",
"null"
]
},
"max-doc-length": {
"description": "The maximum line length to allow for line-length violations within documentation (`W505`), including standalone comments.",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
},
"additionalProperties": false
@@ -988,6 +1018,24 @@
}
]
},
"RelatveImportsOrder": {
"oneOf": [
{
"description": "Place \"closer\" imports (fewer `.` characters, most local) before \"further\" imports (more `.` characters, least local).",
"type": "string",
"enum": [
"closest-to-furthest"
]
},
{
"description": "Place \"further\" imports (more `.` characters, least local) imports before \"closer\" imports (fewer `.` characters, most local).",
"type": "string",
"enum": [
"furthest-to-closest"
]
}
]
},
"RuleCodePrefix": {
"type": "string",
"enum": [
@@ -1268,6 +1316,7 @@
"I0",
"I00",
"I001",
"I002",
"I2",
"I25",
"I252",
@@ -1493,6 +1542,11 @@
"S50",
"S501",
"S506",
"S508",
"S509",
"S7",
"S70",
"S701",
"SIM",
"SIM1",
"SIM10",
@@ -1506,6 +1560,8 @@
"SIM11",
"SIM110",
"SIM111",
"SIM112",
"SIM115",
"SIM117",
"SIM118",
"SIM2",
@@ -1525,6 +1581,9 @@
"SIM3",
"SIM30",
"SIM300",
"SIM4",
"SIM40",
"SIM401",
"T",
"T1",
"T10",
@@ -1592,10 +1651,15 @@
"UP027",
"UP028",
"UP029",
"UP03",
"UP030",
"W",
"W2",
"W29",
"W292",
"W5",
"W50",
"W505",
"W6",
"W60",
"W605",

View File
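As configuration, the new option from the schema above would be set under ruff's isort settings in `pyproject.toml`. The table name below is an assumption based on ruff's usual settings layout; the two values are the enum variants from the schema:

```toml
[tool.ruff.isort]
# Default, equivalent to isort's `reverse-relative = false`:
relative-imports-order = "furthest-to-closest"
# Equivalent to isort's `reverse-relative = true`:
# relative-imports-order = "closest-to-furthest"
```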

@@ -1,8 +1,12 @@
[package]
name = "ruff_dev"
version = "0.0.217"
version = "0.0.220"
edition = "2021"
[lib]
name = "ruff_dev"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }
clap = { version = "4.0.1", features = ["derive"] }

View File

@@ -5,9 +5,7 @@ use std::path::PathBuf;
use anyhow::Result;
use clap::Args;
use ruff::source_code_generator::SourceCodeGenerator;
use ruff::source_code_locator::SourceCodeLocator;
use ruff::source_code_style::SourceCodeStyleDetector;
use ruff::source_code::{Generator, Locator, Stylist};
use rustpython_parser::parser;
#[derive(Args)]
@@ -20,9 +18,9 @@ pub struct Cli {
pub fn main(cli: &Cli) -> Result<()> {
let contents = fs::read_to_string(&cli.file)?;
let python_ast = parser::parse_program(&contents, &cli.file.to_string_lossy())?;
let locator = SourceCodeLocator::new(&contents);
let stylist = SourceCodeStyleDetector::from_contents(&contents, &locator);
let mut generator: SourceCodeGenerator = (&stylist).into();
let locator = Locator::new(&contents);
let stylist = Stylist::from_contents(&contents, &locator);
let mut generator: Generator = (&stylist).into();
generator.unparse_suite(&python_ast);
println!("{}", generator.generate());
Ok(())

View File

@@ -1,10 +1,11 @@
[package]
name = "ruff_macros"
version = "0.0.217"
version = "0.0.220"
edition = "2021"
[lib]
proc-macro = true
doctest = false
[dependencies]
once_cell = { version = "1.17.0" }

View File

@@ -12,9 +12,12 @@
)]
#![forbid(unsafe_code)]
use syn::{parse_macro_input, DeriveInput};
use proc_macro2::Span;
use quote::quote;
use syn::{parse_macro_input, DeriveInput, Ident};
mod config;
mod prefixes;
mod rule_code_prefix;
#[proc_macro_derive(ConfigurationOptions, attributes(option, doc, option_group))]
@@ -34,3 +37,23 @@ pub fn derive_rule_code_prefix(input: proc_macro::TokenStream) -> proc_macro::To
.unwrap_or_else(syn::Error::into_compile_error)
.into()
}
#[proc_macro]
pub fn origin_by_code(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
    let ident = parse_macro_input!(item as Ident).to_string();
    let mut iter = prefixes::PREFIX_TO_ORIGIN.iter();
    let origin = loop {
        let (prefix, origin) = iter
            .next()
            .unwrap_or_else(|| panic!("code doesn't start with any recognized prefix: {ident}"));
        if ident.starts_with(prefix) {
            break origin;
        }
    };
    let prefix = Ident::new(origin, Span::call_site());
    quote! {
        RuleOrigin::#prefix
    }
    .into()
}

View File

@@ -0,0 +1,53 @@
// Longer prefixes should come first so that you can find an origin for a code
// by simply picking the first entry that starts with the given prefix.
pub const PREFIX_TO_ORIGIN: &[(&str, &str)] = &[
    ("ANN", "Flake8Annotations"),
    ("ARG", "Flake8UnusedArguments"),
    ("A", "Flake8Builtins"),
    ("BLE", "Flake8BlindExcept"),
    ("B", "Flake8Bugbear"),
    ("C4", "Flake8Comprehensions"),
    ("C9", "McCabe"),
    ("DTZ", "Flake8Datetimez"),
    ("D", "Pydocstyle"),
    ("ERA", "Eradicate"),
    ("EM", "Flake8ErrMsg"),
    ("E", "Pycodestyle"),
    ("FBT", "Flake8BooleanTrap"),
    ("F", "Pyflakes"),
    ("ICN", "Flake8ImportConventions"),
    ("ISC", "Flake8ImplicitStrConcat"),
    ("I", "Isort"),
    ("N", "PEP8Naming"),
    ("PD", "PandasVet"),
    ("PGH", "PygrepHooks"),
    ("PL", "Pylint"),
    ("PT", "Flake8PytestStyle"),
    ("Q", "Flake8Quotes"),
    ("RET", "Flake8Return"),
    ("SIM", "Flake8Simplify"),
    ("S", "Flake8Bandit"),
    ("T10", "Flake8Debugger"),
    ("T20", "Flake8Print"),
    ("TID", "Flake8TidyImports"),
    ("UP", "Pyupgrade"),
    ("W", "Pycodestyle"),
    ("YTT", "Flake82020"),
    ("PIE", "Flake8Pie"),
    ("RUF", "Ruff"),
];

#[cfg(test)]
mod tests {
    use super::PREFIX_TO_ORIGIN;

    #[test]
    fn order() {
        for (idx, (prefix, _)) in PREFIX_TO_ORIGIN.iter().enumerate() {
            for (prior_prefix, _) in PREFIX_TO_ORIGIN[..idx].iter() {
                assert!(!prefix.starts_with(prior_prefix));
            }
        }
    }
}

View File
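The ordering invariant that the `order` test above enforces (no entry may be preceded by one of its own prefixes) is what makes first-match lookup correct. A Python sketch over a small subset of the table:

```python
# Subset of PREFIX_TO_ORIGIN; longer prefixes must come first, or "ICN001"
# would match the shorter "I" entry before ever reaching "ICN".
PREFIX_TO_ORIGIN = [
    ("ICN", "Flake8ImportConventions"),
    ("ISC", "Flake8ImplicitStrConcat"),
    ("I", "Isort"),
]

def origin_for(code):
    # First entry whose prefix matches wins.
    for prefix, origin in PREFIX_TO_ORIGIN:
        if code.startswith(prefix):
            return origin
    raise ValueError(f"code doesn't start with any recognized prefix: {code}")
```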

@@ -34,7 +34,7 @@ def main(*, plugin: str, url: str) -> None:
    with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/rules.rs"), "w+") as fp:
        fp.write("use crate::checkers::ast::Checker;\n")

    with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/mod.rs"), "w+") as fp:
        fp.write("pub mod rules;\n")
        fp.write("pub(crate) mod rules;\n")
        fp.write("\n")
        fp.write(
            """#[cfg(test)]
View File

@@ -116,33 +116,6 @@ impl Violation for %s {
            fp.write("\n")
            has_written = True

    # Add the relevant code-to-origin pair to `src/registry.rs`.
    with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
        content = fp.read()

    seen_impl = False
    has_written = False
    with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
        for line in content.splitlines():
            fp.write(line)
            fp.write("\n")
            if has_written:
                continue
            if line.startswith("impl RuleCode"):
                seen_impl = True
                continue
            if not seen_impl:
                continue
            if line.strip() == f"// {origin}":
                indent = line.split("//")[0]
                fp.write(f"{indent}RuleCode::{code} => RuleOrigin::{pascal_case(origin)},")
                fp.write("\n")
                has_written = True


if __name__ == "__main__":
    parser = argparse.ArgumentParser(

View File

@@ -1,5 +1,12 @@
use rustpython_ast::{Expr, Stmt, StmtKind};
pub fn name(stmt: &Stmt) -> &str {
    match &stmt.node {
        StmtKind::FunctionDef { name, .. } | StmtKind::AsyncFunctionDef { name, .. } => name,
        _ => panic!("Expected StmtKind::FunctionDef | StmtKind::AsyncFunctionDef"),
    }
}

pub fn decorator_list(stmt: &Stmt) -> &Vec<Expr> {
    match &stmt.node {
        StmtKind::FunctionDef { decorator_list, .. }

View File

@@ -388,6 +388,12 @@ impl<'a> From<&'a Box<Expr>> for Box<ComparableExpr<'a>> {
    }
}

impl<'a> From<&'a Box<Expr>> for ComparableExpr<'a> {
    fn from(expr: &'a Box<Expr>) -> Self {
        (&**expr).into()
    }
}

impl<'a> From<&'a Expr> for ComparableExpr<'a> {
    fn from(expr: &'a Expr) -> Self {
        match &expr.node {

View File

@@ -12,9 +12,7 @@ use rustpython_parser::lexer::Tok;
use rustpython_parser::token::StringKind;
use crate::ast::types::{Binding, BindingKind, Range};
use crate::source_code_generator::SourceCodeGenerator;
use crate::source_code_style::SourceCodeStyleDetector;
use crate::SourceCodeLocator;
use crate::source_code::{Generator, Locator, Stylist};
/// Create an `Expr` with default location from an `ExprKind`.
pub fn create_expr(node: ExprKind) -> Expr {
@@ -27,15 +25,15 @@ pub fn create_stmt(node: StmtKind) -> Stmt {
}
/// Generate source code from an `Expr`.
pub fn unparse_expr(expr: &Expr, stylist: &SourceCodeStyleDetector) -> String {
let mut generator: SourceCodeGenerator = stylist.into();
pub fn unparse_expr(expr: &Expr, stylist: &Stylist) -> String {
let mut generator: Generator = stylist.into();
generator.unparse_expr(expr, 0);
generator.generate()
}
/// Generate source code from an `Stmt`.
pub fn unparse_stmt(stmt: &Stmt, stylist: &SourceCodeStyleDetector) -> String {
let mut generator: SourceCodeGenerator = stylist.into();
pub fn unparse_stmt(stmt: &Stmt, stylist: &Stylist) -> String {
let mut generator: Generator = stylist.into();
generator.unparse_stmt(stmt);
generator.generate()
}
@@ -430,6 +428,13 @@ pub fn collect_arg_names<'a>(arguments: &'a Arguments) -> FxHashSet<&'a str> {
arg_names
}
/// Returns `true` if a statement or expression includes at least one comment.
pub fn has_comments<T>(located: &Located<T>, locator: &Locator) -> bool {
    lexer::make_tokenizer(&locator.slice_source_code_range(&Range::from_located(located)))
        .flatten()
        .any(|(_, tok, _)| matches!(tok, Tok::Comment(..)))
}
/// Returns `true` if a call is an argumented `super` invocation.
pub fn is_super_call_with_arguments(func: &Expr, args: &[Expr]) -> bool {
if let ExprKind::Name { id, .. } = &func.node {
@@ -476,14 +481,14 @@ pub fn to_absolute(relative: Location, base: Location) -> Location {
}
/// Return `true` if a `Stmt` has leading content.
pub fn match_leading_content(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn match_leading_content(stmt: &Stmt, locator: &Locator) -> bool {
let range = Range::new(Location::new(stmt.location.row(), 0), stmt.location);
let prefix = locator.slice_source_code_range(&range);
prefix.chars().any(|char| !char.is_whitespace())
}
/// Return `true` if a `Stmt` has trailing content.
pub fn match_trailing_content(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn match_trailing_content(stmt: &Stmt, locator: &Locator) -> bool {
let range = Range::new(
stmt.end_location.unwrap(),
Location::new(stmt.end_location.unwrap().row() + 1, 0),
@@ -501,7 +506,7 @@ pub fn match_trailing_content(stmt: &Stmt, locator: &SourceCodeLocator) -> bool
}
/// Return the number of trailing empty lines following a statement.
pub fn count_trailing_lines(stmt: &Stmt, locator: &SourceCodeLocator) -> usize {
pub fn count_trailing_lines(stmt: &Stmt, locator: &Locator) -> usize {
let suffix =
locator.slice_source_code_at(&Location::new(stmt.end_location.unwrap().row() + 1, 0));
suffix
@@ -513,7 +518,7 @@ pub fn count_trailing_lines(stmt: &Stmt, locator: &SourceCodeLocator) -> usize {
/// Return the appropriate visual `Range` for any message that spans a `Stmt`.
/// Specifically, this method returns the range of a function or class name,
/// rather than that of the entire function or class body.
pub fn identifier_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Range {
pub fn identifier_range(stmt: &Stmt, locator: &Locator) -> Range {
if matches!(
stmt.node,
StmtKind::ClassDef { .. }
@@ -532,7 +537,7 @@ pub fn identifier_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Range {
}
/// Like `identifier_range`, but accepts a `Binding`.
pub fn binding_range(binding: &Binding, locator: &SourceCodeLocator) -> Range {
pub fn binding_range(binding: &Binding, locator: &Locator) -> Range {
if matches!(
binding.kind,
BindingKind::ClassDefinition | BindingKind::FunctionDefinition
@@ -548,7 +553,7 @@ pub fn binding_range(binding: &Binding, locator: &SourceCodeLocator) -> Range {
}
// Return the ranges of `Name` tokens within a specified node.
pub fn find_names<T>(located: &Located<T>, locator: &SourceCodeLocator) -> Vec<Range> {
pub fn find_names<T>(located: &Located<T>, locator: &Locator) -> Vec<Range> {
let contents = locator.slice_source_code_range(&Range::from_located(located));
lexer::make_tokenizer_located(&contents, located.location)
.flatten()
@@ -561,10 +566,7 @@ pub fn find_names<T>(located: &Located<T>, locator: &SourceCodeLocator) -> Vec<R
}
/// Return the `Range` of `name` in `Excepthandler`.
pub fn excepthandler_name_range(
handler: &Excepthandler,
locator: &SourceCodeLocator,
) -> Option<Range> {
pub fn excepthandler_name_range(handler: &Excepthandler, locator: &Locator) -> Option<Range> {
let ExcepthandlerKind::ExceptHandler {
name, type_, body, ..
} = &handler.node;
@@ -587,7 +589,7 @@ pub fn excepthandler_name_range(
}
/// Return the `Range` of `except` in `Excepthandler`.
pub fn except_range(handler: &Excepthandler, locator: &SourceCodeLocator) -> Range {
pub fn except_range(handler: &Excepthandler, locator: &Locator) -> Range {
let ExcepthandlerKind::ExceptHandler { body, type_, .. } = &handler.node;
let end = if let Some(type_) = type_ {
type_.location
@@ -612,7 +614,7 @@ pub fn except_range(handler: &Excepthandler, locator: &SourceCodeLocator) -> Ran
}
/// Find f-strings that don't contain any formatted values in a `JoinedStr`.
pub fn find_useless_f_strings(expr: &Expr, locator: &SourceCodeLocator) -> Vec<(Range, Range)> {
pub fn find_useless_f_strings(expr: &Expr, locator: &Locator) -> Vec<(Range, Range)> {
let contents = locator.slice_source_code_range(&Range::from_located(expr));
lexer::make_tokenizer_located(&contents, expr.location)
.flatten()
@@ -649,7 +651,7 @@ pub fn find_useless_f_strings(expr: &Expr, locator: &SourceCodeLocator) -> Vec<(
}
/// Return the `Range` of `else` in `For`, `AsyncFor`, and `While` statements.
pub fn else_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Range> {
pub fn else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
match &stmt.node {
StmtKind::For { body, orelse, .. }
| StmtKind::AsyncFor { body, orelse, .. }
@@ -683,7 +685,7 @@ pub fn else_range(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Range> {
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_continuation(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn preceded_by_continuation(stmt: &Stmt, locator: &Locator) -> bool {
// Does the previous line end in a continuation? This will have a specific
// false-positive, which is that if the previous line ends in a comment, it
// will be treated as a continuation. So we should only use this information to
@@ -704,16 +706,31 @@ pub fn preceded_by_continuation(stmt: &Stmt, locator: &SourceCodeLocator) -> boo
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements preceding it.
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn preceded_by_multi_statement_line(stmt: &Stmt, locator: &Locator) -> bool {
match_leading_content(stmt, locator) || preceded_by_continuation(stmt, locator)
}
/// Return `true` if a `Stmt` appears to be part of a multi-statement line, with
/// other statements following it.
pub fn followed_by_multi_statement_line(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
pub fn followed_by_multi_statement_line(stmt: &Stmt, locator: &Locator) -> bool {
match_trailing_content(stmt, locator)
}
/// Return `true` if a `Stmt` is a docstring.
pub fn is_docstring_stmt(stmt: &Stmt) -> bool {
    if let StmtKind::Expr { value } = &stmt.node {
        matches!(
            value.node,
            ExprKind::Constant {
                value: Constant::Str { .. },
                ..
            }
        )
    } else {
        false
    }
}
#[derive(Default)]
/// A simple representation of a call's positional and keyword arguments.
pub struct SimpleCallArgs<'a> {
@@ -759,6 +776,11 @@ impl<'a> SimpleCallArgs<'a> {
}
None
}
    /// Get the number of positional and keyword arguments used.
    pub fn len(&self) -> usize {
        self.args.len() + self.kwargs.len()
    }
}
#[cfg(test)]
@@ -772,7 +794,7 @@ mod tests {
else_range, identifier_range, match_module_member, match_trailing_content,
};
use crate::ast::types::Range;
use crate::source_code_locator::SourceCodeLocator;
use crate::source_code::Locator;
#[test]
fn builtin() -> Result<()> {
@@ -922,25 +944,25 @@ mod tests {
let contents = "x = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
let contents = "x = 1; y = 2";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(match_trailing_content(stmt, &locator));
let contents = "x = 1 ";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
let contents = "x = 1 # Comment";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
let contents = r#"
@@ -950,7 +972,7 @@ y = 2
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert!(!match_trailing_content(stmt, &locator));
Ok(())
@@ -961,7 +983,7 @@ y = 2
let contents = "def f(): pass".trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 4), Location::new(1, 5),)
@@ -975,7 +997,7 @@ def \
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(2, 2), Location::new(2, 3),)
@@ -984,7 +1006,7 @@ def \
let contents = "class Class(): pass".trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 6), Location::new(1, 11),)
@@ -993,7 +1015,7 @@ def \
let contents = "class Class: pass".trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 6), Location::new(1, 11),)
@@ -1007,7 +1029,7 @@ class Class():
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(2, 6), Location::new(2, 11),)
@@ -1016,7 +1038,7 @@ class Class():
let contents = r#"x = y + 1"#.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
identifier_range(stmt, &locator),
Range::new(Location::new(1, 0), Location::new(1, 9),)
@@ -1036,7 +1058,7 @@ else:
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
let range = else_range(stmt, &locator).unwrap();
assert_eq!(range.location.row(), 3);
assert_eq!(range.location.column(), 0);


@@ -74,7 +74,6 @@ pub enum ScopeKind<'a> {
Function(FunctionDef<'a>),
Generator,
Module,
Arg,
Lambda(Lambda<'a>),
}


@@ -1,225 +1 @@
use std::borrow::Cow;
use std::collections::BTreeSet;
use itertools::Itertools;
use ropey::RopeBuilder;
use rustpython_parser::ast::Location;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::registry::Diagnostic;
use crate::source_code_locator::SourceCodeLocator;
#[derive(Debug, Copy, Clone, Hash)]
pub enum Mode {
Generate,
Apply,
Diff,
None,
}
impl From<bool> for Mode {
fn from(value: bool) -> Self {
if value {
Mode::Apply
} else {
Mode::None
}
}
}
/// Auto-fix errors in a file, and write the fixed source code to disk.
pub fn fix_file<'a>(
diagnostics: &'a [Diagnostic],
locator: &'a SourceCodeLocator<'a>,
) -> Option<(Cow<'a, str>, usize)> {
if diagnostics.iter().all(|check| check.fix.is_none()) {
return None;
}
Some(apply_fixes(
diagnostics.iter().filter_map(|check| check.fix.as_ref()),
locator,
))
}
/// Apply a series of fixes.
fn apply_fixes<'a>(
fixes: impl Iterator<Item = &'a Fix>,
locator: &'a SourceCodeLocator<'a>,
) -> (Cow<'a, str>, usize) {
let mut output = RopeBuilder::new();
let mut last_pos: Location = Location::new(1, 0);
let mut applied: BTreeSet<&Fix> = BTreeSet::default();
let mut num_fixed: usize = 0;
for fix in fixes.sorted_by_key(|fix| fix.location) {
// If we already applied an identical fix as part of another correction, skip
// any re-application.
if applied.contains(&fix) {
num_fixed += 1;
continue;
}
// Best-effort approach: if this fix overlaps with a fix we've already applied,
// skip it.
if last_pos > fix.location {
continue;
}
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice_source_code_range(&Range::new(last_pos, fix.location));
output.append(&slice);
// Add the patch itself.
output.append(&fix.content);
// Track that the fix was applied.
last_pos = fix.end_location;
applied.insert(fix);
num_fixed += 1;
}
// Add the remaining content.
let slice = locator.slice_source_code_at(&last_pos);
output.append(&slice);
(Cow::from(output.finish()), num_fixed)
}
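The `apply_fixes` loop above can be condensed into a self-contained sketch. This version uses plain byte offsets in place of ruff's `(row, column)` `Location`s and a `String` in place of the `RopeBuilder`; the `Fix` fields and offsets here are illustrative, not ruff's actual API.

```rust
use std::collections::BTreeSet;

#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct Fix {
    pub content: String,
    pub start: usize,
    pub end: usize,
}

/// Apply fixes in source order; count duplicates as applied, skip overlaps.
pub fn apply_fixes(source: &str, mut fixes: Vec<Fix>) -> (String, usize) {
    fixes.sort_by_key(|fix| fix.start);
    let mut output = String::new();
    let mut last_pos = 0;
    let mut applied: BTreeSet<Fix> = BTreeSet::new();
    let mut num_fixed = 0;
    for fix in fixes {
        // An identical fix was already applied as part of another correction.
        if applied.contains(&fix) {
            num_fixed += 1;
            continue;
        }
        // Best-effort: skip any fix that overlaps one we've already applied.
        if fix.start < last_pos {
            continue;
        }
        // Copy the untouched source up to the fix, then the patch itself.
        output.push_str(&source[last_pos..fix.start]);
        output.push_str(&fix.content);
        last_pos = fix.end;
        applied.insert(fix);
        num_fixed += 1;
    }
    // Add the remaining content.
    output.push_str(&source[last_pos..]);
    (output, num_fixed)
}
```

The overlap check is the same best-effort policy as above: once a fix ends at `last_pos`, any later fix starting before that point is silently dropped rather than merged.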
#[cfg(test)]
mod tests {
use rustpython_parser::ast::Location;
use crate::autofix::fixer::apply_fixes;
use crate::autofix::Fix;
use crate::SourceCodeLocator;
#[test]
fn empty_file() {
let fixes = vec![];
let locator = SourceCodeLocator::new(r#""#);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(contents, "");
assert_eq!(fixed, 0);
}
#[test]
fn apply_single_replacement() {
let fixes = vec![Fix {
content: "Bar".to_string(),
location: Location::new(1, 8),
end_location: Location::new(1, 14),
}];
let locator = SourceCodeLocator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A(Bar):
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
#[test]
fn apply_single_removal() {
let fixes = vec![Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
}];
let locator = SourceCodeLocator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 1);
}
#[test]
fn apply_double_removal() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 16),
},
Fix {
content: String::new(),
location: Location::new(1, 16),
end_location: Location::new(1, 23),
},
];
let locator = SourceCodeLocator::new(
r#"
class A(object, object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 2);
}
#[test]
fn ignore_overlapping_fixes() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
},
Fix {
content: "ignored".to_string(),
location: Location::new(1, 9),
end_location: Location::new(1, 11),
},
];
let locator = SourceCodeLocator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
}


@@ -9,10 +9,10 @@ use crate::ast::helpers;
use crate::ast::helpers::to_absolute;
use crate::ast::types::Range;
use crate::ast::whitespace::LinesWithTrailingNewline;
use crate::autofix::Fix;
use crate::cst::helpers::compose_module_path;
use crate::cst::matchers::match_module;
use crate::source_code_locator::SourceCodeLocator;
use crate::fix::Fix;
use crate::source_code::Locator;
/// Determine if a body contains only a single statement, taking into account
/// deleted statements.
@@ -78,7 +78,7 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
/// Return the location of a trailing semicolon following a `Stmt`, if it's part
/// of a multi-statement line.
fn trailing_semicolon(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Location> {
fn trailing_semicolon(stmt: &Stmt, locator: &Locator) -> Option<Location> {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
let trimmed = line.trim();
@@ -100,7 +100,7 @@ fn trailing_semicolon(stmt: &Stmt, locator: &SourceCodeLocator) -> Option<Locati
}
/// Find the next valid break for a `Stmt` after a semicolon.
fn next_stmt_break(semicolon: Location, locator: &SourceCodeLocator) -> Location {
fn next_stmt_break(semicolon: Location, locator: &Locator) -> Location {
let start_location = Location::new(semicolon.row(), semicolon.column() + 1);
let contents = locator.slice_source_code_at(&start_location);
for (row, line) in LinesWithTrailingNewline::from(&contents).enumerate() {
@@ -133,7 +133,7 @@ fn next_stmt_break(semicolon: Location, locator: &SourceCodeLocator) -> Location
}
/// Return `true` if a `Stmt` occurs at the end of a file.
fn is_end_of_file(stmt: &Stmt, locator: &SourceCodeLocator) -> bool {
fn is_end_of_file(stmt: &Stmt, locator: &Locator) -> bool {
let contents = locator.slice_source_code_at(&stmt.end_location.unwrap());
contents.is_empty()
}
@@ -155,7 +155,7 @@ pub fn delete_stmt(
stmt: &Stmt,
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &SourceCodeLocator,
locator: &Locator,
) -> Result<Fix> {
if parent
.map(|parent| is_lone_child(stmt, parent, deleted))
@@ -197,7 +197,7 @@ pub fn remove_unused_imports<'a>(
stmt: &Stmt,
parent: Option<&Stmt>,
deleted: &[&Stmt],
locator: &SourceCodeLocator,
locator: &Locator,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(stmt));
let mut tree = match_module(&module_text)?;
@@ -299,20 +299,20 @@ mod tests {
use rustpython_parser::parser;
use crate::autofix::helpers::{next_stmt_break, trailing_semicolon};
use crate::source_code_locator::SourceCodeLocator;
use crate::source_code::Locator;
#[test]
fn find_semicolon() -> Result<()> {
let contents = "x = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(trailing_semicolon(stmt, &locator), None);
let contents = "x = 1; y = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
trailing_semicolon(stmt, &locator),
Some(Location::new(1, 5))
@@ -321,7 +321,7 @@ mod tests {
let contents = "x = 1 ; y = 1";
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
trailing_semicolon(stmt, &locator),
Some(Location::new(1, 6))
@@ -334,7 +334,7 @@ x = 1 \
.trim();
let program = parser::parse_program(contents, "<filename>")?;
let stmt = program.first().unwrap();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
trailing_semicolon(stmt, &locator),
Some(Location::new(2, 2))
@@ -346,14 +346,14 @@ x = 1 \
#[test]
fn find_next_stmt_break() {
let contents = "x = 1; y = 1";
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
next_stmt_break(Location::new(1, 4), &locator),
Location::new(1, 5)
);
let contents = "x = 1 ; y = 1";
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
next_stmt_break(Location::new(1, 5), &locator),
Location::new(1, 6)
@@ -364,7 +364,7 @@ x = 1 \
; y = 1
"#
.trim();
let locator = SourceCodeLocator::new(contents);
let locator = Locator::new(contents);
assert_eq!(
next_stmt_break(Location::new(2, 2), &locator),
Location::new(2, 4)
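The trailing-semicolon helper exercised by the tests above reduces, for the single-line case, to a short scan past the statement's end. This sketch operates on one line of text rather than a `Locator` (the real helper can also cross continuation lines); the function name and signature are illustrative.

```rust
/// Given a statement ending at byte column `stmt_end` of `line`, return
/// the column of a trailing `;`, if any — the single-line analogue of
/// the `trailing_semicolon` helper tested above.
fn trailing_semicolon_column(line: &str, stmt_end: usize) -> Option<usize> {
    let rest = &line[stmt_end..];
    let trimmed = rest.trim_start();
    if trimmed.starts_with(';') {
        // Account for any whitespace between the statement and the `;`.
        Some(stmt_end + (rest.len() - trimmed.len()))
    } else {
        None
    }
}
```

For `"x = 1; y = 1"` the statement ends at column 5 and the semicolon sits at column 5; with an extra space, `"x = 1 ; y = 1"`, it moves to column 6 — matching the `Location`s asserted in the tests above.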


@@ -1,38 +1,210 @@
use std::borrow::Cow;
use std::collections::BTreeSet;
use itertools::Itertools;
use ropey::RopeBuilder;
use rustpython_ast::Location;
use serde::{Deserialize, Serialize};
use crate::ast::types::Range;
use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::source_code::Locator;
pub mod fixer;
pub mod helpers;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
pub struct Fix {
pub content: String,
pub location: Location,
pub end_location: Location,
/// Auto-fix errors in a file, and write the fixed source code to disk.
pub fn fix_file<'a>(
diagnostics: &'a [Diagnostic],
locator: &'a Locator<'a>,
) -> Option<(Cow<'a, str>, usize)> {
if diagnostics.iter().all(|check| check.fix.is_none()) {
return None;
}
Some(apply_fixes(
diagnostics.iter().filter_map(|check| check.fix.as_ref()),
locator,
))
}
impl Fix {
pub fn deletion(start: Location, end: Location) -> Self {
Self {
/// Apply a series of fixes.
fn apply_fixes<'a>(
fixes: impl Iterator<Item = &'a Fix>,
locator: &'a Locator<'a>,
) -> (Cow<'a, str>, usize) {
let mut output = RopeBuilder::new();
let mut last_pos: Location = Location::new(1, 0);
let mut applied: BTreeSet<&Fix> = BTreeSet::default();
let mut num_fixed: usize = 0;
for fix in fixes.sorted_by_key(|fix| fix.location) {
// If we already applied an identical fix as part of another correction, skip
// any re-application.
if applied.contains(&fix) {
num_fixed += 1;
continue;
}
// Best-effort approach: if this fix overlaps with a fix we've already applied,
// skip it.
if last_pos > fix.location {
continue;
}
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice_source_code_range(&Range::new(last_pos, fix.location));
output.append(&slice);
// Add the patch itself.
output.append(&fix.content);
// Track that the fix was applied.
last_pos = fix.end_location;
applied.insert(fix);
num_fixed += 1;
}
// Add the remaining content.
let slice = locator.slice_source_code_at(&last_pos);
output.append(&slice);
(Cow::from(output.finish()), num_fixed)
}
#[cfg(test)]
mod tests {
use rustpython_parser::ast::Location;
use crate::autofix::apply_fixes;
use crate::fix::Fix;
use crate::source_code::Locator;
#[test]
fn empty_file() {
let fixes = vec![];
let locator = Locator::new(r#""#);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(contents, "");
assert_eq!(fixed, 0);
}
#[test]
fn apply_single_replacement() {
let fixes = vec![Fix {
content: "Bar".to_string(),
location: Location::new(1, 8),
end_location: Location::new(1, 14),
}];
let locator = Locator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A(Bar):
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
#[test]
fn apply_single_removal() {
let fixes = vec![Fix {
content: String::new(),
location: start,
end_location: end,
}
location: Location::new(1, 7),
end_location: Location::new(1, 15),
}];
let locator = Locator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 1);
}
pub fn replacement(content: String, start: Location, end: Location) -> Self {
Self {
content,
location: start,
end_location: end,
}
#[test]
fn apply_double_removal() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 16),
},
Fix {
content: String::new(),
location: Location::new(1, 16),
end_location: Location::new(1, 23),
},
];
let locator = Locator::new(
r#"
class A(object, object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
assert_eq!(fixed, 2);
}
pub fn insertion(content: String, at: Location) -> Self {
Self {
content,
location: at,
end_location: at,
}
#[test]
fn ignore_overlapping_fixes() {
let fixes = vec![
Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
},
Fix {
content: "ignored".to_string(),
location: Location::new(1, 9),
end_location: Location::new(1, 11),
},
];
let locator = Locator::new(
r#"
class A(object):
...
"#
.trim(),
);
let (contents, fixed) = apply_fixes(fixes.iter(), &locator);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim(),
);
assert_eq!(fixed, 1);
}
}


@@ -1,3 +1,4 @@
#![cfg_attr(target_family = "wasm", allow(dead_code))]
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
@@ -53,6 +54,7 @@ fn cache_key<P: AsRef<Path>>(path: P, settings: &Settings, autofix: flags::Autof
hasher.finish()
}
#[allow(dead_code)]
/// Initialize the cache at the specified `Path`.
pub fn init(path: &Path) -> Result<()> {
// Create the cache directories.


@@ -33,8 +33,7 @@ use crate::python::typing::SubscriptKind;
use crate::registry::{Diagnostic, RuleCode};
use crate::settings::types::PythonVersion;
use crate::settings::{flags, Settings};
use crate::source_code_locator::SourceCodeLocator;
use crate::source_code_style::SourceCodeStyleDetector;
use crate::source_code::{Locator, Stylist};
use crate::violations::DeferralKeyword;
use crate::visibility::{module_visibility, transition_scope, Modifier, Visibility, VisibleScope};
use crate::{
@@ -59,8 +58,8 @@ pub struct Checker<'a> {
noqa: flags::Noqa,
pub(crate) settings: &'a Settings,
pub(crate) noqa_line_for: &'a IntMap<usize, usize>,
pub(crate) locator: &'a SourceCodeLocator<'a>,
pub(crate) style: &'a SourceCodeStyleDetector<'a>,
pub(crate) locator: &'a Locator<'a>,
pub(crate) style: &'a Stylist<'a>,
// Computed diagnostics.
pub(crate) diagnostics: Vec<Diagnostic>,
// Function and class definition tracking (e.g., for docstring enforcement).
@@ -110,8 +109,8 @@ impl<'a> Checker<'a> {
autofix: flags::Autofix,
noqa: flags::Noqa,
path: &'a Path,
locator: &'a SourceCodeLocator,
style: &'a SourceCodeStyleDetector,
locator: &'a Locator,
style: &'a Stylist,
) -> Checker<'a> {
Checker {
settings,
@@ -658,7 +657,7 @@ where
}
if self.settings.enabled.contains(&RuleCode::PIE794) {
flake8_pie::rules::dupe_class_field_definitions(self, bases, body);
flake8_pie::rules::dupe_class_field_definitions(self, stmt, body);
}
self.check_builtin_shadowing(name, stmt, false);
@@ -1207,14 +1206,14 @@ where
}
if self.settings.enabled.contains(&RuleCode::UP024) {
if let Some(item) = exc {
pyupgrade::rules::os_error_alias(self, item);
pyupgrade::rules::os_error_alias(self, &item);
}
}
}
StmtKind::AugAssign { target, .. } => {
self.handle_node_load(target);
}
StmtKind::If { test, .. } => {
StmtKind::If { test, body, orelse } => {
if self.settings.enabled.contains(&RuleCode::F634) {
pyflakes::rules::if_tuple(self, stmt, test);
}
@@ -1231,6 +1230,11 @@ where
self.current_stmt_parent().map(|parent| parent.0),
);
}
if self.settings.enabled.contains(&RuleCode::SIM401) {
flake8_simplify::rules::use_dict_get_with_default(
self, stmt, test, body, orelse,
);
}
}
StmtKind::Assert { test, msg } => {
if self.settings.enabled.contains(&RuleCode::F631) {
@@ -1306,8 +1310,15 @@ where
if self.settings.enabled.contains(&RuleCode::PLW0120) {
pylint::rules::useless_else_on_loop(self, stmt, body, orelse);
}
if self.settings.enabled.contains(&RuleCode::SIM118) {
flake8_simplify::rules::key_in_dict_for(self, target, iter);
if matches!(stmt.node, StmtKind::For { .. }) {
if self.settings.enabled.contains(&RuleCode::SIM110)
|| self.settings.enabled.contains(&RuleCode::SIM111)
{
flake8_simplify::rules::convert_for_loop_to_any_all(self, stmt, None);
}
if self.settings.enabled.contains(&RuleCode::SIM118) {
flake8_simplify::rules::key_in_dict_for(self, target, iter);
}
}
}
StmtKind::Try {
@@ -1333,7 +1344,7 @@ where
flake8_bugbear::rules::redundant_tuple_in_exception_handler(self, handlers);
}
if self.settings.enabled.contains(&RuleCode::UP024) {
pyupgrade::rules::os_error_alias(self, handlers);
pyupgrade::rules::os_error_alias(self, &handlers);
}
if self.settings.enabled.contains(&RuleCode::PT017) {
self.diagnostics.extend(
@@ -1405,6 +1416,9 @@ where
if self.settings.enabled.contains(&RuleCode::B015) {
flake8_bugbear::rules::useless_comparison(self, value);
}
if self.settings.enabled.contains(&RuleCode::SIM112) {
flake8_simplify::rules::use_capital_environment_variables(self, value);
}
}
_ => {}
}
@@ -1827,6 +1841,8 @@ where
|| self.settings.enabled.contains(&RuleCode::F523)
|| self.settings.enabled.contains(&RuleCode::F524)
|| self.settings.enabled.contains(&RuleCode::F525)
// pyupgrade
|| self.settings.enabled.contains(&RuleCode::UP030)
{
if let ExprKind::Attribute { value, attr, .. } = &func.node {
if let ExprKind::Constant {
@@ -1873,6 +1889,10 @@ where
self, &summary, location,
);
}
if self.settings.enabled.contains(&RuleCode::UP030) {
pyupgrade::rules::format_literals(self, &summary, expr);
}
}
}
}
@@ -1912,7 +1932,7 @@ where
pyupgrade::rules::replace_stdout_stderr(self, expr, keywords);
}
if self.settings.enabled.contains(&RuleCode::UP024) {
pyupgrade::rules::os_error_alias(self, expr);
pyupgrade::rules::os_error_alias(self, &expr);
}
// flake8-print
@@ -1988,6 +2008,39 @@ where
self.diagnostics.push(diagnostic);
}
}
if self.settings.enabled.contains(&RuleCode::S508) {
if let Some(diagnostic) = flake8_bandit::rules::snmp_insecure_version(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
}
if self.settings.enabled.contains(&RuleCode::S509) {
if let Some(diagnostic) = flake8_bandit::rules::snmp_weak_cryptography(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
}
if self.settings.enabled.contains(&RuleCode::S701) {
if let Some(diagnostic) = flake8_bandit::rules::jinja2_autoescape_false(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
}
if self.settings.enabled.contains(&RuleCode::S106) {
self.diagnostics
.extend(flake8_bandit::rules::hardcoded_password_func_arg(keywords));
@@ -2017,205 +2070,75 @@ where
// flake8-comprehensions
if self.settings.enabled.contains(&RuleCode::C400) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_generator_list(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C400),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_generator_list(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C401) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_generator_set(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C401),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_generator_set(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C402) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_generator_dict(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C402),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_generator_dict(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C403) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_list_comprehension_set(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C403),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_list_comprehension_set(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C404) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_list_comprehension_dict(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C404),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_list_comprehension_dict(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C405) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_literal_set(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C405),
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_set(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C406) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_literal_dict(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C406),
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_dict(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C408) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_collection_call(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C408),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_collection_call(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C409) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C409),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C410) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_literal_within_list_call(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C410),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_within_list_call(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C411) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_list_call(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C411),
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_list_call(self, expr, func, args);
}
if self.settings.enabled.contains(&RuleCode::C413) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_call_around_sorted(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C413),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_call_around_sorted(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C414) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_double_cast_or_process(
func,
args,
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_double_cast_or_process(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C415) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_subscript_reversal(
func,
args,
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_subscript_reversal(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C417) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_map(
func,
args,
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_map(self, expr, func, args);
}
// flake8-boolean-trap
@@ -2420,6 +2343,11 @@ where
args, keywords,
));
}
// flake8-simplify
if self.settings.enabled.contains(&RuleCode::SIM115) {
flake8_simplify::rules::open_file_with_context_handler(self, func);
}
}
ExprKind::Dict { keys, values } => {
if self.settings.enabled.contains(&RuleCode::F601)
@@ -2759,18 +2687,9 @@ where
}
ExprKind::ListComp { elt, generators } | ExprKind::SetComp { elt, generators } => {
if self.settings.enabled.contains(&RuleCode::C416) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_comprehension(
expr,
elt,
generators,
self.locator,
self.patch(&RuleCode::C416),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_comprehension(
self, expr, elt, generators,
);
}
if self.settings.enabled.contains(&RuleCode::B023) {
flake8_bugbear::rules::function_uses_loop_variable(self, &Node::Expr(expr));
@@ -3256,7 +3175,7 @@ where
if matches!(stmt.node, StmtKind::For { .. })
&& matches!(sibling.node, StmtKind::Return { .. })
{
flake8_simplify::rules::convert_loop_to_any_all(self, stmt, sibling);
flake8_simplify::rules::convert_for_loop_to_any_all(self, stmt, Some(sibling));
}
}
}
@@ -3336,16 +3255,6 @@ impl<'a> Checker<'a> {
self.parents.iter().rev().nth(1)
}
/// Return the grandparent `Stmt` of the current `Stmt`, if any.
pub fn current_stmt_grandparent(&self) -> Option<&RefEquality<'a, Stmt>> {
self.parents.iter().rev().nth(2)
}
/// Return the current `Expr`.
pub fn current_expr(&self) -> Option<&RefEquality<'a, Expr>> {
self.exprs.iter().rev().next()
}
/// Return the parent `Expr` of the current `Expr`.
pub fn current_expr_parent(&self) -> Option<&RefEquality<'a, Expr>> {
self.exprs.iter().rev().nth(1)
@@ -4406,8 +4315,8 @@ impl<'a> Checker<'a> {
#[allow(clippy::too_many_arguments)]
pub fn check_ast(
python_ast: &Suite,
locator: &SourceCodeLocator,
stylist: &SourceCodeStyleDetector,
locator: &Locator,
stylist: &Stylist,
noqa_line_for: &IntMap<usize, usize>,
settings: &Settings,
autofix: flags::Autofix,


@@ -7,47 +7,49 @@ use rustpython_parser::ast::Suite;
use crate::ast::visitor::Visitor;
use crate::directives::IsortDirectives;
use crate::isort;
use crate::isort::track::ImportTracker;
use crate::registry::Diagnostic;
use crate::isort::track::{Block, ImportTracker};
use crate::registry::{Diagnostic, RuleCode};
use crate::settings::{flags, Settings};
use crate::source_code_locator::SourceCodeLocator;
use crate::source_code_style::SourceCodeStyleDetector;
fn check_import_blocks(
tracker: ImportTracker,
locator: &SourceCodeLocator,
settings: &Settings,
stylist: &SourceCodeStyleDetector,
autofix: flags::Autofix,
package: Option<&Path>,
) -> Vec<Diagnostic> {
let mut diagnostics = vec![];
for block in tracker.into_iter() {
if !block.imports.is_empty() {
if let Some(diagnostic) =
isort::rules::check_imports(&block, locator, settings, stylist, autofix, package)
{
diagnostics.push(diagnostic);
}
}
}
diagnostics
}
use crate::source_code::{Locator, Stylist};
#[allow(clippy::too_many_arguments)]
pub fn check_imports(
python_ast: &Suite,
locator: &SourceCodeLocator,
locator: &Locator,
directives: &IsortDirectives,
settings: &Settings,
stylist: &SourceCodeStyleDetector,
stylist: &Stylist,
autofix: flags::Autofix,
path: &Path,
package: Option<&Path>,
) -> Vec<Diagnostic> {
let mut tracker = ImportTracker::new(locator, directives, path);
for stmt in python_ast {
tracker.visit_stmt(stmt);
// Extract all imports from the AST.
let tracker = {
let mut tracker = ImportTracker::new(locator, directives, path);
for stmt in python_ast {
tracker.visit_stmt(stmt);
}
tracker
};
let blocks: Vec<&Block> = tracker.iter().collect();
// Enforce import rules.
let mut diagnostics = vec![];
if settings.enabled.contains(&RuleCode::I001) {
for block in &blocks {
if !block.imports.is_empty() {
if let Some(diagnostic) = isort::rules::organize_imports(
block, locator, settings, stylist, autofix, package,
) {
diagnostics.push(diagnostic);
}
}
}
}
check_import_blocks(tracker, locator, settings, stylist, autofix, package)
if settings.enabled.contains(&RuleCode::I002) {
diagnostics.extend(isort::rules::add_required_imports(
&blocks, python_ast, locator, settings, autofix,
));
}
diagnostics
}


@@ -1,6 +1,6 @@
//! Lint rules based on checking raw physical lines.
use crate::pycodestyle::rules::{line_too_long, no_newline_at_end_of_file};
use crate::pycodestyle::rules::{doc_line_too_long, line_too_long, no_newline_at_end_of_file};
use crate::pygrep_hooks::rules::{blanket_noqa, blanket_type_ignore};
use crate::pyupgrade::rules::unnecessary_coding_comment;
use crate::registry::{Diagnostic, RuleCode};
@@ -9,18 +9,21 @@ use crate::settings::{flags, Settings};
pub fn check_lines(
contents: &str,
commented_lines: &[usize],
doc_lines: &[usize],
settings: &Settings,
autofix: flags::Autofix,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
let enforce_unnecessary_coding_comment = settings.enabled.contains(&RuleCode::UP009);
let enforce_blanket_noqa = settings.enabled.contains(&RuleCode::PGH004);
let enforce_blanket_type_ignore = settings.enabled.contains(&RuleCode::PGH003);
let enforce_doc_line_too_long = settings.enabled.contains(&RuleCode::W505);
let enforce_line_too_long = settings.enabled.contains(&RuleCode::E501);
let enforce_no_newline_at_end_of_file = settings.enabled.contains(&RuleCode::W292);
let enforce_blanket_type_ignore = settings.enabled.contains(&RuleCode::PGH003);
let enforce_blanket_noqa = settings.enabled.contains(&RuleCode::PGH004);
let enforce_unnecessary_coding_comment = settings.enabled.contains(&RuleCode::UP009);
let mut commented_lines_iter = commented_lines.iter().peekable();
let mut doc_lines_iter = doc_lines.iter().peekable();
for (index, line) in contents.lines().enumerate() {
while commented_lines_iter
.next_if(|lineno| &(index + 1) == *lineno)
@@ -40,18 +43,25 @@ pub fn check_lines(
}
if enforce_blanket_type_ignore {
if commented_lines.contains(&(index + 1)) {
if let Some(diagnostic) = blanket_type_ignore(index, line) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = blanket_type_ignore(index, line) {
diagnostics.push(diagnostic);
}
}
if enforce_blanket_noqa {
if commented_lines.contains(&(index + 1)) {
if let Some(diagnostic) = blanket_noqa(index, line) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = blanket_noqa(index, line) {
diagnostics.push(diagnostic);
}
}
}
while doc_lines_iter
.next_if(|lineno| &(index + 1) == *lineno)
.is_some()
{
if enforce_doc_line_too_long {
if let Some(diagnostic) = doc_line_too_long(index, line, settings) {
diagnostics.push(diagnostic);
}
}
}
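The `peekable()`/`next_if` idiom added above — advancing a sorted list of 1-based line numbers in lockstep with the enumerated source lines — can be isolated into a small sketch. The function name `lines_matching` is illustrative, not part of ruff's API.

```rust
/// Visit only the lines whose 1-based numbers appear (sorted) in `marked`,
/// using the same `peekable()`/`next_if` lockstep walk as `check_lines`.
fn lines_matching<'a>(contents: &'a str, marked: &[usize]) -> Vec<&'a str> {
    let mut marked_iter = marked.iter().peekable();
    let mut out = Vec::new();
    for (index, line) in contents.lines().enumerate() {
        // `next_if` consumes the entry only when it names the current line,
        // so each marked line number is matched at most once per occurrence.
        while marked_iter
            .next_if(|lineno| &(index + 1) == *lineno)
            .is_some()
        {
            out.push(line);
        }
    }
    out
}
```

Because both the line iterator and the marker list advance monotonically, the whole pass stays linear in the file length plus the number of markers, with no per-line `contains` lookup.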
@@ -90,6 +100,7 @@ mod tests {
check_lines(
line,
&[],
&[],
&Settings {
line_length,
..Settings::for_rule(RuleCode::E501)


@@ -6,7 +6,7 @@ use nohash_hasher::IntMap;
use rustpython_parser::ast::Location;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::fix::Fix;
use crate::noqa::{is_file_exempt, Directive};
use crate::registry::{Diagnostic, DiagnosticKind, RuleCode, CODE_REDIRECTS};
use crate::settings::{flags, Settings};


@@ -5,12 +5,12 @@ use rustpython_parser::lexer::{LexResult, Tok};
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{Diagnostic, RuleCode};
use crate::ruff::rules::Context;
use crate::settings::flags;
use crate::source_code_locator::SourceCodeLocator;
use crate::{eradicate, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff, Settings};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
use crate::{eradicate, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff};
pub fn check_tokens(
locator: &SourceCodeLocator,
locator: &Locator,
tokens: &[LexResult],
settings: &Settings,
autofix: flags::Autofix,
@@ -67,7 +67,8 @@ pub fn check_tokens(
start,
end,
is_docstring,
&settings.flake8_quotes,
settings,
autofix,
) {
if settings.enabled.contains(diagnostic.kind.code()) {
diagnostics.push(diagnostic);

View File

@@ -4,12 +4,13 @@ use clap::{command, Parser};
use regex::Regex;
use rustc_hash::FxHashMap;
use crate::fs;
use crate::logging::LogLevel;
use crate::registry::{RuleCode, RuleCodePrefix};
use crate::resolver::ConfigProcessor;
use crate::settings::types::{
FilePattern, PatternPrefixPair, PerFileIgnore, PythonVersion, SerializationFormat,
};
use crate::{fs, mccabe};
#[derive(Debug, Parser)]
#[command(author, about = "Ruff: An extremely fast Python linter.")]
@@ -61,33 +62,33 @@ pub struct Cli {
pub isolated: bool,
/// Comma-separated list of rule codes to enable (or ALL, to enable all
/// rules).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub select: Option<Vec<RuleCodePrefix>>,
/// Like --select, but adds additional rule codes on top of the selected
/// ones.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub extend_select: Option<Vec<RuleCodePrefix>>,
/// Comma-separated list of rule codes to disable.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub ignore: Option<Vec<RuleCodePrefix>>,
/// Like --ignore, but adds additional rule codes on top of the ignored
/// ones.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub extend_ignore: Option<Vec<RuleCodePrefix>>,
/// List of paths, used to omit files and/or directories from analysis.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "FILE_PATTERN")]
pub exclude: Option<Vec<FilePattern>>,
/// Like --exclude, but adds additional files and directories on top of
/// those already excluded.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "FILE_PATTERN")]
pub extend_exclude: Option<Vec<FilePattern>>,
/// List of rule codes to treat as eligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub fixable: Option<Vec<RuleCodePrefix>>,
/// List of rule codes to treat as ineligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub unfixable: Option<Vec<RuleCodePrefix>>,
/// List of mappings from file pattern to code to exclude
#[arg(long, value_delimiter = ',')]
@@ -344,6 +345,87 @@ pub struct Overrides {
pub update_check: Option<bool>,
}
impl ConfigProcessor for &Overrides {
fn process_config(&self, config: &mut crate::settings::configuration::Configuration) {
if let Some(cache_dir) = &self.cache_dir {
config.cache_dir = Some(cache_dir.clone());
}
if let Some(dummy_variable_rgx) = &self.dummy_variable_rgx {
config.dummy_variable_rgx = Some(dummy_variable_rgx.clone());
}
if let Some(exclude) = &self.exclude {
config.exclude = Some(exclude.clone());
}
if let Some(extend_exclude) = &self.extend_exclude {
config.extend_exclude.extend(extend_exclude.clone());
}
if let Some(fix) = &self.fix {
config.fix = Some(*fix);
}
if let Some(fix_only) = &self.fix_only {
config.fix_only = Some(*fix_only);
}
if let Some(fixable) = &self.fixable {
config.fixable = Some(fixable.clone());
}
if let Some(format) = &self.format {
config.format = Some(*format);
}
if let Some(force_exclude) = &self.force_exclude {
config.force_exclude = Some(*force_exclude);
}
if let Some(ignore) = &self.ignore {
config.ignore = Some(ignore.clone());
}
if let Some(line_length) = &self.line_length {
config.line_length = Some(*line_length);
}
if let Some(max_complexity) = &self.max_complexity {
config.mccabe = Some(mccabe::settings::Options {
max_complexity: Some(*max_complexity),
});
}
if let Some(per_file_ignores) = &self.per_file_ignores {
config.per_file_ignores = Some(collect_per_file_ignores(per_file_ignores.clone()));
}
if let Some(respect_gitignore) = &self.respect_gitignore {
config.respect_gitignore = Some(*respect_gitignore);
}
if let Some(select) = &self.select {
config.select = Some(select.clone());
}
if let Some(show_source) = &self.show_source {
config.show_source = Some(*show_source);
}
if let Some(target_version) = &self.target_version {
config.target_version = Some(*target_version);
}
if let Some(unfixable) = &self.unfixable {
config.unfixable = Some(unfixable.clone());
}
if let Some(update_check) = &self.update_check {
config.update_check = Some(*update_check);
}
// Special-case: `extend_ignore` and `extend_select` are parallel arrays, so
// push an empty array if only one of the two is provided.
match (&self.extend_ignore, &self.extend_select) {
(Some(extend_ignore), Some(extend_select)) => {
config.extend_ignore.push(extend_ignore.clone());
config.extend_select.push(extend_select.clone());
}
(Some(extend_ignore), None) => {
config.extend_ignore.push(extend_ignore.clone());
config.extend_select.push(Vec::new());
}
(None, Some(extend_select)) => {
config.extend_ignore.push(Vec::new());
config.extend_select.push(extend_select.clone());
}
(None, None) => {}
}
}
}
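The `process_config` implementation above overlays CLI values onto the resolved configuration: every `Overrides` field is an `Option`, and only `Some` values overwrite what was loaded from `pyproject.toml`. A minimal, self-contained sketch of the pattern (with the field set reduced to two illustrative fields):

```rust
/// Simplified stand-ins for `Configuration` and `Overrides`.
#[derive(Debug, Default, PartialEq)]
pub struct Config {
    pub line_length: usize,
    pub fix: bool,
}

#[derive(Default)]
pub struct Overrides {
    pub line_length: Option<usize>,
    pub fix: Option<bool>,
}

impl Overrides {
    /// Overlay present CLI values onto the configuration; absent
    /// (`None`) fields leave the existing values untouched.
    pub fn process_config(&self, config: &mut Config) {
        if let Some(line_length) = self.line_length {
            config.line_length = line_length;
        }
        if let Some(fix) = self.fix {
            config.fix = fix;
        }
    }
}
```

With `line_length: Some(100)` and `fix: None`, only `line_length` changes; the `fix` value from the file-based configuration survives.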
/// Map the CLI settings to a `LogLevel`.
pub fn extract_log_level(cli: &Arguments) -> LogLevel {
if cli.silent {

View File

@@ -15,18 +15,18 @@ use rustpython_ast::Location;
use serde::Serialize;
use walkdir::WalkDir;
use crate::autofix::fixer;
use crate::cache::CACHE_DIR_NAME;
use crate::cli::Overrides;
use crate::diagnostics::{lint_path, lint_stdin, Diagnostics};
use crate::iterators::par_iter;
use crate::linter::{add_noqa_to_path, lint_path, lint_stdin, Diagnostics};
use crate::linter::add_noqa_to_path;
use crate::logging::LogLevel;
use crate::message::Message;
use crate::registry::RuleCode;
use crate::resolver::{FileDiscovery, PyprojectDiscovery};
use crate::settings::flags;
use crate::settings::types::SerializationFormat;
use crate::{cache, fs, packages, resolver, violations, warn_user_once};
use crate::{cache, fix, fs, packaging, resolver, violations, warn_user_once};
/// Run the linter over a collection of files.
pub fn run(
@@ -35,7 +35,7 @@ pub fn run(
file_strategy: &FileDiscovery,
overrides: &Overrides,
cache: flags::Cache,
autofix: fixer::Mode,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Collect all the Python files to check.
let start = Instant::now();
@@ -77,7 +77,7 @@ pub fn run(
};
// Discover the package root for each Python file.
let package_roots = packages::detect_package_roots(
let package_roots = packaging::detect_package_roots(
&paths
.iter()
.flatten()
@@ -156,7 +156,7 @@ pub fn run_stdin(
pyproject_strategy: &PyprojectDiscovery,
file_strategy: &FileDiscovery,
overrides: &Overrides,
autofix: fixer::Mode,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
if let Some(filename) = filename {
if !resolver::python_file_at_path(filename, pyproject_strategy, file_strategy, overrides)? {
@@ -169,7 +169,7 @@ pub fn run_stdin(
};
let package_root = filename
.and_then(Path::parent)
.and_then(packages::detect_package_root);
.and_then(packaging::detect_package_root);
let stdin = read_from_stdin()?;
let mut diagnostics = lint_stdin(filename, package_root, &stdin, settings, autofix)?;
diagnostics.messages.sort_unstable();

View File

@@ -1,5 +1,7 @@
use anyhow::{bail, Result};
use libcst_native::{Expr, Import, ImportFrom, Module, SmallStatement, Statement};
use libcst_native::{
Call, Expr, Expression, Import, ImportFrom, Module, SmallStatement, Statement,
};
pub fn match_module(module_text: &str) -> Result<Module> {
match libcst_native::parse_module(module_text, None) {
@@ -8,6 +10,13 @@ pub fn match_module(module_text: &str) -> Result<Module> {
}
}
pub fn match_expression(expression_text: &str) -> Result<Expression> {
match libcst_native::parse_expression(expression_text) {
Ok(expression) => Ok(expression),
Err(_) => bail!("Failed to extract CST from source"),
}
}
pub fn match_expr<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Expr<'b>> {
if let Some(Statement::Simple(expr)) = module.body.first_mut() {
if let Some(SmallStatement::Expr(expr)) = expr.body.first_mut() {
@@ -43,3 +52,11 @@ pub fn match_import_from<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut I
bail!("Expected Statement::Simple")
}
}
pub fn match_call<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut Call<'b>> {
if let Expression::Call(call) = expression {
Ok(call)
} else {
bail!("Expected Expression::Call")
}
}
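The new `match_expression` and `match_call` helpers above follow the codebase's narrow-or-bail pattern for CST nodes. A dependency-free sketch of the same pattern, using hypothetical `Expression`/`Call` stand-ins for the `libcst_native` types and `Result<_, String>` in place of `anyhow`:

```rust
/// Hypothetical stand-ins for the `libcst_native` CST types.
pub struct Call {
    pub func: String,
}

pub enum Expression {
    Call(Call),
    Name(String),
}

/// Narrow an `Expression` to its `Call` variant, or fail with an error,
/// mirroring the shape of `match_call` above.
pub fn match_call(expression: &mut Expression) -> Result<&mut Call, String> {
    if let Expression::Call(call) = expression {
        Ok(call)
    } else {
        Err("Expected Expression::Call".to_string())
    }
}
```

Returning `&'a mut Call<'b>` (as the real helper does) lets callers mutate the matched node in place before re-serializing the CST.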

src/diagnostics.rs Normal file
View File

@@ -0,0 +1,154 @@
#![cfg_attr(target_family = "wasm", allow(dead_code))]
use std::fs::write;
use std::io;
use std::io::Write;
use std::ops::AddAssign;
use std::path::Path;
use anyhow::Result;
use log::debug;
use similar::TextDiff;
use crate::linter::{lint_fix, lint_only};
use crate::message::Message;
use crate::settings::{flags, Settings};
use crate::{cache, fix, fs};
#[derive(Debug, Default)]
pub struct Diagnostics {
pub messages: Vec<Message>,
pub fixed: usize,
}
impl Diagnostics {
pub fn new(messages: Vec<Message>) -> Self {
Self { messages, fixed: 0 }
}
}
impl AddAssign for Diagnostics {
fn add_assign(&mut self, other: Self) {
self.messages.extend(other.messages);
self.fixed += other.fixed;
}
}
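`Diagnostics` implements `AddAssign` so that per-file results can be folded into a running total with `+=`. A self-contained sketch of the same semantics, with `Message` reduced to a `String` for illustration:

```rust
use std::ops::AddAssign;

/// Simplified `Diagnostics`: the real type holds `Vec<Message>`.
#[derive(Debug, Default)]
pub struct Diagnostics {
    pub messages: Vec<String>,
    pub fixed: usize,
}

impl AddAssign for Diagnostics {
    /// Merging concatenates the message lists and sums the fix counts.
    fn add_assign(&mut self, other: Self) {
        self.messages.extend(other.messages);
        self.fixed += other.fixed;
    }
}
```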
/// Lint the source code at the given `Path`.
pub fn lint_path(
path: &Path,
package: Option<&Path>,
settings: &Settings,
cache: flags::Cache,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Validate the `Settings` and return any errors.
settings.validate()?;
// Check the cache.
// TODO(charlie): `fix::FixMode::Apply` and `fix::FixMode::Diff` both have
// side-effects that aren't captured in the cache. (In practice, it's fine
// to cache `fix::FixMode::Apply`, since a file either has no fixes, or we'll
// write the fixes to disk, thus invalidating the cache. But it's a bit hard
// to reason about. We need to come up with a better solution here.)
let metadata = if matches!(cache, flags::Cache::Enabled)
&& matches!(autofix, fix::FixMode::None | fix::FixMode::Generate)
{
let metadata = path.metadata()?;
if let Some(messages) = cache::get(path, &metadata, settings, autofix.into()) {
debug!("Cache hit for: {}", path.to_string_lossy());
return Ok(Diagnostics::new(messages));
}
Some(metadata)
} else {
None
};
// Read the file from disk.
let contents = fs::read_file(path)?;
// Lint the file.
let (messages, fixed) = if matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff) {
let (transformed, fixed, messages) = lint_fix(&contents, path, package, settings)?;
if fixed > 0 {
if matches!(autofix, fix::FixMode::Apply) {
write(path, transformed)?;
} else if matches!(autofix, fix::FixMode::Diff) {
let mut stdout = io::stdout().lock();
TextDiff::from_lines(&contents, &transformed)
.unified_diff()
.header(&fs::relativize_path(path), &fs::relativize_path(path))
.to_writer(&mut stdout)?;
stdout.write_all(b"\n")?;
stdout.flush()?;
}
}
(messages, fixed)
} else {
let messages = lint_only(&contents, path, package, settings, autofix.into())?;
let fixed = 0;
(messages, fixed)
};
// Re-populate the cache.
if let Some(metadata) = metadata {
cache::set(path, &metadata, settings, autofix.into(), &messages);
}
Ok(Diagnostics { messages, fixed })
}
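As the comment in `lint_path` notes, only side-effect-free fix modes are safe to cache. The gate condition can be isolated as a small predicate (a sketch, with the `flags::Cache` and `fix::FixMode` enums re-declared locally):

```rust
#[derive(Clone, Copy)]
pub enum Cache {
    Enabled,
    Disabled,
}

#[derive(Clone, Copy)]
pub enum FixMode {
    Generate,
    Apply,
    Diff,
    None,
}

/// Results are cached only when caching is enabled and the fix mode has
/// no side-effects outside the cache (`Apply` writes to disk; `Diff`
/// writes to stdout).
pub fn is_cacheable(cache: Cache, autofix: FixMode) -> bool {
    matches!(cache, Cache::Enabled) && matches!(autofix, FixMode::None | FixMode::Generate)
}
```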
/// Generate `Diagnostic`s from source code content derived from
/// stdin.
pub fn lint_stdin(
path: Option<&Path>,
package: Option<&Path>,
contents: &str,
settings: &Settings,
autofix: fix::FixMode,
) -> Result<Diagnostics> {
// Validate the `Settings` and return any errors.
settings.validate()?;
// Lint the inputs.
let (messages, fixed) = if matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff) {
let (transformed, fixed, messages) = lint_fix(
contents,
path.unwrap_or_else(|| Path::new("-")),
package,
settings,
)?;
if matches!(autofix, fix::FixMode::Apply) {
// Write the contents to stdout, regardless of whether any errors were fixed.
io::stdout().write_all(transformed.as_bytes())?;
} else if matches!(autofix, fix::FixMode::Diff) {
// But only write a diff if it's non-empty.
if fixed > 0 {
let text_diff = TextDiff::from_lines(contents, &transformed);
let mut unified_diff = text_diff.unified_diff();
if let Some(path) = path {
unified_diff.header(&fs::relativize_path(path), &fs::relativize_path(path));
}
let mut stdout = io::stdout().lock();
unified_diff.to_writer(&mut stdout)?;
stdout.write_all(b"\n")?;
stdout.flush()?;
}
}
(messages, fixed)
} else {
let messages = lint_only(
contents,
path.unwrap_or_else(|| Path::new("-")),
package,
settings,
autofix.into(),
)?;
let fixed = 0;
(messages, fixed)
};
Ok(Diagnostics { messages, fixed })
}

View File

@@ -6,7 +6,7 @@ use rustpython_ast::Location;
use rustpython_parser::lexer::{LexResult, Tok};
use crate::registry::LintSource;
use crate::Settings;
use crate::settings::Settings;
bitflags! {
pub struct Flags: u32 {
@@ -33,6 +33,7 @@ impl Flags {
pub struct IsortDirectives {
pub exclusions: IntSet<usize>,
pub splits: Vec<usize>,
pub skip_file: bool,
}
pub struct Directives {
@@ -89,17 +90,11 @@ pub fn extract_noqa_line_for(lxr: &[LexResult]) -> IntMap<usize, usize> {
pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
let mut exclusions: IntSet<usize> = IntSet::default();
let mut splits: Vec<usize> = Vec::default();
let mut skip_file: bool = false;
let mut off: Option<Location> = None;
let mut last: Option<Location> = None;
for &(start, ref tok, end) in lxr.iter().flatten() {
last = Some(end);
// No need to keep processing, but we do need to determine the last token.
if skip_file {
continue;
}
let Tok::Comment(comment_text) = tok else {
continue;
};
@@ -111,7 +106,10 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
if comment_text == "# isort: split" {
splits.push(start.row());
} else if comment_text == "# isort: skip_file" || comment_text == "# isort:skip_file" {
skip_file = true;
return IsortDirectives {
skip_file: true,
..IsortDirectives::default()
};
} else if off.is_some() {
if comment_text == "# isort: on" {
if let Some(start) = off {
@@ -130,14 +128,7 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
}
}
if skip_file {
// Enforce `isort: skip_file`.
if let Some(end) = last {
for row in 1..=end.row() {
exclusions.insert(row);
}
}
} else if let Some(start) = off {
if let Some(start) = off {
// Enforce unterminated `isort: off`.
if let Some(end) = last {
for row in start.row() + 1..=end.row() {
@@ -145,7 +136,11 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
}
}
}
IsortDirectives { exclusions, splits }
IsortDirectives {
exclusions,
splits,
..IsortDirectives::default()
}
}
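The change above makes `# isort: skip_file` short-circuit: instead of excluding every row, the function now returns immediately with `skip_file: true`. A simplified sketch of the revised control flow, operating on pre-extracted `(row, comment)` pairs rather than a token stream:

```rust
#[derive(Debug, Default, PartialEq)]
pub struct IsortDirectives {
    pub splits: Vec<usize>,
    pub skip_file: bool,
}

/// Scan comments for isort directives; `skip_file` wins over everything
/// else, mirroring the early `return` above.
pub fn extract_directives(comments: &[(usize, &str)]) -> IsortDirectives {
    let mut splits = Vec::new();
    for &(row, text) in comments {
        if text == "# isort: split" {
            splits.push(row);
        } else if text == "# isort: skip_file" || text == "# isort:skip_file" {
            return IsortDirectives {
                skip_file: true,
                ..IsortDirectives::default()
            };
        }
    }
    IsortDirectives {
        splits,
        ..IsortDirectives::default()
    }
}
```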
#[cfg(test)]
@@ -283,10 +278,7 @@ x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([1, 2, 3, 4])
);
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
let contents = "# isort: off
x = 1
@@ -295,10 +287,7 @@ y = 2
# isort: skip_file
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([1, 2, 3, 4, 5, 6])
);
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
}
#[test]

src/doc_lines.rs Normal file
View File

@@ -0,0 +1,58 @@
//! Doc line extraction. In this context, a doc line is a line consisting of a
//! standalone comment or a constant string statement.
use rustpython_ast::{Constant, ExprKind, Stmt, StmtKind, Suite};
use rustpython_parser::lexer::{LexResult, Tok};
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
/// Extract doc lines (standalone comments) from a token sequence.
pub fn doc_lines_from_tokens(lxr: &[LexResult]) -> Vec<usize> {
let mut doc_lines: Vec<usize> = Vec::default();
let mut prev: Option<usize> = None;
for (start, tok, end) in lxr.iter().flatten() {
if matches!(tok, Tok::Indent | Tok::Dedent | Tok::Newline) {
continue;
}
if matches!(tok, Tok::Comment(..)) {
if let Some(prev) = prev {
if start.row() > prev {
doc_lines.push(start.row());
}
} else {
doc_lines.push(start.row());
}
}
prev = Some(end.row());
}
doc_lines
}
#[derive(Default)]
struct StringLinesVisitor {
string_lines: Vec<usize>,
}
impl Visitor<'_> for StringLinesVisitor {
fn visit_stmt(&mut self, stmt: &Stmt) {
if let StmtKind::Expr { value } = &stmt.node {
if let ExprKind::Constant {
value: Constant::Str(..),
..
} = &value.node
{
self.string_lines
.extend(value.location.row()..=value.end_location.unwrap().row());
}
}
visitor::walk_stmt(self, stmt);
}
}
/// Extract doc lines (standalone strings) from an AST.
pub fn doc_lines_from_ast(python_ast: &Suite) -> Vec<usize> {
let mut visitor = StringLinesVisitor::default();
visitor.visit_body(python_ast);
visitor.string_lines
}
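`doc_lines_from_tokens` treats a comment as a doc line only when no earlier non-trivia token ended on the same row. The core of that check can be sketched over a simplified token model (bare row numbers in place of `Location`s):

```rust
/// Simplified token kinds; the real code matches on rustpython's `Tok`.
#[derive(PartialEq)]
pub enum Tok {
    Comment,
    Code,
    Newline,
}

/// Collect rows holding standalone comments: a comment qualifies when the
/// previous non-trivia token ended on an earlier row (or there is none).
pub fn doc_lines(tokens: &[(usize, Tok, usize)]) -> Vec<usize> {
    let mut doc_lines = Vec::new();
    let mut prev: Option<usize> = None;
    for (start_row, tok, end_row) in tokens {
        if *tok == Tok::Newline {
            // Trivia: skipped without updating `prev`.
            continue;
        }
        if *tok == Tok::Comment && prev.map_or(true, |prev| *start_row > prev) {
            doc_lines.push(*start_row);
        }
        prev = Some(*end_row);
    }
    doc_lines
}
```

A trailing comment shares a row with the code before it and is skipped; a comment alone on its row is collected.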

View File

@@ -1,5 +1,5 @@
pub mod detection;
pub mod rules;
pub(crate) mod detection;
pub(crate) mod rules;
#[cfg(test)]
mod tests {

View File

@@ -1,11 +1,12 @@
use rustpython_ast::Location;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::eradicate::detection::comment_contains_code;
use crate::registry::RuleCode;
use crate::settings::flags;
use crate::{violations, Diagnostic, Settings, SourceCodeLocator};
use crate::fix::Fix;
use crate::registry::{Diagnostic, RuleCode};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
use crate::violations;
fn is_standalone_comment(line: &str) -> bool {
for char in line.chars() {
@@ -20,7 +21,7 @@ fn is_standalone_comment(line: &str) -> bool {
/// ERA001
pub fn commented_out_code(
locator: &SourceCodeLocator,
locator: &Locator,
start: Location,
end: Location,
settings: &Settings,

src/fix.rs Normal file
View File

@@ -0,0 +1,53 @@
use rustpython_ast::Location;
use serde::{Deserialize, Serialize};
#[derive(Debug, Copy, Clone, Hash)]
pub enum FixMode {
Generate,
Apply,
Diff,
None,
}
impl From<bool> for FixMode {
fn from(value: bool) -> Self {
if value {
FixMode::Apply
} else {
FixMode::None
}
}
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
pub struct Fix {
pub content: String,
pub location: Location,
pub end_location: Location,
}
impl Fix {
pub fn deletion(start: Location, end: Location) -> Self {
Self {
content: String::new(),
location: start,
end_location: end,
}
}
pub fn replacement(content: String, start: Location, end: Location) -> Self {
Self {
content,
location: start,
end_location: end,
}
}
pub fn insertion(content: String, at: Location) -> Self {
Self {
content,
location: at,
end_location: at,
}
}
}
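The new `fix.rs` module centralizes how fixes are built: `deletion` is a replacement with empty content, and `insertion` is a zero-width replacement whose start and end coincide. A sketch with a minimal `Location` standing in for `rustpython_ast::Location`:

```rust
/// Minimal stand-in for `rustpython_ast::Location` (row/column pair).
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Location {
    pub row: usize,
    pub column: usize,
}

#[derive(Debug, PartialEq)]
pub struct Fix {
    pub content: String,
    pub location: Location,
    pub end_location: Location,
}

impl Fix {
    /// A deletion is a replacement with empty content.
    pub fn deletion(start: Location, end: Location) -> Self {
        Self {
            content: String::new(),
            location: start,
            end_location: end,
        }
    }

    /// An insertion is zero-width: start and end coincide.
    pub fn insertion(content: String, at: Location) -> Self {
        Self {
            content,
            location: at,
            end_location: at,
        }
    }
}
```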

View File

@@ -1,4 +1,4 @@
pub mod rules;
pub(crate) mod rules;
#[cfg(test)]
mod tests {

View File

@@ -4,11 +4,11 @@ use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::source_code_locator::SourceCodeLocator;
use crate::fix::Fix;
use crate::source_code::Locator;
/// ANN204
pub fn add_return_none_annotation(locator: &SourceCodeLocator, stmt: &Stmt) -> Result<Fix> {
pub fn add_return_none_annotation(locator: &Locator, stmt: &Stmt) -> Result<Fix> {
let range = Range::from_located(stmt);
let contents = locator.slice_source_code_range(&range);

View File

@@ -1,6 +1,6 @@
mod fixes;
pub mod helpers;
pub mod rules;
pub(crate) mod helpers;
pub(crate) mod rules;
pub mod settings;
#[cfg(test)]
@@ -9,9 +9,10 @@ mod tests {
use anyhow::Result;
use crate::flake8_annotations;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::{flake8_annotations, Settings};
use crate::settings::Settings;
#[test]
fn defaults() -> Result<()> {

View File

@@ -8,9 +8,9 @@ use crate::checkers::ast::Checker;
use crate::docstrings::definition::{Definition, DefinitionKind};
use crate::flake8_annotations::fixes;
use crate::flake8_annotations::helpers::match_function_def;
use crate::registry::RuleCode;
use crate::registry::{Diagnostic, RuleCode};
use crate::visibility::Visibility;
use crate::{violations, visibility, Diagnostic};
use crate::{violations, visibility};
#[derive(Default)]
struct ReturnStatementVisitor<'a> {
@@ -319,7 +319,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
helpers::identifier_range(stmt, checker.locator),
));
}
} else if visibility::is_init(stmt) {
} else if visibility::is_init(cast::name(stmt)) {
// Allow omission of return annotation in `__init__` functions, as long as at
// least one argument is typed.
if checker.settings.enabled.contains(&RuleCode::ANN204) {
@@ -341,7 +341,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
checker.diagnostics.push(diagnostic);
}
}
} else if visibility::is_magic(stmt) {
} else if visibility::is_magic(cast::name(stmt)) {
if checker.settings.enabled.contains(&RuleCode::ANN204) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeSpecialMethod(name.to_string()),

View File

@@ -1,5 +1,5 @@
mod helpers;
pub mod rules;
pub(crate) mod rules;
pub mod settings;
#[cfg(test)]
@@ -9,9 +9,10 @@ mod tests {
use anyhow::Result;
use test_case::test_case;
use crate::flake8_bandit;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::{flake8_bandit, Settings};
use crate::settings::Settings;
#[test_case(RuleCode::S101, Path::new("S101.py"); "S101")]
#[test_case(RuleCode::S102, Path::new("S102.py"); "S102")]
@@ -25,6 +26,9 @@ mod tests {
#[test_case(RuleCode::S324, Path::new("S324.py"); "S324")]
#[test_case(RuleCode::S501, Path::new("S501.py"); "S501")]
#[test_case(RuleCode::S506, Path::new("S506.py"); "S506")]
#[test_case(RuleCode::S508, Path::new("S508.py"); "S508")]
#[test_case(RuleCode::S509, Path::new("S509.py"); "S509")]
#[test_case(RuleCode::S701, Path::new("S701.py"); "S701")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -0,0 +1,57 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, ExprKind, Keyword};
use rustpython_parser::ast::Constant;
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
/// S701
pub fn jinja2_autoescape_false(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
if match_call_path(
&dealias_call_path(collect_call_paths(func), import_aliases),
"jinja2",
"Environment",
from_imports,
) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(autoescape_arg) = call_args.get_argument("autoescape", None) {
match &autoescape_arg.node {
ExprKind::Constant {
value: Constant::Bool(true),
..
} => (),
ExprKind::Call { func, .. } => {
if let ExprKind::Name { id, .. } = &func.node {
if id.as_str() != "select_autoescape" {
return Some(Diagnostic::new(
violations::Jinja2AutoescapeFalse(true),
Range::from_located(autoescape_arg),
));
}
}
}
_ => {
return Some(Diagnostic::new(
violations::Jinja2AutoescapeFalse(true),
Range::from_located(autoescape_arg),
))
}
}
} else {
return Some(Diagnostic::new(
violations::Jinja2AutoescapeFalse(false),
Range::from_located(func),
));
}
}
None
}

View File

@@ -9,8 +9,11 @@ pub use hardcoded_password_string::{
};
pub use hardcoded_tmp_directory::hardcoded_tmp_directory;
pub use hashlib_insecure_hash_functions::hashlib_insecure_hash_functions;
pub use jinja2_autoescape_false::jinja2_autoescape_false;
pub use request_with_no_cert_validation::request_with_no_cert_validation;
pub use request_without_timeout::request_without_timeout;
pub use snmp_insecure_version::snmp_insecure_version;
pub use snmp_weak_cryptography::snmp_weak_cryptography;
pub use unsafe_yaml_load::unsafe_yaml_load;
mod assert_used;
@@ -22,6 +25,9 @@ mod hardcoded_password_func_arg;
mod hardcoded_password_string;
mod hardcoded_tmp_directory;
mod hashlib_insecure_hash_functions;
mod jinja2_autoescape_false;
mod request_with_no_cert_validation;
mod request_without_timeout;
mod snmp_insecure_version;
mod snmp_weak_cryptography;
mod unsafe_yaml_load;

View File

@@ -0,0 +1,40 @@
use num_traits::{One, Zero};
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, ExprKind, Keyword};
use rustpython_parser::ast::Constant;
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
/// S508
pub fn snmp_insecure_version(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
if match_call_path(&call_path, "pysnmp.hlapi", "CommunityData", from_imports) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(mp_model_arg) = call_args.get_argument("mpModel", None) {
if let ExprKind::Constant {
value: Constant::Int(value),
..
} = &mp_model_arg.node
{
if value.is_zero() || value.is_one() {
return Some(Diagnostic::new(
violations::SnmpInsecureVersion,
Range::from_located(mp_model_arg),
));
}
}
}
}
None
}

View File

@@ -0,0 +1,30 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, Keyword};
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
/// S509
pub fn snmp_weak_cryptography(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
if match_call_path(&call_path, "pysnmp.hlapi", "UsmUserData", from_imports) {
let call_args = SimpleCallArgs::new(args, keywords);
if call_args.len() < 3 {
return Some(Diagnostic::new(
violations::SnmpWeakCryptography,
Range::from_located(func),
));
}
}
None
}

View File

@@ -0,0 +1,25 @@
---
source: src/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
SnmpInsecureVersion: ~
location:
row: 3
column: 32
end_location:
row: 3
column: 33
fix: ~
parent: ~
- kind:
SnmpInsecureVersion: ~
location:
row: 4
column: 32
end_location:
row: 4
column: 33
fix: ~
parent: ~

View File

@@ -0,0 +1,25 @@
---
source: src/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
SnmpWeakCryptography: ~
location:
row: 4
column: 11
end_location:
row: 4
column: 22
fix: ~
parent: ~
- kind:
SnmpWeakCryptography: ~
location:
row: 5
column: 15
end_location:
row: 5
column: 26
fix: ~
parent: ~

View File

@@ -0,0 +1,55 @@
---
source: src/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
Jinja2AutoescapeFalse: true
location:
row: 9
column: 67
end_location:
row: 9
column: 76
fix: ~
parent: ~
- kind:
Jinja2AutoescapeFalse: true
location:
row: 10
column: 44
end_location:
row: 10
column: 49
fix: ~
parent: ~
- kind:
Jinja2AutoescapeFalse: true
location:
row: 13
column: 23
end_location:
row: 13
column: 28
fix: ~
parent: ~
- kind:
Jinja2AutoescapeFalse: false
location:
row: 15
column: 0
end_location:
row: 15
column: 11
fix: ~
parent: ~
- kind:
Jinja2AutoescapeFalse: true
location:
row: 29
column: 46
end_location:
row: 29
column: 57
fix: ~
parent: ~

View File

@@ -1,4 +1,4 @@
pub mod rules;
pub(crate) mod rules;
#[cfg(test)]
mod tests {

View File

@@ -1,4 +1,4 @@
pub mod rules;
pub(crate) mod rules;
#[cfg(test)]
mod tests {

View File

@@ -1,4 +1,4 @@
pub mod rules;
pub(crate) mod rules;
pub mod settings;
#[cfg(test)]
@@ -8,9 +8,10 @@ mod tests {
use anyhow::Result;
use test_case::test_case;
use crate::flake8_bugbear;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::{flake8_bugbear, Settings};
use crate::settings::Settings;
#[test_case(RuleCode::B002, Path::new("B002.py"); "B002")]
#[test_case(RuleCode::B003, Path::new("B003.py"); "B003")]

View File

@@ -1,10 +1,10 @@
use rustpython_ast::{Constant, Expr, ExprContext, ExprKind, Location, Stmt, StmtKind};
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::source_code_generator::SourceCodeGenerator;
use crate::source_code::Generator;
use crate::violations;
fn assertion_error(msg: Option<&Expr>) -> Stmt {
@@ -48,7 +48,7 @@ pub fn assert_false(checker: &mut Checker, stmt: &Stmt, test: &Expr, msg: Option
let mut diagnostic = Diagnostic::new(violations::DoNotAssertFalse, Range::from_located(test));
if checker.patch(diagnostic.kind.code()) {
let mut generator: SourceCodeGenerator = checker.style.into();
let mut generator: Generator = checker.style.into();
generator.unparse_stmt(&assertion_error(msg));
diagnostic.amend(Fix::replacement(
generator.generate(),

View File

@@ -4,10 +4,10 @@ use rustpython_ast::{Excepthandler, ExcepthandlerKind, Expr, ExprContext, ExprKi
use crate::ast::helpers;
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
use crate::fix::Fix;
use crate::registry::{Diagnostic, RuleCode};
use crate::source_code_generator::SourceCodeGenerator;
use crate::source_code::Generator;
use crate::violations;
fn type_pattern(elts: Vec<&Expr>) -> Expr {
@@ -55,7 +55,7 @@ fn duplicate_handler_exceptions<'a>(
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: SourceCodeGenerator = checker.style.into();
let mut generator: Generator = checker.style.into();
if unique_elts.len() == 1 {
generator.unparse_expr(unique_elts[0], 0);
} else {

View File

@@ -12,7 +12,9 @@ use crate::violations;
#[derive(Default)]
struct LoadedNamesVisitor<'a> {
// Tuple of: name, defining expression, and defining range.
names: Vec<(&'a str, &'a Expr, Range)>,
loaded: Vec<(&'a str, &'a Expr, Range)>,
// Tuple of: name, defining expression, and defining range.
stored: Vec<(&'a str, &'a Expr, Range)>,
}
/// `Visitor` to collect all used identifiers in a statement.
@@ -22,12 +24,11 @@ where
{
fn visit_expr(&mut self, expr: &'b Expr) {
match &expr.node {
ExprKind::JoinedStr { .. } => {
visitor::walk_expr(self, expr);
}
ExprKind::Name { id, ctx } if matches!(ctx, ExprContext::Load) => {
self.names.push((id, expr, Range::from_located(expr)));
}
ExprKind::Name { id, ctx } => match ctx {
ExprContext::Load => self.loaded.push((id, expr, Range::from_located(expr))),
ExprContext::Store => self.stored.push((id, expr, Range::from_located(expr))),
ExprContext::Del => {}
},
_ => visitor::walk_expr(self, expr),
}
}
@@ -36,6 +37,7 @@ where
#[derive(Default)]
struct SuspiciousVariablesVisitor<'a> {
names: Vec<(&'a str, &'a Expr, Range)>,
safe_functions: Vec<&'a Expr>,
}
/// `Visitor` to collect all suspicious variables (those referenced in
@@ -50,45 +52,90 @@ where
| StmtKind::AsyncFunctionDef { args, body, .. } => {
// Collect all loaded variable names.
let mut visitor = LoadedNamesVisitor::default();
for stmt in body {
visitor.visit_stmt(stmt);
}
visitor.visit_body(body);
// Collect all argument names.
let arg_names = collect_arg_names(args);
let mut arg_names = collect_arg_names(args);
arg_names.extend(visitor.stored.iter().map(|(id, ..)| id));
// Treat any non-arguments as "suspicious".
self.names.extend(
visitor
.names
.into_iter()
.loaded
.iter()
.filter(|(id, ..)| !arg_names.contains(id)),
);
}
_ => visitor::walk_stmt(self, stmt),
StmtKind::Return { value: Some(value) } => {
// Mark `return lambda: x` as safe.
if matches!(value.node, ExprKind::Lambda { .. }) {
self.safe_functions.push(value);
}
}
_ => {}
}
visitor::walk_stmt(self, stmt);
}
fn visit_expr(&mut self, expr: &'b Expr) {
match &expr.node {
ExprKind::Lambda { args, body } => {
// Collect all loaded variable names.
let mut visitor = LoadedNamesVisitor::default();
visitor.visit_expr(body);
// Collect all argument names.
let arg_names = collect_arg_names(args);
// Treat any non-arguments as "suspicious".
self.names.extend(
visitor
.names
.into_iter()
.filter(|(id, ..)| !arg_names.contains(id)),
);
ExprKind::Call {
func,
args,
keywords,
} => {
if let ExprKind::Name { id, .. } = &func.node {
if id == "filter" || id == "reduce" || id == "map" {
for arg in args {
if matches!(arg.node, ExprKind::Lambda { .. }) {
self.safe_functions.push(arg);
}
}
}
}
if let ExprKind::Attribute { value, attr, .. } = &func.node {
if attr == "reduce" {
if let ExprKind::Name { id, .. } = &value.node {
if id == "functools" {
for arg in args {
if matches!(arg.node, ExprKind::Lambda { .. }) {
self.safe_functions.push(arg);
}
}
}
}
}
}
for keyword in keywords {
if keyword.node.arg.as_ref().map_or(false, |arg| arg == "key")
&& matches!(keyword.node.value.node, ExprKind::Lambda { .. })
{
self.safe_functions.push(&keyword.node.value);
}
}
}
_ => visitor::walk_expr(self, expr),
ExprKind::Lambda { args, body } => {
if !self.safe_functions.contains(&expr) {
// Collect all loaded variable names.
let mut visitor = LoadedNamesVisitor::default();
visitor.visit_expr(body);
// Collect all argument names.
let mut arg_names = collect_arg_names(args);
arg_names.extend(visitor.stored.iter().map(|(id, ..)| id));
// Treat any non-arguments as "suspicious".
self.names.extend(
visitor
.loaded
.iter()
.filter(|(id, ..)| !arg_names.contains(id)),
);
}
}
_ => {}
}
visitor::walk_expr(self, expr);
}
}
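For context on why the visitor above whitelists lambdas passed to `filter`, `map`, and `functools.reduce`, or supplied as a `key=` keyword: those callbacks are invoked immediately, before the loop variable changes, so capturing it is safe. A minimal Python sketch of the distinction (illustrative values only):

```python
import functools

# Unsafe: the lambdas are called after the loop ends, so every call
# sees the final value of `n`.
deferred = []
for n in range(3):
    deferred.append(lambda: n)
assert [f() for f in deferred] == [2, 2, 2]

# Safe: map() invokes the lambda immediately, while `n` still holds
# the current iteration's value.
eager = []
for n in range(3):
    eager.extend(map(lambda x: x + n, [10]))
assert eager == [10, 11, 12]

# functools.reduce likewise consumes its callback eagerly.
assert functools.reduce(lambda acc, x: acc + x, range(4)) == 6
```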


@@ -1,12 +1,12 @@
use rustpython_ast::{Constant, Expr, ExprContext, ExprKind, Location};
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
use crate::fix::Fix;
use crate::python::identifiers::IDENTIFIER_REGEX;
use crate::python::keyword::KWLIST;
use crate::registry::Diagnostic;
use crate::source_code_generator::SourceCodeGenerator;
use crate::source_code::Generator;
use crate::violations;
fn attribute(value: &Expr, attr: &str) -> Expr {
@@ -48,7 +48,7 @@ pub fn getattr_with_constant(checker: &mut Checker, expr: &Expr, func: &Expr, ar
let mut diagnostic =
Diagnostic::new(violations::GetAttrWithConstant, Range::from_located(expr));
if checker.patch(diagnostic.kind.code()) {
let mut generator: SourceCodeGenerator = checker.style.into();
let mut generator: Generator = checker.style.into();
generator.unparse_expr(&attribute(obj, value), 0);
diagnostic.amend(Fix::replacement(
generator.generate(),
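The rule patched above (B009, `GetAttrWithConstant`) rewrites `getattr` calls whose attribute name is a string literal into plain attribute access, using the generator to emit the replacement expression. The Python-level equivalence it relies on (class and attribute names here are illustrative):

```python
class Config:
    timeout = 30

cfg = Config()
# B009: with a constant attribute name, getattr() is redundant;
# the suggested fix is ordinary attribute access.
assert getattr(cfg, "timeout") == cfg.timeout == 30
```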


@@ -1,10 +1,10 @@
use rustpython_ast::{Excepthandler, ExcepthandlerKind, ExprKind};
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::source_code_generator::SourceCodeGenerator;
use crate::source_code::Generator;
use crate::violations;
/// B013
@@ -24,7 +24,7 @@ pub fn redundant_tuple_in_exception_handler(checker: &mut Checker, handlers: &[E
Range::from_located(type_),
);
if checker.patch(diagnostic.kind.code()) {
let mut generator: SourceCodeGenerator = checker.style.into();
let mut generator: Generator = checker.style.into();
generator.unparse_expr(elt, 0);
diagnostic.amend(Fix::replacement(
generator.generate(),
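B013 flags a one-element tuple in an `except` clause and unparses the single element in its place. The two forms below behave identically (function names are illustrative):

```python
def parse(s):
    try:
        return int(s)
    except (ValueError,):  # B013: one-element tuple is redundant
        return None

def parse_fixed(s):
    try:
        return int(s)
    except ValueError:     # suggested fix
        return None

assert parse("7") == parse_fixed("7") == 7
assert parse("x") is None and parse_fixed("x") is None
```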


@@ -1,16 +1,15 @@
use rustpython_ast::{Constant, Expr, ExprContext, ExprKind, Location, Stmt, StmtKind};
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
use crate::fix::Fix;
use crate::python::identifiers::IDENTIFIER_REGEX;
use crate::python::keyword::KWLIST;
use crate::registry::Diagnostic;
use crate::source_code_generator::SourceCodeGenerator;
use crate::source_code_style::SourceCodeStyleDetector;
use crate::source_code::{Generator, Stylist};
use crate::violations;
fn assignment(obj: &Expr, name: &str, value: &Expr, stylist: &SourceCodeStyleDetector) -> String {
fn assignment(obj: &Expr, name: &str, value: &Expr, stylist: &Stylist) -> String {
let stmt = Stmt::new(
Location::default(),
Location::default(),
@@ -28,7 +27,7 @@ fn assignment(obj: &Expr, name: &str, value: &Expr, stylist: &SourceCodeStyleDet
type_comment: None,
},
);
let mut generator: SourceCodeGenerator = stylist.into();
let mut generator: Generator = stylist.into();
generator.unparse_stmt(&stmt);
generator.generate()
}
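The `assignment` helper above builds the replacement statement for B010, which turns `setattr` with a constant attribute name into a plain assignment. In Python terms (illustrative names):

```python
class Config:
    pass

cfg = Config()
setattr(cfg, "timeout", 30)  # B010: constant attribute name
cfg.timeout = 30             # suggested fix: ordinary assignment
assert cfg.timeout == 30
```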


@@ -4,8 +4,8 @@ use rustpython_ast::{Expr, ExprKind, Stmt};
use crate::ast::types::Range;
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::violations;


@@ -1,6 +1,6 @@
---
source: src/flake8_bugbear/mod.rs
expression: checks
expression: diagnostics
---
- kind:
FunctionUsesLoopVariable: x
@@ -172,4 +172,74 @@ expression: checks
column: 16
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 117
column: 23
end_location:
row: 117
column: 24
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 118
column: 26
end_location:
row: 118
column: 27
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 119
column: 36
end_location:
row: 119
column: 37
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 120
column: 37
end_location:
row: 120
column: 38
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 121
column: 36
end_location:
row: 121
column: 37
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: name
location:
row: 171
column: 28
end_location:
row: 171
column: 32
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: i
location:
row: 174
column: 28
end_location:
row: 174
column: 29
fix: ~
parent: ~
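The snapshot entries above record B023 (`FunctionUsesLoopVariable`) hits: closures that read a loop variable and may run after the loop has moved on. A minimal reproduction of the bug, and the usual default-argument fix (illustrative):

```python
# Flagged: every lambda closes over the same `i` and reads it late.
callbacks = []
for i in range(3):
    callbacks.append(lambda: i)
assert [f() for f in callbacks] == [2, 2, 2]

# Fix: bind the current value as a default argument at definition time.
fixed = []
for i in range(3):
    fixed.append(lambda i=i: i)
assert [f() for f in fixed] == [0, 1, 2]
```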


@@ -1,5 +1,5 @@
pub mod rules;
pub mod types;
pub(crate) mod rules;
pub(crate) mod types;
#[cfg(test)]
mod tests {


@@ -7,9 +7,9 @@ use libcst_native::{
};
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::cst::matchers::{match_expr, match_module};
use crate::source_code_locator::SourceCodeLocator;
use crate::fix::Fix;
use crate::source_code::Locator;
fn match_call<'a, 'b>(expr: &'a mut Expr<'b>) -> Result<&'a mut Call<'b>> {
if let Expression::Call(call) = &mut expr.value {
@@ -29,7 +29,7 @@ fn match_arg<'a, 'b>(call: &'a Call<'b>) -> Result<&'a Arg<'b>> {
/// (C400) Convert `list(x for x in y)` to `[x for x in y]`.
pub fn fix_unnecessary_generator_list(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
// Expr(Call(GeneratorExp)))) -> Expr(ListComp)))
@@ -70,7 +70,7 @@ pub fn fix_unnecessary_generator_list(
/// (C401) Convert `set(x for x in y)` to `{x for x in y}`.
pub fn fix_unnecessary_generator_set(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
// Expr(Call(GeneratorExp)))) -> Expr(SetComp)))
@@ -112,7 +112,7 @@ pub fn fix_unnecessary_generator_set(
/// (C402) Convert `dict((x, x) for x in range(3))` to `{x: x for x in
/// range(3)}`.
pub fn fix_unnecessary_generator_dict(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
@@ -169,7 +169,7 @@ pub fn fix_unnecessary_generator_dict(
/// (C403) Convert `set([x for x in y])` to `{x for x in y}`.
pub fn fix_unnecessary_list_comprehension_set(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
// Expr(Call(ListComp)))) ->
@@ -210,7 +210,7 @@ pub fn fix_unnecessary_list_comprehension_set(
/// (C404) Convert `dict([(i, i) for i in range(3)])` to `{i: i for i in
/// range(3)}`.
pub fn fix_unnecessary_list_comprehension_dict(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
@@ -259,10 +259,7 @@ pub fn fix_unnecessary_list_comprehension_dict(
}
/// (C405) Convert `set((1, 2))` to `{1, 2}`.
pub fn fix_unnecessary_literal_set(
locator: &SourceCodeLocator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
pub fn fix_unnecessary_literal_set(locator: &Locator, expr: &rustpython_ast::Expr) -> Result<Fix> {
// Expr(Call(List|Tuple)))) -> Expr(Set)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
@@ -305,10 +302,7 @@ pub fn fix_unnecessary_literal_set(
}
/// (C406) Convert `dict([(1, 2)])` to `{1: 2}`.
pub fn fix_unnecessary_literal_dict(
locator: &SourceCodeLocator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
pub fn fix_unnecessary_literal_dict(locator: &Locator, expr: &rustpython_ast::Expr) -> Result<Fix> {
// Expr(Call(List|Tuple)))) -> Expr(Dict)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
@@ -374,7 +368,7 @@ pub fn fix_unnecessary_literal_dict(
/// (C408)
pub fn fix_unnecessary_collection_call(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
// Expr(Call("list" | "tuple" | "dict")))) -> Expr(List|Tuple|Dict)
@@ -483,7 +477,7 @@ pub fn fix_unnecessary_collection_call(
/// (C409) Convert `tuple([1, 2])` to `tuple(1, 2)`
pub fn fix_unnecessary_literal_within_tuple_call(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
@@ -537,7 +531,7 @@ pub fn fix_unnecessary_literal_within_tuple_call(
/// (C410) Convert `list([1, 2])` to `[1, 2]`
pub fn fix_unnecessary_literal_within_list_call(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
@@ -592,10 +586,7 @@ pub fn fix_unnecessary_literal_within_list_call(
}
/// (C411) Convert `list([i * i for i in x])` to `[i * i for i in x]`.
pub fn fix_unnecessary_list_call(
locator: &SourceCodeLocator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
pub fn fix_unnecessary_list_call(locator: &Locator, expr: &rustpython_ast::Expr) -> Result<Fix> {
// Expr(Call(List|Tuple)))) -> Expr(List|Tuple)))
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
let mut tree = match_module(&module_text)?;
@@ -619,7 +610,7 @@ pub fn fix_unnecessary_list_call(
/// (C413) Convert `reversed(sorted([2, 3, 1]))` to `sorted([2, 3, 1],
/// reverse=True)`.
pub fn fix_unnecessary_call_around_sorted(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
@@ -701,7 +692,7 @@ pub fn fix_unnecessary_call_around_sorted(
/// (C416) Convert `[i for i in x]` to `list(x)`.
pub fn fix_unnecessary_comprehension(
locator: &SourceCodeLocator,
locator: &Locator,
expr: &rustpython_ast::Expr,
) -> Result<Fix> {
let module_text = locator.slice_source_code_range(&Range::from_located(expr));
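Each fixer above slices the original source via the `Locator`, rewrites the expression with libcst, and returns a `Fix`. The Python-level rewrites they implement are behavior-preserving, for example (illustrative values):

```python
y = [1, 2, 3]
# C400: list(x for x in y) -> [x for x in y]
assert list(x * 2 for x in y) == [x * 2 for x in y] == [2, 4, 6]
# C401: set(x for x in y) -> {x for x in y}
assert set(x % 2 for x in y) == {x % 2 for x in y} == {0, 1}
# C405: set((1, 2)) -> {1, 2}
assert set((1, 2)) == {1, 2}
# C413: reversed(sorted(...)) -> sorted(..., reverse=True)
assert list(reversed(sorted([2, 3, 1]))) == sorted([2, 3, 1], reverse=True) == [3, 2, 1]
```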


@@ -1,5 +1,5 @@
mod fixes;
pub mod rules;
pub(crate) mod rules;
#[cfg(test)]
mod tests {
