Compare commits

...

35 Commits

Author SHA1 Message Date
Charlie Marsh
d8162ce79d Bump version to 0.0.219 2023-01-11 23:46:01 -05:00
Charlie Marsh
e11ef54bda Improve globset documentation and help message (#1808)
Closes #1545.
2023-01-11 23:41:56 -05:00
messense
9a07b0623e Move top level ruff into python folder (#1806)
https://maturin.rs/project_layout.html#mixed-rustpython-project

Resolves #1805
2023-01-11 23:12:55 -05:00
Charlie Marsh
f450e2e79d Implement doc line length enforcement (#1804)
This PR implements `W505` (`DocLineTooLong`), which is similar to `E501`
(`LineTooLong`) but confined to doc lines.

I based the "doc line" definition on pycodestyle, which defines a doc
line as a standalone comment or string statement. Our definition is a
bit more liberal, since we consider any string statement a doc line
(even if it's part of a multi-line statement) -- but that seems fine to
me.

Note that, unusually, this rule requires custom extraction from both the
token stream (to find standalone comments) and the AST (to find string
statements).

Closes #1784.
2023-01-11 22:32:14 -05:00
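For illustration (not part of the commit itself), a minimal Python sketch of what counts as a doc line under this definition, assuming `max-doc-length = 50` under `[tool.ruff.pycodestyle]`:
```python
# Assuming `max-doc-length = 50` under [tool.ruff.pycodestyle].

def fetch(url):
    """Fetch the given URL and return the response body, retrying on error."""  # W505: doc line > 50
    # A standalone comment this long is also treated as a doc line, so it is flagged too (W505).
    body = url.upper()  # ordinary code lines fall under E501, not W505
    return body
```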
Colin Delahunty
329946f162 Avoid erroneous Q002 error message for single-quote docstrings (#1777)
Fixes #1775. Before implementing your solution I thought of a slightly
simpler one. However, it will let this function pass:
```
def double_inside_single(a):
    'Double inside "single "'
```
If we want the function to pass, my implementation works. But if we do not,
then I can go with the implementation you suggested (I left how I would
begin to handle it commented out). The bottom of the flake8-quotes
documentation seems to suggest that this should pass:
https://pypi.org/project/flake8-quotes/

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 20:01:54 -05:00
Charlie Marsh
588399e415 Fix Clippy error 2023-01-11 19:59:00 -05:00
Chammika Mannakkara
4523885268 flake8_simplify : SIM401 (#1778)
Ref #998 

- Implements SIM401 with fix
- Added tests

Notes:
- Only recognizes simple `ExprKind::Name` variables in the expression
patterns for now.
- Bug fix relative to the reference implementation: we check that all three
components (dict key, target variable, dict name) are equal; `flake8_simplify`
only checks the first two (and only the first in the second pattern).
2023-01-11 19:51:37 -05:00
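For reference, a minimal Python sketch of the two flagged patterns and the suggested rewrite (mirroring the fixture added in this PR):
```python
a_dict = {"key": 1}
key = "key"

# SIM401 (pattern 1)
if key in a_dict:
    var = a_dict[key]
else:
    var = "default"

# SIM401 (pattern 2)
if key not in a_dict:
    var = "default"
else:
    var = a_dict[key]

# Suggested fix for both patterns
var = a_dict.get(key, "default")
```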
Maksudul Haque
de81b0cd38 [flake8-simplify] Add Rule for SIM115 (Use context handler for opening files) (#1782)
ref: https://github.com/charliermarsh/ruff/issues/998

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 19:28:05 -05:00
Charlie Marsh
4fce296e3f Skip SIM108 violations for complex if-statements (#1802)
We now skip SIM108 violations if the resulting statement would exceed the
user-specified line length, or if the `if` statement contains comments.

Closes #1719.

Closes #1766.
2023-01-11 19:21:30 -05:00
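A minimal sketch of the new behavior (variable names are illustrative): the first `if` is still collapsed into a ternary, while the commented one is now skipped.
```python
a, c, d = True, 1, 2

# Still flagged by SIM108; suggested fix: `b = c if a else d`.
if a:
    b = c
else:
    b = d

x = 5

# Now skipped: the `if` statement contains comments.
if x > 0:
    # keep the sign
    abc = x
else:
    # flip the sign
    abc = -x

# Also skipped: any case where the equivalent ternary would exceed the
# configured line length.
```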
Charlie Marsh
9d48d7bbd1 Skip unused argument checks for magic methods (#1801)
We still check `__init__`, `__call__`, and `__new__`.

Closes #1796.
2023-01-11 19:02:20 -05:00
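A small sketch of the new behavior for the unused-argument rules (`ARG`); the names here are illustrative:
```python
class Widget:
    def __init__(self, name):  # still checked: unused `name` is flagged
        pass

    def __eq__(self, other):  # other magic methods are now skipped:
        return True           # unused `other` is no longer flagged
```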
Charlie Marsh
c56f263618 Avoid flagging builtins for OSError rewrites (#1800)
Related to (but does not fix) #1790.
2023-01-11 18:49:25 -05:00
Grzegorz Bokota
fb2382fbc3 Update readme to reflect #1763 (#1780)
When checking the changes in the 0.0.218 release, I noticed that autofixing
of PT004 and PT005 was disabled, but this change was not reflected in the
README. So I created this small PR to do so.

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-11 18:37:41 -05:00
Charlie Marsh
c92a5a8704 Avoid rewriting flake8-comprehensions expressions for builtin overrides (#1799)
Closes #1788.
2023-01-11 18:33:55 -05:00
Charlie Marsh
d7cf3147b7 Refactor flake8-comprehensions rules to take fewer arguments (#1797) 2023-01-11 18:21:18 -05:00
Charlie Marsh
bf4d35c705 Convert flake8-comprehensions checks to Checker style (#1795) 2023-01-11 18:11:20 -05:00
Charlie Marsh
4e97e9c7cf Improve PIE794 autofix behavior (#1794)
We now: (1) trigger PIE794 for objects without bases (not sure why this
was omitted before); and (2) remove the entire line, rather than leaving
behind trailing whitespace.

Resolves #1787.
2023-01-11 18:01:29 -05:00
Charlie Marsh
a3fcc3b28d Disable update check by default (#1786)
This has received enough criticism that I'm comfortable making it
opt-in.
2023-01-11 13:47:40 -05:00
Charlie Marsh
cfbd068dd5 Bump version to 0.0.218 2023-01-10 21:28:23 -05:00
Charlie Marsh
8aed23fe0a Avoid B023 false-positives for some common builtins (#1776)
This is based on the upstream work in
https://github.com/PyCQA/flake8-bugbear/pull/303 and
https://github.com/PyCQA/flake8-bugbear/pull/305/files.

Resolves #1686.
2023-01-10 21:23:48 -05:00
Colin Delahunty
c016c41c71 Pyupgrade: Format specifiers (#1594)
A part of #827. Posting this for visibility. Still has some work to be
done.

Things that still need to be done before this is ready:

- [x] Does not work when the item is being assigned to a variable
- [x] Does not work if being used in a function call
- [x] Fix incorrectly removed calls in the function
- [x] Has not been tested with pyupgrade negative test cases

Tests from pyupgrade can be seen here:
https://github.com/asottile/pyupgrade/blob/main/tests/features/format_literals_test.py

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-10 20:21:04 -05:00
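A minimal sketch of the rewrite this enables (`UP030`, "Use implicit references for positional format fields", per the README entry further down):
```python
name, count = "ruff", 3

# Flagged by UP030: explicit positional indices in format fields.
msg = "{0} found {1} issues".format(name, count)

# After the autofix: implicit positional references.
msg = "{} found {} issues".format(name, count)
```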
Charlie Marsh
f1a5e53f06 Enable isort-style required-imports enforcement (#1762)
In isort, this is called `add-imports`, but I prefer the declarative
name.

The idea is that by adding the following to your `pyproject.toml`, you
can ensure that the import is included in all files:

```toml
[tool.ruff.isort]
required-imports = ["from __future__ import annotations"]
```

I mostly reverse-engineered isort's logic for making decisions, though I
made some slight tweaks that I think are preferable. A few comments:

- Like isort, we don't enforce this on empty files (like empty
`__init__.py`).
- Like isort, we require that the import is at the top-level.
- isort will skip any docstrings, and any comments on the first three
lines (I think, based on testing). Ruff places the import after the last
docstring or comment in the file preamble (that is: after the last
docstring or comment that comes before the _first_ non-docstring and
non-comment).

Resolves #1700.
2023-01-10 18:12:57 -05:00
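A hedged sketch of the resulting fix (the module contents are illustrative): with the setting above, a file missing the import is flagged (`I002`), and the autofix inserts it after the preamble, i.e. after the last leading docstring or comment.
```python
"""Utility helpers."""
# ^ file preamble: the required import is inserted immediately after the last
#   docstring or comment that precedes the first "real" statement.

from __future__ import annotations


def add(a: int, b: int) -> int:
    return a + b
```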
Charlie Marsh
1e94e0221f Disable doctests (#1772)
We don't have any doctests, yet `cargo test --all` spends more than half of
its time on doctests. A little confusing, but disabling them brings the test
time from > 4s to < 2s on my machine.
2023-01-10 15:10:16 -05:00
Martin Fischer
543865c96b Generate RuleCode::origin() via macro (#1770) 2023-01-10 13:20:43 -05:00
Maksudul Haque
b8e3f0bc13 [flake8-bandit] Add Rule for S508 (snmp insecure version) & S509 (snmp weak cryptography) (#1771)
ref: https://github.com/charliermarsh/ruff/issues/1646

Co-authored-by: messense <messense@icloud.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-01-10 13:13:54 -05:00
Charlie Marsh
643cedb200 Move CONTRIBUTING.md to top-level (#1768) 2023-01-10 07:38:12 -05:00
Charlie Marsh
91620c378a Disable release builds on CI (#1761) 2023-01-10 07:33:03 -05:00
Harutaka Kawamura
b732135795 Do not autofix PT004 and PT005 (#1763)
As @edgarrmondragon commented in
https://github.com/charliermarsh/ruff/pull/1740#issuecomment-1376230550,
just renaming the fixture doesn't work.
2023-01-10 07:24:16 -05:00
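A hedged sketch of why the rename can't be applied automatically: the fixture's name is part of the test API, so renaming only the definition would break every test that requests it.
```python
import pytest


@pytest.fixture()
def patch_env(monkeypatch):  # PT004: fixture returns nothing, "should" gain a leading underscore
    monkeypatch.setenv("MODE", "test")


def test_mode(patch_env):  # ...but tests request fixtures by name, so renaming
    ...                    # only the definition would leave this test broken
```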
messense
9384a081f9 Implement flake8-simplify SIM112 (#1764)
Ref #998
2023-01-10 07:24:01 -05:00
Charlie Marsh
edab268d50 Bump version to 0.0.217 2023-01-09 23:26:22 -05:00
Charlie Marsh
e4fad70a57 Update documentation to match latest terminology (#1760)
Closes #1759.
2023-01-09 21:10:47 -05:00
Charlie Marsh
1a09fff991 Update rule-generation scripts to match latest conventions (#1758)
Resolves #1755.
2023-01-09 19:55:46 -05:00
Charlie Marsh
b85105d2ec Add a helper for any-like operations (#1757) 2023-01-09 19:34:33 -05:00
Charlie Marsh
f7ac28a935 Omit sys.version_info and sys.platform checks from ternary rule (#1756)
Resolves #1753.
2023-01-09 19:22:34 -05:00
Charlie Marsh
9532f342a6 Enable project-specific typing module re-exports (#1754)
Resolves #1744.
2023-01-09 18:17:50 -05:00
Mohamed Daahir
0ee37aa0aa Cache build artifacts using Swatinem/rust-cache@v1 (#1750)
This GitHub Action caches build artifacts in addition to dependencies,
which halves the CI duration.

Resolves #1752.
2023-01-09 15:35:32 -05:00
149 changed files with 4419 additions and 1856 deletions

View File

@@ -26,21 +26,9 @@ jobs:
profile: minimal
toolchain: nightly-2022-11-01
override: true
components: rustfmt
- uses: actions/cache@v3
env:
cache-name: cache-cargo
with:
path: |
~/.cargo/registry
~/.cargo/git
key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-build-${{ env.cache-name }}-
${{ runner.os }}-build-
${{ runner.os }}-
- run: cargo build --all --release
- run: ./target/release/ruff_dev generate-all
- uses: Swatinem/rust-cache@v1
- run: cargo build --all
- run: ./target/debug/ruff_dev generate-all
- run: git diff --quiet README.md || echo "::error file=README.md::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --quiet ruff.schema.json || echo "::error file=ruff.schema.json::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --exit-code -- README.md ruff.schema.json
@@ -56,18 +44,6 @@ jobs:
toolchain: nightly-2022-11-01
override: true
components: rustfmt
- uses: actions/cache@v3
env:
cache-name: cache-cargo
with:
path: |
~/.cargo/registry
~/.cargo/git
key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-build-${{ env.cache-name }}-
${{ runner.os }}-build-
${{ runner.os }}-
- run: cargo fmt --all --check
cargo_clippy:
@@ -82,18 +58,7 @@ jobs:
override: true
components: clippy
target: wasm32-unknown-unknown
- uses: actions/cache@v3
env:
cache-name: cache-cargo
with:
path: |
~/.cargo/registry
~/.cargo/git
key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-build-${{ env.cache-name }}-
${{ runner.os }}-build-
${{ runner.os }}-
- uses: Swatinem/rust-cache@v1
- run: cargo clippy --workspace --all-targets --all-features -- -D warnings -W clippy::pedantic
- run: cargo clippy --workspace --target wasm32-unknown-unknown --all-features -- -D warnings -W clippy::pedantic
@@ -107,18 +72,7 @@ jobs:
profile: minimal
toolchain: nightly-2022-11-01
override: true
- uses: actions/cache@v3
env:
cache-name: cache-cargo
with:
path: |
~/.cargo/registry
~/.cargo/git
key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-build-${{ env.cache-name }}-
${{ runner.os }}-build-
${{ runner.os }}-
- uses: Swatinem/rust-cache@v1
- run: cargo install cargo-insta
- run: pip install black[d]==22.12.0
- name: Run tests
@@ -167,22 +121,11 @@ jobs:
profile: minimal
toolchain: nightly-2022-11-01
override: true
- uses: Swatinem/rust-cache@v1
- uses: actions/setup-python@v4
with:
python-version: "3.11"
- run: pip install maturin
- uses: actions/cache@v3
env:
cache-name: cache-cargo
with:
path: |
~/.cargo/registry
~/.cargo/git
key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-build-${{ env.cache-name }}-
${{ runner.os }}-build-
${{ runner.os }}-
- run: maturin build -b bin
typos:

View File

@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.216
rev: v0.0.219
hooks:
- id: ruff

View File

@@ -56,9 +56,9 @@ prior to merging.
There are four phases to adding a new lint rule:
1. Define the violation in `src/violations.rs` (e.g., `ModuleImportNotAtTopOfFile`).
2. Map the violation to a code in `src/registry.rs` (e.g., `E402`).
3. Define the _logic_ for triggering the violation in `src/checkers/ast.rs` (for AST-based checks),
1. Define the violation struct in `src/violations.rs` (e.g., `ModuleImportNotAtTopOfFile`).
2. Map the violation struct to a rule code in `src/registry.rs` (e.g., `E402`).
3. Define the logic for triggering the violation in `src/checkers/ast.rs` (for AST-based checks),
`src/checkers/tokens.rs` (for token-based checks), or `src/checkers/lines.rs` (for text-based checks).
4. Add a test fixture.
5. Update the generated files (documentation and generated code).
@@ -74,15 +74,16 @@ collecting diagnostics as it goes.
If you need to inspect the AST, you can run `cargo +nightly dev print-ast` with a Python file. Grep
for the `Check::new` invocations to understand how other, similar rules are implemented.
To add a test fixture, create a file under `resources/test/fixtures/[plugin-name]`, named to match
the code you defined earlier (e.g., `E402.py`). This file should contain a variety of
violations and non-violations designed to evaluate and demonstrate the behavior of your lint rule.
To add a test fixture, create a file under `resources/test/fixtures/[origin]`, named to match
the code you defined earlier (e.g., `resources/test/fixtures/pycodestyle/E402.py`). This file should
contain a variety of violations and non-violations designed to evaluate and demonstrate the behavior
of your lint rule.
Run `cargo +nightly dev generate-all` to generate the code for your new fixture. Then run Ruff
locally with (e.g.) `cargo run resources/test/fixtures/pycodestyle/E402.py --no-cache --select E402`.
Once you're satisfied with the output, codify the behavior as a snapshot test by adding a new
`test_case` macro in the relevant `src/[plugin-name]/mod.rs` file. Then, run `cargo test --all`.
`test_case` macro in the relevant `src/[origin]/mod.rs` file. Then, run `cargo test --all`.
Your test will fail, but you'll be prompted to follow-up with `cargo insta review`. Accept the
generated snapshot, then commit the snapshot file alongside the rest of your changes.

8
Cargo.lock generated
View File

@@ -735,7 +735,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.216-dev.0"
version = "0.0.219-dev.0"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1874,7 +1874,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.216"
version = "0.0.219"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1942,7 +1942,7 @@ dependencies = [
[[package]]
name = "ruff_dev"
version = "0.0.216"
version = "0.0.219"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1962,7 +1962,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
version = "0.0.216"
version = "0.0.219"
dependencies = [
"once_cell",
"proc-macro2",

View File

@@ -6,7 +6,7 @@ members = [
[package]
name = "ruff"
version = "0.0.216"
version = "0.0.219"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -19,6 +19,7 @@ license = "MIT"
[lib]
name = "ruff"
crate-type = ["cdylib", "rlib"]
doctest = false
[dependencies]
annotate-snippets = { version = "0.9.1", features = ["color"] }
@@ -51,7 +52,7 @@ path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix
quick-junit = { version = "0.3.2" }
regex = { version = "1.6.0" }
ropey = { version = "1.5.0", features = ["cr_lines", "simd"], default-features = false }
ruff_macros = { version = "0.0.216", path = "ruff_macros" }
ruff_macros = { version = "0.0.219", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "d532160333ffeb6dbeca2c2728c2391cd1e53b7f" }

275
README.md
View File

@@ -164,9 +164,9 @@ pacman -S ruff
To run Ruff, try any of the following:
```shell
ruff path/to/code/to/check.py # Run Ruff over `check.py`
ruff path/to/code/ # Run Ruff over all files in `/path/to/code` (and any subdirectories)
ruff path/to/code/*.py # Run Ruff over all `.py` files in `/path/to/code`
ruff path/to/code/to/lint.py # Run Ruff over `lint.py`
ruff path/to/code/ # Run Ruff over all files in `/path/to/code` (and any subdirectories)
ruff path/to/code/*.py # Run Ruff over all `.py` files in `/path/to/code`
```
You can run Ruff in `--watch` mode to automatically re-run on-change:
@@ -180,7 +180,7 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.216'
rev: 'v0.0.219'
hooks:
- id: ruff
# Respect `exclude` and `extend-exclude` settings.
@@ -237,9 +237,9 @@ target-version = "py310"
max-complexity = 10
```
As an example, the following would configure Ruff to: (1) avoid checking for line-length
violations (`E501`); (2) never remove unused imports (`F401`); and (3) ignore import-at-top-of-file
errors (`E402`) in `__init__.py` files:
As an example, the following would configure Ruff to: (1) avoid enforcing line-length violations
(`E501`); (2) never remove unused imports (`F401`); and (3) ignore import-at-top-of-file violations
(`E402`) in `__init__.py` files:
```toml
[tool.ruff]
@@ -269,16 +269,16 @@ select = ["E", "F", "Q"]
docstring-quotes = "double"
```
Ruff mirrors Flake8's error code system, in which each error code consists of a one-to-three letter
prefix, followed by three digits (e.g., `F401`). The prefix indicates that "source" of the error
code (e.g., `F` for Pyflakes, `E` for `pycodestyle`, `ANN` for `flake8-annotations`). The set of
enabled errors is determined by the `select` and `ignore` options, which support both the full
error code (e.g., `F401`) and the prefix (e.g., `F`).
Ruff mirrors Flake8's rule code system, in which each rule code consists of a one-to-three letter
prefix, followed by three digits (e.g., `F401`). The prefix indicates that "source" of the rule
(e.g., `F` for Pyflakes, `E` for `pycodestyle`, `ANN` for `flake8-annotations`). The set of enabled
rules is determined by the `select` and `ignore` options, which support both the full code (e.g.,
`F401`) and the prefix (e.g., `F`).
As a special-case, Ruff also supports the `ALL` error code, which enables all error codes. Note that
some of the `pydocstyle` error codes are conflicting (e.g., `D203` and `D211`) as they represent
alternative docstring formats. Enabling `ALL` without further configuration may result in suboptimal
behavior, especially for the `pydocstyle` plugin.
As a special-case, Ruff also supports the `ALL` code, which enables all rules. Note that some of the
`pydocstyle` rules conflict (e.g., `D203` and `D211`) as they represent alternative docstring
formats. Enabling `ALL` without further configuration may result in suboptimal behavior, especially
for the `pydocstyle` plugin.
As an alternative to `pyproject.toml`, Ruff will also respect a `ruff.toml` file, which implements
an equivalent schema (though the `[tool.ruff]` hierarchy can be omitted). For example, the
@@ -326,43 +326,43 @@ Options:
-v, --verbose
Enable verbose logging
-q, --quiet
Only log errors
Print lint violations, but nothing else
-s, --silent
Disable all logging (but still exit with status code "1" upon detecting errors)
Disable all logging (but still exit with status code "1" upon detecting lint violations)
-e, --exit-zero
Exit with status code "0", even upon detecting errors
Exit with status code "0", even upon detecting lint violations
-w, --watch
Run in watch mode by re-running whenever files change
--fix
Attempt to automatically fix lint errors
Attempt to automatically fix lint violations
--fix-only
Fix any fixable lint errors, but don't report on leftover violations. Implies `--fix`
Fix any fixable lint violations, but don't report on leftover violations. Implies `--fix`
--diff
Avoid writing any fixed files back; instead, output a diff for each changed file to stdout
-n, --no-cache
Disable cache reads
--isolated
Ignore all configuration files
--select <SELECT>
--select <RULE_CODE>
Comma-separated list of rule codes to enable (or ALL, to enable all rules)
--extend-select <EXTEND_SELECT>
Like --select, but adds additional error codes on top of the selected ones
--ignore <IGNORE>
Comma-separated list of error codes to disable
--extend-ignore <EXTEND_IGNORE>
Like --ignore, but adds additional error codes on top of the ignored ones
--exclude <EXCLUDE>
List of paths, used to exclude files and/or directories from checks
--extend-exclude <EXTEND_EXCLUDE>
Like --exclude, but adds additional files and directories on top of the excluded ones
--fixable <FIXABLE>
List of error codes to treat as eligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--unfixable <UNFIXABLE>
List of error codes to treat as ineligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--extend-select <RULE_CODE>
Like --select, but adds additional rule codes on top of the selected ones
--ignore <RULE_CODE>
Comma-separated list of rule codes to disable
--extend-ignore <RULE_CODE>
Like --ignore, but adds additional rule codes on top of the ignored ones
--exclude <FILE_PATTERN>
List of paths, used to omit files and/or directories from analysis
--extend-exclude <FILE_PATTERN>
Like --exclude, but adds additional files and directories on top of those already excluded
--fixable <RULE_CODE>
List of rule codes to treat as eligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--unfixable <RULE_CODE>
List of rule codes to treat as ineligible for autofix. Only applicable when autofix itself is enabled (e.g., via `--fix`)
--per-file-ignores <PER_FILE_IGNORES>
List of mappings from file pattern to code to exclude
--format <FORMAT>
Output serialization format for error messages [env: RUFF_FORMAT=] [possible values: text, json, junit, grouped, github, gitlab]
Output serialization format for violations [env: RUFF_FORMAT=] [possible values: text, json, junit, grouped, github, gitlab]
--stdin-filename <STDIN_FILENAME>
The name of the file when passing it through stdin
--cache-dir <CACHE_DIR>
@@ -380,7 +380,7 @@ Options:
--target-version <TARGET_VERSION>
The minimum Python version that should be supported
--line-length <LINE_LENGTH>
Set the line-length for length-associated checks and automatic formatting
Set the line-length for length-associated rules and automatic formatting
--max-complexity <MAX_COMPLEXITY>
Maximum McCabe complexity allowed for a given function
--add-noqa
@@ -392,7 +392,7 @@ Options:
--show-files
See the files Ruff will be run against with the current settings
--show-settings
See the settings Ruff will use to check a given Python file
See the settings Ruff will use to lint a given Python file
-h, --help
Print help information
-V, --version
@@ -449,16 +449,16 @@ in each directory's `pyproject.toml` file.
By default, Ruff will also skip any files that are omitted via `.ignore`, `.gitignore`,
`.git/info/exclude`, and global `gitignore` files (see: [`respect-gitignore`](#respect-gitignore)).
Files that are passed to `ruff` directly are always checked, regardless of the above criteria.
For example, `ruff /path/to/excluded/file.py` will always check `file.py`.
Files that are passed to `ruff` directly are always linted, regardless of the above criteria.
For example, `ruff /path/to/excluded/file.py` will always lint `file.py`.
### Ignoring errors
To omit a lint check entirely, add it to the "ignore" list via [`ignore`](#ignore) or
To omit a lint rule entirely, add it to the "ignore" list via [`ignore`](#ignore) or
[`extend-ignore`](#extend-ignore), either on the command-line or in your `pyproject.toml` file.
To ignore an error inline, Ruff uses a `noqa` system similar to [Flake8](https://flake8.pycqa.org/en/3.1.1/user/ignoring-errors.html).
To ignore an individual error, add `# noqa: {code}` to the end of the line, like so:
To ignore a violation inline, Ruff uses a `noqa` system similar to [Flake8](https://flake8.pycqa.org/en/3.1.1/user/ignoring-errors.html).
To ignore an individual violation, add `# noqa: {code}` to the end of the line, like so:
```python
# Ignore F841.
@@ -467,7 +467,7 @@ x = 1 # noqa: F841
# Ignore E741 and F841.
i = 1 # noqa: E741, F841
# Ignore _all_ errors.
# Ignore _all_ violations.
x = 1 # noqa
```
@@ -481,9 +481,9 @@ Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor i
""" # noqa: E501
```
To ignore all errors across an entire file, Ruff supports Flake8's `# flake8: noqa` directive (or,
equivalently, `# ruff: noqa`). Adding either of those directives to any part of a file will disable
error reporting for the entire file.
To ignore all violations across an entire file, Ruff supports Flake8's `# flake8: noqa` directive
(or, equivalently, `# ruff: noqa`). Adding either of those directives to any part of a file will
disable enforcement across the entire file.
For targeted exclusions across entire files (e.g., "Ignore all F841 violations in
`/path/to/file.py`"), see the [`per-file-ignores`](#per-file-ignores) configuration setting.
@@ -502,8 +502,8 @@ for more.
Ruff supports several workflows to aid in `noqa` management.
First, Ruff provides a special error code, `RUF100`, to enforce that your `noqa` directives are
"valid", in that the errors they _say_ they ignore are actually being triggered on that line (and
First, Ruff provides a special rule code, `RUF100`, to enforce that your `noqa` directives are
"valid", in that the violations they _say_ they ignore are actually being triggered on that line (and
thus suppressed). You can run `ruff /path/to/file.py --extend-select RUF100` to flag unused `noqa`
directives.
@@ -513,13 +513,13 @@ You can run `ruff /path/to/file.py --extend-select RUF100 --fix` to automaticall
Third, Ruff can _automatically add_ `noqa` directives to all failing lines. This is useful when
migrating a new codebase to Ruff. You can run `ruff /path/to/file.py --add-noqa` to automatically
add `noqa` directives to all failing lines, with the appropriate error codes.
add `noqa` directives to all failing lines, with the appropriate rule codes.
## Supported Rules
Regardless of the rule's origin, Ruff re-implements every rule in Rust as a first-party feature.
By default, Ruff enables all `E` and `F` error codes, which correspond to those built-in to Flake8.
By default, Ruff enables all `E` and `F` rule codes, which correspond to those built-in to Flake8.
The 🛠 emoji indicates that a rule is automatically fixable by the `--fix` command-line option.
@@ -597,6 +597,7 @@ For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI
| E902 | IOError | IOError: `...` | |
| E999 | SyntaxError | SyntaxError: `...` | |
| W292 | NoNewLineAtEndOfFile | No newline at end of file | 🛠 |
| W505 | DocLineTooLong | Doc line too long (89 > 88 characters) | |
| W605 | InvalidEscapeSequence | Invalid escape sequence: '\c' | 🛠 |
### mccabe (C90)
@@ -614,6 +615,7 @@ For more, see [isort](https://pypi.org/project/isort/5.10.1/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| I001 | UnsortedImports | Import block is un-sorted or un-formatted | 🛠 |
| I002 | MissingRequiredImport | Missing required import: `from __future__ import ...` | 🛠 |
### pydocstyle (D)
@@ -701,6 +703,7 @@ For more, see [pyupgrade](https://pypi.org/project/pyupgrade/3.2.0/) on PyPI.
| UP027 | RewriteListComprehension | Replace unpacked list comprehension with a generator expression | 🛠 |
| UP028 | RewriteYieldFrom | Replace `yield` over `for` loop with `yield from` | 🛠 |
| UP029 | UnnecessaryBuiltinImport | Unnecessary builtin import: `...` | 🛠 |
| UP030 | FormatLiterals | Use implicit references for positional format fields | 🛠 |
### pep8-naming (N)
@@ -777,6 +780,8 @@ For more, see [flake8-bandit](https://pypi.org/project/flake8-bandit/4.1.1/) on
| S324 | HashlibInsecureHashFunction | Probable use of insecure hash functions in `hashlib`: "..." | |
| S501 | RequestWithNoCertValidation | Probable use of `...` call with `verify=False` disabling SSL certificate checks | |
| S506 | UnsafeYAMLLoad | Probable use of unsafe `yaml.load`. Allows instantiation of arbitrary objects. Consider `yaml.safe_load`. | |
| S508 | SnmpInsecureVersion | The use of SNMPv1 and SNMPv2 is insecure. Use SNMPv3 if able. | |
| S509 | SnmpWeakCryptography | You should not use SNMPv3 without encryption. `noAuthNoPriv` & `authNoPriv` is insecure. | |
### flake8-blind-except (BLE)
@@ -916,8 +921,8 @@ For more, see [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style
| PT001 | IncorrectFixtureParenthesesStyle | Use `@pytest.fixture()` over `@pytest.fixture` | 🛠 |
| PT002 | FixturePositionalArgs | Configuration for fixture `...` specified via positional args, use kwargs | |
| PT003 | ExtraneousScopeFunction | `scope='function'` is implied in `@pytest.fixture()` | |
| PT004 | MissingFixtureNameUnderscore | Fixture `...` does not return anything, add leading underscore | 🛠 |
| PT005 | IncorrectFixtureNameUnderscore | Fixture `...` returns a value, remove leading underscore | 🛠 |
| PT004 | MissingFixtureNameUnderscore | Fixture `...` does not return anything, add leading underscore | |
| PT005 | IncorrectFixtureNameUnderscore | Fixture `...` returns a value, remove leading underscore | |
| PT006 | ParametrizeNamesWrongType | Wrong name(s) type in `@pytest.mark.parametrize`, expected `tuple` | 🛠 |
| PT007 | ParametrizeValuesWrongType | Wrong values type in `@pytest.mark.parametrize` expected `list` of `tuple` | |
| PT008 | PatchWithLambda | Use `return_value=` instead of patching with `lambda` | |
@@ -971,6 +976,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| SIM115 | OpenFileWithContextHandler | Use context handler for opening files | |
| SIM101 | DuplicateIsinstanceCall | Multiple `isinstance` calls for `...`, merge into a single call | 🛠 |
| SIM102 | NestedIfStatements | Use a single `if` statement instead of nested `if` statements | |
| SIM103 | ReturnBoolConditionDirectly | Return the condition `...` directly | 🛠 |
@@ -980,6 +986,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| SIM109 | CompareWithTuple | Use `value in (..., ...)` instead of `value == ... or value == ...` | 🛠 |
| SIM110 | ConvertLoopToAny | Use `return any(x for x in y)` instead of `for` loop | 🛠 |
| SIM111 | ConvertLoopToAll | Use `return all(x for x in y)` instead of `for` loop | 🛠 |
| SIM112 | UseCapitalEnvironmentVariables | Use capitalized environment variable `...` instead of `...` | 🛠 |
| SIM117 | MultipleWithStatements | Use a single `with` statement with multiple contexts instead of nested `with` statements | |
| SIM118 | KeyInDict | Use `key in dict` instead of `key in dict.keys()` | 🛠 |
| SIM201 | NegateEqualOp | Use `left != right` instead of `not left == right` | 🛠 |
@@ -993,6 +1000,7 @@ For more, see [flake8-simplify](https://pypi.org/project/flake8-simplify/0.19.3/
| SIM222 | OrTrue | Use `True` instead of `... or True` | 🛠 |
| SIM223 | AndFalse | Use `False` instead of `... and False` | 🛠 |
| SIM300 | YodaConditions | Yoda conditions are discouraged, use `left == right` instead | 🛠 |
| SIM401 | DictGetWithDefault | Use `var = dict.get(key, "default")` instead of an `if` block | 🛠 |
### flake8-tidy-imports (TID)
@@ -1066,8 +1074,8 @@ For more, see [pygrep-hooks](https://github.com/pre-commit/pygrep-hooks) on GitH
| ---- | ---- | ------- | --- |
| PGH001 | NoEval | No builtin `eval()` allowed | |
| PGH002 | DeprecatedLogWarn | `warn` is deprecated in favor of `warning` | |
| PGH003 | BlanketTypeIgnore | Use specific error codes when ignoring type issues | |
| PGH004 | BlanketNOQA | Use specific error codes when using `noqa` | |
| PGH003 | BlanketTypeIgnore | Use specific rule codes when ignoring type issues | |
| PGH004 | BlanketNOQA | Use specific rule codes when using `noqa` | |
### Pylint (PLC, PLE, PLR, PLW)
@@ -1393,7 +1401,7 @@ natively, including:
- [`pyupgrade`](https://pypi.org/project/pyupgrade/) ([#827](https://github.com/charliermarsh/ruff/issues/827))
- [`yesqa`](https://github.com/asottile/yesqa)
Note that, in some cases, Ruff uses different error code prefixes than would be found in the
Note that, in some cases, Ruff uses different rule codes and prefixes than would be found in the
originating Flake8 plugins. For example, Ruff uses `TID252` to represent the `I252` rule from
`flake8-tidy-imports`. This helps minimize conflicts across plugins and allows any individual plugin
to be toggled on or off with a single (e.g.) `--select TID`, as opposed to `--select I2` (to avoid
@@ -1418,9 +1426,9 @@ At time of writing, Pylint implements 409 total rules, while Ruff implements 224
at least 60 overlap with the Pylint rule set. Subjectively, Pylint tends to implement more rules
based on type inference (e.g., validating the number of arguments in a function call).
Like Flake8, Pylint supports plugins (called "checkers"), while Ruff implements all checks natively.
Like Flake8, Pylint supports plugins (called "checkers"), while Ruff implements all rules natively.
Unlike Pylint, Ruff is capable of automatically fixing its own lint errors.
Unlike Pylint, Ruff is capable of automatically fixing its own lint violations.
Pylint parity is being tracked in [#689](https://github.com/charliermarsh/ruff/issues/689).
@@ -1533,7 +1541,7 @@ For example, if you're coming from `flake8-docstrings`, and your originating con
`--docstring-convention=numpy`, you'd instead set `convention = "numpy"` in your `pyproject.toml`,
as above.
Alongside `convention`, you'll want to explicitly enable the `D` error code class, like so:
Alongside `convention`, you'll want to explicitly enable the `D` rule code prefix, like so:
```toml
[tool.ruff]
@@ -1786,7 +1794,7 @@ cache-dir = "~/.cache/ruff"
#### [`dummy-variable-rgx`](#dummy-variable-rgx)
A regular expression used to identify "dummy" variables, or those which
should be ignored when evaluating (e.g.) unused-variable checks. The
should be ignored when enforcing (e.g.) unused-variable rules. The
default expression matches `_`, `__`, and `_var`, but not `_var_`.
**Default value**: `"^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"`
@@ -1817,6 +1825,8 @@ Exclusions are based on globs, and can be either:
`directory`). Note that these paths are relative to the project root
(e.g., the directory containing your `pyproject.toml`).
For more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
Note that you'll typically want to use
[`extend-exclude`](#extend-exclude) to modify the excluded paths.
@@ -1864,6 +1874,18 @@ line-length = 100
A list of file patterns to omit from linting, in addition to those
specified by `exclude`.
Exclusions are based on globs, and can be either:
- Single-path patterns, like `.mypy_cache` (to exclude any directory
named `.mypy_cache` in the tree), `foo.py` (to exclude any file named
`foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ).
- Relative patterns, like `directory/foo.py` (to exclude that specific
file) or `directory/*.py` (to exclude any Python files in
`directory`). Note that these paths are relative to the project root
(e.g., the directory containing your `pyproject.toml`).
For more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).
**Default value**: `[]`
**Type**: `Vec<FilePattern>`
@@ -1880,8 +1902,8 @@ extend-exclude = ["tests", "src/bad.py"]
#### [`extend-ignore`](#extend-ignore)
A list of check code prefixes to ignore, in addition to those specified
by `ignore`.
A list of rule codes or prefixes to ignore, in addition to those
specified by `ignore`.
**Default value**: `[]`
@@ -1891,7 +1913,7 @@ by `ignore`.
```toml
[tool.ruff]
# Skip unused variable checks (`F841`).
# Skip unused variable rules (`F841`).
extend-ignore = ["F841"]
```
@@ -1899,8 +1921,8 @@ extend-ignore = ["F841"]
#### [`extend-select`](#extend-select)
A list of check code prefixes to enable, in addition to those specified
by `select`.
A list of rule codes or prefixes to enable, in addition to those
specified by `select`.
**Default value**: `[]`
@@ -1918,7 +1940,7 @@ extend-select = ["B", "Q"]
#### [`external`](#external)
A list of check codes that are unsupported by Ruff, but should be
A list of rule codes that are unsupported by Ruff, but should be
preserved when (e.g.) validating `# noqa` directives. Useful for
retaining `# noqa` directives that cover plugins not yet implemented
by Ruff.
@@ -1975,7 +1997,7 @@ fix-only = true
#### [`fixable`](#fixable)
A list of check code prefixes to consider autofix-able.
A list of rule codes or prefixes to consider autofixable.
**Default value**: `["A", "ANN", "ARG", "B", "BLE", "C", "D", "E", "ERA", "F", "FBT", "I", "ICN", "N", "PGH", "PLC", "PLE", "PLR", "PLW", "Q", "RET", "RUF", "S", "T", "TID", "UP", "W", "YTT"]`
@@ -1985,7 +2007,7 @@ A list of check code prefixes to consider autofix-able.
```toml
[tool.ruff]
# Only allow autofix behavior for `E` and `F` checks.
# Only allow autofix behavior for `E` and `F` rules.
fixable = ["E", "F"]
```
@@ -2041,11 +2063,11 @@ format = "grouped"
#### [`ignore`](#ignore)
A list of check code prefixes to ignore. Prefixes can specify exact
checks (like `F841`), entire categories (like `F`), or anything in
A list of rule codes or prefixes to ignore. Prefixes can specify exact
rules (like `F841`), entire categories (like `F`), or anything in
between.
When breaking ties between enabled and disabled checks (via `select` and
When breaking ties between enabled and disabled rules (via `select` and
`ignore`, respectively), more specific prefixes override less
specific prefixes.
@@ -2057,7 +2079,7 @@ specific prefixes.
```toml
[tool.ruff]
# Skip unused variable checks (`F841`).
# Skip unused variable rules (`F841`).
ignore = ["F841"]
```
@@ -2105,8 +2127,8 @@ line-length = 120
#### [`per-file-ignores`](#per-file-ignores)
A list of mappings from file pattern to check code prefixes to exclude,
when considering any matching files.
A list of mappings from file pattern to rule codes or prefixes to
exclude, when considering any matching files.
**Default value**: `{}`
@@ -2164,11 +2186,11 @@ respect_gitignore = false
#### [`select`](#select)
A list of check code prefixes to enable. Prefixes can specify exact
checks (like `F841`), entire categories (like `F`), or anything in
A list of rule codes or prefixes to enable. Prefixes can specify exact
rules (like `F841`), entire categories (like `F`), or anything in
between.
When breaking ties between enabled and disabled checks (via `select` and
When breaking ties between enabled and disabled rules (via `select` and
`ignore`, respectively), more specific prefixes override less
specific prefixes.
@@ -2188,8 +2210,8 @@ select = ["E", "F", "B", "Q"]
#### [`show-source`](#show-source)
Whether to show source code snippets when reporting lint error
violations (overridden by the `--show-source` command-line flag).
Whether to show source code snippets when reporting lint violations
(overridden by the `--show-source` command-line flag).
**Default value**: `false`
@@ -2272,7 +2294,7 @@ target-version = "py37"
A list of task tags to recognize (e.g., "TODO", "FIXME", "XXX").
Comments starting with these tags will be ignored by commented-out code
detection (`ERA`), and skipped by line-length checks (`E501`) if
detection (`ERA`), and skipped by line-length rules (`E501`) if
`ignore-overlong-task-comments` is set to `true`.
**Default value**: `["TODO", "FIXME", "XXX"]`
@@ -2288,9 +2310,33 @@ task-tags = ["HACK"]
---
#### [`typing-modules`](#typing-modules)
A list of modules whose imports should be treated equivalently to
members of the `typing` module.
This is useful for ensuring proper type annotation inference for
projects that re-export `typing` and `typing_extensions` members
from a compatibility module. If omitted, any members imported from
modules apart from `typing` and `typing_extensions` will be treated
as ordinary Python objects.
**Default value**: `[]`
**Type**: `Vec<String>`
**Example usage**:
```toml
[tool.ruff]
typing-modules = ["airflow.typing_compat"]
```
---
#### [`unfixable`](#unfixable)
A list of check code prefixes to consider un-autofix-able.
A list of rule codes or prefixes to consider non-autofix-able.
**Default value**: `[]`
@@ -2311,7 +2357,7 @@ unfixable = ["F401"]
Enable or disable automatic update checks (overridden by the
`--update-check` and `--no-update-check` command-line flags).
**Default value**: `true`
**Default value**: `false`
**Type**: `bool`
@@ -2319,7 +2365,7 @@ Enable or disable automatic update checks (overridden by the
```toml
[tool.ruff]
update-check = false
update-check = true
```
---
@@ -2364,7 +2410,7 @@ mypy-init-return = true
#### [`suppress-dummy-args`](#suppress-dummy-args)
Whether to suppress `ANN000`-level errors for arguments matching the
Whether to suppress `ANN000`-level violations for arguments matching the
"dummy" variable regex (like `_`).
**Default value**: `false`
@@ -2382,8 +2428,8 @@ suppress-dummy-args = true
#### [`suppress-none-returning`](#suppress-none-returning)
Whether to suppress `ANN200`-level errors for functions that meet either
of the following criteria:
Whether to suppress `ANN200`-level violations for functions that meet
either of the following criteria:
- Contain no `return` statement.
- Explicit `return` statement(s) all return `None` (explicitly or
@@ -2444,7 +2490,7 @@ extend-hardcoded-tmp-directory = ["/foo/bar"]
#### [`extend-immutable-calls`](#extend-immutable-calls)
Additional callable functions to consider "immutable" when evaluating,
e.g., `no-mutable-default-argument` checks (`B006`).
e.g., the `no-mutable-default-argument` rule (`B006`).
**Default value**: `[]`
@@ -2531,9 +2577,9 @@ will be added to the `aliases` mapping.
Boolean flag specifying whether `@pytest.fixture()` without parameters
should have parentheses. If the option is set to `true` (the
default), `@pytest.fixture()` is valid and `@pytest.fixture` is an
error. If set to `false`, `@pytest.fixture` is valid and
`@pytest.fixture()` is an error.
default), `@pytest.fixture()` is valid and `@pytest.fixture` is
invalid. If set to `false`, `@pytest.fixture` is valid and
`@pytest.fixture()` is invalid.
**Default value**: `true`
@@ -2552,9 +2598,9 @@ fixture-parentheses = true
Boolean flag specifying whether `@pytest.mark.foo()` without parameters
should have parentheses. If the option is set to `true` (the
default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is an
error. If set to `false`, `@pytest.fixture` is valid and
`@pytest.mark.foo()` is an error.
default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is
invalid. If set to `false`, `@pytest.fixture` is valid and
`@pytest.mark.foo()` is invalid.
**Default value**: `true`
@@ -2776,7 +2822,7 @@ ban-relative-imports = "all"
#### [`banned-api`](#banned-api)
Specific modules or module members that may not be imported or accessed.
Note that this check is only meant to flag accidental uses,
Note that this rule is only meant to flag accidental uses,
and can be circumvented via `eval` or `importlib`.
**Default value**: `{}`
@@ -2974,6 +3020,23 @@ order-by-type = true
---
#### [`required-imports`](#required-imports)
Add the specified import line to all files.
**Default value**: `[]`
**Type**: `Vec<String>`
**Example usage**:
```toml
[tool.ruff.isort]
add-import = ["from __future__ import annotations"]
```
---
#### [`single-line-exclusions`](#single-line-exclusions)
One or more modules to exclude from the single line rule.
@@ -3096,7 +3159,7 @@ staticmethod-decorators = ["staticmethod", "stcmthd"]
#### [`ignore-overlong-task-comments`](#ignore-overlong-task-comments)
Whether or not line-length checks (`E501`) should be triggered for
Whether or not line-length violations (`E501`) should be triggered for
comments starting with `task-tags` (by default: ["TODO", "FIXME",
and "XXX"]).
@@ -3113,6 +3176,24 @@ ignore-overlong-task-comments = true
---
#### [`max-doc-length`](#max-doc-length)
The maximum line length to allow for line-length violations within
documentation (`W505`), including standalone comments.
**Default value**: `None`
**Type**: `usize`
**Example usage**:
```toml
[tool.ruff.pycodestyle]
max-doc-length = 88
```
---
### `pydocstyle`
#### [`convention`](#convention)
@@ -3167,4 +3248,4 @@ MIT
## Contributing
Contributions are welcome and hugely appreciated. To get started, check out the
[contributing guidelines](https://github.com/charliermarsh/ruff/blob/main/.github/CONTRIBUTING.md).
[contributing guidelines](https://github.com/charliermarsh/ruff/blob/main/CONTRIBUTING.md).

View File

@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
version = "0.0.216"
version = "0.0.219"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.216"
version = "0.0.219"
dependencies = [
"anyhow",
"bincode",

View File

@@ -1,10 +1,11 @@
[package]
name = "flake8-to-ruff"
version = "0.0.216-dev.0"
version = "0.0.219-dev.0"
edition = "2021"
[lib]
name = "flake8_to_ruff"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }

View File

@@ -84,7 +84,7 @@ flake8-to-ruff path/to/.flake8 --plugin flake8-builtins --plugin flake8-quotes
1. Ruff only supports a subset of the Flake configuration options. `flake8-to-ruff` will warn on and
ignore unsupported options in the `.flake8` file (or equivalent). (Similarly, Ruff has a few
configuration options that don't exist in Flake8.)
2. Ruff will omit any error codes that are unimplemented or unsupported by Ruff, including error
2. Ruff will omit any rule codes that are unimplemented or unsupported by Ruff, including rule
codes from unsupported plugins. (See the [Ruff README](https://github.com/charliermarsh/ruff#user-content-how-does-ruff-compare-to-flake8)
for the complete list of supported plugins.)

View File

@@ -30,7 +30,7 @@ pub fn convert(
.get("flake8")
.expect("Unable to find flake8 section in INI file");
// Extract all referenced check code prefixes, to power plugin inference.
// Extract all referenced rule code prefixes, to power plugin inference.
let mut referenced_codes: BTreeSet<RuleCodePrefix> = BTreeSet::default();
for (key, value) in flake8 {
if let Some(value) = value {
@@ -435,6 +435,7 @@ mod tests {
src: None,
target_version: None,
unfixable: None,
typing_modules: None,
task_tags: None,
update_check: None,
flake8_annotations: None,
@@ -499,6 +500,7 @@ mod tests {
src: None,
target_version: None,
unfixable: None,
typing_modules: None,
task_tags: None,
update_check: None,
flake8_annotations: None,
@@ -563,6 +565,7 @@ mod tests {
src: None,
target_version: None,
unfixable: None,
typing_modules: None,
task_tags: None,
update_check: None,
flake8_annotations: None,
@@ -627,6 +630,7 @@ mod tests {
src: None,
target_version: None,
unfixable: None,
typing_modules: None,
task_tags: None,
update_check: None,
flake8_annotations: None,
@@ -691,6 +695,7 @@ mod tests {
src: None,
target_version: None,
unfixable: None,
typing_modules: None,
task_tags: None,
update_check: None,
flake8_annotations: None,
@@ -764,6 +769,7 @@ mod tests {
src: None,
target_version: None,
unfixable: None,
typing_modules: None,
task_tags: None,
update_check: None,
flake8_annotations: None,
@@ -831,6 +837,7 @@ mod tests {
src: None,
target_version: None,
unfixable: None,
typing_modules: None,
task_tags: None,
update_check: None,
flake8_annotations: None,

View File

@@ -1,6 +1,11 @@
import { useCallback, useEffect, useState } from "react";
import { DEFAULT_PYTHON_SOURCE } from "../constants";
import init, { check, Check, currentVersion, defaultSettings } from "../pkg";
import init, {
check,
Diagnostic,
currentVersion,
defaultSettings,
} from "../pkg";
import { ErrorMessage } from "./ErrorMessage";
import Header from "./Header";
import { useTheme } from "./theme";
@@ -18,7 +23,7 @@ export default function Editor() {
const [edit, setEdit] = useState<number>(0);
const [settingsSource, setSettingsSource] = useState<string | null>(null);
const [pythonSource, setPythonSource] = useState<string | null>(null);
const [checks, setChecks] = useState<Check[]>([]);
const [diagnostics, setDiagnostics] = useState<Diagnostic[]>([]);
const [error, setError] = useState<string | null>(null);
const [theme, setTheme] = useTheme();
@@ -32,25 +37,25 @@ export default function Editor() {
}
let config: any;
let checks: Check[];
let diagnostics: Diagnostic[];
try {
config = JSON.parse(settingsSource);
} catch (e) {
setChecks([]);
setDiagnostics([]);
setError((e as Error).message);
return;
}
try {
checks = check(pythonSource, config);
diagnostics = check(pythonSource, config);
} catch (e) {
setError(e as string);
return;
}
setError(null);
setChecks(checks);
setDiagnostics(diagnostics);
}, [initialized, settingsSource, pythonSource]);
useEffect(() => {
@@ -122,7 +127,7 @@ export default function Editor() {
visible={tab === "Source"}
source={pythonSource}
theme={theme}
checks={checks}
diagnostics={diagnostics}
onChange={handlePythonSourceChange}
/>
<SettingsEditor

View File

@@ -5,19 +5,19 @@
import Editor, { useMonaco } from "@monaco-editor/react";
import { MarkerSeverity, MarkerTag } from "monaco-editor";
import { useCallback, useEffect } from "react";
import { Check } from "../pkg";
import { Diagnostic } from "../pkg";
import { Theme } from "./theme";
export default function SourceEditor({
visible,
source,
theme,
checks,
diagnostics,
onChange,
}: {
visible: boolean;
source: string;
checks: Check[];
diagnostics: Diagnostic[];
theme: Theme;
onChange: (pythonSource: string) => void;
}) {
@@ -33,15 +33,15 @@ export default function SourceEditor({
editor.setModelMarkers(
model,
"owner",
checks.map((check) => ({
startLineNumber: check.location.row,
startColumn: check.location.column + 1,
endLineNumber: check.end_location.row,
endColumn: check.end_location.column + 1,
message: `${check.code}: ${check.message}`,
diagnostics.map((diagnostic) => ({
startLineNumber: diagnostic.location.row,
startColumn: diagnostic.location.column + 1,
endLineNumber: diagnostic.end_location.row,
endColumn: diagnostic.end_location.column + 1,
message: `${diagnostic.code}: ${diagnostic.message}`,
severity: MarkerSeverity.Error,
tags:
check.code === "F401" || check.code === "F841"
diagnostic.code === "F401" || diagnostic.code === "F841"
? [MarkerTag.Unnecessary]
: [],
})),
@@ -52,7 +52,7 @@ export default function SourceEditor({
{
// @ts-expect-error: The type definition is wrong.
provideCodeActions: function (model, position) {
const actions = checks
const actions = diagnostics
.filter((check) => position.startLineNumber === check.location.row)
.filter((check) => check.fix)
.map((check) => ({
@@ -89,7 +89,7 @@ export default function SourceEditor({
return () => {
codeActionProvider?.dispose();
};
}, [checks, monaco]);
}, [diagnostics, monaco]);
const handleChange = useCallback(
(value: string | undefined) => {

View File

@@ -4,7 +4,7 @@ build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.216"
version = "0.0.219"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },
@@ -35,6 +35,7 @@ urls = { repository = "https://github.com/charliermarsh/ruff" }
[tool.maturin]
bindings = "bin"
python-source = "python"
strip = true
[tool.setuptools]

View File

@@ -0,0 +1,6 @@
from pysnmp.hlapi import CommunityData
CommunityData("public", mpModel=0) # S508
CommunityData("public", mpModel=1) # S508
CommunityData("public", mpModel=2) # OK

View File

@@ -0,0 +1,7 @@
from pysnmp.hlapi import UsmUserData
insecure = UsmUserData("securityName") # S509
auth_no_priv = UsmUserData("securityName", "authName") # S509
less_insecure = UsmUserData("securityName", "authName", "privName") # OK

View File

@@ -25,10 +25,10 @@ for x in range(3):
def check_inside_functions_too():
ls = [lambda: x for x in range(2)]
st = {lambda: x for x in range(2)}
gn = (lambda: x for x in range(2))
dt = {x: lambda: x for x in range(2)}
ls = [lambda: x for x in range(2)] # error
st = {lambda: x for x in range(2)} # error
gn = (lambda: x for x in range(2)) # error
dt = {x: lambda: x for x in range(2)} # error
async def pointless_async_iterable():
@@ -37,9 +37,9 @@ async def pointless_async_iterable():
async def container_for_problems():
async for x in pointless_async_iterable():
functions.append(lambda: x)
functions.append(lambda: x) # error
[lambda: x async for x in pointless_async_iterable()]
[lambda: x async for x in pointless_async_iterable()] # error
a = 10
@@ -47,10 +47,10 @@ b = 0
while True:
a = a_ = a - 1
b += 1
functions.append(lambda: a)
functions.append(lambda: a_)
functions.append(lambda: b)
functions.append(lambda: c) # not a name error because of late binding!
functions.append(lambda: a) # error
functions.append(lambda: a_) # error
functions.append(lambda: b) # error
functions.append(lambda: c) # error, but not a name error due to late binding
c: bool = a > 3
if not c:
break
@@ -58,7 +58,7 @@ while True:
# Nested loops should not duplicate reports
for j in range(2):
for k in range(3):
lambda: j * k
lambda: j * k # error
for j, k, l in [(1, 2, 3)]:
@@ -80,3 +80,95 @@ for var in range(2):
for i in range(3):
lambda: f"{i}"
# `query` is defined in the function, so also defining it in the loop should be OK.
for name in ["a", "b"]:
query = name
def myfunc(x):
query = x
query_post = x
_ = query
_ = query_post
query_post = name # in case iteration order matters
# Bug here because two dict comprehensions reference `name`, one of which is inside
# the lambda. This should be totally fine, of course.
_ = {
k: v
for k, v in reduce(
lambda data, event: merge_mappings(
[data, {name: f(caches, data, event) for name, f in xx}]
),
events,
{name: getattr(group, name) for name in yy},
).items()
if k in backfill_fields
}
# OK to define lambdas if they're immediately consumed, typically as the `key=`
# argument or in a consumed `filter()` (even if a comprehension is better style)
for x in range(2):
# It's not a complete get-out-of-linting-free construct - these should fail:
min([None, lambda: x], key=repr)
sorted([None, lambda: x], key=repr)
any(filter(bool, [None, lambda: x]))
list(filter(bool, [None, lambda: x]))
all(reduce(bool, [None, lambda: x]))
# But all these should be OK:
min(range(3), key=lambda y: x * y)
max(range(3), key=lambda y: x * y)
sorted(range(3), key=lambda y: x * y)
any(map(lambda y: x < y, range(3)))
all(map(lambda y: x < y, range(3)))
set(map(lambda y: x < y, range(3)))
list(map(lambda y: x < y, range(3)))
tuple(map(lambda y: x < y, range(3)))
sorted(map(lambda y: x < y, range(3)))
frozenset(map(lambda y: x < y, range(3)))
any(filter(lambda y: x < y, range(3)))
all(filter(lambda y: x < y, range(3)))
set(filter(lambda y: x < y, range(3)))
list(filter(lambda y: x < y, range(3)))
tuple(filter(lambda y: x < y, range(3)))
sorted(filter(lambda y: x < y, range(3)))
frozenset(filter(lambda y: x < y, range(3)))
any(reduce(lambda y: x | y, range(3)))
all(reduce(lambda y: x | y, range(3)))
set(reduce(lambda y: x | y, range(3)))
list(reduce(lambda y: x | y, range(3)))
tuple(reduce(lambda y: x | y, range(3)))
sorted(reduce(lambda y: x | y, range(3)))
frozenset(reduce(lambda y: x | y, range(3)))
import functools
any(functools.reduce(lambda y: x | y, range(3)))
all(functools.reduce(lambda y: x | y, range(3)))
set(functools.reduce(lambda y: x | y, range(3)))
list(functools.reduce(lambda y: x | y, range(3)))
tuple(functools.reduce(lambda y: x | y, range(3)))
sorted(functools.reduce(lambda y: x | y, range(3)))
frozenset(functools.reduce(lambda y: x | y, range(3)))
# OK because the lambda which references a loop variable is defined in a `return`
# statement, and after we return the loop variable can't be redefined.
# In principle we could do something fancy with `break`, but it's not worth it.
def iter_f(names):
for name in names:
if exists(name):
return lambda: name if exists(name) else None
if foo(name):
return [lambda: name] # known false alarm
if False:
return [lambda: i for i in range(3)] # error

View File

@@ -2,3 +2,10 @@ x = list(x for x in range(3))
x = list(
x for x in range(3)
)
def list(*args, **kwargs):
return None
list(x for x in range(3))

View File

@@ -2,3 +2,10 @@ x = set(x for x in range(3))
x = set(
x for x in range(3)
)
def set(*args, **kwargs):
return None
set(x for x in range(3))

View File

@@ -3,3 +3,10 @@ l = list()
d1 = dict()
d2 = dict(a=1)
d3 = dict(**d2)
def list():
return [1, 2, 3]
a = list()

View File

@@ -4,3 +4,10 @@ list(sorted(x))
reversed(sorted(x))
reversed(sorted(x, key=lambda e: e))
reversed(sorted(x, reverse=True))
def reversed(*args, **kwargs):
return None
reversed(sorted(x, reverse=True))

View File

@@ -31,3 +31,10 @@ class User(BaseModel):
@buzz.setter
def buzz(self, value: str | int) -> None:
...
class User:
bar: str = StringField()
foo: bool = BooleanField()
# ...
bar = StringField() # PIE794

View File

@@ -17,6 +17,15 @@ def fun_with_params_no_docstring(a, b="""
""" """docstring"""):
pass
def fun_with_params_no_docstring2(a, b=c[foo():], c=\
""" not a docstring """):
pass
def function_with_single_docstring(a):
"Single line docstring"
def double_inside_single(a):
'Double inside "single "'

View File

@@ -13,11 +13,19 @@ def foo2():
def fun_with_params_no_docstring(a, b='''
not a
not a
''' '''docstring'''):
pass
def fun_with_params_no_docstring2(a, b=c[foo():], c=\
''' not a docstring '''):
pass
def function_with_single_docstring(a):
'Single line docstring'
def double_inside_single(a):
"Double inside 'single '"

View File

@@ -1,13 +1,13 @@
# Bad
# SIM108
if a:
b = c
else:
b = d
# Good
# OK
b = c if a else d
# https://github.com/MartinThoma/flake8-simplify/issues/115
# OK
if a:
b = c
elif c:
@@ -15,6 +15,7 @@ elif c:
else:
b = d
# OK
if True:
pass
elif a:
@@ -22,6 +23,7 @@ elif a:
else:
b = 2
# OK (false negative)
if True:
pass
else:
@@ -29,3 +31,63 @@ else:
b = 1
else:
b = 2
import sys
# OK
if sys.version_info >= (3, 9):
randbytes = random.randbytes
else:
randbytes = _get_random_bytes
# OK
if sys.platform == "darwin":
randbytes = random.randbytes
else:
randbytes = _get_random_bytes
# OK
if sys.platform.startswith("linux"):
randbytes = random.randbytes
else:
randbytes = _get_random_bytes
# OK (includes comments)
if x > 0:
# test test
abc = x
else:
# test test test
abc = -x
# OK (too long)
if parser.errno == BAD_FIRST_LINE:
req = wrappers.Request(sock, server=self._server)
else:
req = wrappers.Request(
sock,
parser.get_method(),
parser.get_scheme() or _scheme,
parser.get_path(),
parser.get_version(),
parser.get_query_string(),
server=self._server,
)
# SIM108
if a:
b = cccccccccccccccccccccccccccccccccccc
else:
b = ddddddddddddddddddddddddddddddddddddd
# OK (too long)
if True:
if a:
b = cccccccccccccccccccccccccccccccccccc
else:
b = ddddddddddddddddddddddddddddddddddddd

View File

@@ -0,0 +1,19 @@
import os
# Bad
os.environ['foo']
os.environ.get('foo')
os.environ.get('foo', 'bar')
os.getenv('foo')
# Good
os.environ['FOO']
os.environ.get('FOO')
os.environ.get('FOO', 'bar')
os.getenv('FOO')

View File

@@ -0,0 +1,6 @@
f = open('foo.txt') # SIM115
data = f.read()
f.close()
with open('foo.txt') as f: # OK
data = f.read()
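For context, a minimal sketch (separate from the fixture above, names hypothetical) of why the context-manager form is preferred: if the read raises, the first form never closes the handle, while `with` closes it on any exit path.

def read_unsafe(path):
    f = open(path)       # SIM115-style pattern: handle is only closed on the happy path
    data = f.read()      # if this raises, f.close() below never runs
    f.close()
    return data

def read_safe(path):
    with open(path) as f:   # file is closed even if read() raises
        return f.read()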

View File

@@ -0,0 +1,81 @@
###
# Positive cases
###
# SIM401 (pattern-1)
if key in a_dict:
var = a_dict[key]
else:
var = "default1"
# SIM401 (pattern-2)
if key not in a_dict:
var = "default2"
else:
var = a_dict[key]
# SIM401 (default with a complex expression)
if key in a_dict:
var = a_dict[key]
else:
var = val1 + val2
# SIM401 (complex expression in key)
if keys[idx] in a_dict:
var = a_dict[keys[idx]]
else:
var = "default"
# SIM401 (complex expression in dict)
if key in dicts[idx]:
var = dicts[idx][key]
else:
var = "default"
# SIM401 (complex expression in var)
if key in a_dict:
vars[idx] = a_dict[key]
else:
vars[idx] = "default"
###
# Negative cases
###
# OK (false negative)
if not key in a_dict:
var = "default"
else:
var = a_dict[key]
# OK (different dict)
if key in a_dict:
var = other_dict[key]
else:
var = "default"
# OK (different key)
if key in a_dict:
var = a_dict[other_key]
else:
var = "default"
# OK (different var)
if key in a_dict:
var = a_dict[key]
else:
other_var = "default"
# OK (extra vars in body)
if key in a_dict:
var = a_dict[key]
var2 = value2
else:
var = "default"
# OK (extra vars in orelse)
if key in a_dict:
var = a_dict[key]
else:
var2 = value2
var = "default"

View File

@@ -181,3 +181,17 @@ def f(a: int, b: int) -> str:
def f(a, b):
return f"{a}{b}"
###
# Unused arguments on magic methods.
###
class C:
def __init__(self, x) -> None:
print("Hello, world!")
def __str__(self) -> str:
return "Hello, world!"
def __exit__(self, exc_type, exc_value, traceback) -> None:
print("Hello, world!")

View File

@@ -0,0 +1,3 @@
#!/usr/bin/env python3
x = 1

View File

@@ -0,0 +1,3 @@
"""Hello, world!"""
x = 1

View File

@@ -0,0 +1 @@
"""Hello, world!"""

View File

@@ -0,0 +1,2 @@
"""Hello, world!"""; x = \
1; y = 2

View File

@@ -0,0 +1 @@
"""Hello, world!"""; x = 1

View File

@@ -0,0 +1,2 @@
from __future__ import generator_stop
import os
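This fixture presumably exercises the new `required-imports` setting (I002) described in the schema changes further down: if the configured import line, here presumably `from __future__ import generator_stop`, is missing, the rule adds it. A hypothetical before/after:

# Before (the configured import is absent):
"""Hello, world!"""
x = 1

# Presumed after the I002 fix (placement after the module docstring is an assumption):
"""Hello, world!"""
from __future__ import generator_stop
x = 1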

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env python3
"""Here's a top-level docstring that's over the limit."""
def f():
"""Here's a docstring that's also over the limit."""
x = 1 # Here's a comment that's over the limit, but it's not standalone.
# Here's a standalone comment that's over the limit.
print("Here's a string that's over the limit, but it's not a docstring.")
"This is also considered a docstring, and is over the limit."

View File

@@ -0,0 +1,7 @@
from typing import Union
from airflow.typing_compat import Literal, Optional
X = Union[Literal[False], Literal["db"]]
y = Optional["Class"]
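This fixture presumably exercises the new `typing-modules` setting: with `airflow.typing_compat` registered (an assumption), its `Literal` and `Optional` re-exports would be treated like `typing` members. A rough Python sketch of the resolution order that the `match_typing_call_path` change further down implements (names and the member set are illustrative only):

TYPING_EXTENSIONS = {"Literal", "TypedDict"}  # abridged; stands in for typing::TYPING_EXTENSIONS

def is_typing_member(module: str, member: str, typing_modules: list[str]) -> bool:
    if module == "typing":
        return True
    if member in TYPING_EXTENSIONS and module == "typing_extensions":
        return True
    # Finally, fall back to any user-configured modules, e.g. ["airflow.typing_compat"].
    return module in typing_modules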

View File

@@ -0,0 +1,8 @@
class SocketError(Exception):
pass
try:
raise SocketError()
except SocketError:
pass
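The point of the fixture above is that `SocketError` is defined locally, so the `OSError` rewrite (UP024) presumably leaves it alone. For contrast, a hypothetical case the rewrite is presumably meant for, where the handler names a stdlib alias (the exact alias list is an assumption):

try:
    do_io()
except IOError:  # presumably rewritten to `except OSError:` by UP024
    pass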

View File

@@ -0,0 +1,36 @@
# Invalid calls; errors expected.
"{0}" "{1}" "{2}".format(1, 2, 3)
"a {3} complicated {1} string with {0} {2}".format(
"first", "second", "third", "fourth"
)
'{0}'.format(1)
'{0:x}'.format(30)
x = '{0}'.format(1)
'''{0}\n{1}\n'''.format(1, 2)
x = "foo {0}" \
"bar {1}".format(1, 2)
("{0}").format(1)
"\N{snowman} {0}".format(1)
'{' '0}'.format(1)
# These will not change because we are waiting for libcst to fix this issue:
# https://github.com/Instagram/LibCST/issues/846
print(
'foo{0}'
'bar{1}'.format(1, 2)
)
print(
'foo{0}' # ohai\n"
'bar{1}'.format(1, 2)
)
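A sketch of the rewrite the "invalid" cases above presumably receive under UP030, dropping explicit positional indices that are already in order (the exact output is an assumption):

'{0}'.format(1)                 # presumably becomes '{}'.format(1)
'{0:x}'.format(30)              # presumably becomes '{:x}'.format(30)
'''{0}\n{1}\n'''.format(1, 2)   # presumably becomes '''{}\n{}\n'''.format(1, 2)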

View File

@@ -0,0 +1,23 @@
# Valid calls; no errors expected.
'{}'.format(1)
x = ('{0} {1}',)
'{0} {0}'.format(1)
'{0:<{1}}'.format(1, 4)
f"{0}".format(a)
f"{0}".format(1)
print(f"{0}".format(1))
# I did not include the following tests because ruff does not seem to work with
# invalid python syntax (which is a good thing)
# "{0}"format(1)
# '{'.format(1)", "'}'.format(1)
# ("{0}" # {1}\n"{2}").format(1, 2, 3)

View File

@@ -33,14 +33,14 @@
]
},
"dummy-variable-rgx": {
"description": "A regular expression used to identify \"dummy\" variables, or those which should be ignored when evaluating (e.g.) unused-variable checks. The default expression matches `_`, `__`, and `_var`, but not `_var_`.",
"description": "A regular expression used to identify \"dummy\" variables, or those which should be ignored when enforcing (e.g.) unused-variable rules. The default expression matches `_`, `__`, and `_var`, but not `_var_`.",
"type": [
"string",
"null"
]
},
"exclude": {
"description": "A list of file patterns to exclude from linting.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nNote that you'll typically want to use [`extend-exclude`](#extend-exclude) to modify the excluded paths.",
"description": "A list of file patterns to exclude from linting.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nFor more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).\n\nNote that you'll typically want to use [`extend-exclude`](#extend-exclude) to modify the excluded paths.",
"type": [
"array",
"null"
@@ -57,7 +57,7 @@
]
},
"extend-exclude": {
"description": "A list of file patterns to omit from linting, in addition to those specified by `exclude`.",
"description": "A list of file patterns to omit from linting, in addition to those specified by `exclude`.\n\nExclusions are based on globs, and can be either:\n\n- Single-path patterns, like `.mypy_cache` (to exclude any directory named `.mypy_cache` in the tree), `foo.py` (to exclude any file named `foo.py`), or `foo_*.py` (to exclude any file matching `foo_*.py` ). - Relative patterns, like `directory/foo.py` (to exclude that specific file) or `directory/*.py` (to exclude any Python files in `directory`). Note that these paths are relative to the project root (e.g., the directory containing your `pyproject.toml`).\n\nFor more information on the glob syntax, refer to the [`globset` documentation](https://docs.rs/globset/latest/globset/#syntax).",
"type": [
"array",
"null"
@@ -67,7 +67,7 @@
}
},
"extend-ignore": {
"description": "A list of check code prefixes to ignore, in addition to those specified by `ignore`.",
"description": "A list of rule codes or prefixes to ignore, in addition to those specified by `ignore`.",
"type": [
"array",
"null"
@@ -77,7 +77,7 @@
}
},
"extend-select": {
"description": "A list of check code prefixes to enable, in addition to those specified by `select`.",
"description": "A list of rule codes or prefixes to enable, in addition to those specified by `select`.",
"type": [
"array",
"null"
@@ -87,7 +87,7 @@
}
},
"external": {
"description": "A list of check codes that are unsupported by Ruff, but should be preserved when (e.g.) validating `# noqa` directives. Useful for retaining `# noqa` directives that cover plugins not yet implemented by Ruff.",
"description": "A list of rule codes that are unsupported by Ruff, but should be preserved when (e.g.) validating `# noqa` directives. Useful for retaining `# noqa` directives that cover plugins not yet implemented by Ruff.",
"type": [
"array",
"null"
@@ -111,7 +111,7 @@
]
},
"fixable": {
"description": "A list of check code prefixes to consider autofix-able.",
"description": "A list of rule codes or prefixes to consider autofixable.",
"type": [
"array",
"null"
@@ -238,7 +238,7 @@
]
},
"ignore": {
"description": "A list of check code prefixes to ignore. Prefixes can specify exact checks (like `F841`), entire categories (like `F`), or anything in between.\n\nWhen breaking ties between enabled and disabled checks (via `select` and `ignore`, respectively), more specific prefixes override less specific prefixes.",
"description": "A list of rule codes or prefixes to ignore. Prefixes can specify exact rules (like `F841`), entire categories (like `F`), or anything in between.\n\nWhen breaking ties between enabled and disabled rules (via `select` and `ignore`, respectively), more specific prefixes override less specific prefixes.",
"type": [
"array",
"null"
@@ -297,7 +297,7 @@
]
},
"per-file-ignores": {
"description": "A list of mappings from file pattern to check code prefixes to exclude, when considering any matching files.",
"description": "A list of mappings from file pattern to rule codes or prefixes to exclude, when considering any matching files.",
"type": [
"object",
"null"
@@ -361,7 +361,7 @@
]
},
"select": {
"description": "A list of check code prefixes to enable. Prefixes can specify exact checks (like `F841`), entire categories (like `F`), or anything in between.\n\nWhen breaking ties between enabled and disabled checks (via `select` and `ignore`, respectively), more specific prefixes override less specific prefixes.",
"description": "A list of rule codes or prefixes to enable. Prefixes can specify exact rules (like `F841`), entire categories (like `F`), or anything in between.\n\nWhen breaking ties between enabled and disabled rules (via `select` and `ignore`, respectively), more specific prefixes override less specific prefixes.",
"type": [
"array",
"null"
@@ -371,7 +371,7 @@
}
},
"show-source": {
"description": "Whether to show source code snippets when reporting lint error violations (overridden by the `--show-source` command-line flag).",
"description": "Whether to show source code snippets when reporting lint violations (overridden by the `--show-source` command-line flag).",
"type": [
"boolean",
"null"
@@ -399,7 +399,17 @@
]
},
"task-tags": {
"description": "A list of task tags to recognize (e.g., \"TODO\", \"FIXME\", \"XXX\").\n\nComments starting with these tags will be ignored by commented-out code detection (`ERA`), and skipped by line-length checks (`E501`) if `ignore-overlong-task-comments` is set to `true`.",
"description": "A list of task tags to recognize (e.g., \"TODO\", \"FIXME\", \"XXX\").\n\nComments starting with these tags will be ignored by commented-out code detection (`ERA`), and skipped by line-length rules (`E501`) if `ignore-overlong-task-comments` is set to `true`.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"typing-modules": {
"description": "A list of modules whose imports should be treated equivalently to members of the `typing` module.\n\nThis is useful for ensuring proper type annotation inference for projects that re-export `typing` and `typing_extensions` members from a compatibility module. If omitted, any members imported from modules apart from `typing` and `typing_extensions` will be treated as ordinary Python objects.",
"type": [
"array",
"null"
@@ -409,7 +419,7 @@
}
},
"unfixable": {
"description": "A list of check code prefixes to consider un-autofix-able.",
"description": "A list of rule codes or prefixes to consider non-autofix-able.",
"type": [
"array",
"null"
@@ -484,14 +494,14 @@
]
},
"suppress-dummy-args": {
"description": "Whether to suppress `ANN000`-level errors for arguments matching the \"dummy\" variable regex (like `_`).",
"description": "Whether to suppress `ANN000`-level violations for arguments matching the \"dummy\" variable regex (like `_`).",
"type": [
"boolean",
"null"
]
},
"suppress-none-returning": {
"description": "Whether to suppress `ANN200`-level errors for functions that meet either of the following criteria:\n\n- Contain no `return` statement. - Explicit `return` statement(s) all return `None` (explicitly or implicitly).",
"description": "Whether to suppress `ANN200`-level violations for functions that meet either of the following criteria:\n\n- Contain no `return` statement. - Explicit `return` statement(s) all return `None` (explicitly or implicitly).",
"type": [
"boolean",
"null"
@@ -530,7 +540,7 @@
"type": "object",
"properties": {
"extend-immutable-calls": {
"description": "Additional callable functions to consider \"immutable\" when evaluating, e.g., `no-mutable-default-argument` checks (`B006`).",
"description": "Additional callable functions to consider \"immutable\" when evaluating, e.g., the `no-mutable-default-argument` rule (`B006`).",
"type": [
"array",
"null"
@@ -587,14 +597,14 @@
"type": "object",
"properties": {
"fixture-parentheses": {
"description": "Boolean flag specifying whether `@pytest.fixture()` without parameters should have parentheses. If the option is set to `true` (the default), `@pytest.fixture()` is valid and `@pytest.fixture` is an error. If set to `false`, `@pytest.fixture` is valid and `@pytest.fixture()` is an error.",
"description": "Boolean flag specifying whether `@pytest.fixture()` without parameters should have parentheses. If the option is set to `true` (the default), `@pytest.fixture()` is valid and `@pytest.fixture` is invalid. If set to `false`, `@pytest.fixture` is valid and `@pytest.fixture()` is invalid.",
"type": [
"boolean",
"null"
]
},
"mark-parentheses": {
"description": "Boolean flag specifying whether `@pytest.mark.foo()` without parameters should have parentheses. If the option is set to `true` (the default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is an error. If set to `false`, `@pytest.fixture` is valid and `@pytest.mark.foo()` is an error.",
"description": "Boolean flag specifying whether `@pytest.mark.foo()` without parameters should have parentheses. If the option is set to `true` (the default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is invalid. If set to `false`, `@pytest.fixture` is valid and `@pytest.mark.foo()` is invalid.",
"type": [
"boolean",
"null"
@@ -717,7 +727,7 @@
]
},
"banned-api": {
"description": "Specific modules or module members that may not be imported or accessed. Note that this check is only meant to flag accidental uses, and can be circumvented via `eval` or `importlib`.",
"description": "Specific modules or module members that may not be imported or accessed. Note that this rule is only meant to flag accidental uses, and can be circumvented via `eval` or `importlib`.",
"type": [
"object",
"null"
@@ -810,6 +820,16 @@
"null"
]
},
"required-imports": {
"description": "Add the specified import line to all files.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"single-line-exclusions": {
"description": "One or more modules to exclude from the single line rule.",
"type": [
@@ -920,11 +940,20 @@
"type": "object",
"properties": {
"ignore-overlong-task-comments": {
"description": "Whether or not line-length checks (`E501`) should be triggered for comments starting with `task-tags` (by default: [\"TODO\", \"FIXME\", and \"XXX\"]).",
"description": "Whether or not line-length violations (`E501`) should be triggered for comments starting with `task-tags` (by default: [\"TODO\", \"FIXME\", and \"XXX\"]).",
"type": [
"boolean",
"null"
]
},
"max-doc-length": {
"description": "The maximum line length to allow for line-length violations within documentation (`W505`), including standalone comments.",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
},
"additionalProperties": false
@@ -1258,6 +1287,7 @@
"I0",
"I00",
"I001",
"I002",
"I2",
"I25",
"I252",
@@ -1483,6 +1513,8 @@
"S50",
"S501",
"S506",
"S508",
"S509",
"SIM",
"SIM1",
"SIM10",
@@ -1496,6 +1528,8 @@
"SIM11",
"SIM110",
"SIM111",
"SIM112",
"SIM115",
"SIM117",
"SIM118",
"SIM2",
@@ -1515,6 +1549,9 @@
"SIM3",
"SIM30",
"SIM300",
"SIM4",
"SIM40",
"SIM401",
"T",
"T1",
"T10",
@@ -1582,10 +1619,15 @@
"UP027",
"UP028",
"UP029",
"UP03",
"UP030",
"W",
"W2",
"W29",
"W292",
"W5",
"W50",
"W505",
"W6",
"W60",
"W605",

View File

@@ -1,8 +1,12 @@
[package]
name = "ruff_dev"
version = "0.0.216"
version = "0.0.219"
edition = "2021"
[lib]
name = "ruff_dev"
doctest = false
[dependencies]
anyhow = { version = "1.0.66" }
clap = { version = "4.0.1", features = ["derive"] }

View File

@@ -1,10 +1,11 @@
[package]
name = "ruff_macros"
version = "0.0.216"
version = "0.0.219"
edition = "2021"
[lib]
proc-macro = true
doctest = false
[dependencies]
once_cell = { version = "1.17.0" }

View File

@@ -12,9 +12,12 @@
)]
#![forbid(unsafe_code)]
use syn::{parse_macro_input, DeriveInput};
use proc_macro2::Span;
use quote::quote;
use syn::{parse_macro_input, DeriveInput, Ident};
mod config;
mod prefixes;
mod rule_code_prefix;
#[proc_macro_derive(ConfigurationOptions, attributes(option, doc, option_group))]
@@ -34,3 +37,23 @@ pub fn derive_rule_code_prefix(input: proc_macro::TokenStream) -> proc_macro::To
.unwrap_or_else(syn::Error::into_compile_error)
.into()
}
#[proc_macro]
pub fn origin_by_code(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let ident = parse_macro_input!(item as Ident).to_string();
let mut iter = prefixes::PREFIX_TO_ORIGIN.iter();
let origin = loop {
let (prefix, origin) = iter
.next()
.unwrap_or_else(|| panic!("code doesn't start with any recognized prefix: {ident}"));
if ident.starts_with(prefix) {
break origin;
}
};
let prefix = Ident::new(origin, Span::call_site());
quote! {
RuleOrigin::#prefix
}
.into()
}

View File

@@ -0,0 +1,53 @@
// Longer prefixes should come first so that you can find an origin for a code
// by simply picking the first entry that starts with the given prefix.
pub const PREFIX_TO_ORIGIN: &[(&str, &str)] = &[
("ANN", "Flake8Annotations"),
("ARG", "Flake8UnusedArguments"),
("A", "Flake8Builtins"),
("BLE", "Flake8BlindExcept"),
("B", "Flake8Bugbear"),
("C4", "Flake8Comprehensions"),
("C9", "McCabe"),
("DTZ", "Flake8Datetimez"),
("D", "Pydocstyle"),
("ERA", "Eradicate"),
("EM", "Flake8ErrMsg"),
("E", "Pycodestyle"),
("FBT", "Flake8BooleanTrap"),
("F", "Pyflakes"),
("ICN", "Flake8ImportConventions"),
("ISC", "Flake8ImplicitStrConcat"),
("I", "Isort"),
("N", "PEP8Naming"),
("PD", "PandasVet"),
("PGH", "PygrepHooks"),
("PL", "Pylint"),
("PT", "Flake8PytestStyle"),
("Q", "Flake8Quotes"),
("RET", "Flake8Return"),
("SIM", "Flake8Simplify"),
("S", "Flake8Bandit"),
("T10", "Flake8Debugger"),
("T20", "Flake8Print"),
("TID", "Flake8TidyImports"),
("UP", "Pyupgrade"),
("W", "Pycodestyle"),
("YTT", "Flake82020"),
("PIE", "Flake8Pie"),
("RUF", "Ruff"),
];
#[cfg(test)]
mod tests {
use super::PREFIX_TO_ORIGIN;
#[test]
fn order() {
for (idx, (prefix, _)) in PREFIX_TO_ORIGIN.iter().enumerate() {
for (prior_prefix, _) in PREFIX_TO_ORIGIN[..idx].iter() {
assert!(!prefix.starts_with(prior_prefix));
}
}
}
}
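The ordering invariant above (longer prefixes first) is what lets `origin_by_code` stop at the first matching entry. A small Python sketch of the same lookup over an abridged table, for illustration only:

PREFIX_TO_ORIGIN = [
    ("ANN", "Flake8Annotations"),
    ("A", "Flake8Builtins"),
    ("C4", "Flake8Comprehensions"),
    ("C9", "McCabe"),
]

def origin_for(code: str) -> str:
    # Because "ANN" precedes "A", "ANN101" resolves to Flake8Annotations, not Flake8Builtins.
    for prefix, origin in PREFIX_TO_ORIGIN:
        if code.startswith(prefix):
            return origin
    raise ValueError(f"code doesn't start with any recognized prefix: {code}")

assert origin_for("ANN101") == "Flake8Annotations"
assert origin_for("A001") == "Flake8Builtins"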

View File

@@ -1,139 +0,0 @@
#!/usr/bin/env python3
"""Generate boilerplate for a new check.
Example usage:
python scripts/add_check.py \
--name PreferListBuiltin \
--code PIE807 \
--plugin flake8-pie
"""
import argparse
import os
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
def dir_name(plugin: str) -> str:
return plugin.replace("-", "_")
def pascal_case(plugin: str) -> str:
"""Convert from snake-case to PascalCase."""
return "".join(word.title() for word in plugin.split("-"))
def snake_case(name: str) -> str:
"""Convert from PascalCase to snake_case."""
return "".join(f"_{word.lower()}" if word.isupper() else word for word in name).lstrip("_")
def main(*, name: str, code: str, plugin: str) -> None:
# Create a test fixture.
with open(
os.path.join(ROOT_DIR, f"resources/test/fixtures/{dir_name(plugin)}/{code}.py"),
"a",
):
pass
# Add the relevant `#testcase` macro.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/mod.rs"), "r") as fp:
content = fp.read()
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/mod.rs"), "w") as fp:
for line in content.splitlines():
if line.strip() == "fn rules(check_code: RuleCode, path: &Path) -> Result<()> {":
indent = line.split("fn rules(check_code: RuleCode, path: &Path) -> Result<()> {")[0]
fp.write(f'{indent}#[test_case(RuleCode::{code}, Path::new("{code}.py"); "{code}")]')
fp.write("\n")
fp.write(line)
fp.write("\n")
# Add the relevant plugin function.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/plugins.rs"), "a") as fp:
fp.write(
f"""
/// {code}
pub fn {snake_case(name)}(checker: &mut Checker) {{}}
"""
)
fp.write("\n")
# Add the relevant sections to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "r") as fp:
content = fp.read()
index = 0
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")
if line.strip() == f"// {plugin}":
if index == 0:
# `RuleCode` definition
indent = line.split(f"// {plugin}")[0]
fp.write(f"{indent}{code},")
fp.write("\n")
elif index == 1:
# `DiagnosticKind` definition
indent = line.split(f"// {plugin}")[0]
fp.write(f"{indent}{name},")
fp.write("\n")
elif index == 2:
# `RuleCode#kind()`
indent = line.split(f"// {plugin}")[0]
fp.write(f"{indent}RuleCode::{code} => DiagnosticKind::{name},")
fp.write("\n")
elif index == 3:
# `RuleCode#category()`
indent = line.split(f"// {plugin}")[0]
fp.write(f"{indent}RuleCode::{code} => CheckCategory::{pascal_case(plugin)},")
fp.write("\n")
elif index == 4:
# `DiagnosticKind#code()`
indent = line.split(f"// {plugin}")[0]
fp.write(f"{indent}DiagnosticKind::{name} => &RuleCode::{code},")
fp.write("\n")
elif index == 5:
# `RuleCode#body`
indent = line.split(f"// {plugin}")[0]
fp.write(f'{indent}DiagnosticKind::{name} => todo!("Write message body for {code}"),')
fp.write("\n")
index += 1
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate boilerplate for a new check.",
epilog="python scripts/add_check.py --name PreferListBuiltin --code PIE807 --plugin flake8-pie",
)
parser.add_argument(
"--name",
type=str,
required=True,
help="The name of the check to generate, in PascalCase (e.g., 'LineTooLong').",
)
parser.add_argument(
"--code",
type=str,
required=True,
help="The code of the check to generate (e.g., 'A001').",
)
parser.add_argument(
"--plugin",
type=str,
required=True,
help="The plugin with which the check is associated (e.g., 'flake8-builtins').",
)
args = parser.parse_args()
main(name=args.name, code=args.code, plugin=args.plugin)

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env python3
"""Generate boilerplate for a new plugin.
"""Generate boilerplate for a new Flake8 plugin.
Example usage:
@@ -31,9 +31,9 @@ def main(*, plugin: str, url: str) -> None:
# Create the Rust module.
os.makedirs(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}"), exist_ok=True)
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/rules"), "a"):
pass
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/rules"), "w+") as fp:
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/rules.rs"), "w+") as fp:
fp.write("use crate::checkers::ast::Checker;\n")
with open(os.path.join(ROOT_DIR, f"src/{dir_name(plugin)}/mod.rs"), "w+") as fp:
fp.write("pub mod rules;\n")
fp.write("\n")
fp.write(
@@ -49,13 +49,13 @@ mod tests {
use crate::linter::test_path;
use crate::settings;
fn rules(check_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", check_code.as_ref(), path.to_string_lossy());
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let diagnostics =test_path(
Path::new("./resources/test/fixtures/%s")
.join(path)
.as_path(),
&settings::Settings::for_rule(check_code),
&settings::Settings::for_rule(rule_code),
)?;
insta::assert_yaml_snapshot!(snapshot, diagnostics);
Ok(())
@@ -67,10 +67,10 @@ mod tests {
# Add the plugin to `lib.rs`.
with open(os.path.join(ROOT_DIR, "src/lib.rs"), "a") as fp:
fp.write(f"pub mod {dir_name(plugin)};")
fp.write(f"mod {dir_name(plugin)};")
# Add the relevant sections to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "r") as fp:
with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
content = fp.read()
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
@@ -85,23 +85,37 @@ mod tests {
fp.write(f"{indent}{pascal_case(plugin)},")
fp.write("\n")
elif line.strip() == 'CheckCategory::Ruff => "Ruff-specific rules",':
indent = line.split('CheckCategory::Ruff => "Ruff-specific rules",')[0]
fp.write(f'{indent}CheckCategory::{pascal_case(plugin)} => "{plugin}",')
elif line.strip() == 'RuleOrigin::Ruff => "Ruff-specific rules",':
indent = line.split('RuleOrigin::Ruff => "Ruff-specific rules",')[0]
fp.write(f'{indent}RuleOrigin::{pascal_case(plugin)} => "{plugin}",')
fp.write("\n")
elif line.strip() == "CheckCategory::Ruff => vec![RuleCodePrefix::RUF],":
indent = line.split("CheckCategory::Ruff => vec![RuleCodePrefix::RUF],")[0]
elif line.strip() == "RuleOrigin::Ruff => vec![RuleCodePrefix::RUF],":
indent = line.split("RuleOrigin::Ruff => vec![RuleCodePrefix::RUF],")[0]
fp.write(
f"{indent}CheckCategory::{pascal_case(plugin)} => vec![\n"
f"{indent}RuleOrigin::{pascal_case(plugin)} => vec![\n"
f'{indent} todo!("Fill-in prefix after generating codes")\n'
f"{indent}],"
)
fp.write("\n")
elif line.strip() == "CheckCategory::Ruff => None,":
indent = line.split("CheckCategory::Ruff => None,")[0]
fp.write(f"{indent}CheckCategory::{pascal_case(plugin)} => " f'Some(("{url}", &Platform::PyPI)),')
elif line.strip() == "RuleOrigin::Ruff => None,":
indent = line.split("RuleOrigin::Ruff => None,")[0]
fp.write(f"{indent}RuleOrigin::{pascal_case(plugin)} => " f'Some(("{url}", &Platform::PyPI)),')
fp.write("\n")
fp.write(line)
fp.write("\n")
# Add the relevant section to `src/violations.rs`.
with open(os.path.join(ROOT_DIR, "src/violations.rs")) as fp:
content = fp.read()
with open(os.path.join(ROOT_DIR, "src/violations.rs"), "w") as fp:
for line in content.splitlines():
if line.strip() == "// Ruff":
indent = line.split("// Ruff")[0]
fp.write(f"{indent}// {plugin}")
fp.write("\n")
fp.write(line)
@@ -110,7 +124,7 @@ mod tests {
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate boilerplate for a new plugin.",
description="Generate boilerplate for a new Flake8 plugin.",
epilog=(
"Example usage: python scripts/add_plugin.py flake8-pie "
"--url https://pypi.org/project/flake8-pie/0.16.0/"
@@ -118,7 +132,6 @@ if __name__ == "__main__":
)
parser.add_argument(
"plugin",
required=True,
type=str,
help="The name of the plugin to generate.",
)

scripts/add_rule.py (new file, 145 lines)
View File

@@ -0,0 +1,145 @@
#!/usr/bin/env python3
"""Generate boilerplate for a new rule.
Example usage:
python scripts/add_rule.py \
--name PreferListBuiltin \
--code PIE807 \
--origin flake8-pie
"""
import argparse
import os
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
def dir_name(origin: str) -> str:
return origin.replace("-", "_")
def pascal_case(origin: str) -> str:
"""Convert from snake-case to PascalCase."""
return "".join(word.title() for word in origin.split("-"))
def snake_case(name: str) -> str:
"""Convert from PascalCase to snake_case."""
return "".join(f"_{word.lower()}" if word.isupper() else word for word in name).lstrip("_")
def main(*, name: str, code: str, origin: str) -> None:
# Create a test fixture.
with open(
os.path.join(ROOT_DIR, f"resources/test/fixtures/{dir_name(origin)}/{code}.py"),
"a",
):
pass
# Add the relevant `#testcase` macro.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/mod.rs")) as fp:
content = fp.read()
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/mod.rs"), "w") as fp:
for line in content.splitlines():
if line.strip() == "fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {":
indent = line.split("fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {")[0]
fp.write(f'{indent}#[test_case(RuleCode::{code}, Path::new("{code}.py"); "{code}")]')
fp.write("\n")
fp.write(line)
fp.write("\n")
# Add the relevant rule function.
with open(os.path.join(ROOT_DIR, f"src/{dir_name(origin)}/rules.rs"), "a") as fp:
fp.write(
f"""
/// {code}
pub fn {snake_case(name)}(checker: &mut Checker) {{}}
"""
)
fp.write("\n")
# Add the relevant struct to `src/violations.rs`.
with open(os.path.join(ROOT_DIR, "src/violations.rs")) as fp:
content = fp.read()
with open(os.path.join(ROOT_DIR, "src/violations.rs"), "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")
if line.startswith(f"// {origin}"):
fp.write(
"""define_violation!(
pub struct %s;
);
impl Violation for %s {
fn message(&self) -> String {
todo!("Implement message")
}
fn placeholder() -> Self {
%s
}
}
"""
% (name, name, name)
)
fp.write("\n")
# Add the relevant code-to-violation pair to `src/registry.rs`.
with open(os.path.join(ROOT_DIR, "src/registry.rs")) as fp:
content = fp.read()
seen_macro = False
has_written = False
with open(os.path.join(ROOT_DIR, "src/registry.rs"), "w") as fp:
for line in content.splitlines():
fp.write(line)
fp.write("\n")
if has_written:
continue
if line.startswith("define_rule_mapping!"):
seen_macro = True
continue
if not seen_macro:
continue
if line.strip() == f"// {origin}":
indent = line.split("//")[0]
fp.write(f"{indent}{code} => violations::{name},")
fp.write("\n")
has_written = True
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate boilerplate for a new rule.",
epilog="python scripts/add_rule.py --name PreferListBuiltin --code PIE807 --origin flake8-pie",
)
parser.add_argument(
"--name",
type=str,
required=True,
help="The name of the check to generate, in PascalCase (e.g., 'LineTooLong').",
)
parser.add_argument(
"--code",
type=str,
required=True,
help="The code of the check to generate (e.g., 'A001').",
)
parser.add_argument(
"--origin",
type=str,
required=True,
help="The source with which the check originated (e.g., 'flake8-builtins').",
)
args = parser.parse_args()
main(name=args.name, code=args.code, origin=args.origin)

View File

@@ -1,5 +1,12 @@
use rustpython_ast::{Expr, Stmt, StmtKind};
pub fn name(stmt: &Stmt) -> &str {
match &stmt.node {
StmtKind::FunctionDef { name, .. } | StmtKind::AsyncFunctionDef { name, .. } => name,
_ => panic!("Expected StmtKind::FunctionDef | StmtKind::AsyncFunctionDef"),
}
}
pub fn decorator_list(stmt: &Stmt) -> &Vec<Expr> {
match &stmt.node {
StmtKind::FunctionDef { decorator_list, .. }

View File

@@ -388,6 +388,12 @@ impl<'a> From<&'a Box<Expr>> for Box<ComparableExpr<'a>> {
}
}
impl<'a> From<&'a Box<Expr>> for ComparableExpr<'a> {
fn from(expr: &'a Box<Expr>) -> Self {
(&**expr).into()
}
}
impl<'a> From<&'a Expr> for ComparableExpr<'a> {
fn from(expr: &'a Expr) -> Self {
match &expr.node {

View File

@@ -179,6 +179,131 @@ pub fn match_call_path(
}
}
/// Return `true` if the `Expr` contains a reference to `${module}.${target}`.
pub fn contains_call_path(
expr: &Expr,
module: &str,
member: &str,
import_aliases: &FxHashMap<&str, &str>,
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
) -> bool {
any_over_expr(expr, &|expr| {
let call_path = collect_call_paths(expr);
if !call_path.is_empty() {
if match_call_path(
&dealias_call_path(call_path, import_aliases),
module,
member,
from_imports,
) {
return true;
}
}
false
})
}
/// Call `func` over every `Expr` in `expr`, returning `true` if any expression
returns `true`.
pub fn any_over_expr<F>(expr: &Expr, func: &F) -> bool
where
F: Fn(&Expr) -> bool,
{
if func(expr) {
return true;
}
match &expr.node {
ExprKind::BoolOp { values, .. } | ExprKind::JoinedStr { values } => {
values.iter().any(|expr| any_over_expr(expr, func))
}
ExprKind::NamedExpr { target, value } => {
any_over_expr(target, func) || any_over_expr(value, func)
}
ExprKind::BinOp { left, right, .. } => {
any_over_expr(left, func) || any_over_expr(right, func)
}
ExprKind::UnaryOp { operand, .. } => any_over_expr(operand, func),
ExprKind::Lambda { body, .. } => any_over_expr(body, func),
ExprKind::IfExp { test, body, orelse } => {
any_over_expr(test, func) || any_over_expr(body, func) || any_over_expr(orelse, func)
}
ExprKind::Dict { keys, values } => values
.iter()
.chain(keys.iter())
.any(|expr| any_over_expr(expr, func)),
ExprKind::Set { elts } | ExprKind::List { elts, .. } | ExprKind::Tuple { elts, .. } => {
elts.iter().any(|expr| any_over_expr(expr, func))
}
ExprKind::ListComp { elt, generators }
| ExprKind::SetComp { elt, generators }
| ExprKind::GeneratorExp { elt, generators } => {
any_over_expr(elt, func)
|| generators.iter().any(|generator| {
any_over_expr(&generator.target, func)
|| any_over_expr(&generator.iter, func)
|| generator.ifs.iter().any(|expr| any_over_expr(expr, func))
})
}
ExprKind::DictComp {
key,
value,
generators,
} => {
any_over_expr(key, func)
|| any_over_expr(value, func)
|| generators.iter().any(|generator| {
any_over_expr(&generator.target, func)
|| any_over_expr(&generator.iter, func)
|| generator.ifs.iter().any(|expr| any_over_expr(expr, func))
})
}
ExprKind::Await { value }
| ExprKind::YieldFrom { value }
| ExprKind::Attribute { value, .. }
| ExprKind::Starred { value, .. } => any_over_expr(value, func),
ExprKind::Yield { value } => value
.as_ref()
.map_or(false, |value| any_over_expr(value, func)),
ExprKind::Compare {
left, comparators, ..
} => any_over_expr(left, func) || comparators.iter().any(|expr| any_over_expr(expr, func)),
ExprKind::Call {
func: call_func,
args,
keywords,
} => {
any_over_expr(call_func, func)
|| args.iter().any(|expr| any_over_expr(expr, func))
|| keywords
.iter()
.any(|keyword| any_over_expr(&keyword.node.value, func))
}
ExprKind::FormattedValue {
value, format_spec, ..
} => {
any_over_expr(value, func)
|| format_spec
.as_ref()
.map_or(false, |value| any_over_expr(value, func))
}
ExprKind::Subscript { value, slice, .. } => {
any_over_expr(value, func) || any_over_expr(slice, func)
}
ExprKind::Slice { lower, upper, step } => {
lower
.as_ref()
.map_or(false, |value| any_over_expr(value, func))
|| upper
.as_ref()
.map_or(false, |value| any_over_expr(value, func))
|| step
.as_ref()
.map_or(false, |value| any_over_expr(value, func))
}
ExprKind::Name { .. } | ExprKind::Constant { .. } => false,
}
}
static DUNDER_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"__[^\s]+__").unwrap());
/// Return `true` if the `Stmt` is an assignment to a dunder (like `__all__`).
@@ -191,12 +316,12 @@ pub fn is_assignment_to_a_dunder(stmt: &Stmt) -> bool {
return false;
}
match &targets[0].node {
ExprKind::Name { id, ctx: _ } => DUNDER_REGEX.is_match(id),
ExprKind::Name { id, .. } => DUNDER_REGEX.is_match(id),
_ => false,
}
}
StmtKind::AnnAssign { target, .. } => match &target.node {
ExprKind::Name { id, ctx: _ } => DUNDER_REGEX.is_match(id),
ExprKind::Name { id, .. } => DUNDER_REGEX.is_match(id),
_ => false,
},
_ => false,
@@ -305,6 +430,13 @@ pub fn collect_arg_names<'a>(arguments: &'a Arguments) -> FxHashSet<&'a str> {
arg_names
}
/// Returns `true` if a statement or expression includes at least one comment.
pub fn has_comments<T>(located: &Located<T>, locator: &SourceCodeLocator) -> bool {
lexer::make_tokenizer(&locator.slice_source_code_range(&Range::from_located(located)))
.flatten()
.any(|(_, tok, _)| matches!(tok, Tok::Comment(..)))
}
/// Returns `true` if a call is an argumented `super` invocation.
pub fn is_super_call_with_arguments(func: &Expr, args: &[Expr]) -> bool {
if let ExprKind::Name { id, .. } = &func.node {
@@ -589,6 +721,21 @@ pub fn followed_by_multi_statement_line(stmt: &Stmt, locator: &SourceCodeLocator
match_trailing_content(stmt, locator)
}
/// Return `true` if a `Stmt` is a docstring.
pub fn is_docstring_stmt(stmt: &Stmt) -> bool {
if let StmtKind::Expr { value } = &stmt.node {
matches!(
value.node,
ExprKind::Constant {
value: Constant::Str { .. },
..
}
)
} else {
false
}
}
#[derive(Default)]
/// A simple representation of a call's positional and keyword arguments.
pub struct SimpleCallArgs<'a> {
@@ -634,6 +781,11 @@ impl<'a> SimpleCallArgs<'a> {
}
None
}
/// Get the number of positional and keyword arguments used.
pub fn len(&self) -> usize {
self.args.len() + self.kwargs.len()
}
}
#[cfg(test)]
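For readers more at home in Python's own `ast` module, a rough analogue of the new `any_over_expr` helper: the Rust version walks the rustpython AST case by case, while this sketch simply leans on `ast.walk` and is illustrative only.

import ast

def any_over_expr(expr: ast.expr, predicate) -> bool:
    """True if `predicate` holds for the expression or any sub-expression."""
    return any(predicate(node) for node in ast.walk(expr) if isinstance(node, ast.expr))

tree = ast.parse("merge([data, {k: f(k) for k in xs}])", mode="eval")
print(any_over_expr(tree.body, lambda e: isinstance(e, ast.DictComp)))  # True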

View File

@@ -4,6 +4,7 @@ use rustpython_parser::ast::{Constant, Expr, ExprKind, Stmt, StmtKind};
use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use crate::ast::helpers::any_over_expr;
use crate::ast::types::{Binding, BindingKind, Scope};
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
@@ -129,76 +130,13 @@ pub fn in_nested_block<'a>(mut parents: impl Iterator<Item = &'a Stmt>) -> bool
})
}
/// Returns `true` if `parent` contains `child`.
fn contains(parent: &Expr, child: &Expr) -> bool {
match &parent.node {
ExprKind::BoolOp { values, .. } => values.iter().any(|parent| contains(parent, child)),
ExprKind::NamedExpr { target, value } => contains(target, child) || contains(value, child),
ExprKind::BinOp { left, right, .. } => contains(left, child) || contains(right, child),
ExprKind::UnaryOp { operand, .. } => contains(operand, child),
ExprKind::Lambda { body, .. } => contains(body, child),
ExprKind::IfExp { test, body, orelse } => {
contains(test, child) || contains(body, child) || contains(orelse, child)
}
ExprKind::Dict { keys, values } => keys
.iter()
.chain(values.iter())
.any(|parent| contains(parent, child)),
ExprKind::Set { elts } => elts.iter().any(|parent| contains(parent, child)),
ExprKind::ListComp { elt, .. } => contains(elt, child),
ExprKind::SetComp { elt, .. } => contains(elt, child),
ExprKind::DictComp { key, value, .. } => contains(key, child) || contains(value, child),
ExprKind::GeneratorExp { elt, .. } => contains(elt, child),
ExprKind::Await { value } => contains(value, child),
ExprKind::Yield { value } => value.as_ref().map_or(false, |value| contains(value, child)),
ExprKind::YieldFrom { value } => contains(value, child),
ExprKind::Compare {
left, comparators, ..
} => contains(left, child) || comparators.iter().any(|parent| contains(parent, child)),
ExprKind::Call {
func,
args,
keywords,
} => {
contains(func, child)
|| args.iter().any(|parent| contains(parent, child))
|| keywords
.iter()
.any(|keyword| contains(&keyword.node.value, child))
}
ExprKind::FormattedValue {
value, format_spec, ..
} => {
contains(value, child)
|| format_spec
.as_ref()
.map_or(false, |value| contains(value, child))
}
ExprKind::JoinedStr { values } => values.iter().any(|parent| contains(parent, child)),
ExprKind::Constant { .. } => false,
ExprKind::Attribute { value, .. } => contains(value, child),
ExprKind::Subscript { value, slice, .. } => {
contains(value, child) || contains(slice, child)
}
ExprKind::Starred { value, .. } => contains(value, child),
ExprKind::Name { .. } => parent == child,
ExprKind::List { elts, .. } => elts.iter().any(|parent| contains(parent, child)),
ExprKind::Tuple { elts, .. } => elts.iter().any(|parent| contains(parent, child)),
ExprKind::Slice { lower, upper, step } => {
lower.as_ref().map_or(false, |value| contains(value, child))
|| upper.as_ref().map_or(false, |value| contains(value, child))
|| step.as_ref().map_or(false, |value| contains(value, child))
}
}
}
/// Check if a node represents an unpacking assignment.
pub fn is_unpacking_assignment(parent: &Stmt, child: &Expr) -> bool {
match &parent.node {
StmtKind::With { items, .. } => items.iter().any(|item| {
if let Some(optional_vars) = &item.optional_vars {
if matches!(optional_vars.node, ExprKind::Tuple { .. }) {
if contains(optional_vars, child) {
if any_over_expr(optional_vars, &|expr| expr == child) {
return true;
}
}
@@ -227,7 +165,7 @@ pub fn is_unpacking_assignment(parent: &Stmt, child: &Expr) -> bool {
matches!(
item.node,
ExprKind::Set { .. } | ExprKind::List { .. } | ExprKind::Tuple { .. }
) && contains(item, child)
) && any_over_expr(item, &|expr| expr == child)
});
// If our child is a tuple, and value is not, it's always an unpacking

View File

@@ -174,9 +174,26 @@ impl<'a> Checker<'a> {
/// Return `true` if the call path is a reference to `typing.${target}`.
pub fn match_typing_call_path(&self, call_path: &[&str], target: &str) -> bool {
match_call_path(call_path, "typing", target, &self.from_imports)
|| (typing::in_extensions(target)
&& match_call_path(call_path, "typing_extensions", target, &self.from_imports))
if match_call_path(call_path, "typing", target, &self.from_imports) {
return true;
}
if typing::TYPING_EXTENSIONS.contains(target) {
if match_call_path(call_path, "typing_extensions", target, &self.from_imports) {
return true;
}
}
if self
.settings
.typing_modules
.iter()
.any(|module| match_call_path(call_path, module, target, &self.from_imports))
{
return true;
}
false
}
/// Return the current `Binding` for a given `name`.
@@ -641,7 +658,7 @@ where
}
if self.settings.enabled.contains(&RuleCode::PIE794) {
flake8_pie::rules::dupe_class_field_definitions(self, bases, body);
flake8_pie::rules::dupe_class_field_definitions(self, stmt, body);
}
self.check_builtin_shadowing(name, stmt, false);
@@ -1190,14 +1207,14 @@ where
}
if self.settings.enabled.contains(&RuleCode::UP024) {
if let Some(item) = exc {
pyupgrade::rules::os_error_alias(self, item);
pyupgrade::rules::os_error_alias(self, &item);
}
}
}
StmtKind::AugAssign { target, .. } => {
self.handle_node_load(target);
}
StmtKind::If { test, .. } => {
StmtKind::If { test, body, orelse } => {
if self.settings.enabled.contains(&RuleCode::F634) {
pyflakes::rules::if_tuple(self, stmt, test);
}
@@ -1214,6 +1231,11 @@ where
self.current_stmt_parent().map(|parent| parent.0),
);
}
if self.settings.enabled.contains(&RuleCode::SIM401) {
flake8_simplify::rules::use_dict_get_with_default(
self, stmt, test, body, orelse,
);
}
}
StmtKind::Assert { test, msg } => {
if self.settings.enabled.contains(&RuleCode::F631) {
@@ -1316,7 +1338,7 @@ where
flake8_bugbear::rules::redundant_tuple_in_exception_handler(self, handlers);
}
if self.settings.enabled.contains(&RuleCode::UP024) {
pyupgrade::rules::os_error_alias(self, handlers);
pyupgrade::rules::os_error_alias(self, &handlers);
}
if self.settings.enabled.contains(&RuleCode::PT017) {
self.diagnostics.extend(
@@ -1388,6 +1410,9 @@ where
if self.settings.enabled.contains(&RuleCode::B015) {
flake8_bugbear::rules::useless_comparison(self, value);
}
if self.settings.enabled.contains(&RuleCode::SIM112) {
flake8_simplify::rules::use_capital_environment_variables(self, value);
}
}
_ => {}
}
@@ -1810,6 +1835,8 @@ where
|| self.settings.enabled.contains(&RuleCode::F523)
|| self.settings.enabled.contains(&RuleCode::F524)
|| self.settings.enabled.contains(&RuleCode::F525)
// pyupgrade
|| self.settings.enabled.contains(&RuleCode::UP030)
{
if let ExprKind::Attribute { value, attr, .. } = &func.node {
if let ExprKind::Constant {
@@ -1856,6 +1883,10 @@ where
self, &summary, location,
);
}
if self.settings.enabled.contains(&RuleCode::UP030) {
pyupgrade::rules::format_literals(self, &summary, expr);
}
}
}
}
@@ -1895,7 +1926,7 @@ where
pyupgrade::rules::replace_stdout_stderr(self, expr, keywords);
}
if self.settings.enabled.contains(&RuleCode::UP024) {
pyupgrade::rules::os_error_alias(self, expr);
pyupgrade::rules::os_error_alias(self, &expr);
}
// flake8-print
@@ -1971,6 +2002,28 @@ where
self.diagnostics.push(diagnostic);
}
}
if self.settings.enabled.contains(&RuleCode::S508) {
if let Some(diagnostic) = flake8_bandit::rules::snmp_insecure_version(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
}
if self.settings.enabled.contains(&RuleCode::S509) {
if let Some(diagnostic) = flake8_bandit::rules::snmp_weak_cryptography(
func,
args,
keywords,
&self.from_imports,
&self.import_aliases,
) {
self.diagnostics.push(diagnostic);
}
}
if self.settings.enabled.contains(&RuleCode::S106) {
self.diagnostics
.extend(flake8_bandit::rules::hardcoded_password_func_arg(keywords));
@@ -2000,205 +2053,75 @@ where
// flake8-comprehensions
if self.settings.enabled.contains(&RuleCode::C400) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_generator_list(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C400),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_generator_list(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C401) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_generator_set(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C401),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_generator_set(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C402) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_generator_dict(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C402),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_generator_dict(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C403) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_list_comprehension_set(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C403),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_list_comprehension_set(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C404) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_list_comprehension_dict(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C404),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_list_comprehension_dict(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C405) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_literal_set(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C405),
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_set(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C406) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_literal_dict(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C406),
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_dict(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C408) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_collection_call(
expr,
func,
args,
keywords,
self.locator,
self.patch(&RuleCode::C408),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_collection_call(
self, expr, func, args, keywords,
);
}
if self.settings.enabled.contains(&RuleCode::C409) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C409),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C410) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_literal_within_list_call(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C410),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_literal_within_list_call(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C411) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_list_call(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C411),
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_list_call(self, expr, func, args);
}
if self.settings.enabled.contains(&RuleCode::C413) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_call_around_sorted(
expr,
func,
args,
self.locator,
self.patch(&RuleCode::C413),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_call_around_sorted(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C414) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_double_cast_or_process(
func,
args,
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_double_cast_or_process(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C415) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_subscript_reversal(
func,
args,
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_subscript_reversal(
self, expr, func, args,
);
}
if self.settings.enabled.contains(&RuleCode::C417) {
if let Some(diagnostic) = flake8_comprehensions::rules::unnecessary_map(
func,
args,
Range::from_located(expr),
) {
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_map(self, expr, func, args);
}
// flake8-boolean-trap
@@ -2403,6 +2326,11 @@ where
args, keywords,
));
}
// flake8-simplify
if self.settings.enabled.contains(&RuleCode::SIM115) {
flake8_simplify::rules::open_file_with_context_handler(self, func);
}
}
ExprKind::Dict { keys, values } => {
if self.settings.enabled.contains(&RuleCode::F601)
@@ -2742,18 +2670,9 @@ where
}
ExprKind::ListComp { elt, generators } | ExprKind::SetComp { elt, generators } => {
if self.settings.enabled.contains(&RuleCode::C416) {
if let Some(diagnostic) =
flake8_comprehensions::rules::unnecessary_comprehension(
expr,
elt,
generators,
self.locator,
self.patch(&RuleCode::C416),
Range::from_located(expr),
)
{
self.diagnostics.push(diagnostic);
};
flake8_comprehensions::rules::unnecessary_comprehension(
self, expr, elt, generators,
);
}
if self.settings.enabled.contains(&RuleCode::B023) {
flake8_bugbear::rules::function_uses_loop_variable(self, &Node::Expr(expr));
@@ -2954,6 +2873,7 @@ where
value,
&self.from_imports,
&self.import_aliases,
self.settings.typing_modules.iter().map(String::as_str),
|member| self.is_builtin(member),
) {
Some(subscript) => {

View File

@@ -7,33 +7,12 @@ use rustpython_parser::ast::Suite;
use crate::ast::visitor::Visitor;
use crate::directives::IsortDirectives;
use crate::isort;
use crate::isort::track::ImportTracker;
use crate::registry::Diagnostic;
use crate::isort::track::{Block, ImportTracker};
use crate::registry::{Diagnostic, RuleCode};
use crate::settings::{flags, Settings};
use crate::source_code_locator::SourceCodeLocator;
use crate::source_code_style::SourceCodeStyleDetector;
fn check_import_blocks(
tracker: ImportTracker,
locator: &SourceCodeLocator,
settings: &Settings,
stylist: &SourceCodeStyleDetector,
autofix: flags::Autofix,
package: Option<&Path>,
) -> Vec<Diagnostic> {
let mut diagnostics = vec![];
for block in tracker.into_iter() {
if !block.imports.is_empty() {
if let Some(diagnostic) =
isort::rules::check_imports(&block, locator, settings, stylist, autofix, package)
{
diagnostics.push(diagnostic);
}
}
}
diagnostics
}
#[allow(clippy::too_many_arguments)]
pub fn check_imports(
python_ast: &Suite,
@@ -45,9 +24,33 @@ pub fn check_imports(
path: &Path,
package: Option<&Path>,
) -> Vec<Diagnostic> {
let mut tracker = ImportTracker::new(locator, directives, path);
for stmt in python_ast {
tracker.visit_stmt(stmt);
// Extract all imports from the AST.
let tracker = {
let mut tracker = ImportTracker::new(locator, directives, path);
for stmt in python_ast {
tracker.visit_stmt(stmt);
}
tracker
};
let blocks: Vec<&Block> = tracker.iter().collect();
// Enforce import rules.
let mut diagnostics = vec![];
if settings.enabled.contains(&RuleCode::I001) {
for block in &blocks {
if !block.imports.is_empty() {
if let Some(diagnostic) = isort::rules::organize_imports(
block, locator, settings, stylist, autofix, package,
) {
diagnostics.push(diagnostic);
}
}
}
}
check_import_blocks(tracker, locator, settings, stylist, autofix, package)
if settings.enabled.contains(&RuleCode::I002) {
diagnostics.extend(isort::rules::add_required_imports(
&blocks, python_ast, locator, settings, autofix,
));
}
diagnostics
}

View File

@@ -1,6 +1,6 @@
//! Lint rules based on checking raw physical lines.
use crate::pycodestyle::rules::{line_too_long, no_newline_at_end_of_file};
use crate::pycodestyle::rules::{doc_line_too_long, line_too_long, no_newline_at_end_of_file};
use crate::pygrep_hooks::rules::{blanket_noqa, blanket_type_ignore};
use crate::pyupgrade::rules::unnecessary_coding_comment;
use crate::registry::{Diagnostic, RuleCode};
@@ -9,18 +9,21 @@ use crate::settings::{flags, Settings};
pub fn check_lines(
contents: &str,
commented_lines: &[usize],
doc_lines: &[usize],
settings: &Settings,
autofix: flags::Autofix,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
let enforce_unnecessary_coding_comment = settings.enabled.contains(&RuleCode::UP009);
let enforce_blanket_noqa = settings.enabled.contains(&RuleCode::PGH004);
let enforce_blanket_type_ignore = settings.enabled.contains(&RuleCode::PGH003);
let enforce_doc_line_too_long = settings.enabled.contains(&RuleCode::W505);
let enforce_line_too_long = settings.enabled.contains(&RuleCode::E501);
let enforce_no_newline_at_end_of_file = settings.enabled.contains(&RuleCode::W292);
let enforce_blanket_type_ignore = settings.enabled.contains(&RuleCode::PGH003);
let enforce_blanket_noqa = settings.enabled.contains(&RuleCode::PGH004);
let enforce_unnecessary_coding_comment = settings.enabled.contains(&RuleCode::UP009);
let mut commented_lines_iter = commented_lines.iter().peekable();
let mut doc_lines_iter = doc_lines.iter().peekable();
for (index, line) in contents.lines().enumerate() {
while commented_lines_iter
.next_if(|lineno| &(index + 1) == *lineno)
@@ -40,18 +43,25 @@ pub fn check_lines(
}
if enforce_blanket_type_ignore {
if commented_lines.contains(&(index + 1)) {
if let Some(diagnostic) = blanket_type_ignore(index, line) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = blanket_type_ignore(index, line) {
diagnostics.push(diagnostic);
}
}
if enforce_blanket_noqa {
if commented_lines.contains(&(index + 1)) {
if let Some(diagnostic) = blanket_noqa(index, line) {
diagnostics.push(diagnostic);
}
if let Some(diagnostic) = blanket_noqa(index, line) {
diagnostics.push(diagnostic);
}
}
}
while doc_lines_iter
.next_if(|lineno| &(index + 1) == *lineno)
.is_some()
{
if enforce_doc_line_too_long {
if let Some(diagnostic) = doc_line_too_long(index, line, settings) {
diagnostics.push(diagnostic);
}
}
}
@@ -90,6 +100,7 @@ mod tests {
check_lines(
line,
&[],
&[],
&Settings {
line_length,
..Settings::for_rule(RuleCode::E501)

View File

@@ -47,7 +47,7 @@ pub fn check_noqa(
continue;
}
// Is the check ignored by a `noqa` directive on the parent line?
// Is the violation ignored by a `noqa` directive on the parent line?
if let Some(parent_lineno) = diagnostic.parent.map(|location| location.row()) {
let noqa_lineno = noqa_line_for.get(&parent_lineno).unwrap_or(&parent_lineno);
if commented_lines.contains(noqa_lineno) {

View File

@@ -25,26 +25,26 @@ pub struct Cli {
/// Enable verbose logging.
#[arg(short, long, group = "verbosity")]
pub verbose: bool,
/// Only log errors.
/// Print lint violations, but nothing else.
#[arg(short, long, group = "verbosity")]
pub quiet: bool,
/// Disable all logging (but still exit with status code "1" upon detecting
/// errors).
/// lint violations).
#[arg(short, long, group = "verbosity")]
pub silent: bool,
/// Exit with status code "0", even upon detecting errors.
/// Exit with status code "0", even upon detecting lint violations.
#[arg(short, long)]
pub exit_zero: bool,
/// Run in watch mode by re-running whenever files change.
#[arg(short, long)]
pub watch: bool,
/// Attempt to automatically fix lint errors.
/// Attempt to automatically fix lint violations.
#[arg(long, overrides_with("no_fix"))]
fix: bool,
#[clap(long, overrides_with("fix"), hide = true)]
no_fix: bool,
/// Fix any fixable lint errors, but don't report on leftover violations.
/// Implies `--fix`.
/// Fix any fixable lint violations, but don't report on leftover
/// violations. Implies `--fix`.
#[arg(long, overrides_with("no_fix_only"))]
fix_only: bool,
#[clap(long, overrides_with("fix_only"), hide = true)]
@@ -61,38 +61,38 @@ pub struct Cli {
pub isolated: bool,
/// Comma-separated list of rule codes to enable (or ALL, to enable all
/// rules).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub select: Option<Vec<RuleCodePrefix>>,
/// Like --select, but adds additional error codes on top of the selected
/// Like --select, but adds additional rule codes on top of the selected
/// ones.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub extend_select: Option<Vec<RuleCodePrefix>>,
/// Comma-separated list of error codes to disable.
#[arg(long, value_delimiter = ',')]
/// Comma-separated list of rule codes to disable.
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub ignore: Option<Vec<RuleCodePrefix>>,
/// Like --ignore, but adds additional error codes on top of the ignored
/// Like --ignore, but adds additional rule codes on top of the ignored
/// ones.
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub extend_ignore: Option<Vec<RuleCodePrefix>>,
/// List of paths, used to exclude files and/or directories from checks.
#[arg(long, value_delimiter = ',')]
/// List of paths, used to omit files and/or directories from analysis.
#[arg(long, value_delimiter = ',', value_name = "FILE_PATTERN")]
pub exclude: Option<Vec<FilePattern>>,
/// Like --exclude, but adds additional files and directories on top of the
/// excluded ones.
#[arg(long, value_delimiter = ',')]
/// Like --exclude, but adds additional files and directories on top of
/// those already excluded.
#[arg(long, value_delimiter = ',', value_name = "FILE_PATTERN")]
pub extend_exclude: Option<Vec<FilePattern>>,
/// List of error codes to treat as eligible for autofix. Only applicable
/// List of rule codes to treat as eligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub fixable: Option<Vec<RuleCodePrefix>>,
/// List of error codes to treat as ineligible for autofix. Only applicable
/// List of rule codes to treat as ineligible for autofix. Only applicable
/// when autofix itself is enabled (e.g., via `--fix`).
#[arg(long, value_delimiter = ',')]
#[arg(long, value_delimiter = ',', value_name = "RULE_CODE")]
pub unfixable: Option<Vec<RuleCodePrefix>>,
/// List of mappings from file pattern to code to exclude
#[arg(long, value_delimiter = ',')]
pub per_file_ignores: Option<Vec<PatternPrefixPair>>,
/// Output serialization format for error messages.
/// Output serialization format for violations.
#[arg(long, value_enum, env = "RUFF_FORMAT")]
pub format: Option<SerializationFormat>,
/// The name of the file when passing it through stdin.
@@ -129,7 +129,7 @@ pub struct Cli {
/// The minimum Python version that should be supported.
#[arg(long)]
pub target_version: Option<PythonVersion>,
/// Set the line-length for length-associated checks and automatic
/// Set the line-length for length-associated rules and automatic
/// formatting.
#[arg(long)]
pub line_length: Option<usize>,
@@ -212,7 +212,7 @@ pub struct Cli {
conflicts_with = "watch",
)]
pub show_files: bool,
/// See the settings Ruff will use to check a given Python file.
/// See the settings Ruff will use to lint a given Python file.
#[arg(
long,
// Fake subcommands.

View File

@@ -1,5 +1,7 @@
use anyhow::{bail, Result};
use libcst_native::{Expr, Import, ImportFrom, Module, SmallStatement, Statement};
use libcst_native::{
Call, Expr, Expression, Import, ImportFrom, Module, SmallStatement, Statement,
};
pub fn match_module(module_text: &str) -> Result<Module> {
match libcst_native::parse_module(module_text, None) {
@@ -8,6 +10,13 @@ pub fn match_module(module_text: &str) -> Result<Module> {
}
}
pub fn match_expression(expression_text: &str) -> Result<Expression> {
match libcst_native::parse_expression(expression_text) {
Ok(expression) => Ok(expression),
Err(_) => bail!("Failed to extract CST from source"),
}
}
pub fn match_expr<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Expr<'b>> {
if let Some(Statement::Simple(expr)) = module.body.first_mut() {
if let Some(SmallStatement::Expr(expr)) = expr.body.first_mut() {
@@ -43,3 +52,11 @@ pub fn match_import_from<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut I
bail!("Expected Statement::Simple")
}
}
pub fn match_call<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut Call<'b>> {
if let Expression::Call(call) = expression {
Ok(call)
} else {
bail!("Expected SmallStatement::Expr")
}
}

View File

@@ -33,6 +33,7 @@ impl Flags {
pub struct IsortDirectives {
pub exclusions: IntSet<usize>,
pub splits: Vec<usize>,
pub skip_file: bool,
}
pub struct Directives {
@@ -89,17 +90,11 @@ pub fn extract_noqa_line_for(lxr: &[LexResult]) -> IntMap<usize, usize> {
pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
let mut exclusions: IntSet<usize> = IntSet::default();
let mut splits: Vec<usize> = Vec::default();
let mut skip_file: bool = false;
let mut off: Option<Location> = None;
let mut last: Option<Location> = None;
for &(start, ref tok, end) in lxr.iter().flatten() {
last = Some(end);
// No need to keep processing, but we do need to determine the last token.
if skip_file {
continue;
}
let Tok::Comment(comment_text) = tok else {
continue;
};
@@ -111,7 +106,10 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
if comment_text == "# isort: split" {
splits.push(start.row());
} else if comment_text == "# isort: skip_file" || comment_text == "# isort:skip_file" {
skip_file = true;
return IsortDirectives {
skip_file: true,
..IsortDirectives::default()
};
} else if off.is_some() {
if comment_text == "# isort: on" {
if let Some(start) = off {
@@ -130,14 +128,7 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
}
}
if skip_file {
// Enforce `isort: skip_file`.
if let Some(end) = last {
for row in 1..=end.row() {
exclusions.insert(row);
}
}
} else if let Some(start) = off {
if let Some(start) = off {
// Enforce unterminated `isort: off`.
if let Some(end) = last {
for row in start.row() + 1..=end.row() {
@@ -145,7 +136,11 @@ pub fn extract_isort_directives(lxr: &[LexResult]) -> IsortDirectives {
}
}
}
IsortDirectives { exclusions, splits }
IsortDirectives {
exclusions,
splits,
..IsortDirectives::default()
}
}
#[cfg(test)]
@@ -283,10 +278,7 @@ x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([1, 2, 3, 4])
);
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
let contents = "# isort: off
x = 1
@@ -295,10 +287,7 @@ y = 2
# isort: skip_file
z = x + 1";
let lxr: Vec<LexResult> = lexer::make_tokenizer(contents).collect();
assert_eq!(
extract_isort_directives(&lxr).exclusions,
IntSet::from_iter([1, 2, 3, 4, 5, 6])
);
assert_eq!(extract_isort_directives(&lxr).exclusions, IntSet::default());
}
#[test]
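An illustrative sketch of the directive handled above: a file-level `# isort: skip_file` comment now sets a dedicated `skip_file` flag and stops directive extraction early, instead of marking every line in the file as excluded.

```python
# isort: skip_file

# With the directive above, import sorting is skipped for this entire file,
# so this deliberately unsorted block is left untouched.
import sys
import os
```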

src/doc_lines.rs Normal file
View File

@@ -0,0 +1,58 @@
//! Doc line extraction. In this context, a doc line is a line consisting of a
//! standalone comment or a constant string statement.
use rustpython_ast::{Constant, ExprKind, Stmt, StmtKind, Suite};
use rustpython_parser::lexer::{LexResult, Tok};
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
/// Extract doc lines (standalone comments) from a token sequence.
pub fn doc_lines_from_tokens(lxr: &[LexResult]) -> Vec<usize> {
let mut doc_lines: Vec<usize> = Vec::default();
let mut prev: Option<usize> = None;
for (start, tok, end) in lxr.iter().flatten() {
if matches!(tok, Tok::Indent | Tok::Dedent | Tok::Newline) {
continue;
}
if matches!(tok, Tok::Comment(..)) {
if let Some(prev) = prev {
if start.row() > prev {
doc_lines.push(start.row());
}
} else {
doc_lines.push(start.row());
}
}
prev = Some(end.row());
}
doc_lines
}
#[derive(Default)]
struct StringLinesVisitor {
string_lines: Vec<usize>,
}
impl Visitor<'_> for StringLinesVisitor {
fn visit_stmt(&mut self, stmt: &Stmt) {
if let StmtKind::Expr { value } = &stmt.node {
if let ExprKind::Constant {
value: Constant::Str(..),
..
} = &value.node
{
self.string_lines
.extend(value.location.row()..=value.end_location.unwrap().row());
}
}
visitor::walk_stmt(self, stmt);
}
}
/// Extract doc lines (standalone strings) from an AST.
pub fn doc_lines_from_ast(python_ast: &Suite) -> Vec<usize> {
let mut visitor = StringLinesVisitor::default();
visitor.visit_body(python_ast);
visitor.string_lines
}
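To make the extraction above concrete, a minimal Python sketch (illustrative, not taken from the test fixtures) of what counts as a doc line, and therefore which lines W505 measures against the configured doc line length:

```python
# A standalone comment like this one is a doc line (found via the token stream).

"""A module docstring is a string statement, so each line it spans is a doc line."""


def helper():
    """Function docstrings count as doc lines too."""
    value = 1  # A trailing comment on a code line is not a standalone comment.
    return value
```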

View File

@@ -319,7 +319,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
helpers::identifier_range(stmt, checker.locator),
));
}
} else if visibility::is_init(stmt) {
} else if visibility::is_init(cast::name(stmt)) {
// Allow omission of return annotation in `__init__` functions, as long as at
// least one argument is typed.
if checker.settings.enabled.contains(&RuleCode::ANN204) {
@@ -341,7 +341,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
checker.diagnostics.push(diagnostic);
}
}
} else if visibility::is_magic(stmt) {
} else if visibility::is_magic(cast::name(stmt)) {
if checker.settings.enabled.contains(&RuleCode::ANN204) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeSpecialMethod(name.to_string()),

View File

@@ -26,7 +26,7 @@ pub struct Options {
value_type = "bool",
example = "suppress-dummy-args = true"
)]
/// Whether to suppress `ANN000`-level errors for arguments matching the
/// Whether to suppress `ANN000`-level violations for arguments matching the
/// "dummy" variable regex (like `_`).
pub suppress_dummy_args: Option<bool>,
#[option(
@@ -34,8 +34,8 @@ pub struct Options {
value_type = "bool",
example = "suppress-none-returning = true"
)]
/// Whether to suppress `ANN200`-level errors for functions that meet either
/// of the following criteria:
/// Whether to suppress `ANN200`-level violations for functions that meet
/// either of the following criteria:
///
/// - Contain no `return` statement.
/// - Explicit `return` statement(s) all return `None` (explicitly or

View File

@@ -25,6 +25,8 @@ mod tests {
#[test_case(RuleCode::S324, Path::new("S324.py"); "S324")]
#[test_case(RuleCode::S501, Path::new("S501.py"); "S501")]
#[test_case(RuleCode::S506, Path::new("S506.py"); "S506")]
#[test_case(RuleCode::S508, Path::new("S508.py"); "S508")]
#[test_case(RuleCode::S509, Path::new("S509.py"); "S509")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -11,6 +11,8 @@ pub use hardcoded_tmp_directory::hardcoded_tmp_directory;
pub use hashlib_insecure_hash_functions::hashlib_insecure_hash_functions;
pub use request_with_no_cert_validation::request_with_no_cert_validation;
pub use request_without_timeout::request_without_timeout;
pub use snmp_insecure_version::snmp_insecure_version;
pub use snmp_weak_cryptography::snmp_weak_cryptography;
pub use unsafe_yaml_load::unsafe_yaml_load;
mod assert_used;
@@ -24,4 +26,6 @@ mod hardcoded_tmp_directory;
mod hashlib_insecure_hash_functions;
mod request_with_no_cert_validation;
mod request_without_timeout;
mod snmp_insecure_version;
mod snmp_weak_cryptography;
mod unsafe_yaml_load;

View File

@@ -0,0 +1,40 @@
use num_traits::{One, Zero};
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, ExprKind, Keyword};
use rustpython_parser::ast::Constant;
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
/// S508
pub fn snmp_insecure_version(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
if match_call_path(&call_path, "pysnmp.hlapi", "CommunityData", from_imports) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(mp_model_arg) = call_args.get_argument("mpModel", None) {
if let ExprKind::Constant {
value: Constant::Int(value),
..
} = &mp_model_arg.node
{
if value.is_zero() || value.is_one() {
return Some(Diagnostic::new(
violations::SnmpInsecureVersion,
Range::from_located(mp_model_arg),
));
}
}
}
}
None
}
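For context, a hedged Python sketch of what S508 flags, assuming `pysnmp` is installed: `CommunityData` calls whose `mpModel` argument is the constant `0` (SNMPv1) or `1` (SNMPv2c), both of which send the community string in clear text.

```python
from pysnmp.hlapi import CommunityData

# S508: SNMPv1/SNMPv2c selected explicitly via `mpModel` -- flagged.
insecure_v1 = CommunityData("public", mpModel=0)
insecure_v2c = CommunityData("public", mpModel=1)

# Not flagged: `mpModel` is not passed as a literal 0 or 1.
default_model = CommunityData("public")
```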

View File

@@ -0,0 +1,30 @@
use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Expr, Keyword};
use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path, SimpleCallArgs};
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::violations;
/// S509
pub fn snmp_weak_cryptography(
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
from_imports: &FxHashMap<&str, FxHashSet<&str>>,
import_aliases: &FxHashMap<&str, &str>,
) -> Option<Diagnostic> {
let call_path = dealias_call_path(collect_call_paths(func), import_aliases);
if match_call_path(&call_path, "pysnmp.hlapi", "UsmUserData", from_imports) {
let call_args = SimpleCallArgs::new(args, keywords);
if call_args.len() < 3 {
return Some(Diagnostic::new(
violations::SnmpWeakCryptography,
Range::from_located(func),
));
}
}
None
}
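Likewise for S509 (again assuming `pysnmp`): `UsmUserData` created with fewer than three arguments, i.e. without both an authentication key and a privacy key, is reported as weak cryptography.

```python
from pysnmp.hlapi import UsmUserData

# S509: no authentication or privacy key -- flagged.
user_only = UsmUserData("securityName")
auth_only = UsmUserData("securityName", "authkey1")

# Not flagged: user name, authentication key, and privacy key are all given.
auth_and_priv = UsmUserData("securityName", "authkey1", "privkey1")
```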

View File

@@ -0,0 +1,25 @@
---
source: src/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
SnmpInsecureVersion: ~
location:
row: 3
column: 32
end_location:
row: 3
column: 33
fix: ~
parent: ~
- kind:
SnmpInsecureVersion: ~
location:
row: 4
column: 32
end_location:
row: 4
column: 33
fix: ~
parent: ~

View File

@@ -0,0 +1,25 @@
---
source: src/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
SnmpWeakCryptography: ~
location:
row: 4
column: 11
end_location:
row: 4
column: 22
fix: ~
parent: ~
- kind:
SnmpWeakCryptography: ~
location:
row: 5
column: 15
end_location:
row: 5
column: 26
fix: ~
parent: ~

View File

@@ -12,7 +12,9 @@ use crate::violations;
#[derive(Default)]
struct LoadedNamesVisitor<'a> {
// Tuple of: name, defining expression, and defining range.
names: Vec<(&'a str, &'a Expr, Range)>,
loaded: Vec<(&'a str, &'a Expr, Range)>,
// Tuple of: name, defining expression, and defining range.
stored: Vec<(&'a str, &'a Expr, Range)>,
}
/// `Visitor` to collect all used identifiers in a statement.
@@ -22,12 +24,11 @@ where
{
fn visit_expr(&mut self, expr: &'b Expr) {
match &expr.node {
ExprKind::JoinedStr { .. } => {
visitor::walk_expr(self, expr);
}
ExprKind::Name { id, ctx } if matches!(ctx, ExprContext::Load) => {
self.names.push((id, expr, Range::from_located(expr)));
}
ExprKind::Name { id, ctx } => match ctx {
ExprContext::Load => self.loaded.push((id, expr, Range::from_located(expr))),
ExprContext::Store => self.stored.push((id, expr, Range::from_located(expr))),
ExprContext::Del => {}
},
_ => visitor::walk_expr(self, expr),
}
}
@@ -36,6 +37,7 @@ where
#[derive(Default)]
struct SuspiciousVariablesVisitor<'a> {
names: Vec<(&'a str, &'a Expr, Range)>,
safe_functions: Vec<&'a Expr>,
}
/// `Visitor` to collect all suspicious variables (those referenced in
@@ -50,45 +52,90 @@ where
| StmtKind::AsyncFunctionDef { args, body, .. } => {
// Collect all loaded variable names.
let mut visitor = LoadedNamesVisitor::default();
for stmt in body {
visitor.visit_stmt(stmt);
}
visitor.visit_body(body);
// Collect all argument names.
let arg_names = collect_arg_names(args);
let mut arg_names = collect_arg_names(args);
arg_names.extend(visitor.stored.iter().map(|(id, ..)| id));
// Treat any non-arguments as "suspicious".
self.names.extend(
visitor
.names
.into_iter()
.loaded
.iter()
.filter(|(id, ..)| !arg_names.contains(id)),
);
}
_ => visitor::walk_stmt(self, stmt),
StmtKind::Return { value: Some(value) } => {
// Mark `return lambda: x` as safe.
if matches!(value.node, ExprKind::Lambda { .. }) {
self.safe_functions.push(value);
}
}
_ => {}
}
visitor::walk_stmt(self, stmt);
}
fn visit_expr(&mut self, expr: &'b Expr) {
match &expr.node {
ExprKind::Lambda { args, body } => {
// Collect all loaded variable names.
let mut visitor = LoadedNamesVisitor::default();
visitor.visit_expr(body);
// Collect all argument names.
let arg_names = collect_arg_names(args);
// Treat any non-arguments as "suspicious".
self.names.extend(
visitor
.names
.into_iter()
.filter(|(id, ..)| !arg_names.contains(id)),
);
ExprKind::Call {
func,
args,
keywords,
} => {
if let ExprKind::Name { id, .. } = &func.node {
if id == "filter" || id == "reduce" || id == "map" {
for arg in args {
if matches!(arg.node, ExprKind::Lambda { .. }) {
self.safe_functions.push(arg);
}
}
}
}
if let ExprKind::Attribute { value, attr, .. } = &func.node {
if attr == "reduce" {
if let ExprKind::Name { id, .. } = &value.node {
if id == "functools" {
for arg in args {
if matches!(arg.node, ExprKind::Lambda { .. }) {
self.safe_functions.push(arg);
}
}
}
}
}
}
for keyword in keywords {
if keyword.node.arg.as_ref().map_or(false, |arg| arg == "key")
&& matches!(keyword.node.value.node, ExprKind::Lambda { .. })
{
self.safe_functions.push(&keyword.node.value);
}
}
}
_ => visitor::walk_expr(self, expr),
ExprKind::Lambda { args, body } => {
if !self.safe_functions.contains(&expr) {
// Collect all loaded variable names.
let mut visitor = LoadedNamesVisitor::default();
visitor.visit_expr(body);
// Collect all argument names.
let mut arg_names = collect_arg_names(args);
arg_names.extend(visitor.stored.iter().map(|(id, ..)| id));
// Treat any non-arguments as "suspicious".
self.names.extend(
visitor
.loaded
.iter()
.filter(|(id, ..)| !arg_names.contains(id)),
);
}
}
_ => {}
}
visitor::walk_expr(self, expr);
}
}
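A hedged Python sketch of the refined behaviour (B023, `function-uses-loop-variable`): functions that close over a loop variable are still flagged, while lambdas passed straight to `filter`/`map`/`functools.reduce` or as a `key=` argument are now treated as safe, and names assigned inside the function body no longer count as suspicious.

```python
import functools

callbacks = []
for x in range(5):
    # B023: `x` is looked up when the lambda runs, so every callback sees the
    # final value of `x`.
    callbacks.append(lambda: x)

    # Safe: the lambda is consumed immediately by filter/reduce or by `key=`.
    evens = list(filter(lambda n: n % 2 == 0, range(10)))
    total = functools.reduce(lambda a, b: a + b, range(10), 0)
    ordered = sorted(range(10), key=lambda n: -n)

    # Safe: `scale` is assigned inside the nested function, so loading it is
    # no longer reported as suspicious.
    def report():
        scale = 2
        return scale * scale
```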

View File

@@ -22,7 +22,7 @@ pub struct Options {
"#
)]
/// Additional callable functions to consider "immutable" when evaluating,
/// e.g., `no-mutable-default-argument` checks (`B006`).
/// e.g., the `no-mutable-default-argument` rule (`B006`).
pub extend_immutable_calls: Option<Vec<String>>,
}

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_bugbear/mod.rs
expression: checks
expression: diagnostics
---
- kind:
FunctionUsesLoopVariable: x
@@ -172,4 +172,74 @@ expression: checks
column: 16
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 117
column: 23
end_location:
row: 117
column: 24
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 118
column: 26
end_location:
row: 118
column: 27
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 119
column: 36
end_location:
row: 119
column: 37
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 120
column: 37
end_location:
row: 120
column: 38
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: x
location:
row: 121
column: 36
end_location:
row: 121
column: 37
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: name
location:
row: 171
column: 28
end_location:
row: 171
column: 32
fix: ~
parent: ~
- kind:
FunctionUsesLoopVariable: i
location:
row: 174
column: 28
end_location:
row: 174
column: 29
fix: ~
parent: ~

View File

@@ -1,13 +1,11 @@
use log::error;
use num_bigint::BigInt;
use rustpython_ast::{
Comprehension, Constant, Expr, ExprKind, Keyword, KeywordData, Located, Unaryop,
};
use rustpython_ast::{Comprehension, Constant, Expr, ExprKind, Keyword, Unaryop};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::flake8_comprehensions::fixes;
use crate::registry::Diagnostic;
use crate::source_code_locator::SourceCodeLocator;
use crate::registry::{Diagnostic, RuleCode};
use crate::violations;
fn function_name(func: &Expr) -> Option<&str> {
@@ -41,237 +39,266 @@ fn first_argument_with_matching_function<'a>(
func: &Expr,
args: &'a [Expr],
) -> Option<&'a ExprKind> {
if function_name(func)? != name {
return None;
if function_name(func)? == name {
Some(&args.first()?.node)
} else {
None
}
Some(&args.first()?.node)
}
/// C400 (`list(generator)`)
pub fn unnecessary_generator_list(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = exactly_one_argument_with_matching_function("list", func, args, keywords)?;
) {
let Some(argument) = exactly_one_argument_with_matching_function("list", func, args, keywords) else {
return;
};
if !checker.is_builtin("list") {
return;
}
if let ExprKind::GeneratorExp { .. } = argument {
let mut diagnostic = Diagnostic::new(violations::UnnecessaryGeneratorList, location);
if fix {
match fixes::fix_unnecessary_generator_list(locator, expr) {
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryGeneratorList,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C400) {
match fixes::fix_unnecessary_generator_list(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
return Some(diagnostic);
checker.diagnostics.push(diagnostic);
}
None
}
/// C401 (`set(generator)`)
pub fn unnecessary_generator_set(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = exactly_one_argument_with_matching_function("set", func, args, keywords)?;
) {
let Some(argument) = exactly_one_argument_with_matching_function("set", func, args, keywords) else {
return;
};
if !checker.is_builtin("set") {
return;
}
if let ExprKind::GeneratorExp { .. } = argument {
let mut diagnostic = Diagnostic::new(violations::UnnecessaryGeneratorSet, location);
if fix {
match fixes::fix_unnecessary_generator_set(locator, expr) {
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryGeneratorSet,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C401) {
match fixes::fix_unnecessary_generator_set(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
return Some(diagnostic);
checker.diagnostics.push(diagnostic);
}
None
}
/// C402 (`dict((x, y) for x, y in iterable)`)
pub fn unnecessary_generator_dict(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = exactly_one_argument_with_matching_function("dict", func, args, keywords)?;
) {
let Some(argument) = exactly_one_argument_with_matching_function("dict", func, args, keywords) else {
return;
};
if let ExprKind::GeneratorExp { elt, .. } = argument {
match &elt.node {
ExprKind::Tuple { elts, .. } if elts.len() == 2 => {
let mut diagnostic =
Diagnostic::new(violations::UnnecessaryGeneratorDict, location);
if fix {
match fixes::fix_unnecessary_generator_dict(locator, expr) {
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryGeneratorDict,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C402) {
match fixes::fix_unnecessary_generator_dict(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
return Some(diagnostic);
checker.diagnostics.push(diagnostic);
}
_ => {}
}
}
None
}
/// C403 (`set([...])`)
pub fn unnecessary_list_comprehension_set(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = exactly_one_argument_with_matching_function("set", func, args, keywords)?;
) {
let Some(argument) = exactly_one_argument_with_matching_function("set", func, args, keywords) else {
return;
};
if !checker.is_builtin("set") {
return;
}
if let ExprKind::ListComp { .. } = &argument {
let mut diagnostic = Diagnostic::new(violations::UnnecessaryListComprehensionSet, location);
if fix {
match fixes::fix_unnecessary_list_comprehension_set(locator, expr) {
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryListComprehensionSet,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C403) {
match fixes::fix_unnecessary_list_comprehension_set(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
return Some(diagnostic);
checker.diagnostics.push(diagnostic);
}
None
}
/// C404 (`dict([...])`)
pub fn unnecessary_list_comprehension_dict(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = exactly_one_argument_with_matching_function("dict", func, args, keywords)?;
) {
let Some(argument) = exactly_one_argument_with_matching_function("dict", func, args, keywords) else {
return;
};
if !checker.is_builtin("dict") {
return;
}
let ExprKind::ListComp { elt, .. } = &argument else {
return None;
return;
};
let ExprKind::Tuple { elts, .. } = &elt.node else {
return None;
return;
};
if elts.len() != 2 {
return None;
return;
}
let mut diagnostic = Diagnostic::new(violations::UnnecessaryListComprehensionDict, location);
if fix {
match fixes::fix_unnecessary_list_comprehension_dict(locator, expr) {
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryListComprehensionDict,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C404) {
match fixes::fix_unnecessary_list_comprehension_dict(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C405 (`set([1, 2])`)
pub fn unnecessary_literal_set(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = exactly_one_argument_with_matching_function("set", func, args, keywords)?;
) {
let Some(argument) = exactly_one_argument_with_matching_function("set", func, args, keywords) else {
return;
};
if !checker.is_builtin("set") {
return;
}
let kind = match argument {
ExprKind::List { .. } => "list",
ExprKind::Tuple { .. } => "tuple",
_ => return None,
_ => return,
};
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryLiteralSet(kind.to_string()),
location,
Range::from_located(expr),
);
if fix {
match fixes::fix_unnecessary_literal_set(locator, expr) {
if checker.patch(&RuleCode::C405) {
match fixes::fix_unnecessary_literal_set(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C406 (`dict([(1, 2)])`)
pub fn unnecessary_literal_dict(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Keyword],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = exactly_one_argument_with_matching_function("dict", func, args, keywords)?;
) {
let Some(argument) = exactly_one_argument_with_matching_function("dict", func, args, keywords) else {
return;
};
if !checker.is_builtin("dict") {
return;
}
let (kind, elts) = match argument {
ExprKind::Tuple { elts, .. } => ("tuple", elts),
ExprKind::List { elts, .. } => ("list", elts),
_ => return None,
_ => return,
};
// Accept `dict(((1, 2), ...))` or `dict([(1, 2), ...])`.
if !elts
.iter()
.all(|elt| matches!(&elt.node, ExprKind::Tuple { elts, .. } if elts.len() == 2))
{
return None;
return;
}
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryLiteralDict(kind.to_string()),
location,
Range::from_located(expr),
);
if fix {
match fixes::fix_unnecessary_literal_dict(locator, expr) {
if checker.patch(&RuleCode::C406) {
match fixes::fix_unnecessary_literal_dict(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C408
pub fn unnecessary_collection_call(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
keywords: &[Located<KeywordData>],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
keywords: &[Keyword],
) {
if !args.is_empty() {
return None;
return;
}
let id = function_name(func)?;
let Some(id) = function_name(func) else {
return;
};
match id {
"dict" if keywords.is_empty() || keywords.iter().all(|kw| kw.node.arg.is_some()) => {
// `dict()` or `dict(a=1)` (as opposed to `dict(**a)`)
@@ -279,296 +306,377 @@ pub fn unnecessary_collection_call(
"list" | "tuple" => {
// `list()` or `tuple()`
}
_ => return None,
_ => return,
};
if !checker.is_builtin(id) {
return;
}
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryCollectionCall(id.to_string()),
location,
Range::from_located(expr),
);
if fix {
match fixes::fix_unnecessary_collection_call(locator, expr) {
if checker.patch(&RuleCode::C408) {
match fixes::fix_unnecessary_collection_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
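An illustrative Python sketch of the checks rewritten so far (C400–C408); each now pushes straight onto `checker.diagnostics` and is skipped when the relevant builtin has been shadowed:

```python
# C400/C401: unnecessary generator passed to list()/set() -- prefer comprehensions.
squares = list(x * x for x in range(10))
remainders = set(x % 3 for x in range(10))

# C403: set() around a list comprehension -- prefer a set comprehension.
as_set = set([x * 2 for x in range(10)])

# C405: set() on a list/tuple literal -- prefer a set literal.
unique = set([1, 2, 3])

# C406: dict() called on a literal of pairs -- prefer a dict literal.
pairs = dict([(1, "a"), (2, "b")])

# C408: empty dict()/list()/tuple() call -- prefer {}, [], ().
empty = dict()
```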
/// C409
pub fn unnecessary_literal_within_tuple_call(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = first_argument_with_matching_function("tuple", func, args)?;
) {
let Some(argument) = first_argument_with_matching_function("tuple", func, args) else {
return;
};
if !checker.is_builtin("tuple") {
return;
}
let argument_kind = match argument {
ExprKind::Tuple { .. } => "tuple",
ExprKind::List { .. } => "list",
_ => return None,
_ => return,
};
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryLiteralWithinTupleCall(argument_kind.to_string()),
location,
Range::from_located(expr),
);
if fix {
match fixes::fix_unnecessary_literal_within_tuple_call(locator, expr) {
if checker.patch(&RuleCode::C409) {
match fixes::fix_unnecessary_literal_within_tuple_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C410
pub fn unnecessary_literal_within_list_call(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = first_argument_with_matching_function("list", func, args)?;
) {
let Some(argument) = first_argument_with_matching_function("list", func, args) else {
return;
};
if !checker.is_builtin("list") {
return;
}
let argument_kind = match argument {
ExprKind::Tuple { .. } => "tuple",
ExprKind::List { .. } => "list",
_ => return None,
_ => return,
};
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryLiteralWithinListCall(argument_kind.to_string()),
location,
Range::from_located(expr),
);
if fix {
match fixes::fix_unnecessary_literal_within_list_call(locator, expr) {
if checker.patch(&RuleCode::C410) {
match fixes::fix_unnecessary_literal_within_list_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C411
pub fn unnecessary_list_call(
expr: &Expr,
func: &Expr,
args: &[Expr],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let argument = first_argument_with_matching_function("list", func, args)?;
if !matches!(argument, ExprKind::ListComp { .. }) {
return None;
pub fn unnecessary_list_call(checker: &mut Checker, expr: &Expr, func: &Expr, args: &[Expr]) {
let Some(argument) = first_argument_with_matching_function("list", func, args) else {
return;
};
if !checker.is_builtin("list") {
return;
}
let mut diagnostic = Diagnostic::new(violations::UnnecessaryListCall, location);
if fix {
match fixes::fix_unnecessary_list_call(locator, expr) {
if !matches!(argument, ExprKind::ListComp { .. }) {
return;
}
let mut diagnostic =
Diagnostic::new(violations::UnnecessaryListCall, Range::from_located(expr));
if checker.patch(&RuleCode::C411) {
match fixes::fix_unnecessary_list_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C413
pub fn unnecessary_call_around_sorted(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
let outer = function_name(func)?;
if !(outer == "list" || outer == "reversed") {
return None;
}
let ExprKind::Call { func, .. } = &args.first()?.node else {
return None;
) {
let Some(outer) = function_name(func) else {
return;
};
if function_name(func)? != "sorted" {
return None;
if !(outer == "list" || outer == "reversed") {
return;
}
let Some(arg) = args.first() else {
return;
};
let ExprKind::Call { func, .. } = &arg.node else {
return;
};
let Some(inner) = function_name(func) else {
return;
};
if inner != "sorted" {
return;
}
if !checker.is_builtin(inner) || !checker.is_builtin(outer) {
return;
}
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryCallAroundSorted(outer.to_string()),
location,
Range::from_located(expr),
);
if fix {
match fixes::fix_unnecessary_call_around_sorted(locator, expr) {
if checker.patch(&RuleCode::C413) {
match fixes::fix_unnecessary_call_around_sorted(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C414
pub fn unnecessary_double_cast_or_process(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
location: Range,
) -> Option<Diagnostic> {
fn new_check(inner: &str, outer: &str, location: Range) -> Diagnostic {
) {
fn diagnostic(inner: &str, outer: &str, location: Range) -> Diagnostic {
Diagnostic::new(
violations::UnnecessaryDoubleCastOrProcess(inner.to_string(), outer.to_string()),
location,
)
}
let outer = function_name(func)?;
if !["list", "tuple", "set", "reversed", "sorted"].contains(&outer) {
return None;
let Some(outer) = function_name(func) else {
return;
};
if !(outer == "list"
|| outer == "tuple"
|| outer == "set"
|| outer == "reversed"
|| outer == "sorted")
{
return;
}
let Some(arg) = args.first() else {
return;
};
let ExprKind::Call { func, .. } = &arg.node else {
return;
};
let Some(inner) = function_name(func) else {
return;
};
if !checker.is_builtin(inner) || !checker.is_builtin(outer) {
return;
}
let ExprKind::Call { func, .. } = &args.first()?.node else {
return None;
};
let inner = function_name(func)?;
// Ex) set(tuple(...))
if (outer == "set" || outer == "sorted")
&& (inner == "list" || inner == "tuple" || inner == "reversed" || inner == "sorted")
{
return Some(new_check(inner, outer, location));
checker
.diagnostics
.push(diagnostic(inner, outer, Range::from_located(expr)));
return;
}
// Ex) list(tuple(...))
if (outer == "list" || outer == "tuple") && (inner == "list" || inner == "tuple") {
return Some(new_check(inner, outer, location));
checker
.diagnostics
.push(diagnostic(inner, outer, Range::from_located(expr)));
return;
}
// Ex) set(set(...))
if outer == "set" && inner == "set" {
return Some(new_check(inner, outer, location));
checker
.diagnostics
.push(diagnostic(inner, outer, Range::from_located(expr)));
}
None
}
/// C415
pub fn unnecessary_subscript_reversal(
checker: &mut Checker,
expr: &Expr,
func: &Expr,
args: &[Expr],
location: Range,
) -> Option<Diagnostic> {
let first_arg = args.first()?;
let id = function_name(func)?;
if !["set", "sorted", "reversed"].contains(&id) {
return None;
) {
let Some(first_arg) = args.first() else {
return;
};
let Some(id) = function_name(func) else {
return;
};
if !(id == "set" || id == "sorted" || id == "reversed") {
return;
}
if !checker.is_builtin(id) {
return;
}
let ExprKind::Subscript { slice, .. } = &first_arg.node else {
return None;
return;
};
let ExprKind::Slice { lower, upper, step } = &slice.node else {
return None;
return;
};
if lower.is_some() || upper.is_some() {
return None;
return;
}
let Some(step) = step.as_ref() else {
return;
};
let ExprKind::UnaryOp {
op: Unaryop::USub,
operand,
} = &step.as_ref()?.node else {
return None;
} = &step.node else {
return;
};
let ExprKind::Constant {
value: Constant::Int(val),
..
} = &operand.node else {
return None;
return;
};
if *val != BigInt::from(1) {
return None;
return;
};
Some(Diagnostic::new(
checker.diagnostics.push(Diagnostic::new(
violations::UnnecessarySubscriptReversal(id.to_string()),
location,
))
Range::from_located(expr),
));
}
/// C416
pub fn unnecessary_comprehension(
checker: &mut Checker,
expr: &Expr,
elt: &Expr,
generators: &[Comprehension],
locator: &SourceCodeLocator,
fix: bool,
location: Range,
) -> Option<Diagnostic> {
) {
if generators.len() != 1 {
return None;
return;
}
let generator = &generators[0];
if !(generator.ifs.is_empty() && generator.is_async == 0) {
return None;
return;
}
let elt_id = function_name(elt)?;
let target_id = function_name(&generator.target)?;
let Some(elt_id) = function_name(elt) else {
return;
};
let Some(target_id) = function_name(&generator.target) else {
return;
};
if elt_id != target_id {
return None;
return;
}
let expr_kind = match &expr.node {
let id = match &expr.node {
ExprKind::ListComp { .. } => "list",
ExprKind::SetComp { .. } => "set",
_ => return None,
_ => return,
};
if !checker.is_builtin(id) {
return;
}
let mut diagnostic = Diagnostic::new(
violations::UnnecessaryComprehension(expr_kind.to_string()),
location,
violations::UnnecessaryComprehension(id.to_string()),
Range::from_located(expr),
);
if fix {
match fixes::fix_unnecessary_comprehension(locator, expr) {
if checker.patch(&RuleCode::C416) {
match fixes::fix_unnecessary_comprehension(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
}
Err(e) => error!("Failed to generate fix: {e}"),
}
}
Some(diagnostic)
checker.diagnostics.push(diagnostic);
}
/// C417
pub fn unnecessary_map(func: &Expr, args: &[Expr], location: Range) -> Option<Diagnostic> {
fn new_check(kind: &str, location: Range) -> Diagnostic {
pub fn unnecessary_map(checker: &mut Checker, expr: &Expr, func: &Expr, args: &[Expr]) {
fn diagnostic(kind: &str, location: Range) -> Diagnostic {
Diagnostic::new(violations::UnnecessaryMap(kind.to_string()), location)
}
let id = function_name(func)?;
let Some(id) = function_name(func) else {
return;
};
match id {
"map" => {
if !checker.is_builtin(id) {
return;
}
if args.len() == 2 && matches!(&args[0].node, ExprKind::Lambda { .. }) {
return Some(new_check("generator", location));
checker
.diagnostics
.push(diagnostic("generator", Range::from_located(expr)));
}
}
"list" | "set" => {
if let ExprKind::Call { func, args, .. } = &args.first()?.node {
let argument = first_argument_with_matching_function("map", func, args)?;
if let ExprKind::Lambda { .. } = argument {
return Some(new_check(id, location));
if !checker.is_builtin(id) {
return;
}
if let Some(arg) = args.first() {
if let ExprKind::Call { func, args, .. } = &arg.node {
let Some(argument) = first_argument_with_matching_function("map", func, args) else {
return;
};
if let ExprKind::Lambda { .. } = argument {
checker
.diagnostics
.push(diagnostic(id, Range::from_located(expr)));
}
}
}
}
"dict" => {
if !checker.is_builtin(id) {
return;
}
if args.len() == 1 {
if let ExprKind::Call { func, args, .. } = &args[0].node {
let argument = first_argument_with_matching_function("map", func, args)?;
let Some(argument) = first_argument_with_matching_function("map", func, args) else {
return;
};
if let ExprKind::Lambda { body, .. } = &argument {
if matches!(&body.node, ExprKind::Tuple { elts, .. } | ExprKind::List { elts, .. } if elts.len() == 2)
{
return Some(new_check(id, location));
checker
.diagnostics
.push(diagnostic(id, Range::from_located(expr)));
}
}
}
@@ -576,5 +684,4 @@ pub fn unnecessary_map(func: &Expr, args: &[Expr], location: Range) -> Option<Di
}
_ => (),
}
None
}
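And a sketch of the remaining checks in this file (C409–C417), again illustrative rather than taken from the fixtures:

```python
values = [3, 1, 2]

# C409/C410: literal wrapped in tuple()/list().
coords = tuple([1, 2])            # prefer (1, 2)
names = list(["a", "b"])          # prefer ["a", "b"]

# C411: list() around a list comprehension.
doubled = list([x * 2 for x in values])

# C413: list()/reversed() wrapped around sorted().
ordered = list(sorted(values))

# C414: double cast, e.g. set() around another cast.
unique = set(list(values))

# C415: subscript reversal inside set()/sorted()/reversed().
backwards = sorted(values[::-1])

# C416: comprehension that only copies its iterable.
copy = [x for x in values]

# C417: map(lambda ...) -- prefer a generator expression or comprehension.
doubled_again = list(map(lambda x: x * 2, values))
```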

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_comprehensions/mod.rs
expression: checks
expression: diagnostics
---
- kind:
UnnecessaryGeneratorList: ~

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_comprehensions/mod.rs
expression: checks
expression: diagnostics
---
- kind:
UnnecessaryGeneratorSet: ~

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_comprehensions/mod.rs
expression: checks
expression: diagnostics
---
- kind:
UnnecessaryGeneratorDict: ~

View File

@@ -2,7 +2,7 @@ use log::error;
use rustc_hash::FxHashSet;
use rustpython_ast::{Constant, Expr, ExprKind, Stmt, StmtKind};
use crate::ast::types::Range;
use crate::ast::types::{Range, RefEquality};
use crate::autofix::helpers::delete_stmt;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
@@ -48,12 +48,14 @@ pub fn no_unnecessary_pass(checker: &mut Checker, body: &[Stmt]) {
}
/// PIE794
pub fn dupe_class_field_definitions(checker: &mut Checker, bases: &[Expr], body: &[Stmt]) {
if bases.is_empty() {
return;
}
let mut seen_targets = FxHashSet::default();
pub fn dupe_class_field_definitions<'a, 'b>(
checker: &mut Checker<'a>,
parent: &'b Stmt,
body: &'b [Stmt],
) where
'b: 'a,
{
let mut seen_targets: FxHashSet<&str> = FxHashSet::default();
for stmt in body {
// Extract the property name from the assignment statement.
let target = match &stmt.node {
@@ -77,17 +79,29 @@ pub fn dupe_class_field_definitions(checker: &mut Checker, bases: &[Expr], body:
_ => continue,
};
if seen_targets.contains(target) {
if !seen_targets.insert(target) {
let mut diagnostic = Diagnostic::new(
violations::DupeClassFieldDefinitions(target.to_string()),
Range::from_located(stmt),
);
if checker.patch(&RuleCode::PIE794) {
diagnostic.amend(Fix::deletion(stmt.location, stmt.end_location.unwrap()));
let deleted: Vec<&Stmt> = checker
.deletions
.iter()
.map(std::convert::Into::into)
.collect();
let locator = checker.locator;
match delete_stmt(stmt, Some(parent), &deleted, locator) {
Ok(fix) => {
checker.deletions.insert(RefEquality(stmt));
diagnostic.amend(fix);
}
Err(err) => {
error!("Failed to remove duplicate class definition: {}", err);
}
}
}
checker.diagnostics.push(diagnostic);
} else {
seen_targets.insert(target);
}
}
}
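A hedged Python sketch of PIE794 (duplicate class field definitions); the fix now goes through `delete_stmt`, removing the whole duplicate statement rather than just its text span, and the earlier restriction to classes with base classes is gone:

```python
class User:
    name: str = "anonymous"
    age: int = 0

    # PIE794: `name` is defined a second time in the same class body; the
    # autofix deletes this entire statement.
    name: str = "duplicate"
```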

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_pie/mod.rs
expression: checks
expression: diagnostics
---
- kind:
DupeClassFieldDefinitions: name
@@ -14,10 +14,10 @@ expression: checks
content: ""
location:
row: 4
column: 4
column: 0
end_location:
row: 4
column: 24
row: 5
column: 0
parent: ~
- kind:
DupeClassFieldDefinitions: name
@@ -31,10 +31,10 @@ expression: checks
content: ""
location:
row: 13
column: 4
column: 0
end_location:
row: 13
column: 24
row: 14
column: 0
parent: ~
- kind:
DupeClassFieldDefinitions: bar
@@ -48,9 +48,26 @@ expression: checks
content: ""
location:
row: 23
column: 4
column: 0
end_location:
row: 23
column: 23
row: 24
column: 0
parent: ~
- kind:
DupeClassFieldDefinitions: bar
location:
row: 40
column: 4
end_location:
row: 40
column: 23
fix:
content: ""
location:
row: 40
column: 0
end_location:
row: 41
column: 0
parent: ~

View File

@@ -4,7 +4,7 @@ use super::helpers::{
get_mark_decorators, get_mark_name, is_abstractmethod_decorator, is_pytest_fixture,
is_pytest_yield_fixture, keyword_is_literal,
};
use crate::ast::helpers::{collect_arg_names, collect_call_paths, identifier_range};
use crate::ast::helpers::{collect_arg_names, collect_call_paths};
use crate::ast::types::Range;
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
@@ -156,33 +156,19 @@ fn check_fixture_returns(checker: &mut Checker, func: &Stmt, func_name: &str, bo
&& visitor.has_return_with_value
&& func_name.starts_with('_')
{
let mut diagnostic = Diagnostic::new(
checker.diagnostics.push(Diagnostic::new(
violations::IncorrectFixtureNameUnderscore(func_name.to_string()),
Range::from_located(func),
);
if checker.patch(diagnostic.kind.code()) {
let func_name_range = identifier_range(func, checker.locator);
let num_underscores = func_name.len() - func_name.trim_start_matches('_').len();
diagnostic.amend(Fix::deletion(
func_name_range.location,
func_name_range.location.with_col_offset(num_underscores),
));
}
checker.diagnostics.push(diagnostic);
));
} else if checker.settings.enabled.contains(&RuleCode::PT004)
&& !visitor.has_return_with_value
&& !visitor.has_yield_from
&& !func_name.starts_with('_')
{
let mut diagnostic = Diagnostic::new(
checker.diagnostics.push(Diagnostic::new(
violations::MissingFixtureNameUnderscore(func_name.to_string()),
Range::from_located(func),
);
if checker.patch(diagnostic.kind.code()) {
let func_name_range = identifier_range(func, checker.locator);
diagnostic.amend(Fix::insertion("_".to_string(), func_name_range.location));
}
checker.diagnostics.push(diagnostic);
));
}
if checker.settings.enabled.contains(&RuleCode::PT022) {

View File

@@ -36,9 +36,9 @@ pub struct Options {
)]
/// Boolean flag specifying whether `@pytest.fixture()` without parameters
/// should have parentheses. If the option is set to `true` (the
/// default), `@pytest.fixture()` is valid and `@pytest.fixture` is an
/// error. If set to `false`, `@pytest.fixture` is valid and
/// `@pytest.fixture()` is an error.
/// default), `@pytest.fixture()` is valid and `@pytest.fixture` is
/// invalid. If set to `false`, `@pytest.fixture` is valid and
/// `@pytest.fixture()` is invalid.
pub fixture_parentheses: Option<bool>,
#[option(
default = "tuple",
@@ -104,9 +104,9 @@ pub struct Options {
)]
/// Boolean flag specifying whether `@pytest.mark.foo()` without parameters
/// should have parentheses. If the option is set to `true` (the
/// default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is an
/// error. If set to `false`, `@pytest.fixture` is valid and
/// `@pytest.mark.foo()` is an error.
/// default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is
/// invalid. If set to `false`, `@pytest.mark.foo` is valid and
/// `@pytest.mark.foo()` is invalid.
pub mark_parentheses: Option<bool>,
}
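To make the reworded options concrete, a hedged sketch of the decorator styles they control, assuming the default `true` for both `fixture-parentheses` and `mark-parentheses` and that `pytest` is installed:

```python
import pytest


@pytest.fixture()        # valid: fixture parentheses are expected by default
def database():
    return {}


@pytest.fixture          # invalid under the default setting
def session():
    return {}


@pytest.mark.slow()      # valid: mark parentheses are expected by default
def test_database(database):
    assert database == {}
```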

View File

@@ -10,14 +10,7 @@ expression: diagnostics
end_location:
row: 52
column: 30
fix:
content: _
location:
row: 51
column: 4
end_location:
row: 51
column: 4
fix: ~
parent: ~
- kind:
MissingFixtureNameUnderscore: activate_context
@@ -27,13 +20,6 @@ expression: diagnostics
end_location:
row: 58
column: 13
fix:
content: _
location:
row: 56
column: 4
end_location:
row: 56
column: 4
fix: ~
parent: ~

View File

@@ -10,14 +10,7 @@ expression: diagnostics
end_location:
row: 42
column: 12
fix:
content: ""
location:
row: 41
column: 4
end_location:
row: 41
column: 5
fix: ~
parent: ~
- kind:
IncorrectFixtureNameUnderscore: _activate_context
@@ -27,14 +20,7 @@ expression: diagnostics
end_location:
row: 48
column: 21
fix:
content: ""
location:
row: 46
column: 4
end_location:
row: 46
column: 5
fix: ~
parent: ~
- kind:
IncorrectFixtureNameUnderscore: _activate_context
@@ -44,13 +30,6 @@ expression: diagnostics
end_location:
row: 57
column: 34
fix:
content: ""
location:
row: 52
column: 4
end_location:
row: 52
column: 5
fix: ~
parent: ~

View File

@@ -36,8 +36,8 @@ fn good_multiline_ending(quote: &Quote) -> &str {
fn good_docstring(quote: &Quote) -> &str {
match quote {
Quote::Single => "'''",
Quote::Double => "\"\"\"",
Quote::Single => "'",
Quote::Double => "\"",
}
}
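A hedged sketch of the docstring cases this helper feeds, assuming double quotes are configured as the preferred docstring style:

```python
def log_message(text):
    'Write the given text to the log.'   # flagged: single-quoted docstring
    print(text)


def quote_name(name):
    'Wrap "name" in double quotes.'      # flagged: still a docstring, despite the inner quotes
    return f'"{name}"'


def preferred(value):
    "Docstrings in the preferred double-quote style are left alone."
    return value
```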

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
expression: diagnostics
---
- kind:
BadQuotesMultilineString: single
@@ -45,10 +45,10 @@ expression: checks
- kind:
BadQuotesMultilineString: single
location:
row: 21
row: 22
column: 4
end_location:
row: 21
row: 22
column: 27
fix: ~
parent: ~

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
expression: diagnostics
---
- kind:
BadQuotesDocstring: double
@@ -22,4 +22,14 @@ expression: checks
column: 7
fix: ~
parent: ~
- kind:
BadQuotesDocstring: double
location:
row: 27
column: 4
end_location:
row: 27
column: 27
fix: ~
parent: ~

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
expression: diagnostics
---
- kind:
BadQuotesDocstring: single
@@ -22,4 +22,14 @@ expression: checks
column: 7
fix: ~
parent: ~
- kind:
BadQuotesDocstring: single
location:
row: 27
column: 4
end_location:
row: 27
column: 27
fix: ~
parent: ~

View File

@@ -1,6 +1,6 @@
---
source: src/flake8_quotes/mod.rs
expression: checks
expression: diagnostics
---
- kind:
BadQuotesMultilineString: double
@@ -45,10 +45,10 @@ expression: checks
- kind:
BadQuotesMultilineString: double
location:
row: 21
row: 22
column: 4
end_location:
row: 21
row: 22
column: 27
fix: ~
parent: ~

View File

@@ -21,19 +21,22 @@ mod tests {
#[test_case(RuleCode::SIM109, Path::new("SIM109.py"); "SIM109")]
#[test_case(RuleCode::SIM110, Path::new("SIM110.py"); "SIM110")]
#[test_case(RuleCode::SIM111, Path::new("SIM111.py"); "SIM111")]
#[test_case(RuleCode::SIM112, Path::new("SIM112.py"); "SIM112")]
#[test_case(RuleCode::SIM115, Path::new("SIM115.py"); "SIM115")]
#[test_case(RuleCode::SIM117, Path::new("SIM117.py"); "SIM117")]
#[test_case(RuleCode::SIM118, Path::new("SIM118.py"); "SIM118")]
#[test_case(RuleCode::SIM201, Path::new("SIM201.py"); "SIM201")]
#[test_case(RuleCode::SIM202, Path::new("SIM202.py"); "SIM202")]
#[test_case(RuleCode::SIM208, Path::new("SIM208.py"); "SIM208")]
#[test_case(RuleCode::SIM210, Path::new("SIM210.py"); "SIM210")]
#[test_case(RuleCode::SIM211, Path::new("SIM211.py"); "SIM211")]
#[test_case(RuleCode::SIM212, Path::new("SIM212.py"); "SIM212")]
#[test_case(RuleCode::SIM118, Path::new("SIM118.py"); "SIM118")]
#[test_case(RuleCode::SIM220, Path::new("SIM220.py"); "SIM220")]
#[test_case(RuleCode::SIM221, Path::new("SIM221.py"); "SIM221")]
#[test_case(RuleCode::SIM222, Path::new("SIM222.py"); "SIM222")]
#[test_case(RuleCode::SIM223, Path::new("SIM223.py"); "SIM223")]
#[test_case(RuleCode::SIM300, Path::new("SIM300.py"); "SIM300")]
#[test_case(RuleCode::SIM401, Path::new("SIM401.py"); "SIM401")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let diagnostics = test_path(

View File

@@ -0,0 +1,106 @@
use rustpython_ast::{Constant, Expr, ExprKind};
use crate::ast::helpers::{create_expr, match_module_member, unparse_expr};
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
use crate::registry::{Diagnostic, RuleCode};
use crate::violations;
/// SIM112
pub fn use_capital_environment_variables(checker: &mut Checker, expr: &Expr) {
// Check `os.environ['foo']`.
if let ExprKind::Subscript { .. } = &expr.node {
check_os_environ_subscript(checker, expr);
return;
}
// Check `os.environ.get('foo')` and `os.getenv('foo')`.
let is_os_environ_get = match_module_member(
expr,
"os.environ",
"get",
&checker.from_imports,
&checker.import_aliases,
);
let is_os_getenv = match_module_member(
expr,
"os",
"getenv",
&checker.from_imports,
&checker.import_aliases,
);
if !(is_os_environ_get || is_os_getenv) {
return;
}
let ExprKind::Call { args, .. } = &expr.node else {
return;
};
let Some(arg) = args.get(0) else {
return;
};
let ExprKind::Constant { value: Constant::Str(env_var), kind } = &arg.node else {
return;
};
let capital_env_var = env_var.to_ascii_uppercase();
if &capital_env_var == env_var {
return;
}
let mut diagnostic = Diagnostic::new(
violations::UseCapitalEnvironmentVariables(capital_env_var.clone(), env_var.clone()),
Range::from_located(arg),
);
if checker.patch(&RuleCode::SIM112) {
let new_env_var = create_expr(ExprKind::Constant {
value: capital_env_var.into(),
kind: kind.clone(),
});
diagnostic.amend(Fix::replacement(
unparse_expr(&new_env_var, checker.style),
arg.location,
arg.end_location.unwrap(),
));
}
checker.diagnostics.push(diagnostic);
}
fn check_os_environ_subscript(checker: &mut Checker, expr: &Expr) {
let ExprKind::Subscript { value, slice, .. } = &expr.node else {
return;
};
let ExprKind::Attribute { value: attr_value, attr, .. } = &value.node else {
return;
};
let ExprKind::Name { id, .. } = &attr_value.node else {
return;
};
if id != "os" || attr != "environ" {
return;
}
let ExprKind::Constant { value: Constant::Str(env_var), kind } = &slice.node else {
return;
};
let capital_env_var = env_var.to_ascii_uppercase();
if &capital_env_var == env_var {
return;
}
let mut diagnostic = Diagnostic::new(
violations::UseCapitalEnvironmentVariables(capital_env_var.clone(), env_var.clone()),
Range::from_located(slice),
);
if checker.patch(&RuleCode::SIM112) {
let new_env_var = create_expr(ExprKind::Constant {
value: capital_env_var.into(),
kind: kind.clone(),
});
diagnostic.amend(Fix::replacement(
unparse_expr(&new_env_var, checker.style),
slice.location,
slice.end_location.unwrap(),
));
}
checker.diagnostics.push(diagnostic);
}
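A hedged Python sketch of SIM112 as implemented above, covering the subscript, `os.environ.get`, and `os.getenv` forms:

```python
import os

# SIM112: lowercase environment variable names are flagged (and can be
# autofixed to their uppercase form).
secret = os.environ["api_key"]          # fix: os.environ["API_KEY"]
host = os.environ.get("db_host")        # fix: os.environ.get("DB_HOST")
port = os.getenv("db_port")             # fix: os.getenv("DB_PORT")

# Not flagged: the name is already uppercase.
home = os.environ["HOME"]
```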

View File

@@ -1,6 +1,9 @@
use rustpython_ast::{Constant, Expr, ExprKind, Stmt, StmtKind};
use rustpython_ast::{Cmpop, Constant, Expr, ExprContext, ExprKind, Stmt, StmtKind};
use crate::ast::helpers::{create_expr, create_stmt, unparse_expr, unparse_stmt};
use crate::ast::comparable::ComparableExpr;
use crate::ast::helpers::{
contains_call_path, create_expr, create_stmt, has_comments, unparse_expr, unparse_stmt,
};
use crate::ast::types::Range;
use crate::autofix::Fix;
use crate::checkers::ast::Checker;
@@ -144,7 +147,28 @@ pub fn use_ternary_operator(checker: &mut Checker, stmt: &Stmt, parent: Option<&
return;
}
let target_var = &body_targets[0];
// Avoid suggesting ternary for `if sys.version_info >= ...`-style checks.
if contains_call_path(
test,
"sys",
"version_info",
&checker.import_aliases,
&checker.from_imports,
) {
return;
}
// Avoid suggesting ternary for `if sys.platform.startswith("...")`-style
// checks.
if contains_call_path(
test,
"sys",
"platform",
&checker.import_aliases,
&checker.from_imports,
) {
return;
}
// It's part of a bigger if-elif block:
// https://github.com/MartinThoma/flake8-simplify/issues/115
@@ -176,15 +200,129 @@ pub fn use_ternary_operator(checker: &mut Checker, stmt: &Stmt, parent: Option<&
}
}
let target_var = &body_targets[0];
let ternary = ternary(target_var, body_value, test, orelse_value);
let content = unparse_stmt(&ternary, checker.style);
let contents = unparse_stmt(&ternary, checker.style);
// Don't flag for simplified ternaries if the resulting expression would exceed
// the maximum line length.
if stmt.location.column() + contents.len() > checker.settings.line_length {
return;
}
// Don't flag for simplified ternaries if the if-expression contains any
// comments.
if has_comments(stmt, checker.locator) {
return;
}
let mut diagnostic = Diagnostic::new(
violations::UseTernaryOperator(content.clone()),
violations::UseTernaryOperator(contents.clone()),
Range::from_located(stmt),
);
if checker.patch(&RuleCode::SIM108) {
diagnostic.amend(Fix::replacement(
content,
contents,
stmt.location,
stmt.end_location.unwrap(),
));
}
checker.diagnostics.push(diagnostic);
}
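A hedged sketch of the SIM108 guards added above: the ternary suggestion is now skipped for `sys.version_info`/`sys.platform` gates, for blocks containing comments, and when the rewritten line would exceed the configured line length.

```python
import sys

x = True

# Still flagged: a plain if/else assignment collapses to `y = 1 if x else 2`.
if x:
    y = 1
else:
    y = 2

# No longer flagged: version/platform gates are deliberate branching.
if sys.version_info >= (3, 8):
    timeout = 10
else:
    timeout = 30

# No longer flagged: the branch carries a comment that a ternary would lose.
if x:
    # Chosen after benchmarking.
    z = 1
else:
    z = 2
```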
fn compare_expr(expr1: &ComparableExpr, expr2: &ComparableExpr) -> bool {
    expr1.eq(expr2)
}

/// SIM401
pub fn use_dict_get_with_default(
    checker: &mut Checker,
    stmt: &Stmt,
    test: &Expr,
    body: &Vec<Stmt>,
    orelse: &Vec<Stmt>,
) {
    if body.len() != 1 || orelse.len() != 1 {
        return;
    }
    let StmtKind::Assign { targets: body_var, value: body_val, ..} = &body[0].node else {
        return;
    };
    if body_var.len() != 1 {
        return;
    };
    let StmtKind::Assign { targets: orelse_var, value: orelse_val, .. } = &orelse[0].node else {
        return;
    };
    if orelse_var.len() != 1 {
        return;
    };
    let ExprKind::Compare { left: test_key, ops , comparators: test_dict } = &test.node else {
        return;
    };
    if test_dict.len() != 1 {
        return;
    }
    let (expected_var, expected_val, default_var, default_val) = match ops[..] {
        [Cmpop::In] => (&body_var[0], body_val, &orelse_var[0], orelse_val),
        [Cmpop::NotIn] => (&orelse_var[0], orelse_val, &body_var[0], body_val),
        _ => {
            return;
        }
    };
    let test_dict = &test_dict[0];
    let ExprKind::Subscript { value: expected_subscript, slice: expected_slice, .. } = &expected_val.node else {
        return;
    };
    // Check that the dictionary key, target variables, and dictionary name are all
    // equivalent.
    if !compare_expr(&expected_slice.into(), &test_key.into())
        || !compare_expr(&expected_var.into(), &default_var.into())
        || !compare_expr(&test_dict.into(), &expected_subscript.into())
    {
        return;
    }
    let contents = unparse_stmt(
        &create_stmt(StmtKind::Assign {
            targets: vec![create_expr(expected_var.node.clone())],
            value: Box::new(create_expr(ExprKind::Call {
                func: Box::new(create_expr(ExprKind::Attribute {
                    value: expected_subscript.clone(),
                    attr: "get".to_string(),
                    ctx: ExprContext::Load,
                })),
                args: vec![
                    create_expr(test_key.node.clone()),
                    create_expr(default_val.node.clone()),
                ],
                keywords: vec![],
            })),
            type_comment: None,
        }),
        checker.style,
    );
    // Don't flag for simplified `dict.get` if the resulting expression would exceed
    // the maximum line length.
    if stmt.location.column() + contents.len() > checker.settings.line_length {
        return;
    }
    // Don't flag for simplified `dict.get` if the if-expression contains any
    // comments.
    if has_comments(stmt, checker.locator) {
        return;
    }
    let mut diagnostic = Diagnostic::new(
        violations::DictGetWithDefault(contents.clone()),
        Range::from_located(stmt),
    );
    if checker.patch(&RuleCode::SIM401) {
        diagnostic.amend(Fix::replacement(
            contents,
            stmt.location,
            stmt.end_location.unwrap(),
        ));
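
To show what the new SIM401 rule targets, here is an illustrative Python snippet mirroring the `a_dict`/`key`/`var` names that appear in the snapshot further down; it is a sketch, not a verbatim fixture.

# SIM401: branching on `key in a_dict` just to fall back to a default is flagged...
if key in a_dict:
    var = a_dict[key]
else:
    var = "default"
# ...and the fix replaces the whole statement with a `dict.get` call:
var = a_dict.get(key, "default")

# The negated form is recognized too, with the branches swapped:
if key not in a_dict:
    var = "default"
else:
    var = a_dict[key]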


@@ -1,25 +1,32 @@
pub use ast_bool_op::{
    a_and_not_a, a_or_not_a, and_false, compare_with_tuple, duplicate_isinstance_call, or_true,
};
pub use ast_expr::use_capital_environment_variables;
pub use ast_for::convert_loop_to_any_all;
pub use ast_if::{nested_if_statements, return_bool_condition_directly, use_ternary_operator};
pub use ast_if::{
    nested_if_statements, return_bool_condition_directly, use_dict_get_with_default,
    use_ternary_operator,
};
pub use ast_ifexp::{
    explicit_false_true_in_ifexpr, explicit_true_false_in_ifexpr, twisted_arms_in_ifexpr,
};
pub use ast_unary_op::{double_negation, negation_with_equal_op, negation_with_not_equal_op};
pub use ast_with::multiple_with_statements;
pub use key_in_dict::{key_in_dict_compare, key_in_dict_for};
pub use open_file_with_context_handler::open_file_with_context_handler;
pub use return_in_try_except_finally::return_in_try_except_finally;
pub use use_contextlib_suppress::use_contextlib_suppress;
pub use yoda_conditions::yoda_conditions;
mod ast_bool_op;
mod ast_expr;
mod ast_for;
mod ast_if;
mod ast_ifexp;
mod ast_unary_op;
mod ast_with;
mod key_in_dict;
mod open_file_with_context_handler;
mod return_in_try_except_finally;
mod use_contextlib_suppress;
mod yoda_conditions;


@@ -0,0 +1,30 @@
use rustpython_ast::Expr;
use rustpython_parser::ast::StmtKind;

use crate::ast::helpers::{collect_call_paths, dealias_call_path, match_call_path};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
use crate::violations;

/// SIM115
pub fn open_file_with_context_handler(checker: &mut Checker, func: &Expr) {
    if match_call_path(
        &dealias_call_path(collect_call_paths(func), &checker.import_aliases),
        "",
        "open",
        &checker.from_imports,
    ) {
        if checker.is_builtin("open") {
            match checker.current_stmt().node {
                StmtKind::With { .. } => (),
                _ => {
                    checker.diagnostics.push(Diagnostic::new(
                        violations::OpenFileWithContextHandler,
                        Range::from_located(func),
                    ));
                }
            }
        }
    }
}
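
For reference, a minimal Python sketch of what this SIM115 check reports; the filename is hypothetical.

# SIM115: calling the builtin `open` outside a `with` statement is flagged,
# since the file handle may never be closed.
f = open("data.txt")        # flagged
data = f.read()
f.close()

# Not flagged: the call is the context expression of a `with` statement.
with open("data.txt") as f:
    data = f.read()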


@@ -1,6 +1,6 @@
---
source: src/flake8_simplify/mod.rs
expression: checks
expression: diagnostics
---
- kind:
    UseTernaryOperator: b = c if a else d
@@ -19,4 +19,21 @@ expression: checks
      row: 5
      column: 9
  parent: ~
- kind:
    UseTernaryOperator: b = cccccccccccccccccccccccccccccccccccc if a else ddddddddddddddddddddddddddddddddddddd
  location:
    row: 82
    column: 0
  end_location:
    row: 85
    column: 45
  fix:
    content: b = cccccccccccccccccccccccccccccccccccc if a else ddddddddddddddddddddddddddddddddddddd
    location:
      row: 82
      column: 0
    end_location:
      row: 85
      column: 45
  parent: ~


@@ -0,0 +1,81 @@
---
source: src/flake8_simplify/mod.rs
expression: diagnostics
---
- kind:
    UseCapitalEnvironmentVariables:
      - FOO
      - foo
  location:
    row: 4
    column: 11
  end_location:
    row: 4
    column: 16
  fix:
    content: "'FOO'"
    location:
      row: 4
      column: 11
    end_location:
      row: 4
      column: 16
  parent: ~
- kind:
    UseCapitalEnvironmentVariables:
      - FOO
      - foo
  location:
    row: 6
    column: 15
  end_location:
    row: 6
    column: 20
  fix:
    content: "'FOO'"
    location:
      row: 6
      column: 15
    end_location:
      row: 6
      column: 20
  parent: ~
- kind:
    UseCapitalEnvironmentVariables:
      - FOO
      - foo
  location:
    row: 8
    column: 15
  end_location:
    row: 8
    column: 20
  fix:
    content: "'FOO'"
    location:
      row: 8
      column: 15
    end_location:
      row: 8
      column: 20
  parent: ~
- kind:
    UseCapitalEnvironmentVariables:
      - FOO
      - foo
  location:
    row: 10
    column: 10
  end_location:
    row: 10
    column: 15
  fix:
    content: "'FOO'"
    location:
      row: 10
      column: 10
    end_location:
      row: 10
      column: 15
  parent: ~


@@ -0,0 +1,15 @@
---
source: src/flake8_simplify/mod.rs
expression: diagnostics
---
- kind:
    OpenFileWithContextHandler: ~
  location:
    row: 1
    column: 4
  end_location:
    row: 1
    column: 8
  fix: ~
  parent: ~


@@ -0,0 +1,107 @@
---
source: src/flake8_simplify/mod.rs
expression: diagnostics
---
- kind:
    DictGetWithDefault: "var = a_dict.get(key, \"default1\")"
  location:
    row: 6
    column: 0
  end_location:
    row: 9
    column: 20
  fix:
    content: "var = a_dict.get(key, \"default1\")"
    location:
      row: 6
      column: 0
    end_location:
      row: 9
      column: 20
  parent: ~
- kind:
    DictGetWithDefault: "var = a_dict.get(key, \"default2\")"
  location:
    row: 12
    column: 0
  end_location:
    row: 15
    column: 21
  fix:
    content: "var = a_dict.get(key, \"default2\")"
    location:
      row: 12
      column: 0
    end_location:
      row: 15
      column: 21
  parent: ~
- kind:
    DictGetWithDefault: "var = a_dict.get(key, val1 + val2)"
  location:
    row: 18
    column: 0
  end_location:
    row: 21
    column: 21
  fix:
    content: "var = a_dict.get(key, val1 + val2)"
    location:
      row: 18
      column: 0
    end_location:
      row: 21
      column: 21
  parent: ~
- kind:
    DictGetWithDefault: "var = a_dict.get(keys[idx], \"default\")"
  location:
    row: 24
    column: 0
  end_location:
    row: 27
    column: 19
  fix:
    content: "var = a_dict.get(keys[idx], \"default\")"
    location:
      row: 24
      column: 0
    end_location:
      row: 27
      column: 19
  parent: ~
- kind:
    DictGetWithDefault: "var = dicts[idx].get(key, \"default\")"
  location:
    row: 30
    column: 0
  end_location:
    row: 33
    column: 19
  fix:
    content: "var = dicts[idx].get(key, \"default\")"
    location:
      row: 30
      column: 0
    end_location:
      row: 33
      column: 19
  parent: ~
- kind:
    DictGetWithDefault: "vars[idx] = a_dict.get(key, \"default\")"
  location:
    row: 36
    column: 0
  end_location:
    row: 39
    column: 25
  fix:
    content: "vars[idx] = a_dict.get(key, \"default\")"
    location:
      row: 36
      column: 0
    end_location:
      row: 39
      column: 25
  parent: ~


@@ -54,7 +54,7 @@ pub struct Options {
"#
)]
/// Specific modules or module members that may not be imported or accessed.
/// Note that this check is only meant to flag accidental uses,
/// Note that this rule is only meant to flag accidental uses,
/// and can be circumvented via `eval` or `importlib`.
pub banned_api: Option<FxHashMap<String, BannedApi>>,
}


@@ -1,19 +1,6 @@
use rustpython_ast::{Constant, ExprKind, Stmt, StmtKind};

/// Return `true` if a `Stmt` is a docstring.
fn is_docstring_stmt(stmt: &Stmt) -> bool {
    if let StmtKind::Expr { value } = &stmt.node {
        matches!(
            value.node,
            ExprKind::Constant {
                value: Constant::Str { .. },
                ..
            }
        )
    } else {
        false
    }
}

use crate::ast::helpers::is_docstring_stmt;

/// Return `true` if a `Stmt` is a "empty": a `pass`, `...`, `raise
/// NotImplementedError`, or `raise NotImplemented` (with or without arguments).
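
As a quick illustration of the statements this helper treats as "empty", here is a Python sketch; the function names are invented for the example.

def todo():
    ...                        # "empty": a bare ellipsis

def placeholder():
    pass                       # "empty": a `pass`

def abstract_hook():
    raise NotImplementedError  # "empty": with or without call arguments

def does_work():
    return 42                  # not "empty": the statement does real work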

Some files were not shown because too many files have changed in this diff.