Compare commits

...

45 Commits

Author SHA1 Message Date
Evan Rittenhouse
889c05c87e Explicitly put Path(...) in Pathlib violations (#3333) 2023-03-04 04:33:12 +00:00
Charlie Marsh
bbbc44336e Bump version to 0.0.254 (#3331) 2023-03-03 19:11:07 -05:00
Charlie Marsh
d216b2aaa8 Treat callables within type definitions as default-non-types (#3329) 2023-03-03 23:07:30 +00:00
Charlie Marsh
367cc43c42 Un-gate PEP 604 isinstance rewrites from keep_runtime_typing checks (#3328) 2023-03-03 17:29:41 -05:00
Charlie Marsh
b5b26d5a3e Gate PEP604 isinstance rewrites behind Python 3.10+ (#3327) 2023-03-03 22:22:14 +00:00
Charlie Marsh
dedf8aa5cc Use presence of convention-specific sections during docstring inference (#3325) 2023-03-03 17:13:11 -05:00
Charlie Marsh
eb42ce9319 Extend RET503 autofixes to "end of statement", including comments (#3324) 2023-03-03 19:15:55 +00:00
Micha Reiser
cdbe2ee496 refactor: Introduce CacheKey trait (#3323)
This PR introduces a new `CacheKey` trait for types that can be used as a cache key.

I'm not entirely sure if this is worth the "overhead", but I was surprised to find `HashableHashSet` and got scared when I looked at the time complexity of the `hash` function. These implementations must be extremely slow in hashed collections.

I then searched for usages and quickly realized that only the cache uses these `Hash` implementations, where performance is less sensitive.

This PR introduces a new `CacheKey` trait to communicate the difference between computing a hash and computing a key for the cache. The new trait can be implemented for types that don't implement `Hash` for performance reasons, and we can define additional constraints on the implementation: for example, we'll want to enforce portability when we add remote-caching support. Using a separate trait further allows us to avoid implementing it for types without stable identities (e.g., pointers), or to use implementations other than the standard hash function.
2023-03-03 18:29:49 +00:00
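The separation between `Hash` and a cache key can be sketched in a few lines of Rust. This is an illustrative sketch only: the trait shape, `compute_key` helper, and impls here are assumptions for demonstration, not Ruff's actual definitions.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

/// Sketch of a dedicated trait for computing cache keys, kept separate from
/// `Hash` so it can impose extra constraints (e.g., portability for remote
/// caching) and be implemented for types that deliberately skip `Hash`.
trait CacheKey {
    fn cache_key(&self, state: &mut dyn Hasher);
}

impl CacheKey for u32 {
    fn cache_key(&self, state: &mut dyn Hasher) {
        state.write_u32(*self);
    }
}

impl<T: CacheKey> CacheKey for Vec<T> {
    fn cache_key(&self, state: &mut dyn Hasher) {
        // Include the length so that prefixes key differently from the whole.
        state.write_usize(self.len());
        for item in self {
            item.cache_key(state);
        }
    }
}

/// Compute a stable key for a value, analogous to hashing but via `CacheKey`.
fn compute_key<T: CacheKey>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.cache_key(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = compute_key(&vec![1u32, 2, 3]);
    let b = compute_key(&vec![1u32, 2, 3]);
    // Equal values produce equal cache keys.
    assert_eq!(a, b);
    println!("ok");
}
```

Because `CacheKey` is a distinct trait, a type like a raw pointer can simply not implement it, making non-portable cache keys a compile error rather than a runtime surprise.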
konstin
d1288dc2b1 Remove maturin build from CI (#3322)
`maturin build` generally works whenever `cargo build` works, so it's arguably not worth running it on every CI run.
2023-03-03 16:50:26 +01:00
konstin
3bcffb5bdd Add flake8-pyi PYI033 "Do not use type comments in stubs" (#3302) 2023-03-03 10:45:34 -05:00
Martin Packman
98209be8aa Detect quote style ignoring docstrings (#3306)
Currently, the quote style of the first string in a file is used to autodetect which style to use when rewriting code for fixes. This is an okay heuristic, but the first string in a file is often a docstring rather than a string constant, and it's not uncommon for pre-Black code to use different quoting styles for the two.

For example, in the Google style guide:
https://google.github.io/styleguide/pyguide.html
> Be consistent with your choice of string quote character within a file. Pick ' or " and stick with it. ... Docstrings must use """ regardless.

This branch adjusts the logic to instead skip over any `"""` triple-double-quote string tokens. The default, if there are no single-quoted strings, is still to use double quotes.
2023-03-02 23:59:33 -05:00
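The adjusted heuristic can be sketched as follows. `Quote` and `detect_quote_style` are hypothetical names operating on raw string tokens, standing in for Ruff's actual tokenizer types:

```rust
/// Preferred quote style inferred from a file's string tokens.
#[derive(Debug, PartialEq)]
enum Quote {
    Single,
    Double,
}

fn detect_quote_style(string_tokens: &[&str]) -> Quote {
    for tok in string_tokens {
        // Skip triple-quoted strings: in practice these are docstrings, whose
        // style shouldn't dictate the style of ordinary string constants.
        if tok.starts_with("\"\"\"") || tok.starts_with("'''") {
            continue;
        }
        if tok.starts_with('\'') {
            return Quote::Single;
        }
        if tok.starts_with('"') {
            return Quote::Double;
        }
    }
    // Default: double quotes, matching the pre-existing behavior.
    Quote::Double
}

fn main() {
    // A docstring first, then single-quoted constants: infer Single.
    let tokens = [r#""""Module docstring.""""#, "'a'", "'b'"];
    assert_eq!(detect_quote_style(&tokens), Quote::Single);
    // Only a docstring: fall back to Double.
    assert_eq!(detect_quote_style(&[r#""""Doc.""""#]), Quote::Double);
    println!("ok");
}
```

Under this sketch, a file following the Google style (double-quoted docstrings, single-quoted constants) correctly infers single quotes, whereas the old first-string heuristic would have inferred double quotes from the docstring.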
Charlie Marsh
a03fa93c3a Abort when unable to fix relative imports past module root (#3319) 2023-03-03 04:38:51 +00:00
Charlie Marsh
4de3882088 Upgrade RustPython (#3316) 2023-03-02 22:59:29 -05:00
Charlie Marsh
3a98b68dc0 Always include @classmethod and @staticmethod in decorator lists (#3314) 2023-03-03 03:49:04 +00:00
Carlos Gonçalves
7e291e542d feat(e225,226,227,228): add rules (#3300) 2023-03-02 22:54:45 +00:00
Carlos Gonçalves
6f649d6579 feat(E211): add rule + autofix (#3313) 2023-03-02 22:48:04 +00:00
Yaroslav Chvanov
508bc605a5 Implement property-decorators configuration option for pydocstyle (#3311) 2023-03-02 16:59:56 -05:00
Charlie Marsh
ffdf6e35e6 Treat function type annotations within classes as runtime-required (#3312) 2023-03-02 16:45:26 -05:00
Martin Lehoux
886992c6c2 Replace tuples with type union in isinstance or issubclass calls (#3280) 2023-03-02 15:59:15 -05:00
Charlie Marsh
187104e396 Flag out-of-date docs on CI (#3309) 2023-03-02 15:55:39 -05:00
Charlie Marsh
3ed539d50e Add a CLI flag to force-ignore noqa directives (#3296) 2023-03-01 22:28:13 -05:00
Charlie Marsh
4a70a4c323 Ignore unused imports in ModuleNotFoundError blocks (#3288) 2023-03-01 18:08:37 -05:00
Charlie Marsh
310f13c7db Redirect RUF004 to B026 (#3293) 2023-03-01 13:00:02 -05:00
konstin
2168404fc2 flake8-pyi PYI006 bad version info comparison (#3291)
Implement PYI006 "bad version info comparison"

## What it does

Ensures that only `<` and `>=` are used for version-info comparisons with
`sys.version_info` in `.pyi` files. All other comparisons, such as
`>`, `<=`, and `==`, are banned.

## Why is this bad?

```python
>>> import sys
>>> print(sys.version_info)
sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
>>> print(sys.version_info > (3, 8))
True
>>> print(sys.version_info == (3, 8))
False
>>> print(sys.version_info <= (3, 8))
False
>>> print(sys.version_info in (3, 8))
False
```

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-03-01 18:58:57 +01:00
Charlie Marsh
a032b66c2e Avoid PEP 585 rewrites when builtins are shadowed (#3286) 2023-02-28 23:25:42 +00:00
Charlie Marsh
af5f7dbd83 Avoid pluralization for single --add-noqa result (#3282) 2023-02-28 15:41:18 -05:00
Charlie Marsh
8066607ea3 Add a preliminary tutorial (#3281) 2023-02-28 20:31:27 +00:00
Andy Freeland
0ed9fccce9 Upgrade RustPython (#3277)
Fixes #3207.
2023-02-28 12:21:28 -05:00
Carlos Gonçalves
074a343a63 feat(E251,E252): add rules (#3274) 2023-02-28 12:02:36 -05:00
Charlie Marsh
c7e09b54b0 Use expression span for yoda-conditions fixes (#3276) 2023-02-28 16:59:02 +00:00
Charlie Marsh
67d1f74587 Avoid raising TRY200 violations within new scopes (#3275) 2023-02-28 11:56:29 -05:00
Matthew Lloyd
1c79dff3bd Improve the message for PLW2901: use "outer" and "inner" judiciously (#3263) 2023-02-28 16:33:01 +00:00
Charlie Marsh
f5f09b489b Introduce dedicated CST tokens for other operator kinds (#3267) 2023-02-27 23:54:57 -05:00
Charlie Marsh
061495a9eb Make BoolOp its own located token (#3265) 2023-02-28 03:43:28 +00:00
Charlie Marsh
470e1c1754 Preserve comments on non-defaulted arguments (#3264) 2023-02-27 23:41:40 +00:00
Charlie Marsh
16be691712 Enable more non-panicking formatter tests (#3262) 2023-02-27 18:21:53 -05:00
Charlie Marsh
270015865b Don't flag keyword-based logging format strings (#3261) 2023-02-27 23:11:13 +00:00
Charlie Marsh
ccfa9d5b20 Deduplicate SIM116 errors (#3260) 2023-02-27 18:08:45 -05:00
Charlie Marsh
2261e194a0 Create dedicated Body nodes in the formatter CST (#3223) 2023-02-27 22:55:05 +00:00
Ville Skyttä
cd6413ca09 Match non-lowercase with S105 again (#3258) 2023-02-27 16:38:23 -05:00
Charlie Marsh
c65585e14a Use identifier_range for a few more rules (#3254) 2023-02-27 18:23:33 +00:00
Charlie Marsh
d2a6ed7be6 Upgrade RustPython (#3252) 2023-02-27 18:21:06 +00:00
Charlie Marsh
16e2dae0c2 Handle empty NamedTuple and TypedDict conversions (#3251) 2023-02-27 11:18:34 -05:00
Carlos Gonçalves
e8ba9c9e21 feat(W191): add indentation_contains_tabs (#3249) 2023-02-27 10:36:03 -05:00
Jonathan Plasse
d285f5c90a Run automatically format code blocks with Black (#3191) 2023-02-27 10:14:05 -05:00
298 changed files with 11220 additions and 3201 deletions


@@ -29,7 +29,7 @@ jobs:
- run: ./target/debug/ruff_dev generate-all
- run: git diff --quiet README.md || echo "::error file=README.md::This file is outdated. Run 'cargo dev generate-all'."
- run: git diff --quiet ruff.schema.json || echo "::error file=ruff.schema.json::This file is outdated. Run 'cargo dev generate-all'."
- run: git diff --exit-code -- README.md ruff.schema.json
- run: git diff --exit-code -- README.md ruff.schema.json docs
cargo-fmt:
name: "cargo fmt"
@@ -109,21 +109,6 @@ jobs:
./scripts/add_rule.py --name FirstRule --code TST001 --linter test
- run: cargo check
maturin-build:
name: "maturin build"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@v1
- uses: actions/setup-python@v4
with:
python-version: "3.11"
- run: pip install maturin
- run: maturin build -b bin
- run: python scripts/transform_readme.py --target pypi
typos:
name: "spell check"
runs-on: ubuntu-latest


@@ -5,6 +5,14 @@ repos:
hooks:
- id: validate-pyproject
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.16
hooks:
- id: mdformat
additional_dependencies:
- mdformat-black
- black==23.1.0 # Must be the latest version of Black
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.33.0
hooks:


@@ -20,8 +20,7 @@ the intention of adding a stable public API in the future.
### `select`, `extend-select`, `ignore`, and `extend-ignore` have new semantics ([#2312](https://github.com/charliermarsh/ruff/pull/2312))
Previously, the interplay between `select` and its related options could lead to unexpected
behavior. For example, `ruff --select E501 --ignore ALL` and `ruff --select E501 --extend-ignore
ALL` behaved differently. (See [#2312](https://github.com/charliermarsh/ruff/pull/2312) for more
behavior. For example, `ruff --select E501 --ignore ALL` and `ruff --select E501 --extend-ignore ALL` behaved differently. (See [#2312](https://github.com/charliermarsh/ruff/pull/2312) for more
examples.)
When Ruff determines the enabled rule set, it has to reconcile `select` and `ignore` from a variety
@@ -74,7 +73,7 @@ ruff rule E402 --format json # Works! (And preferred.)
This change is largely backwards compatible -- most users should experience
no change in behavior. However, please note the following exceptions:
* Subcommands will now fail when invoked with unsupported arguments, instead
- Subcommands will now fail when invoked with unsupported arguments, instead
of silently ignoring them. For example, the following will now fail:
```console
@@ -83,16 +82,16 @@ no change in behavior. However, please note the following exceptions:
(the `clean` command doesn't support `--respect-gitignore`.)
* The semantics of `ruff <arg>` have changed slightly when `<arg>` is a valid subcommand.
- The semantics of `ruff <arg>` have changed slightly when `<arg>` is a valid subcommand.
For example, prior to this release, running `ruff rule` would run `ruff` over a file or
directory called `rule`. Now, `ruff rule` would invoke the `rule` subcommand. This should
only impact projects with files or directories named `rule`, `check`, `explain`, `clean`,
or `generate-shell-completion`.
* Scripts that invoke ruff should supply `--` before any positional arguments.
- Scripts that invoke ruff should supply `--` before any positional arguments.
(The semantics of `ruff -- <arg>` have not changed.)
* `--explain` previously treated `--format grouped` as a synonym for `--format text`.
- `--explain` previously treated `--format grouped` as a synonym for `--format text`.
This is no longer supported; instead, use `--format text`.
## 0.0.226


@@ -1,16 +1,16 @@
# Contributor Covenant Code of Conduct
* [Our Pledge](#our-pledge)
* [Our Standards](#our-standards)
* [Enforcement Responsibilities](#enforcement-responsibilities)
* [Scope](#scope)
* [Enforcement](#enforcement)
* [Enforcement Guidelines](#enforcement-guidelines)
* [1. Correction](#1-correction)
* [2. Warning](#2-warning)
* [3. Temporary Ban](#3-temporary-ban)
* [4. Permanent Ban](#4-permanent-ban)
* [Attribution](#attribution)
- [Our Pledge](#our-pledge)
- [Our Standards](#our-standards)
- [Enforcement Responsibilities](#enforcement-responsibilities)
- [Scope](#scope)
- [Enforcement](#enforcement)
- [Enforcement Guidelines](#enforcement-guidelines)
- [1. Correction](#1-correction)
- [2. Warning](#2-warning)
- [3. Temporary Ban](#3-temporary-ban)
- [4. Permanent Ban](#4-permanent-ban)
- [Attribution](#attribution)
## Our Pledge
@@ -29,23 +29,23 @@ diverse, inclusive, and healthy community.
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
- Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
@@ -132,7 +132,7 @@ version 2.0, available [here](https://www.contributor-covenant.org/version/2/0/c
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the [FAQ](https://www.contributor-covenant.org/faq).
Translations are available [here](https://www.contributor-covenant.org/translations).
[homepage]: https://www.contributor-covenant.org


@@ -2,16 +2,16 @@
Welcome! We're happy to have you here. Thank you in advance for your contribution to Ruff.
* [The Basics](#the-basics)
* [Prerequisites](#prerequisites)
* [Development](#development)
* [Project Structure](#project-structure)
* [Example: Adding a new lint rule](#example-adding-a-new-lint-rule)
* [Rule naming convention](#rule-naming-convention)
* [Example: Adding a new configuration option](#example-adding-a-new-configuration-option)
* [MkDocs](#mkdocs)
* [Release Process](#release-process)
* [Benchmarks](#benchmarks)
- [The Basics](#the-basics)
- [Prerequisites](#prerequisites)
- [Development](#development)
- [Project Structure](#project-structure)
- [Example: Adding a new lint rule](#example-adding-a-new-lint-rule)
- [Rule naming convention](#rule-naming-convention)
- [Example: Adding a new configuration option](#example-adding-a-new-configuration-option)
- [MkDocs](#mkdocs)
- [Release Process](#release-process)
- [Benchmarks](#benchmarks)
## The Basics
@@ -91,27 +91,27 @@ The vast majority of the code, including all lint rules, lives in the `ruff` cra
At time of writing, the repository includes the following crates:
* `crates/ruff`: library crate containing all lint rules and the core logic for running them.
* `crates/ruff_cli`: binary crate containing Ruff's command-line interface.
* `crates/ruff_dev`: binary crate containing utilities used in the development of Ruff itself (e.g., `cargo dev generate-all`).
* `crates/ruff_macros`: library crate containing macros used by Ruff.
* `crates/ruff_python`: library crate implementing Python-specific functionality (e.g., lists of standard library modules by version).
* `crates/flake8_to_ruff`: binary crate for generating Ruff configuration from Flake8 configuration.
- `crates/ruff`: library crate containing all lint rules and the core logic for running them.
- `crates/ruff_cli`: binary crate containing Ruff's command-line interface.
- `crates/ruff_dev`: binary crate containing utilities used in the development of Ruff itself (e.g., `cargo dev generate-all`).
- `crates/ruff_macros`: library crate containing macros used by Ruff.
- `crates/ruff_python`: library crate implementing Python-specific functionality (e.g., lists of standard library modules by version).
- `crates/flake8_to_ruff`: binary crate for generating Ruff configuration from Flake8 configuration.
### Example: Adding a new lint rule
At a high level, the steps involved in adding a new lint rule are as follows:
1. Determine a name for the new rule as per our [rule naming convention](#rule-naming-convention).
2. Create a file for your rule (e.g., `crates/ruff/src/rules/flake8_bugbear/rules/abstract_base_class.rs`).
3. In that file, define a violation struct. You can grep for `define_violation!` to see examples.
4. Map the violation struct to a rule code in `crates/ruff/src/registry.rs` (e.g., `E402`).
5. Define the logic for triggering the violation in `crates/ruff/src/checkers/ast.rs` (for AST-based
1. Create a file for your rule (e.g., `crates/ruff/src/rules/flake8_bugbear/rules/abstract_base_class.rs`).
1. In that file, define a violation struct. You can grep for `define_violation!` to see examples.
1. Map the violation struct to a rule code in `crates/ruff/src/registry.rs` (e.g., `E402`).
1. Define the logic for triggering the violation in `crates/ruff/src/checkers/ast.rs` (for AST-based
checks), `crates/ruff/src/checkers/tokens.rs` (for token-based checks), `crates/ruff/src/checkers/lines.rs`
(for text-based checks), or `crates/ruff/src/checkers/filesystem.rs` (for filesystem-based
checks).
6. Add a test fixture.
7. Update the generated files (documentation and generated code).
1. Add a test fixture.
1. Update the generated files (documentation and generated code).
To define the violation, start by creating a dedicated file for your rule under the appropriate
rule linter (e.g., `crates/ruff/src/rules/flake8_bugbear/rules/abstract_base_class.rs`). That file should
@@ -147,9 +147,9 @@ The rule name should make sense when read as "allow _rule-name_" or "allow _rule
This implies that rule names:
* should state the bad thing being checked for
- should state the bad thing being checked for
* should not contain instructions on what you should use instead
- should not contain instructions on what you should use instead
(these belong in the rule documentation and the `autofix_title` for rules that have autofix)
When re-implementing rules from other linters, this convention is given more importance than
@@ -191,13 +191,13 @@ To preview any changes to the documentation locally:
pip install -r docs/requirements.txt
```
2. Generate the MkDocs site with:
1. Generate the MkDocs site with:
```shell
python scripts/generate_mkdocs.py
```
3. Run the development server with:
1. Run the development server with:
```shell
mkdocs serve

Cargo.lock generated

@@ -770,7 +770,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.253"
version = "0.0.254"
dependencies = [
"anyhow",
"clap 4.1.6",
@@ -1080,12 +1080,6 @@ version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fad582f4b9e86b6caa621cabeb0963332d92eea04729ab12892c2533951e6440"
[[package]]
name = "joinery"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72167d68f5fce3b8655487b8038691a3c9984ee769590f93f2a631f4ad64e4f5"
[[package]]
name = "js-sys"
version = "0.3.61"
@@ -1405,7 +1399,6 @@ dependencies = [
"autocfg",
"num-integer",
"num-traits",
"serde",
]
[[package]]
@@ -1415,7 +1408,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "02e0d21255c828d6f128a1e41534206671e8c3ea0c62f32291e808dc82cff17d"
dependencies = [
"num-traits",
"serde",
]
[[package]]
@@ -1447,27 +1439,6 @@ dependencies = [
"libc",
]
[[package]]
name = "num_enum"
version = "0.5.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e0072973714303aa6e3631c7e8e777970cf4bdd25dc4932e41031027b8bcc4e"
dependencies = [
"num_enum_derive",
]
[[package]]
name = "num_enum_derive"
version = "0.5.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0629cbd6b897944899b1f10496d9c4a7ac5878d45fd61bc22e9e79bfbbc29597"
dependencies = [
"proc-macro-crate",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "once_cell"
version = "1.17.1"
@@ -1757,16 +1728,6 @@ dependencies = [
"termtree",
]
[[package]]
name = "proc-macro-crate"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "66618389e4ec1c7afe67d51a9bf34ff9236480f8d51e7489b7d5ab0303c13f34"
dependencies = [
"once_cell",
"toml_edit",
]
[[package]]
name = "proc-macro-error"
version = "1.0.4"
@@ -1981,7 +1942,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.253"
version = "0.0.254"
dependencies = [
"anyhow",
"bisection",
@@ -2015,6 +1976,7 @@ dependencies = [
"path-absolutize",
"regex",
"result-like",
"ruff_cache",
"ruff_macros",
"ruff_python",
"ruff_rustpython",
@@ -2032,15 +1994,25 @@ dependencies = [
"test-case",
"textwrap",
"thiserror",
"titlecase",
"toml",
"wasm-bindgen",
"wasm-bindgen-test",
]
[[package]]
name = "ruff_cache"
version = "0.0.0"
dependencies = [
"filetime",
"globset",
"itertools",
"regex",
"ruff_macros",
]
[[package]]
name = "ruff_cli"
version = "0.0.253"
version = "0.0.254"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2066,6 +2038,7 @@ dependencies = [
"rayon",
"regex",
"ruff",
"ruff_cache",
"rustc-hash",
"serde",
"serde_json",
@@ -2141,6 +2114,7 @@ dependencies = [
"clap 4.1.6",
"insta",
"is-macro",
"itertools",
"once_cell",
"ruff_formatter",
"ruff_python",
@@ -2230,7 +2204,7 @@ dependencies = [
[[package]]
name = "rustpython-ast"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=c4b67896662b16b5699a779c0e52aa0ca2587fec#c4b67896662b16b5699a779c0e52aa0ca2587fec"
source = "git+https://github.com/RustPython/RustPython.git?rev=822f6a9fa6b9142413634858c4c6908f678ce887#822f6a9fa6b9142413634858c4c6908f678ce887"
dependencies = [
"num-bigint",
"rustpython-compiler-core",
@@ -2239,7 +2213,7 @@ dependencies = [
[[package]]
name = "rustpython-common"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=c4b67896662b16b5699a779c0e52aa0ca2587fec#c4b67896662b16b5699a779c0e52aa0ca2587fec"
source = "git+https://github.com/RustPython/RustPython.git?rev=822f6a9fa6b9142413634858c4c6908f678ce887#822f6a9fa6b9142413634858c4c6908f678ce887"
dependencies = [
"ascii",
"bitflags",
@@ -2264,24 +2238,21 @@ dependencies = [
[[package]]
name = "rustpython-compiler-core"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=c4b67896662b16b5699a779c0e52aa0ca2587fec#c4b67896662b16b5699a779c0e52aa0ca2587fec"
source = "git+https://github.com/RustPython/RustPython.git?rev=822f6a9fa6b9142413634858c4c6908f678ce887#822f6a9fa6b9142413634858c4c6908f678ce887"
dependencies = [
"bincode",
"bitflags",
"bstr 0.2.17",
"itertools",
"lz4_flex",
"num-bigint",
"num-complex",
"num_enum",
"serde",
"thiserror",
]
[[package]]
name = "rustpython-parser"
version = "0.2.0"
source = "git+https://github.com/RustPython/RustPython.git?rev=c4b67896662b16b5699a779c0e52aa0ca2587fec#c4b67896662b16b5699a779c0e52aa0ca2587fec"
source = "git+https://github.com/RustPython/RustPython.git?rev=822f6a9fa6b9142413634858c4c6908f678ce887#822f6a9fa6b9142413634858c4c6908f678ce887"
dependencies = [
"ahash",
"anyhow",
@@ -2296,7 +2267,7 @@ dependencies = [
"rustc-hash",
"rustpython-ast",
"rustpython-compiler-core",
"thiserror",
"serde",
"tiny-keccak",
"unic-emoji-char",
"unic-ucd-ident",
@@ -2751,17 +2722,6 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "titlecase"
version = "2.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38397a8cdb017cfeb48bf6c154d6de975ac69ffeed35980fde199d2ee0842042"
dependencies = [
"joinery",
"lazy_static",
"regex",
]
[[package]]
name = "toml"
version = "0.6.0"
@@ -2947,9 +2907,11 @@ checksum = "f962df74c8c05a667b5ee8bcf162993134c104e96440b663c8daa176dc772d8c"
[[package]]
name = "unicode_names2"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "029df4cc8238cefc911704ff8fa210853a0f3bce2694d8f51181dd41ee0f3301"
version = "0.6.0"
source = "git+https://github.com/youknowone/unicode_names2.git?rev=4ce16aa85cbcdd9cc830410f1a72ef9a235f2fde#4ce16aa85cbcdd9cc830410f1a72ef9a235f2fde"
dependencies = [
"phf",
]
[[package]]
name = "untrusted"


@@ -14,8 +14,8 @@ libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "80e4c1399f95e
once_cell = { version = "1.16.0" }
regex = { version = "1.6.0" }
rustc-hash = { version = "1.1.0" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "c4b67896662b16b5699a779c0e52aa0ca2587fec" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "c4b67896662b16b5699a779c0e52aa0ca2587fec" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "822f6a9fa6b9142413634858c4c6908f678ce887" }
rustpython-parser = { features = ["lalrpop", "serde"], git = "https://github.com/RustPython/RustPython.git", rev = "822f6a9fa6b9142413634858c4c6908f678ce887" }
schemars = { version = "0.8.11" }
serde = { version = "1.0.147", features = ["derive"] }
serde_json = { version = "1.0.87" }

README.md

@@ -24,17 +24,17 @@ An extremely fast Python linter, written in Rust.
<i>Linting the CPython codebase from scratch.</i>
</p>
* ⚡️ 10-100x faster than existing linters
* 🐍 Installable via `pip`
* 🛠️ `pyproject.toml` support
* 🤝 Python 3.11 compatibility
* 📦 Built-in caching, to avoid re-analyzing unchanged files
* 🔧 Autofix support, for automatic error correction (e.g., automatically remove unused imports)
* 📏 Over [500 built-in rules](https://beta.ruff.rs/docs/rules/) (and growing)
* ⚖️ [Near-parity](https://beta.ruff.rs/docs/faq/#how-does-ruff-compare-to-flake8) with the built-in Flake8 rule set
* 🔌 Native re-implementations of dozens of Flake8 plugins, like flake8-bugbear
* ⌨️ First-party editor integrations for [VS Code](https://github.com/charliermarsh/ruff-vscode) and [more](https://github.com/charliermarsh/ruff-lsp)
* 🌎 Monorepo-friendly, with [hierarchical and cascading configuration](https://beta.ruff.rs/docs/configuration/#pyprojecttoml-discovery)
- ⚡️ 10-100x faster than existing linters
- 🐍 Installable via `pip`
- 🛠️ `pyproject.toml` support
- 🤝 Python 3.11 compatibility
- 📦 Built-in caching, to avoid re-analyzing unchanged files
- 🔧 Autofix support, for automatic error correction (e.g., automatically remove unused imports)
- 📏 Over [500 built-in rules](https://beta.ruff.rs/docs/rules/)
- ⚖️ [Near-parity](https://beta.ruff.rs/docs/faq/#how-does-ruff-compare-to-flake8) with the built-in Flake8 rule set
- 🔌 Native re-implementations of dozens of Flake8 plugins, like flake8-bugbear
- ⌨️ First-party editor integrations for [VS Code](https://github.com/charliermarsh/ruff-vscode) and [more](https://github.com/charliermarsh/ruff-lsp)
- 🌎 Monorepo-friendly, with [hierarchical and cascading configuration](https://beta.ruff.rs/docs/configuration/#pyprojecttoml-discovery)
Ruff aims to be orders of magnitude faster than alternative tools while integrating more
functionality behind a single, common interface.
@@ -47,11 +47,11 @@ all while executing tens or hundreds of times faster than any individual tool.
Ruff is extremely actively developed and used in major open-source projects like:
* [pandas](https://github.com/pandas-dev/pandas)
* [FastAPI](https://github.com/tiangolo/fastapi)
* [Transformers (Hugging Face)](https://github.com/huggingface/transformers)
* [Apache Airflow](https://github.com/apache/airflow)
* [SciPy](https://github.com/scipy/scipy)
- [pandas](https://github.com/pandas-dev/pandas)
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Transformers (Hugging Face)](https://github.com/huggingface/transformers)
- [Apache Airflow](https://github.com/apache/airflow)
- [SciPy](https://github.com/scipy/scipy)
...and many more.
@@ -98,13 +98,13 @@ developer of [Zulip](https://github.com/zulip/zulip):
For more, see the [documentation](https://beta.ruff.rs/docs/).
1. [Getting Started](#getting-started)
2. [Configuration](#configuration)
3. [Rules](#rules)
4. [Contributing](#contributing)
5. [Support](#support)
6. [Acknowledgements](#acknowledgements)
7. [Who's Using Ruff?](#whos-using-ruff)
8. [License](#license)
1. [Configuration](#configuration)
1. [Rules](#rules)
1. [Contributing](#contributing)
1. [Support](#support)
1. [Acknowledgements](#acknowledgements)
1. [Who's Using Ruff?](#whos-using-ruff)
1. [License](#license)
## Getting Started
@@ -137,7 +137,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.253'
rev: 'v0.0.254'
hooks:
- id: ruff
```
@@ -186,7 +186,6 @@ exclude = [
"node_modules",
"venv",
]
per-file-ignores = {}
# Same as Black.
line-length = 88
@@ -225,6 +224,9 @@ and a [subset](https://beta.ruff.rs/docs/rules/#error-e) of the `E` category, om
stylistic rules made obsolete by the use of an autoformatter, like
[Black](https://github.com/psf/black).
If you're just getting started with Ruff, **the default rule set is a great place to start**: it
catches a wide variety of common errors (like unused imports) with zero configuration.
<!-- End section: Rules -->
For a complete enumeration of the supported rules, see [_Rules_](https://beta.ruff.rs/docs/rules/).
@@ -269,41 +271,41 @@ Ruff is released under the MIT license.
Ruff is used in a number of major open-source projects, including:
* [pandas](https://github.com/pandas-dev/pandas)
* [FastAPI](https://github.com/tiangolo/fastapi)
* [Transformers (Hugging Face)](https://github.com/huggingface/transformers)
* [Diffusers (Hugging Face)](https://github.com/huggingface/diffusers)
* [Apache Airflow](https://github.com/apache/airflow)
* [SciPy](https://github.com/scipy/scipy)
* [Zulip](https://github.com/zulip/zulip)
* [Bokeh](https://github.com/bokeh/bokeh)
* [Pydantic](https://github.com/pydantic/pydantic)
* [Dagster](https://github.com/dagster-io/dagster)
* [Dagger](https://github.com/dagger/dagger)
* [Sphinx](https://github.com/sphinx-doc/sphinx)
* [Hatch](https://github.com/pypa/hatch)
* [PDM](https://github.com/pdm-project/pdm)
* [Jupyter](https://github.com/jupyter-server/jupyter_server)
* [Great Expectations](https://github.com/great-expectations/great_expectations)
* [ONNX](https://github.com/onnx/onnx)
* [Polars](https://github.com/pola-rs/polars)
* [Ibis](https://github.com/ibis-project/ibis)
* [Synapse (Matrix)](https://github.com/matrix-org/synapse)
* [SnowCLI (Snowflake)](https://github.com/Snowflake-Labs/snowcli)
* [Dispatch (Netflix)](https://github.com/Netflix/dispatch)
* [Saleor](https://github.com/saleor/saleor)
* [Pynecone](https://github.com/pynecone-io/pynecone)
* [OpenBB](https://github.com/OpenBB-finance/OpenBBTerminal)
* [Home Assistant](https://github.com/home-assistant/core)
* [Pylint](https://github.com/PyCQA/pylint)
* [Cryptography (PyCA)](https://github.com/pyca/cryptography)
* [cibuildwheel (PyPA)](https://github.com/pypa/cibuildwheel)
* [build (PyPA)](https://github.com/pypa/build)
* [Babel](https://github.com/python-babel/babel)
* [featuretools](https://github.com/alteryx/featuretools)
* [meson-python](https://github.com/mesonbuild/meson-python)
* [ZenML](https://github.com/zenml-io/zenml)
* [delta-rs](https://github.com/delta-io/delta-rs)
## License


@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.253"
version = "0.0.254"
edition = { workspace = true }
rust-version = { workspace = true }


@@ -84,7 +84,7 @@ flake8-to-ruff path/to/.flake8 --plugin flake8-builtins --plugin flake8-quotes
1. Ruff only supports a subset of the Flake8 configuration options. `flake8-to-ruff` will warn on and
ignore unsupported options in the `.flake8` file (or equivalent). (Similarly, Ruff has a few
configuration options that don't exist in Flake8.)
2. Ruff will omit any rule codes that are unimplemented or unsupported by Ruff, including rule
1. Ruff will omit any rule codes that are unimplemented or unsupported by Ruff, including rule
codes from unsupported plugins. (See the [Ruff README](https://github.com/charliermarsh/ruff#user-content-how-does-ruff-compare-to-flake8)
for the complete list of supported plugins.)


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.0.253"
version = "0.0.254"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = { workspace = true }
rust-version = { workspace = true }
@@ -19,6 +19,7 @@ doctest = false
ruff_macros = { path = "../ruff_macros" }
ruff_python = { path = "../ruff_python" }
ruff_rustpython = { path = "../ruff_rustpython" }
ruff_cache = { path = "../ruff_cache" }
anyhow = { workspace = true }
bisection = { version = "0.1.0" }
@@ -58,7 +59,6 @@ strum = { workspace = true }
strum_macros = { workspace = true }
textwrap = { version = "0.16.0" }
thiserror = { version = "1.0" }
titlecase = { version = "2.2.1" }
toml = { workspace = true }
# https://docs.rs/getrandom/0.2.7/getrandom/#webassembly-support


@@ -19,6 +19,8 @@ token = "s3cr3t"
secrete = "s3cr3t"
safe = password = "s3cr3t"
password = safe = "s3cr3t"
PASSWORD = "s3cr3t"
PassWord = "s3cr3t"
d["password"] = "s3cr3t"
d["pass"] = "s3cr3t"
@@ -68,6 +70,8 @@ passed_msg = "You have passed!"
compassion = "Please don't match!"
impassable = "You shall not pass!"
passwords = ""
PASSWORDS = ""
passphrases = ""
PassPhrases = ""
tokens = ""
secrets = ""


@@ -0,0 +1,14 @@
import sys
from sys import version_info as python_version
if sys.version_info < (3, 9): ... # OK
if sys.version_info >= (3, 9): ... # OK
if sys.version_info == (3, 9): ... # OK
if sys.version_info <= (3, 10): ... # OK
if sys.version_info > (3, 10): ... # OK
if python_version > (3, 10): ... # OK


@@ -0,0 +1,18 @@
import sys
from sys import version_info as python_version
if sys.version_info < (3, 9): ... # OK
if sys.version_info >= (3, 9): ... # OK
if sys.version_info == (3, 9): ... # OK
if sys.version_info == (3, 9): ... # Error: PYI006 Use only `<` and `>=` for version info comparisons
if sys.version_info <= (3, 10): ... # Error: PYI006 Use only `<` and `>=` for version info comparisons
if sys.version_info <= (3, 10): ... # Error: PYI006 Use only `<` and `>=` for version info comparisons
if sys.version_info > (3, 10): ... # Error: PYI006 Use only `<` and `>=` for version info comparisons
if python_version > (3, 10): ... # Error: PYI006 Use only `<` and `>=` for version info comparisons
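A hedged aside on the PYI006 rationale exercised above (plain tuples standing in for `sys.version_info`): comparisons other than `<` and `>=` are ambiguous, because any patch release compares greater than a bare `(major, minor)` pair under Python's lexicographic tuple comparison.

```python
# Why flake8-pyi's PYI006 allows only `<` and `>=`: a patch release such as
# 3.8.1 already compares greater than the bare pair (3, 8), so `>` does not
# mean "the next minor version or later".
version_3_8_1 = (3, 8, 1)

# Intended as "only on Python 3.9+", but already true on 3.8.1:
assert version_3_8_1 > (3, 8)

# The unambiguous spelling pins the minor-version boundary with `>=`:
assert not version_3_8_1 >= (3, 9)
```

The same reasoning is why the fixtures flag `<=` and `==` alongside `>`.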


@@ -0,0 +1,33 @@
# From https://github.com/PyCQA/flake8-pyi/blob/4212bec43dbc4020a59b90e2957c9488575e57ba/tests/type_comments.pyi
from collections.abc import Sequence
from typing import TypeAlias
A: TypeAlias = None # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
B: TypeAlias = None # type: str # And here's an extra comment about why it's that type # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
C: TypeAlias = None #type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
D: TypeAlias = None # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
E: TypeAlias = None# type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
F: TypeAlias = None#type:int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
def func(
arg1, # type: dict[str, int] # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
arg2 # type: Sequence[bytes] # And here's some more info about this arg # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
): ...
class Foo:
Attr: TypeAlias = None # type: set[str] # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
G: TypeAlias = None # type: ignore
H: TypeAlias = None # type: ignore[attr-defined]
I: TypeAlias = None #type: ignore
J: TypeAlias = None # type: ignore
K: TypeAlias = None# type: ignore
L: TypeAlias = None#type:ignore
# Whole line commented out # type: int
M: TypeAlias = None # type: can't parse me!
class Bar:
N: TypeAlias = None # type: can't parse me either!
# This whole line is commented out and indented # type: str


@@ -0,0 +1,33 @@
# From https://github.com/PyCQA/flake8-pyi/blob/4212bec43dbc4020a59b90e2957c9488575e57ba/tests/type_comments.pyi
from collections.abc import Sequence
from typing import TypeAlias
A: TypeAlias = None # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
B: TypeAlias = None # type: str # And here's an extra comment about why it's that type # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
C: TypeAlias = None #type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
D: TypeAlias = None # type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
E: TypeAlias = None# type: int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
F: TypeAlias = None#type:int # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
def func(
arg1, # type: dict[str, int] # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
arg2 # type: Sequence[bytes] # And here's some more info about this arg # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
): ...
class Foo:
Attr: TypeAlias = None # type: set[str] # Y033 Do not use type comments in stubs (e.g. use "x: int" instead of "x = ... # type: int")
G: TypeAlias = None # type: ignore
H: TypeAlias = None # type: ignore[attr-defined]
I: TypeAlias = None #type: ignore
J: TypeAlias = None # type: ignore
K: TypeAlias = None# type: ignore
L: TypeAlias = None#type:ignore
# Whole line commented out # type: int
M: TypeAlias = None # type: can't parse me!
class Bar:
N: TypeAlias = None # type: can't parse me either!
# This whole line is commented out and indented # type: str


@@ -293,3 +293,30 @@ def x(y):
def foo(baz: str) -> str:
return baz
def end_of_statement():
def example():
if True:
return ""
def example():
if True:
return ""
def example():
if True:
return "" # type: ignore
def example():
if True:
return "" ;
def example():
if True:
return "" \
; # type: ignore


@@ -74,3 +74,13 @@ elif b == b"two":
return 2
elif a == b"three":
return 3
# SIM116
if func_name == "create":
return "A"
elif func_name == "modify":
return "M"
elif func_name == "remove":
return "D"
elif func_name == "move":
return "MV"


@@ -11,6 +11,7 @@ YODA > age # SIM300
YODA >= age # SIM300
JediOrder.YODA == age # SIM300
0 < (number - 100) # SIM300
SomeClass().settings.SOME_CONSTANT_VALUE > (60 * 60) # SIM300
# OK
compare == "yoda"


@@ -1,7 +1,8 @@
from typing import TYPE_CHECKING, Any, ClassVar
import attrs
from ....import unknown
from ..protocol import commands, definitions, responses
from ..server import example
from .. import server


@@ -0,0 +1,14 @@
#: E211
spam (1)
#: E211 E211
dict ['key'] = list [index]
#: E211
dict['key'] ['subkey'] = list[index]
#: Okay
spam(1)
dict['key'] = list[index]
# This is not prohibited by PEP8, but avoid it.
class Foo (Bar, Baz):
pass


@@ -0,0 +1,50 @@
#: E251 E251
def foo(bar = False):
'''Test function with an error in declaration'''
pass
#: E251
foo(bar= True)
#: E251
foo(bar =True)
#: E251 E251
foo(bar = True)
#: E251
y = bar(root= "sdasd")
#: E251:2:29
parser.add_argument('--long-option',
default=
"/rather/long/filesystem/path/here/blah/blah/blah")
#: E251:1:45
parser.add_argument('--long-option', default
="/rather/long/filesystem/path/here/blah/blah/blah")
#: E251:3:8 E251:3:10
foo(True,
baz=(1, 2),
biz = 'foo'
)
#: Okay
foo(bar=(1 == 1))
foo(bar=(1 != 1))
foo(bar=(1 >= 1))
foo(bar=(1 <= 1))
(options, args) = parser.parse_args()
d[type(None)] = _deepcopy_atomic
# Annotated Function Definitions
#: Okay
def munge(input: AnyStr, sep: AnyStr = None, limit=1000,
extra: Union[str, dict] = None) -> AnyStr:
pass
#: Okay
async def add(a: int = 0, b: int = 0) -> int:
return a + b
# Previously E251 four times
#: E271:1:6
async def add(a: int = 0, b: int = 0) -> int:
return a + b
#: E252:1:15 E252:1:16 E252:1:27 E252:1:36
def add(a: int=0, b: int =0, c: int= 0) -> int:
return a + b + c
#: Okay
def add(a: int = _default(name='f')):
return a


@@ -0,0 +1,145 @@
#: W191
if False:
print # indented with 1 tab
#:
#: W191
y = x == 2 \
or x == 3
#: E101 W191 W504
if (
x == (
3
) or
y == 4):
pass
#: E101 W191
if x == 2 \
or y > 1 \
or x == 3:
pass
#: E101 W191
if x == 2 \
or y > 1 \
or x == 3:
pass
#:
#: E101 W191 W504
if (foo == bar and
baz == bop):
pass
#: E101 W191 W504
if (
foo == bar and
baz == bop
):
pass
#:
#: E101 E101 W191 W191
if start[1] > end_col and not (
over_indent == 4 and indent_next):
return (0, "E121 continuation line over-"
"indented for visual indent")
#:
#: E101 W191
def long_function_name(
var_one, var_two, var_three,
var_four):
print(var_one)
#: E101 W191 W504
if ((row < 0 or self.moduleCount <= row or
col < 0 or self.moduleCount <= col)):
raise Exception("%s,%s - %s" % (row, col, self.moduleCount))
#: E101 E101 E101 E101 W191 W191 W191 W191 W191 W191
if bar:
return (
start, 'E121 lines starting with a '
'closing bracket should be indented '
"to match that of the opening "
"bracket's line"
)
#
#: E101 W191 W504
# you want vertical alignment, so use a parens
if ((foo.bar("baz") and
foo.bar("bop")
)):
print "yes"
#: E101 W191 W504
# also ok, but starting to look like LISP
if ((foo.bar("baz") and
foo.bar("bop"))):
print "yes"
#: E101 W191 W504
if (a == 2 or
b == "abc def ghi"
"jkl mno"):
return True
#: E101 W191 W504
if (a == 2 or
b == """abc def ghi
jkl mno"""):
return True
#: W191:2:1 W191:3:1 E101:3:2
if length > options.max_line_length:
return options.max_line_length, \
"E501 line too long (%d characters)" % length
#
#: E101 W191 W191 W504
if os.path.exists(os.path.join(path, PEP8_BIN)):
cmd = ([os.path.join(path, PEP8_BIN)] +
self._pep8_options(targetfile))
#: W191
'''
multiline string with tab in it'''
#: E101 W191
'''multiline string
with tabs
and spaces
'''
#: Okay
'''sometimes, you just need to go nuts in a multiline string
and allow all sorts of crap
like mixed tabs and spaces
or trailing whitespace
or long long long long long long long long long long long long long long long long long lines
''' # nopep8
#: Okay
'''this one
will get no warning
even though the noqa comment is not immediately after the string
''' + foo # noqa
#
#: E101 W191
if foo is None and bar is "bop" and \
blah == 'yeah':
blah = 'yeahnah'
#
#: W191 W191 W191
if True:
foo(
1,
2)
#: W191 W191 W191 W191 W191
def test_keys(self):
"""areas.json - All regions are accounted for."""
expected = set([
u'Norrbotten',
u'V\xe4sterbotten',
])
#: W191
x = [
'abc'
]
#:


@@ -2,6 +2,8 @@
from functools import cached_property
from gi.repository import GObject
# Bad examples
def bad_liouiwnlkjl():
@@ -76,6 +78,11 @@ class Thingy:
"""This property method docstring does not need to be written in imperative mood."""
return self._beep
@GObject.Property
def good_custom_property(self):
"""This property method docstring does not need to be written in imperative mood."""
return self._beep
@cached_property
def good_cached_property(self):
"""This property method docstring does not need to be written in imperative mood."""


@@ -0,0 +1,10 @@
"""Test: imports within `ModuleNotFoundError` handlers."""
def check_orjson():
try:
import orjson
return True
except ModuleNotFoundError:
return False
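The fixture above is the availability-probe pattern that the new `ModuleNotFoundError` handling treats as a synthetic usage. A runnable sketch of the same pattern using `__import__` and a stdlib module, since `orjson` may not be installed:

```python
def check_module(name):
    # Import purely to probe availability; the binding itself goes unused,
    # which is why the checker now records a synthetic usage for it.
    try:
        __import__(name)
        return True
    except ModuleNotFoundError:
        return False

assert check_module("json") is True
assert check_module("no_such_module_anywhere") is False
```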


@@ -0,0 +1,23 @@
"""Test case: strings used within calls within type annotations."""
from typing import Callable
import bpy
from mypy_extensions import VarArg
from foo import Bar
class LightShow(bpy.types.Operator):
label = "Create Character"
name = "lightshow.letter_creation"
filepath: bpy.props.StringProperty(subtype="FILE_PATH") # OK
def f(x: Callable[[VarArg("os")], None]): # F821
pass
f(Callable[["Bar"], None])
f(Callable[["Baz"], None])


@@ -0,0 +1,25 @@
"""Test case: strings used within calls within type annotations."""
from __future__ import annotations
from typing import Callable
import bpy
from mypy_extensions import VarArg
from foo import Bar
class LightShow(bpy.types.Operator):
label = "Create Character"
name = "lightshow.letter_creation"
filepath: bpy.props.StringProperty(subtype="FILE_PATH") # OK
def f(x: Callable[[VarArg("os")], None]): # F821
pass
f(Callable[["Bar"], None])
f(Callable[["Baz"], None])


@@ -16,6 +16,9 @@ logging.error("Example log %s, %s", "foo", "bar", "baz", *args)
# do not handle calls with **kwargs
logging.error("Example log %s, %s", "foo", "bar", "baz", **kwargs)
# do not handle keyword arguments
logging.error("%(objects)d modifications: %(modifications)d errors: %(errors)d")
import warning
warning.warning("Hello %s %s", "World!")


@@ -12,6 +12,9 @@ logging.error("Example log %s, %s", "foo", "bar", "baz", *args)
# do not handle calls with **kwargs
logging.error("Example log %s, %s", "foo", "bar", "baz", **kwargs)
# do not handle keyword arguments
logging.error("%(objects)d modifications: %(modifications)d errors: %(errors)d", {"objects": 1, "modifications": 1, "errors": 1})
import warning
warning.warning("Hello %s", "World!", "again")
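These fixtures exercise mismatches between `%`-style placeholders and the supplied arguments. A minimal sketch of the deferred formatting that `logging` performs at render time, which is what the rule validates statically:

```python
import logging

# logging defers "msg % args" until the record is rendered, so a placeholder/
# argument mismatch only surfaces when the message is actually formatted.
record = logging.LogRecord(
    "demo", logging.ERROR, __file__, 1, "Hello %s %s", ("World", "again"), None
)
assert record.getMessage() == "Hello World again"
```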


@@ -25,22 +25,7 @@ avoid-escape = true
max-complexity = 10
[tool.ruff.pep8-naming]
ignore-names = [
"setUp",
"tearDown",
"setUpClass",
"tearDownClass",
"setUpModule",
"tearDownModule",
"asyncSetUp",
"asyncTearDown",
"setUpTestData",
"failureException",
"longMessage",
"maxDiff",
]
classmethod-decorators = ["classmethod", "pydantic.validator"]
staticmethod-decorators = ["staticmethod"]
classmethod-decorators = ["pydantic.validator"]
[tool.ruff.flake8-tidy-imports]
ban-relative-imports = "parents"


@@ -28,3 +28,10 @@ def f(x: IList[str]) -> None:
def f(x: "List[str]") -> None:
...
list = "abc"
def f(x: List[str]) -> None:
...


@@ -35,3 +35,9 @@ MyType = TypedDict("MyType", {"in": int, "x-y": int})
# unpacking (OK)
c = {"c": float}
MyType = TypedDict("MyType", {"a": int, "b": str, **c})
# Empty dict literal
MyType = TypedDict("MyType", {})
# Empty dict call
MyType = TypedDict("MyType", dict())


@@ -23,3 +23,9 @@ MyType = NamedTuple(
# invalid identifiers (OK)
MyType = NamedTuple("MyType", [("x-y", int), ("b", tuple[str, ...])])
# no fields
MyType = typing.NamedTuple("MyType")
# empty fields
MyType = typing.NamedTuple("MyType", [])


@@ -0,0 +1,7 @@
isinstance(1, (int, float)) # UP038
issubclass("yes", (int, float, str)) # UP038
isinstance(1, int) # OK
issubclass("yes", int) # OK
isinstance(1, int | float) # OK
issubclass("yes", int | str) # OK


@@ -1,15 +0,0 @@
def f(*args, **kwargs):
pass
a = (1, 2)
b = (3, 4)
c = (5, 6)
d = (7, 8)
f(a, b)
f(a, kw=b)
f(*a, kw=b)
f(kw=a, *b)
f(kw=a, *b, *c)
f(*a, kw=b, *c, kw1=d)


@@ -30,3 +30,11 @@ def good():
raise e # This is verbose violation, shouldn't trigger no cause
except Exception:
raise # Just re-raising don't need 'from'
def good():
try:
from mod import f
except ImportError:
def f():
raise MyException() # Raising within a new scope is fine
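For the raise-without-`from` cases above, a minimal sketch of the chaining the rule encourages, so the original exception survives as `__cause__` (`AppError` is an illustrative name, not from the fixture):

```python
class AppError(Exception):
    pass

def parse(value):
    try:
        return int(value)
    except ValueError as err:
        # `raise ... from err` preserves the original exception as __cause__
        # instead of silently discarding it.
        raise AppError(f"bad value: {value!r}") from err

try:
    parse("not-a-number")
except AppError as exc:
    assert isinstance(exc.__cause__, ValueError)
```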


@@ -32,9 +32,10 @@ pub fn classify(
checker
.resolve_call_path(map_callable(expr))
.map_or(false, |call_path| {
staticmethod_decorators
.iter()
.any(|decorator| call_path == to_call_path(decorator))
call_path.as_slice() == ["", "staticmethod"]
|| staticmethod_decorators
.iter()
.any(|decorator| call_path == to_call_path(decorator))
})
}) {
FunctionType::StaticMethod
@@ -51,6 +52,7 @@ pub fn classify(
|| decorator_list.iter().any(|expr| {
// The method is decorated with a class method decorator (like `@classmethod`).
checker.resolve_call_path(map_callable(expr)).map_or(false, |call_path| {
call_path.as_slice() == ["", "classmethod"] ||
classmethod_decorators
.iter()
.any(|decorator| call_path == to_call_path(decorator))


@@ -1,5 +1,6 @@
use std::path::Path;
use bitflags::bitflags;
use itertools::Itertools;
use log::error;
use once_cell::sync::Lazy;
@@ -575,33 +576,32 @@ pub fn has_non_none_keyword(keywords: &[Keyword], keyword: &str) -> bool {
})
}
bitflags! {
pub struct Exceptions: u32 {
const NAME_ERROR = 0b0000_0001;
const MODULE_NOT_FOUND_ERROR = 0b0000_0010;
}
}
/// Extract the names of all handled exceptions.
pub fn extract_handler_names(handlers: &[Excepthandler]) -> Vec<CallPath> {
// TODO(charlie): Use `resolve_call_path` to avoid false positives for
// overridden builtins.
let mut handler_names = vec![];
pub fn extract_handled_exceptions(handlers: &[Excepthandler]) -> Vec<&Expr> {
let mut handled_exceptions = Vec::new();
for handler in handlers {
match &handler.node {
ExcepthandlerKind::ExceptHandler { type_, .. } => {
if let Some(type_) = type_ {
if let ExprKind::Tuple { elts, .. } = &type_.node {
for type_ in elts {
let call_path = collect_call_path(type_);
if !call_path.is_empty() {
handler_names.push(call_path);
}
handled_exceptions.push(type_);
}
} else {
let call_path = collect_call_path(type_);
if !call_path.is_empty() {
handler_names.push(call_path);
}
handled_exceptions.push(type_);
}
}
}
}
}
handler_names
handled_exceptions
}
/// Return the set of all bound argument names.
@@ -765,7 +765,7 @@ pub fn from_relative_import<'a>(module: &'a [String], name: &'a str) -> CallPath
call_path
}
/// A [`Visitor`] that collects all return statements in a function or method.
/// A [`Visitor`] that collects all `return` statements in a function or method.
#[derive(Default)]
pub struct ReturnStatementVisitor<'a> {
pub returns: Vec<Option<&'a Expr>>,
@@ -786,6 +786,48 @@ where
}
}
/// A [`Visitor`] that collects all `raise` statements in a function or method.
#[derive(Default)]
pub struct RaiseStatementVisitor<'a> {
pub raises: Vec<(Range, Option<&'a Expr>, Option<&'a Expr>)>,
}
impl<'a, 'b> Visitor<'b> for RaiseStatementVisitor<'b>
where
'b: 'a,
{
fn visit_stmt(&mut self, stmt: &'b Stmt) {
match &stmt.node {
StmtKind::Raise { exc, cause } => {
self.raises
.push((Range::from_located(stmt), exc.as_deref(), cause.as_deref()));
}
StmtKind::ClassDef { .. }
| StmtKind::FunctionDef { .. }
| StmtKind::AsyncFunctionDef { .. }
| StmtKind::Try { .. }
| StmtKind::TryStar { .. } => {}
StmtKind::If { body, orelse, .. } => {
visitor::walk_body(self, body);
visitor::walk_body(self, orelse);
}
StmtKind::While { body, .. }
| StmtKind::With { body, .. }
| StmtKind::AsyncWith { body, .. }
| StmtKind::For { body, .. }
| StmtKind::AsyncFor { body, .. } => {
visitor::walk_body(self, body);
}
StmtKind::Match { cases, .. } => {
for case in cases {
visitor::walk_body(self, &case.body);
}
}
_ => {}
}
}
}
/// Convert a location within a file (relative to `base`) to an absolute
/// position.
pub fn to_absolute(relative: Location, base: Location) -> Location {
@@ -1071,6 +1113,32 @@ pub fn first_colon_range(range: Range, locator: &Locator) -> Option<Range> {
range
}
/// Given a statement, find its "logical end".
///
/// For example: the statement could be following by a trailing semicolon, by an end-of-line
/// comment, or by any number of continuation lines (and then by a comment, and so on).
pub fn end_of_statement(stmt: &Stmt, locator: &Locator) -> Location {
let contents = locator.skip(stmt.end_location.unwrap());
// End-of-file, so just return the end of the statement.
if contents.is_empty() {
return stmt.end_location.unwrap();
}
// Otherwise, find the end of the last line that's "part of" the statement.
for (lineno, line) in contents.lines().enumerate() {
if line.ends_with('\\') {
continue;
}
return to_absolute(
Location::new(lineno + 1, line.chars().count()),
stmt.end_location.unwrap(),
);
}
unreachable!("Expected to find end-of-statement")
}
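A hedged Python sketch of the `end_of_statement` search above, reduced to its core loop: consume continuation lines ending in a backslash, then stop at the first line that completes the statement. Offsets here are simplified to 0-based `(line, column)` pairs relative to the statement's reported end.

```python
def end_of_statement_offset(rest_of_file):
    # `rest_of_file` is the text that follows the statement's reported end.
    if not rest_of_file:
        return (0, 0)
    for lineno, line in enumerate(rest_of_file.split("\n")):
        if line.endswith("\\"):
            # Explicit continuation: the statement keeps going.
            continue
        return (lineno, len(line))
    raise AssertionError("expected to find end-of-statement")

# A trailing comment on the same line:
assert end_of_statement_offset("  # comment") == (0, 11)
# A backslash continuation followed by a semicolon and comment:
assert end_of_statement_offset(" \\\n; # type: ignore") == (1, 16)
```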
/// Return the `Range` of the first `Elif` or `Else` token in an `If` statement.
pub fn elif_else_range(stmt: &Stmt, locator: &Locator) -> Option<Range> {
let StmtKind::If { body, orelse, .. } = &stmt.node else {


@@ -19,7 +19,8 @@ use rustpython_parser::ast::{
use smallvec::smallvec;
use crate::ast::helpers::{
binding_range, collect_call_path, extract_handler_names, from_relative_import, to_module_path,
binding_range, collect_call_path, extract_handled_exceptions, from_relative_import,
to_module_path, Exceptions,
};
use crate::ast::operations::{extract_all_names, AllNamesFlags};
use crate::ast::relocate::relocate_expr;
@@ -109,7 +110,7 @@ pub struct Checker<'a> {
pub(crate) seen_import_boundary: bool,
futures_allowed: bool,
annotations_future_enabled: bool,
except_handlers: Vec<Vec<Vec<&'a str>>>,
handled_exceptions: Vec<Exceptions>,
// Check-specific state.
pub(crate) flake8_bugbear_seen: Vec<&'a Expr>,
}
@@ -178,7 +179,7 @@ impl<'a> Checker<'a> {
seen_import_boundary: false,
futures_allowed: true,
annotations_future_enabled: is_interface_definition,
except_handlers: vec![],
handled_exceptions: vec![],
// Check-specific state.
flake8_bugbear_seen: vec![],
}
@@ -233,6 +234,14 @@ impl<'a> Checker<'a> {
.map_or(false, |binding| binding.kind.is_builtin())
}
/// Resolves the call path, e.g. if you have a file
///
/// ```python
/// from sys import version_info as python_version
/// print(python_version)
/// ```
///
/// then `python_version` from the print statement will resolve to `sys.version_info`.
pub fn resolve_call_path<'b>(&'a self, value: &'b Expr) -> Option<CallPath<'a>>
where
'b: 'a,
@@ -301,6 +310,26 @@ impl<'a> Checker<'a> {
}
}
/// Visit an [`Expr`], and treat it as a type definition.
macro_rules! visit_type_definition {
($self:ident, $expr:expr) => {{
let prev_in_type_definition = $self.in_type_definition;
$self.in_type_definition = true;
$self.visit_expr($expr);
$self.in_type_definition = prev_in_type_definition;
}};
}
/// Visit an [`Expr`], and treat it as _not_ a type definition.
macro_rules! visit_non_type_definition {
($self:ident, $expr:expr) => {{
let prev_in_type_definition = $self.in_type_definition;
$self.in_type_definition = false;
$self.visit_expr($expr);
$self.in_type_definition = prev_in_type_definition;
}};
}
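A Python analogue of the save/restore pattern these two macros implement, sketched as a context manager: flip the flag for the duration of one visit, then restore the previous value so nested visits compose correctly.

```python
from contextlib import contextmanager

class Checker:
    def __init__(self):
        self.in_type_definition = False

    @contextmanager
    def type_definition(self, value):
        # Mirror of visit_type_definition!/visit_non_type_definition!:
        # save the previous flag, set the new one, restore on exit.
        prev = self.in_type_definition
        self.in_type_definition = value
        try:
            yield
        finally:
            self.in_type_definition = prev

checker = Checker()
with checker.type_definition(True):
    assert checker.in_type_definition
    with checker.type_definition(False):
        assert not checker.in_type_definition
    assert checker.in_type_definition  # restored after the nested visit
assert not checker.in_type_definition
```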
impl<'a, 'b> Visitor<'b> for Checker<'a>
where
'b: 'a,
@@ -746,33 +775,67 @@ where
for expr in decorator_list {
self.visit_expr(expr);
}
// If we're in a class or module scope, then the annotation needs to be
// available at runtime.
// See: https://docs.python.org/3/reference/simple_stmts.html#annotated-assignment-statements
let runtime_annotation = !self.annotations_future_enabled
&& matches!(
self.current_scope().kind,
ScopeKind::Class(..) | ScopeKind::Module
);
for arg in &args.posonlyargs {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
if runtime_annotation {
visit_type_definition!(self, expr);
} else {
self.visit_annotation(expr);
};
}
}
for arg in &args.args {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
if runtime_annotation {
visit_type_definition!(self, expr);
} else {
self.visit_annotation(expr);
};
}
}
if let Some(arg) = &args.vararg {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
if runtime_annotation {
visit_type_definition!(self, expr);
} else {
self.visit_annotation(expr);
};
}
}
for arg in &args.kwonlyargs {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
if runtime_annotation {
visit_type_definition!(self, expr);
} else {
self.visit_annotation(expr);
};
}
}
if let Some(arg) = &args.kwarg {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
if runtime_annotation {
visit_type_definition!(self, expr);
} else {
self.visit_annotation(expr);
};
}
}
for expr in returns {
self.visit_annotation(expr);
if runtime_annotation {
visit_type_definition!(self, expr);
} else {
self.visit_annotation(expr);
};
}
for expr in &args.kw_defaults {
self.visit_expr(expr);
@@ -962,6 +1025,13 @@ where
pyupgrade::rules::rewrite_mock_import(self, stmt);
}
// If a module is imported within a `ModuleNotFoundError` body, treat that as a
// synthetic usage.
let is_handled = self
.handled_exceptions
.iter()
.any(|exceptions| exceptions.contains(Exceptions::MODULE_NOT_FOUND_ERROR));
for alias in names {
if alias.node.name == "__future__" {
let name = alias.node.asname.as_ref().unwrap_or(&alias.node.name);
@@ -1004,7 +1074,18 @@ where
Binding {
kind: BindingKind::SubmoduleImportation(name, full_name),
runtime_usage: None,
synthetic_usage: None,
synthetic_usage: if is_handled {
Some((
self.scopes[*(self
.scope_stack
.last()
.expect("No current scope found"))]
.id,
Range::from_located(alias),
))
} else {
None
},
typing_usage: None,
range: Range::from_located(alias),
source: Some(self.current_stmt().clone()),
@@ -1012,6 +1093,14 @@ where
},
);
} else {
// Treat explicit re-export as usage (e.g., `from .applications
// import FastAPI as FastAPI`).
let is_explicit_reexport = alias
.node
.asname
.as_ref()
.map_or(false, |asname| asname == &alias.node.name);
// Given `import foo`, `name` and `full_name` would both be `foo`.
// Given `import foo as bar`, `name` would be `bar` and `full_name` would
// be `foo`.
@@ -1022,14 +1111,7 @@ where
Binding {
kind: BindingKind::Importation(name, full_name),
runtime_usage: None,
// Treat explicit re-export as usage (e.g., `import applications
// as applications`).
synthetic_usage: if alias
.node
.asname
.as_ref()
.map_or(false, |asname| asname == &alias.node.name)
{
synthetic_usage: if is_handled || is_explicit_reexport {
Some((
self.scopes[*(self
.scope_stack
@@ -1285,6 +1367,13 @@ where
}
}
// If a module is imported within a `ModuleNotFoundError` body, treat that as a
// synthetic usage.
let is_handled = self
.handled_exceptions
.iter()
.any(|exceptions| exceptions.contains(Exceptions::MODULE_NOT_FOUND_ERROR));
for alias in names {
if let Some("__future__") = module.as_deref() {
let name = alias.node.asname.as_ref().unwrap_or(&alias.node.name);
@@ -1331,7 +1420,18 @@ where
Binding {
kind: BindingKind::StarImportation(*level, module.clone()),
runtime_usage: None,
synthetic_usage: None,
synthetic_usage: if is_handled {
Some((
self.scopes[*(self
.scope_stack
.last()
.expect("No current scope found"))]
.id,
Range::from_located(alias),
))
} else {
None
},
typing_usage: None,
range: Range::from_located(stmt),
source: Some(self.current_stmt().clone()),
@@ -1375,6 +1475,14 @@ where
self.check_builtin_shadowing(asname, stmt, false);
}
// Treat explicit re-export as usage (e.g., `from .applications
// import FastAPI as FastAPI`).
let is_explicit_reexport = alias
.node
.asname
.as_ref()
.map_or(false, |asname| asname == &alias.node.name);
// Given `from foo import bar`, `name` would be "bar" and `full_name` would
// be "foo.bar". Given `from foo import bar as baz`, `name` would be "baz"
// and `full_name` would be "foo.bar".
@@ -1384,33 +1492,25 @@ where
module.as_deref(),
&alias.node.name,
);
let range = Range::from_located(alias);
self.add_binding(
name,
Binding {
kind: BindingKind::FromImportation(name, full_name),
runtime_usage: None,
// Treat explicit re-export as usage (e.g., `from .applications
// import FastAPI as FastAPI`).
synthetic_usage: if alias
.node
.asname
.as_ref()
.map_or(false, |asname| asname == &alias.node.name)
{
synthetic_usage: if is_handled || is_explicit_reexport {
Some((
self.scopes[*(self
.scope_stack
.last()
.expect("No current scope found"))]
.id,
range,
Range::from_located(alias),
))
} else {
None
},
typing_usage: None,
range,
range: Range::from_located(alias),
source: Some(self.current_stmt().clone()),
context: self.execution_context(),
},
@@ -1636,7 +1736,14 @@ where
flake8_simplify::rules::needless_bool(self, stmt);
}
if self.settings.rules.enabled(&Rule::ManualDictLookup) {
flake8_simplify::rules::manual_dict_lookup(self, stmt, test, body, orelse);
flake8_simplify::rules::manual_dict_lookup(
self,
stmt,
test,
body,
orelse,
self.current_stmt_parent().map(std::convert::Into::into),
);
}
if self.settings.rules.enabled(&Rule::UseTernaryOperator) {
flake8_simplify::rules::use_ternary_operator(
@@ -2113,17 +2220,23 @@ where
orelse,
finalbody,
} => {
// TODO(charlie): The use of `smallvec` here leads to a lifetime issue.
let handler_names = extract_handler_names(handlers)
.into_iter()
.map(|call_path| call_path.to_vec())
.collect();
self.except_handlers.push(handler_names);
let mut handled_exceptions = Exceptions::empty();
for type_ in extract_handled_exceptions(handlers) {
if let Some(call_path) = self.resolve_call_path(type_) {
if call_path.as_slice() == ["", "NameError"] {
handled_exceptions |= Exceptions::NAME_ERROR;
} else if call_path.as_slice() == ["", "ModuleNotFoundError"] {
handled_exceptions |= Exceptions::MODULE_NOT_FOUND_ERROR;
}
}
}
self.handled_exceptions.push(handled_exceptions);
if self.settings.rules.enabled(&Rule::JumpStatementInFinally) {
flake8_bugbear::rules::jump_statement_in_finally(self, finalbody);
}
self.visit_body(body);
self.except_handlers.pop();
self.handled_exceptions.pop();
self.in_exception_handler = true;
for excepthandler in handlers {
@@ -2143,23 +2256,20 @@ where
// If we're in a class or module scope, then the annotation needs to be
// available at runtime.
// See: https://docs.python.org/3/reference/simple_stmts.html#annotated-assignment-statements
if !self.annotations_future_enabled
let runtime_annotation = !self.annotations_future_enabled
&& matches!(
self.current_scope().kind,
ScopeKind::Class(..) | ScopeKind::Module
)
{
self.in_type_definition = true;
self.visit_expr(annotation);
self.in_type_definition = false;
);
if runtime_annotation {
visit_type_definition!(self, annotation);
} else {
self.visit_annotation(annotation);
}
if let Some(expr) = value {
if self.match_typing_expr(annotation, "TypeAlias") {
self.in_type_definition = true;
self.visit_expr(expr);
self.in_type_definition = false;
visit_type_definition!(self, expr);
} else {
self.visit_expr(expr);
}
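The `visit_type_definition!` and `visit_non_type_definition!` macros that replace the manual flag toggling are defined elsewhere in the PR; a plausible sketch of what they expand to (set `in_type_definition`, visit, restore the previous value), with a toy `Checker` standing in for the real one:

```rust
// Toy checker: records each visited "expression" together with the flag
// state at visit time. The real Checker visits AST nodes instead.
struct Checker {
    in_type_definition: bool,
    visited: Vec<(String, bool)>,
}

impl Checker {
    fn visit(&mut self, expr: &str) {
        self.visited.push((expr.to_string(), self.in_type_definition));
    }
}

// Assumed expansion: save the flag, force it on (or off), visit, restore.
macro_rules! visit_type_definition {
    ($checker:expr, $expr:expr) => {{
        let prev = $checker.in_type_definition;
        $checker.in_type_definition = true;
        $checker.visit($expr);
        $checker.in_type_definition = prev;
    }};
}

macro_rules! visit_non_type_definition {
    ($checker:expr, $expr:expr) => {{
        let prev = $checker.in_type_definition;
        $checker.in_type_definition = false;
        $checker.visit($expr);
        $checker.in_type_definition = prev;
    }};
}

fn main() {
    let mut checker = Checker { in_type_definition: false, visited: vec![] };
    // Ex) cast(int, x): the first argument is a type, the second a value.
    visit_type_definition!(checker, "int");
    visit_non_type_definition!(checker, "x");
    assert_eq!(
        checker.visited,
        vec![("int".to_string(), true), ("x".to_string(), false)]
    );
    // The flag is always restored afterwards.
    assert!(!checker.in_type_definition);
}
```

Centralizing the save/set/restore dance in a macro removes the repeated `prev_in_type_definition` bookkeeping that the old-side lines above show.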
@@ -2216,12 +2326,9 @@ where
fn visit_annotation(&mut self, expr: &'b Expr) {
let prev_in_annotation = self.in_annotation;
let prev_in_type_definition = self.in_type_definition;
self.in_annotation = true;
self.in_type_definition = true;
self.visit_expr(expr);
visit_type_definition!(self, expr);
self.in_annotation = prev_in_annotation;
self.in_type_definition = prev_in_type_definition;
}
fn visit_expr(&mut self, expr: &'b Expr) {
@@ -2253,7 +2360,6 @@ where
self.push_expr(expr);
let prev_in_literal = self.in_literal;
let prev_in_type_definition = self.in_type_definition;
// Pre-visit.
match &expr.node {
@@ -2525,6 +2631,11 @@ where
if self.settings.rules.enabled(&Rule::OSErrorAlias) {
pyupgrade::rules::os_error_alias(self, &expr);
}
if self.settings.rules.enabled(&Rule::IsinstanceWithTuple)
&& self.settings.target_version >= PythonVersion::Py310
{
pyupgrade::rules::use_pep604_isinstance(self, expr, func, args);
}
// flake8-print
if self.settings.rules.enabled(&Rule::PrintFound)
@@ -2939,18 +3050,6 @@ where
flake8_pytest_style::rules::fail_call(self, func, args, keywords);
}
// ruff
if self
.settings
.rules
.enabled(&Rule::KeywordArgumentBeforeStarArgument)
{
self.diagnostics
.extend(ruff::rules::keyword_argument_before_star_argument(
args, keywords,
));
}
// flake8-simplify
if self
.settings
@@ -3384,6 +3483,16 @@ where
comparators,
);
}
if self.settings.rules.enabled(&Rule::BadVersionInfoComparison) {
flake8_pyi::rules::bad_version_info_comparison(
self,
expr,
left,
ops,
comparators,
);
}
}
}
ExprKind::Constant {
@@ -3428,32 +3537,7 @@ where
flake8_pie::rules::prefer_list_builtin(self, expr);
}
// Visit the arguments, but avoid the body, which will be deferred.
for arg in &args.posonlyargs {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
}
}
for arg in &args.args {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
}
}
if let Some(arg) = &args.vararg {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
}
}
for arg in &args.kwonlyargs {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
}
}
if let Some(arg) = &args.kwarg {
if let Some(expr) = &arg.node.annotation {
self.visit_annotation(expr);
}
}
// Visit the default arguments, but avoid the body, which will be deferred.
for expr in &args.kw_defaults {
self.visit_expr(expr);
}
@@ -3551,9 +3635,16 @@ where
Some(Callable::NamedTuple)
} else if self.match_typing_call_path(&call_path, "TypedDict") {
Some(Callable::TypedDict)
} else if ["Arg", "DefaultArg", "NamedArg", "DefaultNamedArg"]
.iter()
.any(|target| call_path.as_slice() == ["mypy_extensions", target])
} else if [
"Arg",
"DefaultArg",
"NamedArg",
"DefaultNamedArg",
"VarArg",
"KwArg",
]
.iter()
.any(|target| call_path.as_slice() == ["mypy_extensions", target])
{
Some(Callable::MypyExtension)
} else {
@@ -3564,17 +3655,13 @@ where
Some(Callable::ForwardRef) => {
self.visit_expr(func);
for expr in args {
self.in_type_definition = true;
self.visit_expr(expr);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, expr);
}
}
Some(Callable::Cast) => {
self.visit_expr(func);
if !args.is_empty() {
self.in_type_definition = true;
self.visit_expr(&args[0]);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, &args[0]);
}
for expr in args.iter().skip(1) {
self.visit_expr(expr);
@@ -3583,29 +3670,21 @@ where
Some(Callable::NewType) => {
self.visit_expr(func);
for expr in args.iter().skip(1) {
self.in_type_definition = true;
self.visit_expr(expr);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, expr);
}
}
Some(Callable::TypeVar) => {
self.visit_expr(func);
for expr in args.iter().skip(1) {
self.in_type_definition = true;
self.visit_expr(expr);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, expr);
}
for keyword in keywords {
let KeywordData { arg, value } = &keyword.node;
if let Some(id) = arg {
if id == "bound" {
self.in_type_definition = true;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, value);
} else {
self.in_type_definition = false;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_non_type_definition!(self, value);
}
}
}
@@ -3622,16 +3701,8 @@ where
ExprKind::List { elts, .. }
| ExprKind::Tuple { elts, .. } => {
if elts.len() == 2 {
self.in_type_definition = false;
self.visit_expr(&elts[0]);
self.in_type_definition =
prev_in_type_definition;
self.in_type_definition = true;
self.visit_expr(&elts[1]);
self.in_type_definition =
prev_in_type_definition;
visit_non_type_definition!(self, &elts[0]);
visit_type_definition!(self, &elts[1]);
}
}
_ => {}
@@ -3645,9 +3716,7 @@ where
// Ex) NamedTuple("a", a=int)
for keyword in keywords {
let KeywordData { value, .. } = &keyword.node;
self.in_type_definition = true;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, value);
}
}
Some(Callable::TypedDict) => {
@@ -3657,14 +3726,10 @@ where
if args.len() > 1 {
if let ExprKind::Dict { keys, values } = &args[1].node {
for key in keys.iter().flatten() {
self.in_type_definition = false;
self.visit_expr(key);
self.in_type_definition = prev_in_type_definition;
visit_non_type_definition!(self, key);
}
for value in values {
self.in_type_definition = true;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, value);
}
}
}
@@ -3672,9 +3737,7 @@ where
// Ex) TypedDict("a", a=int)
for keyword in keywords {
let KeywordData { value, .. } = &keyword.node;
self.in_type_definition = true;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, value);
}
}
Some(Callable::MypyExtension) => {
@@ -3682,39 +3745,39 @@ where
if let Some(arg) = args.first() {
// Ex) DefaultNamedArg(bool | None, name="some_prop_name")
self.in_type_definition = true;
self.visit_expr(arg);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, arg);
for arg in args.iter().skip(1) {
self.in_type_definition = false;
self.visit_expr(arg);
self.in_type_definition = prev_in_type_definition;
visit_non_type_definition!(self, arg);
}
for keyword in keywords {
let KeywordData { value, .. } = &keyword.node;
self.in_type_definition = false;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_non_type_definition!(self, value);
}
} else {
// Ex) DefaultNamedArg(type="bool", name="some_prop_name")
for keyword in keywords {
let KeywordData { value, arg, .. } = &keyword.node;
if arg.as_ref().map_or(false, |arg| arg == "type") {
self.in_type_definition = true;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, value);
} else {
self.in_type_definition = false;
self.visit_expr(value);
self.in_type_definition = prev_in_type_definition;
visit_non_type_definition!(self, value);
}
}
}
}
None => {
visitor::walk_expr(self, expr);
// If we're in a type definition, we need to treat the arguments to any
// other callables as non-type definitions (i.e., we don't want to treat
// any strings as deferred type definitions).
self.visit_expr(func);
for arg in args {
visit_non_type_definition!(self, arg);
}
for keyword in keywords {
let KeywordData { value, .. } = &keyword.node;
visit_non_type_definition!(self, value);
}
}
}
}
@@ -3740,9 +3803,7 @@ where
// Ex) Optional[int]
SubscriptKind::AnnotatedSubscript => {
self.visit_expr(value);
self.in_type_definition = true;
self.visit_expr(slice);
self.in_type_definition = prev_in_type_definition;
visit_type_definition!(self, slice);
self.visit_expr_context(ctx);
}
// Ex) Annotated[int, "Hello, world!"]
@@ -3753,11 +3814,9 @@ where
if let ExprKind::Tuple { elts, ctx } = &slice.node {
if let Some(expr) = elts.first() {
self.visit_expr(expr);
self.in_type_definition = false;
for expr in elts.iter().skip(1) {
self.visit_expr(expr);
visit_non_type_definition!(self, expr);
}
self.in_type_definition = prev_in_type_definition;
self.visit_expr_context(ctx);
}
} else {
@@ -3792,7 +3851,6 @@ where
_ => {}
};
self.in_type_definition = prev_in_type_definition;
self.in_literal = prev_in_literal;
self.pop_expr();
@@ -4480,13 +4538,12 @@ impl<'a> Checker<'a> {
}
// Avoid flagging if NameError is handled.
if let Some(handler_names) = self.except_handlers.last() {
if handler_names
.iter()
.any(|call_path| call_path.as_slice() == ["NameError"])
{
return;
}
if self
.handled_exceptions
.iter()
.any(|handler_names| handler_names.contains(Exceptions::NAME_ERROR))
{
return;
}
self.diagnostics.push(Diagnostic::new(
@@ -5555,7 +5612,11 @@ impl<'a> Checker<'a> {
pydocstyle::rules::ends_with_period(self, &docstring);
}
if self.settings.rules.enabled(&Rule::NonImperativeMood) {
pydocstyle::rules::non_imperative_mood(self, &docstring);
pydocstyle::rules::non_imperative_mood(
self,
&docstring,
&self.settings.pydocstyle.property_decorators,
);
}
if self.settings.rules.enabled(&Rule::NoSignature) {
pydocstyle::rules::no_signature(self, &docstring);


@@ -1,16 +1,20 @@
#![allow(dead_code, unused_imports, unused_variables)]
use bisection::bisect_left;
use itertools::Itertools;
use rustpython_parser::ast::Location;
use rustpython_parser::lexer::LexResult;
use crate::ast::types::Range;
use crate::registry::Diagnostic;
use crate::registry::{Diagnostic, Rule};
use crate::rules::pycodestyle::logical_lines::{iter_logical_lines, TokenFlags};
use crate::rules::pycodestyle::rules::{
extraneous_whitespace, indentation, missing_whitespace_after_keyword, space_around_operator,
whitespace_around_keywords, whitespace_before_comment,
extraneous_whitespace, indentation, missing_whitespace_after_keyword,
missing_whitespace_around_operator, space_around_operator, whitespace_around_keywords,
whitespace_around_named_parameter_equals, whitespace_before_comment,
whitespace_before_parameters,
};
use crate::settings::Settings;
use crate::settings::{flags, Settings};
use crate::source_code::{Locator, Stylist};
/// Return the amount of indentation, expanding tabs to the next multiple of 8.
@@ -39,6 +43,7 @@ pub fn check_logical_lines(
locator: &Locator,
stylist: &Stylist,
settings: &Settings,
autofix: flags::Autofix,
) -> Vec<Diagnostic> {
let mut diagnostics = vec![];
@@ -132,6 +137,47 @@ pub fn check_logical_lines(
}
}
}
if line.flags.contains(TokenFlags::OPERATOR) {
for (location, kind) in
whitespace_around_named_parameter_equals(&line.tokens, &line.text)
{
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
parent: None,
});
}
}
for (location, kind) in missing_whitespace_around_operator(&line.tokens) {
if settings.rules.enabled(kind.rule()) {
diagnostics.push(Diagnostic {
kind,
location,
end_location: location,
fix: None,
parent: None,
});
}
}
}
if line.flags.contains(TokenFlags::BRACKET) {
#[cfg(feature = "logical_lines")]
let should_fix =
autofix.into() && settings.rules.should_fix(&Rule::WhitespaceBeforeParameters);
#[cfg(not(feature = "logical_lines"))]
let should_fix = false;
for diagnostic in whitespace_before_parameters(&line.tokens, should_fix) {
if settings.rules.enabled(diagnostic.kind.rule()) {
diagnostics.push(diagnostic);
}
}
}
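The `should_fix` gating above uses `#[cfg]` on `let` statements so that autofixing is compiled out entirely when the crate is built without the feature. A self-contained sketch of the same pattern (the `logical_lines` feature name matches the diff; the surrounding types are elided):

```rust
// When built without the `logical_lines` feature, the first `let` is
// removed at compile time and `fix` is unconditionally false.
fn should_fix(autofix_enabled: bool, rule_is_fixable: bool) -> bool {
    #[cfg(feature = "logical_lines")]
    let fix = autofix_enabled && rule_is_fixable;
    #[cfg(not(feature = "logical_lines"))]
    let fix = {
        // Silence unused-variable warnings in the feature-off build.
        let _ = (autofix_enabled, rule_is_fixable);
        false
    };
    fix
}

fn main() {
    // Holds in either build: a non-fixable rule is never fixed.
    assert!(!should_fix(true, false));
    assert!(!should_fix(false, false));
}
```

This keeps a single call site (`whitespace_before_parameters(&line.tokens, should_fix)`) valid under both feature configurations.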
for (index, kind) in indentation(
&line,


@@ -8,8 +8,8 @@ use crate::rules::flake8_executable::rules::{
shebang_missing, shebang_newline, shebang_not_executable, shebang_python, shebang_whitespace,
};
use crate::rules::pycodestyle::rules::{
doc_line_too_long, line_too_long, mixed_spaces_and_tabs, no_newline_at_end_of_file,
trailing_whitespace,
doc_line_too_long, indentation_contains_tabs, line_too_long, mixed_spaces_and_tabs,
no_newline_at_end_of_file, trailing_whitespace,
};
use crate::rules::pygrep_hooks::rules::{blanket_noqa, blanket_type_ignore};
use crate::rules::pylint;
@@ -45,6 +45,7 @@ pub fn check_physical_lines(
let enforce_trailing_whitespace = settings.rules.enabled(&Rule::TrailingWhitespace);
let enforce_blank_line_contains_whitespace =
settings.rules.enabled(&Rule::BlankLineContainsWhitespace);
let enforce_indentation_contains_tabs = settings.rules.enabled(&Rule::IndentationContainsTabs);
let fix_unnecessary_coding_comment =
autofix.into() && settings.rules.should_fix(&Rule::UTF8EncodingDeclaration);
@@ -149,6 +150,12 @@ pub fn check_physical_lines(
diagnostics.push(diagnostic);
}
}
if enforce_indentation_contains_tabs {
if let Some(diagnostic) = indentation_contains_tabs(index, line) {
diagnostics.push(diagnostic);
}
}
}
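The new `indentation_contains_tabs` check (W191) is a physical-line rule: it looks only at the raw text of each line. An illustrative sketch of its core predicate (the real rule returns a `Diagnostic` with a location; the function here is simplified to a boolean):

```rust
// Flag a line whose leading whitespace contains a tab character.
// Tabs appearing after the indentation (e.g. inside a string) don't count.
fn indentation_contains_tabs(line: &str) -> bool {
    line.chars()
        .take_while(|c| c.is_whitespace())
        .any(|c| c == '\t')
}

fn main() {
    assert!(indentation_contains_tabs("\tx = 1"));
    assert!(indentation_contains_tabs("  \t x = 1"));
    assert!(!indentation_contains_tabs("    x = 1"));
    // The tab here is inside the line body, not the indentation:
    assert!(!indentation_contains_tabs("x = '\t'"));
}
```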
if enforce_no_newline_at_end_of_file {


@@ -7,8 +7,8 @@ use crate::lex::docstring_detection::StateMachine;
use crate::registry::{Diagnostic, Rule};
use crate::rules::ruff::rules::Context;
use crate::rules::{
eradicate, flake8_commas, flake8_implicit_str_concat, flake8_quotes, pycodestyle, pyupgrade,
ruff,
eradicate, flake8_commas, flake8_implicit_str_concat, flake8_pyi, flake8_quotes, pycodestyle,
pyupgrade, ruff,
};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
@@ -18,6 +18,7 @@ pub fn check_tokens(
tokens: &[LexResult],
settings: &Settings,
autofix: flags::Autofix,
is_interface_definition: bool,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
@@ -55,6 +56,7 @@ pub fn check_tokens(
.enabled(&Rule::TrailingCommaOnBareTupleProhibited)
|| settings.rules.enabled(&Rule::TrailingCommaProhibited);
let enforce_extraneous_parenthesis = settings.rules.enabled(&Rule::ExtraneousParentheses);
let enforce_type_comment_in_stub = settings.rules.enabled(&Rule::TypeCommentInStub);
// RUF001, RUF002, RUF003
if enforce_ambiguous_unicode_character {
@@ -161,5 +163,10 @@ pub fn check_tokens(
);
}
// PYI033
if enforce_type_comment_in_stub && is_interface_definition {
diagnostics.extend(flake8_pyi::rules::type_comment_in_stub(tokens));
}
diagnostics
}


@@ -29,6 +29,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E203") => Rule::WhitespaceBeforePunctuation,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E211") => Rule::WhitespaceBeforeParameters,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E221") => Rule::MultipleSpacesBeforeOperator,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E222") => Rule::MultipleSpacesAfterOperator,
@@ -37,6 +39,18 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E224") => Rule::TabAfterOperator,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E225") => Rule::MissingWhitespaceAroundOperator,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E226") => Rule::MissingWhitespaceAroundArithmeticOperator,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E227") => Rule::MissingWhitespaceAroundBitwiseOrShiftOperator,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E228") => Rule::MissingWhitespaceAroundModuloOperator,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E251") => Rule::UnexpectedSpacesAroundKeywordParameterEquals,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E252") => Rule::MissingWhitespaceAroundParameterEquals,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E261") => Rule::TooFewSpacesBeforeInlineComment,
#[cfg(feature = "logical_lines")]
(Pycodestyle, "E262") => Rule::NoSpaceAfterInlineComment,
@@ -74,6 +88,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Pycodestyle, "E999") => Rule::SyntaxError,
// pycodestyle warnings
(Pycodestyle, "W191") => Rule::IndentationContainsTabs,
(Pycodestyle, "W291") => Rule::TrailingWhitespace,
(Pycodestyle, "W292") => Rule::NoNewLineAtEndOfFile,
(Pycodestyle, "W293") => Rule::BlankLineContainsWhitespace,
@@ -339,6 +354,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Pyupgrade, "035") => Rule::ImportReplacements,
(Pyupgrade, "036") => Rule::OutdatedVersionBlock,
(Pyupgrade, "037") => Rule::QuotedAnnotation,
(Pyupgrade, "038") => Rule::IsinstanceWithTuple,
// pydocstyle
(Pydocstyle, "100") => Rule::PublicModule,
@@ -487,6 +503,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
// flake8-pyi
(Flake8Pyi, "001") => Rule::PrefixTypeParams,
(Flake8Pyi, "006") => Rule::BadVersionInfoComparison,
(Flake8Pyi, "007") => Rule::UnrecognizedPlatformCheck,
(Flake8Pyi, "008") => Rule::UnrecognizedPlatformName,
(Flake8Pyi, "009") => Rule::PassStatementStubBody,
@@ -494,6 +511,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Pyi, "011") => Rule::TypedArgumentSimpleDefaults,
(Flake8Pyi, "014") => Rule::ArgumentSimpleDefaults,
(Flake8Pyi, "021") => Rule::DocstringInStub,
(Flake8Pyi, "033") => Rule::TypeCommentInStub,
// flake8-pytest-style
(Flake8PytestStyle, "001") => Rule::IncorrectFixtureParenthesesStyle,
@@ -616,7 +634,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Ruff, "001") => Rule::AmbiguousUnicodeCharacterString,
(Ruff, "002") => Rule::AmbiguousUnicodeCharacterDocstring,
(Ruff, "003") => Rule::AmbiguousUnicodeCharacterComment,
(Ruff, "004") => Rule::KeywordArgumentBeforeStarArgument,
(Ruff, "005") => Rule::UnpackInsteadOfConcatenatingToCollectionLiteral,
(Ruff, "006") => Rule::AsyncioDanglingTask,
(Ruff, "100") => Rule::UnusedNOQA,


@@ -1,70 +1,36 @@
//! Abstractions for Google-style docstrings.
use once_cell::sync::Lazy;
use rustc_hash::FxHashSet;
use crate::docstrings::sections::SectionKind;
pub(crate) static GOOGLE_SECTION_NAMES: Lazy<FxHashSet<&'static str>> = Lazy::new(|| {
FxHashSet::from_iter([
"Args",
"Arguments",
"Attention",
"Attributes",
"Caution",
"Danger",
"Error",
"Example",
"Examples",
"Hint",
"Important",
"Keyword Args",
"Keyword Arguments",
"Methods",
"Note",
"Notes",
"Return",
"Returns",
"Raises",
"References",
"See Also",
"Tip",
"Todo",
"Warning",
"Warnings",
"Warns",
"Yield",
"Yields",
])
});
pub(crate) static LOWERCASE_GOOGLE_SECTION_NAMES: Lazy<FxHashSet<&'static str>> = Lazy::new(|| {
FxHashSet::from_iter([
"args",
"arguments",
"attention",
"attributes",
"caution",
"danger",
"error",
"example",
"examples",
"hint",
"important",
"keyword args",
"keyword arguments",
"methods",
"note",
"notes",
"return",
"returns",
"raises",
"references",
"see also",
"tip",
"todo",
"warning",
"warnings",
"warns",
"yield",
"yields",
])
});
pub(crate) static GOOGLE_SECTIONS: &[SectionKind] = &[
SectionKind::Attributes,
SectionKind::Examples,
SectionKind::Methods,
SectionKind::Notes,
SectionKind::Raises,
SectionKind::References,
SectionKind::Returns,
SectionKind::SeeAlso,
SectionKind::Yields,
// Google-only
SectionKind::Args,
SectionKind::Arguments,
SectionKind::Attention,
SectionKind::Caution,
SectionKind::Danger,
SectionKind::Error,
SectionKind::Example,
SectionKind::Hint,
SectionKind::Important,
SectionKind::KeywordArgs,
SectionKind::KeywordArguments,
SectionKind::Note,
SectionKind::Notes,
SectionKind::Return,
SectionKind::Tip,
SectionKind::Todo,
SectionKind::Warning,
SectionKind::Warnings,
SectionKind::Warns,
SectionKind::Yield,
];


@@ -1,40 +1,20 @@
//! Abstractions for NumPy-style docstrings.
use once_cell::sync::Lazy;
use rustc_hash::FxHashSet;
use crate::docstrings::sections::SectionKind;
pub(crate) static LOWERCASE_NUMPY_SECTION_NAMES: Lazy<FxHashSet<&'static str>> = Lazy::new(|| {
FxHashSet::from_iter([
"short summary",
"extended summary",
"parameters",
"returns",
"yields",
"other parameters",
"raises",
"see also",
"notes",
"references",
"examples",
"attributes",
"methods",
])
});
pub(crate) static NUMPY_SECTION_NAMES: Lazy<FxHashSet<&'static str>> = Lazy::new(|| {
FxHashSet::from_iter([
"Short Summary",
"Extended Summary",
"Parameters",
"Returns",
"Yields",
"Other Parameters",
"Raises",
"See Also",
"Notes",
"References",
"Examples",
"Attributes",
"Methods",
])
});
pub(crate) static NUMPY_SECTIONS: &[SectionKind] = &[
SectionKind::Attributes,
SectionKind::Examples,
SectionKind::Methods,
SectionKind::Notes,
SectionKind::Raises,
SectionKind::References,
SectionKind::Returns,
SectionKind::SeeAlso,
SectionKind::Yields,
// NumPy-only
SectionKind::ExtendedSummary,
SectionKind::OtherParameters,
SectionKind::Parameters,
SectionKind::ShortSummary,
];


@@ -1,8 +1,126 @@
use strum_macros::EnumIter;
use crate::ast::whitespace;
use crate::docstrings::styles::SectionStyle;
#[derive(EnumIter, PartialEq, Eq, Debug, Clone, Copy)]
pub enum SectionKind {
Args,
Arguments,
Attention,
Attributes,
Caution,
Danger,
Error,
Example,
Examples,
ExtendedSummary,
Hint,
Important,
KeywordArgs,
KeywordArguments,
Methods,
Note,
Notes,
OtherParameters,
Parameters,
Raises,
References,
Return,
Returns,
SeeAlso,
ShortSummary,
Tip,
Todo,
Warning,
Warnings,
Warns,
Yield,
Yields,
}
impl SectionKind {
pub fn from_str(s: &str) -> Option<Self> {
match s.to_ascii_lowercase().as_str() {
"args" => Some(Self::Args),
"arguments" => Some(Self::Arguments),
"attention" => Some(Self::Attention),
"attributes" => Some(Self::Attributes),
"caution" => Some(Self::Caution),
"danger" => Some(Self::Danger),
"error" => Some(Self::Error),
"example" => Some(Self::Example),
"examples" => Some(Self::Examples),
"extended summary" => Some(Self::ExtendedSummary),
"hint" => Some(Self::Hint),
"important" => Some(Self::Important),
"keyword args" => Some(Self::KeywordArgs),
"keyword arguments" => Some(Self::KeywordArguments),
"methods" => Some(Self::Methods),
"note" => Some(Self::Note),
"notes" => Some(Self::Notes),
"other parameters" => Some(Self::OtherParameters),
"parameters" => Some(Self::Parameters),
"raises" => Some(Self::Raises),
"references" => Some(Self::References),
"return" => Some(Self::Return),
"returns" => Some(Self::Returns),
"see also" => Some(Self::SeeAlso),
"short summary" => Some(Self::ShortSummary),
"tip" => Some(Self::Tip),
"todo" => Some(Self::Todo),
"warning" => Some(Self::Warning),
"warnings" => Some(Self::Warnings),
"warns" => Some(Self::Warns),
"yield" => Some(Self::Yield),
"yields" => Some(Self::Yields),
_ => None,
}
}
pub fn as_str(self) -> &'static str {
match self {
Self::Args => "Args",
Self::Arguments => "Arguments",
Self::Attention => "Attention",
Self::Attributes => "Attributes",
Self::Caution => "Caution",
Self::Danger => "Danger",
Self::Error => "Error",
Self::Example => "Example",
Self::Examples => "Examples",
Self::ExtendedSummary => "Extended Summary",
Self::Hint => "Hint",
Self::Important => "Important",
Self::KeywordArgs => "Keyword Args",
Self::KeywordArguments => "Keyword Arguments",
Self::Methods => "Methods",
Self::Note => "Note",
Self::Notes => "Notes",
Self::OtherParameters => "Other Parameters",
Self::Parameters => "Parameters",
Self::Raises => "Raises",
Self::References => "References",
Self::Return => "Return",
Self::Returns => "Returns",
Self::SeeAlso => "See Also",
Self::ShortSummary => "Short Summary",
Self::Tip => "Tip",
Self::Todo => "Todo",
Self::Warning => "Warning",
Self::Warnings => "Warnings",
Self::Warns => "Warns",
Self::Yield => "Yield",
Self::Yields => "Yields",
}
}
}
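The `from_str`/`as_str` pair above replaces the separate cased and lowercased name sets: matching is case-insensitive, while rendering always produces the canonical capitalization. A two-variant miniature of the same round-trip:

```rust
// Miniature of SectionKind: case-insensitive parse, canonical render.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Section {
    Args,
    SeeAlso,
}

impl Section {
    fn from_str(s: &str) -> Option<Self> {
        match s.to_ascii_lowercase().as_str() {
            "args" => Some(Self::Args),
            "see also" => Some(Self::SeeAlso),
            _ => None,
        }
    }

    fn as_str(self) -> &'static str {
        match self {
            Self::Args => "Args",
            Self::SeeAlso => "See Also",
        }
    }
}

fn main() {
    // Any casing parses to the same variant...
    assert_eq!(Section::from_str("ARGS"), Some(Section::Args));
    // ...and rendering recovers the canonical section title.
    assert_eq!(
        Section::from_str("see Also").map(Section::as_str),
        Some("See Also")
    );
    assert_eq!(Section::from_str("unknown"), None);
}
```

One enum thus subsumes the four `Lazy<FxHashSet<&str>>` statics deleted later in this diff, and lets the style lists (`GOOGLE_SECTIONS`, `NUMPY_SECTIONS`) be plain `&[SectionKind]` slices.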
#[derive(Debug)]
pub(crate) struct SectionContext<'a> {
/// The "kind" of the section, e.g. "SectionKind::Args" or "SectionKind::Returns".
pub(crate) kind: SectionKind,
/// The name of the section as it appears in the docstring, e.g. "Args" or "Returns".
pub(crate) section_name: &'a str,
pub(crate) previous_line: &'a str,
pub(crate) line: &'a str,
@@ -11,10 +129,13 @@ pub(crate) struct SectionContext<'a> {
pub(crate) original_index: usize,
}
fn suspected_as_section(line: &str, style: &SectionStyle) -> bool {
style
.lowercase_section_names()
.contains(&whitespace::leading_words(line).to_lowercase().as_str())
fn suspected_as_section(line: &str, style: &SectionStyle) -> Option<SectionKind> {
if let Some(kind) = SectionKind::from_str(whitespace::leading_words(line)) {
if style.sections().contains(&kind) {
return Some(kind);
}
}
None
}
/// Check if the suspected context is really a section header.
@@ -49,21 +170,15 @@ pub(crate) fn section_contexts<'a>(
lines: &'a [&'a str],
style: &SectionStyle,
) -> Vec<SectionContext<'a>> {
let suspected_section_indices: Vec<usize> = lines
let mut contexts = vec![];
for (kind, lineno) in lines
.iter()
.enumerate()
.filter_map(|(lineno, line)| {
if lineno > 0 && suspected_as_section(line, style) {
Some(lineno)
} else {
None
}
})
.collect();
let mut contexts = vec![];
for lineno in suspected_section_indices {
.skip(1)
.filter_map(|(lineno, line)| suspected_as_section(line, style).map(|kind| (kind, lineno)))
{
let context = SectionContext {
kind,
section_name: whitespace::leading_words(lines[lineno]),
previous_line: lines[lineno - 1],
line: lines[lineno],
@@ -76,11 +191,12 @@ pub(crate) fn section_contexts<'a>(
}
}
let mut truncated_contexts = vec![];
let mut truncated_contexts = Vec::with_capacity(contexts.len());
let mut end: Option<usize> = None;
for context in contexts.into_iter().rev() {
let next_end = context.original_index;
truncated_contexts.push(SectionContext {
kind: context.kind,
section_name: context.section_name,
previous_line: context.previous_line,
line: context.line,


@@ -1,8 +1,6 @@
use once_cell::sync::Lazy;
use rustc_hash::FxHashSet;
use crate::docstrings::google::{GOOGLE_SECTION_NAMES, LOWERCASE_GOOGLE_SECTION_NAMES};
use crate::docstrings::numpy::{LOWERCASE_NUMPY_SECTION_NAMES, NUMPY_SECTION_NAMES};
use crate::docstrings::google::GOOGLE_SECTIONS;
use crate::docstrings::numpy::NUMPY_SECTIONS;
use crate::docstrings::sections::SectionKind;
pub(crate) enum SectionStyle {
Numpy,
@@ -10,17 +8,10 @@ pub(crate) enum SectionStyle {
}
impl SectionStyle {
pub(crate) fn section_names(&self) -> &Lazy<FxHashSet<&'static str>> {
pub(crate) fn sections(&self) -> &[SectionKind] {
match self {
SectionStyle::Numpy => &NUMPY_SECTION_NAMES,
SectionStyle::Google => &GOOGLE_SECTION_NAMES,
}
}
pub(crate) fn lowercase_section_names(&self) -> &Lazy<FxHashSet<&'static str>> {
match self {
SectionStyle::Numpy => &LOWERCASE_NUMPY_SECTION_NAMES,
SectionStyle::Google => &LOWERCASE_GOOGLE_SECTION_NAMES,
SectionStyle::Numpy => NUMPY_SECTIONS,
SectionStyle::Google => GOOGLE_SECTIONS,
}
}
}


@@ -577,6 +577,7 @@ mod tests {
pydocstyle: Some(pydocstyle::settings::Options {
convention: Some(Convention::Numpy),
ignore_decorators: None,
property_decorators: None,
}),
..default_options([Linter::Pydocstyle.into()])
});


@@ -1,13 +1,12 @@
use std::ops::Deref;
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Result};
use globset::GlobMatcher;
use log::debug;
use path_absolutize::{path_dedot, Absolutize};
use rustc_hash::FxHashSet;
use crate::registry::Rule;
use crate::settings::hashable::{HashableGlobMatcher, HashableHashSet};
/// Extract the absolute path and basename (as strings) from a Path.
pub fn extract_path_names(path: &Path) -> Result<(&str, &str)> {
@@ -25,11 +24,7 @@ pub fn extract_path_names(path: &Path) -> Result<(&str, &str)> {
/// Create a set with codes matching the pattern/code pairs.
pub(crate) fn ignores_from_path<'a>(
path: &Path,
pattern_code_pairs: &'a [(
HashableGlobMatcher,
HashableGlobMatcher,
HashableHashSet<Rule>,
)],
pattern_code_pairs: &'a [(GlobMatcher, GlobMatcher, FxHashSet<Rule>)],
) -> FxHashSet<&'a Rule> {
let (file_path, file_basename) = extract_path_names(path).expect("Unable to parse filename");
pattern_code_pairs
@@ -39,8 +34,8 @@ pub(crate) fn ignores_from_path<'a>(
debug!(
"Adding per-file ignores for {:?} due to basename match on {:?}: {:?}",
path,
basename.deref().glob().regex(),
&**codes
basename.glob().regex(),
codes
);
return Some(codes.iter());
}
@@ -48,8 +43,8 @@ pub(crate) fn ignores_from_path<'a>(
debug!(
"Adding per-file ignores for {:?} due to absolute match on {:?}: {:?}",
path,
absolute.deref().glob().regex(),
&**codes
absolute.glob().regex(),
codes
);
return Some(codes.iter());
}
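The per-file-ignore logic above matches each path's absolute form and basename against a pair of glob matchers. A simplified, stdlib-only sketch of the `extract_path_names` helper it relies on (the real function returns `anyhow::Result`; `Option` is used here to stay dependency-free):

```rust
use std::path::Path;

// Extract the full path and basename as &str; the basename glob is
// matched against the second element, the absolute glob against the first.
fn extract_path_names(path: &Path) -> Option<(&str, &str)> {
    let file_path = path.to_str()?;
    let file_basename = path.file_name()?.to_str()?;
    Some((file_path, file_basename))
}

fn main() {
    let (file_path, file_basename) =
        extract_path_names(Path::new("/repo/src/checker.rs")).unwrap();
    assert_eq!(file_path, "/repo/src/checker.rs");
    assert_eq!(file_basename, "checker.rs");
    // e.g. a `*.pyi` pattern would be tested against `file_basename`,
    // while `/repo/**` would be tested against `file_path`.
}
```

Dropping the `HashableGlobMatcher`/`HashableHashSet` wrappers (in favor of plain `GlobMatcher` and `FxHashSet`) also removes the `Deref` noise from the debug logging.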


@@ -13,7 +13,6 @@ pub use violation::{AutofixKind, Availability as AutofixAvailability};
mod ast;
mod autofix;
pub mod cache;
mod checkers;
mod codes;
mod cst;


@@ -201,8 +201,8 @@ pub fn check(contents: &str, options: JsValue) -> Result<JsValue, JsValue> {
&indexer,
&directives,
&settings,
flags::Autofix::Enabled,
flags::Noqa::Enabled,
flags::Autofix::Enabled,
);
let messages: Vec<ExpandedMessage> = diagnostics


@@ -21,6 +21,7 @@ use crate::doc_lines::{doc_lines_from_ast, doc_lines_from_tokens};
use crate::message::{Message, Source};
use crate::noqa::{add_noqa, rule_is_ignored};
use crate::registry::{Diagnostic, Rule};
use crate::resolver::is_interface_definition_path;
use crate::rules::pycodestyle;
use crate::settings::{flags, Settings};
use crate::source_code::{Indexer, Locator, Stylist};
@@ -62,8 +63,8 @@ pub fn check_path(
indexer: &Indexer,
directives: &Directives,
settings: &Settings,
autofix: flags::Autofix,
noqa: flags::Noqa,
autofix: flags::Autofix,
) -> LinterResult<Vec<Diagnostic>> {
// Aggregate all diagnostics.
let mut diagnostics = vec![];
@@ -83,7 +84,14 @@ pub fn check_path(
.iter_enabled()
.any(|rule_code| rule_code.lint_source().is_tokens())
{
diagnostics.extend(check_tokens(locator, &tokens, settings, autofix));
let is_interface_definition = is_interface_definition_path(path);
diagnostics.extend(check_tokens(
locator,
&tokens,
settings,
autofix,
is_interface_definition,
));
}
// Run the filesystem-based rules.
@@ -101,7 +109,13 @@ pub fn check_path(
.iter_enabled()
.any(|rule_code| rule_code.lint_source().is_logical_lines())
{
diagnostics.extend(check_logical_lines(&tokens, locator, stylist, settings));
diagnostics.extend(check_logical_lines(
&tokens,
locator,
stylist,
settings,
flags::Autofix::Enabled,
));
}
// Run the AST-based rules.
@@ -255,8 +269,8 @@ pub fn add_noqa_to_path(path: &Path, package: Option<&Path>, settings: &Settings
&indexer,
&directives,
settings,
flags::Autofix::Disabled,
flags::Noqa::Disabled,
flags::Autofix::Disabled,
);
// Log any parse errors.
@@ -287,6 +301,7 @@ pub fn lint_only(
path: &Path,
package: Option<&Path>,
settings: &Settings,
noqa: flags::Noqa,
autofix: flags::Autofix,
) -> LinterResult<Vec<Message>> {
// Tokenize once.
@@ -316,8 +331,8 @@ pub fn lint_only(
&indexer,
&directives,
settings,
noqa,
autofix,
flags::Noqa::Enabled,
);
// Convert from diagnostics to messages.
@@ -345,6 +360,7 @@ pub fn lint_fix<'a>(
contents: &'a str,
path: &Path,
package: Option<&Path>,
noqa: flags::Noqa,
settings: &Settings,
) -> Result<(LinterResult<Vec<Message>>, Cow<'a, str>, FixTable)> {
let mut transformed = Cow::Borrowed(contents);
@@ -387,8 +403,8 @@ pub fn lint_fix<'a>(
&indexer,
&directives,
settings,
noqa,
flags::Autofix::Enabled,
flags::Noqa::Enabled,
);
if iterations == 0 {


@@ -57,8 +57,22 @@ ruff_macros::register_rules!(
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::MultipleSpacesBeforeKeyword,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::MissingWhitespaceAroundOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::MissingWhitespaceAroundArithmeticOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::MissingWhitespaceAroundBitwiseOrShiftOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::MissingWhitespaceAroundModuloOperator,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::TabAfterKeyword,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::UnexpectedSpacesAroundKeywordParameterEquals,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::MissingWhitespaceAroundParameterEquals,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::WhitespaceBeforeParameters,
#[cfg(feature = "logical_lines")]
rules::pycodestyle::rules::TabBeforeKeyword,
rules::pycodestyle::rules::MultipleImportsOnOneLine,
rules::pycodestyle::rules::ModuleImportNotAtTopOfFile,
@@ -79,6 +93,7 @@ ruff_macros::register_rules!(
rules::pycodestyle::rules::IOError,
rules::pycodestyle::rules::SyntaxError,
// pycodestyle warnings
rules::pycodestyle::rules::IndentationContainsTabs,
rules::pycodestyle::rules::TrailingWhitespace,
rules::pycodestyle::rules::NoNewLineAtEndOfFile,
rules::pycodestyle::rules::BlankLineContainsWhitespace,
@@ -326,6 +341,7 @@ ruff_macros::register_rules!(
rules::pyupgrade::rules::ImportReplacements,
rules::pyupgrade::rules::OutdatedVersionBlock,
rules::pyupgrade::rules::QuotedAnnotation,
rules::pyupgrade::rules::IsinstanceWithTuple,
// pydocstyle
rules::pydocstyle::rules::PublicModule,
rules::pydocstyle::rules::PublicClass,
@@ -461,6 +477,7 @@ ruff_macros::register_rules!(
rules::flake8_errmsg::rules::DotFormatInException,
// flake8-pyi
rules::flake8_pyi::rules::PrefixTypeParams,
rules::flake8_pyi::rules::BadVersionInfoComparison,
rules::flake8_pyi::rules::UnrecognizedPlatformCheck,
rules::flake8_pyi::rules::UnrecognizedPlatformName,
rules::flake8_pyi::rules::PassStatementStubBody,
@@ -468,6 +485,7 @@ ruff_macros::register_rules!(
rules::flake8_pyi::rules::DocstringInStub,
rules::flake8_pyi::rules::TypedArgumentSimpleDefaults,
rules::flake8_pyi::rules::ArgumentSimpleDefaults,
rules::flake8_pyi::rules::TypeCommentInStub,
// flake8-pytest-style
rules::flake8_pytest_style::rules::IncorrectFixtureParenthesesStyle,
rules::flake8_pytest_style::rules::FixturePositionalArgs,
@@ -577,7 +595,6 @@ ruff_macros::register_rules!(
rules::ruff::rules::AmbiguousUnicodeCharacterString,
rules::ruff::rules::AmbiguousUnicodeCharacterDocstring,
rules::ruff::rules::AmbiguousUnicodeCharacterComment,
rules::ruff::rules::KeywordArgumentBeforeStarArgument,
rules::ruff::rules::UnpackInsteadOfConcatenatingToCollectionLiteral,
rules::ruff::rules::AsyncioDanglingTask,
rules::ruff::rules::UnusedNOQA,
@@ -803,6 +820,7 @@ impl Rule {
| Rule::ShebangPython
| Rule::ShebangWhitespace
| Rule::TrailingWhitespace
| Rule::IndentationContainsTabs
| Rule::BlankLineContainsWhitespace => &LintSource::PhysicalLines,
Rule::AmbiguousUnicodeCharacterComment
| Rule::AmbiguousUnicodeCharacterDocstring
@@ -821,19 +839,25 @@ impl Rule {
| Rule::MultipleStatementsOnOneLineColon
| Rule::UselessSemicolon
| Rule::MultipleStatementsOnOneLineSemicolon
| Rule::TrailingCommaProhibited => &LintSource::Tokens,
| Rule::TrailingCommaProhibited
| Rule::TypeCommentInStub => &LintSource::Tokens,
Rule::IOError => &LintSource::Io,
Rule::UnsortedImports | Rule::MissingRequiredImport => &LintSource::Imports,
Rule::ImplicitNamespacePackage | Rule::InvalidModuleName => &LintSource::Filesystem,
#[cfg(feature = "logical_lines")]
Rule::IndentationWithInvalidMultiple
| Rule::IndentationWithInvalidMultipleComment
| Rule::MissingWhitespaceAfterKeyword
| Rule::MissingWhitespaceAroundArithmeticOperator
| Rule::MissingWhitespaceAroundBitwiseOrShiftOperator
| Rule::MissingWhitespaceAroundModuloOperator
| Rule::MissingWhitespaceAroundOperator
| Rule::MissingWhitespaceAroundParameterEquals
| Rule::MultipleLeadingHashesForBlockComment
| Rule::MultipleSpacesAfterKeyword
| Rule::MultipleSpacesAfterOperator
| Rule::MultipleSpacesBeforeKeyword
| Rule::MultipleSpacesBeforeOperator
| Rule::MissingWhitespaceAfterKeyword
| Rule::NoIndentedBlock
| Rule::NoIndentedBlockComment
| Rule::NoSpaceAfterBlockComment
@@ -846,8 +870,10 @@ impl Rule {
| Rule::TooFewSpacesBeforeInlineComment
| Rule::UnexpectedIndentation
| Rule::UnexpectedIndentationComment
| Rule::UnexpectedSpacesAroundKeywordParameterEquals
| Rule::WhitespaceAfterOpenBracket
| Rule::WhitespaceBeforeCloseBracket
| Rule::WhitespaceBeforeParameters
| Rule::WhitespaceBeforePunctuation => &LintSource::LogicalLines,
_ => &LintSource::Ast,
}


@@ -92,5 +92,7 @@ static REDIRECTS: Lazy<HashMap<&'static str, &'static str>> = Lazy::new(|| {
// TODO(charlie): Remove by 2023-04-01.
("TYP", "TCH"),
("TYP001", "TCH001"),
// TODO(charlie): Remove by 2023-06-01.
("RUF004", "B026"),
])
});


@@ -398,9 +398,9 @@ define_violation!(
/// ```
///
/// ## References
/// * [PEP 484](https://www.python.org/dev/peps/pep-0484/#the-any-type)
/// * [`typing.Any`](https://docs.python.org/3/library/typing.html#typing.Any)
/// * [Mypy: The Any type](https://mypy.readthedocs.io/en/stable/kinds_of_types.html#the-any-type)
/// - [PEP 484](https://www.python.org/dev/peps/pep-0484/#the-any-type)
/// - [`typing.Any`](https://docs.python.org/3/library/typing.html#typing.Any)
/// - [Mypy: The Any type](https://mypy.readthedocs.io/en/stable/kinds_of_types.html#the-any-type)
pub struct AnyType {
pub name: String,
}


@@ -1,5 +1,6 @@
//! Settings for the `flake8-annotations` plugin.
use ruff_macros::CacheKey;
use ruff_macros::ConfigurationOptions;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -37,8 +38,8 @@ pub struct Options {
/// Whether to suppress `ANN200`-level violations for functions that meet
/// either of the following criteria:
///
/// * Contain no `return` statement.
/// * Explicit `return` statement(s) all return `None` (explicitly or
/// - Contain no `return` statement.
/// - Explicit `return` statement(s) all return `None` (explicitly or
/// implicitly).
pub suppress_none_returning: Option<bool>,
#[option(
@@ -60,7 +61,7 @@ pub struct Options {
pub ignore_fully_untyped: Option<bool>,
}
#[derive(Debug, Default, Hash)]
#[derive(Debug, Default, CacheKey)]
#[allow(clippy::struct_excessive_bools)]
pub struct Settings {
pub mypy_init_return: bool,

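The two `suppress-none-returning` criteria described above can be illustrated with a small Python sketch (the function names are illustrative only, not taken from the test suite):

```python
def log(msg):
    # No `return` statement at all: with `suppress-none-returning` enabled,
    # ANN200-level return-annotation violations are suppressed here.
    print(msg)


def find(items, key):
    # Every explicit `return` yields `None` (explicitly or implicitly),
    # so this function also meets the criteria.
    if key not in items:
        return None
    return None


# Both functions return None on every path.
assert log("hello") is None
assert find([], "x") is None
```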

@@ -4,8 +4,9 @@ use rustpython_parser::ast::{Constant, Expr, ExprKind};
use crate::checkers::ast::Checker;
static PASSWORD_CANDIDATE_REGEX: Lazy<Regex> =
Lazy::new(|| Regex::new(r"(^|_)(pas+wo?r?d|pass(phrase)?|pwd|token|secrete?)($|_)").unwrap());
static PASSWORD_CANDIDATE_REGEX: Lazy<Regex> = Lazy::new(|| {
Regex::new(r"(^|_)(?i)(pas+wo?r?d|pass(phrase)?|pwd|token|secrete?)($|_)").unwrap()
});
pub fn string_literal(expr: &Expr) -> Option<&str> {
match &expr.node {
@@ -17,7 +18,6 @@ pub fn string_literal(expr: &Expr) -> Option<&str> {
}
}
// Maybe use regex for this?
pub fn matches_password_name(string: &str) -> bool {
PASSWORD_CANDIDATE_REGEX.is_match(string)
}

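The updated pattern adds an inline `(?i)` flag so candidate names match case-insensitively. A rough Python translation behaves as follows — note it uses a scoped `(?i:...)` group rather than a mid-pattern `(?i)`, since Python's `re` requires global flags at the start of a pattern, and swaps the inner capture for a non-capturing group:

```python
import re

# Approximate Python port of the Rust pattern; `(?i:...)` scopes the
# case-insensitive flag to the candidate words only.
PASSWORD_CANDIDATE = re.compile(
    r"(^|_)(?i:pas+wo?r?d|pass(?:phrase)?|pwd|token|secrete?)($|_)"
)

assert PASSWORD_CANDIDATE.search("password")
assert PASSWORD_CANDIDATE.search("PASSWORD")   # now matches, case-insensitively
assert PASSWORD_CANDIDATE.search("auth_token")
assert not PASSWORD_CANDIDATE.search("username")
```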

@@ -33,8 +33,8 @@ define_violation!(
/// ```
///
/// ## References
/// * [B608: Test for SQL injection](https://bandit.readthedocs.io/en/latest/plugins/b608_hardcoded_sql_expressions.html)
/// * [psycopg3: Server-side binding](https://www.psycopg.org/psycopg3/docs/basic/from_pg2.html#server-side-binding)
/// - [B608: Test for SQL injection](https://bandit.readthedocs.io/en/latest/plugins/b608_hardcoded_sql_expressions.html)
/// - [psycopg3: Server-side binding](https://www.psycopg.org/psycopg3/docs/basic/from_pg2.html#server-side-binding)
pub struct HardcodedSQLExpression;
);
impl Violation for HardcodedSQLExpression {


@@ -1,6 +1,6 @@
//! Settings for the `flake8-bandit` plugin.
use ruff_macros::ConfigurationOptions;
use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -45,7 +45,7 @@ pub struct Options {
pub check_typed_exception: Option<bool>,
}
#[derive(Debug, Hash)]
#[derive(Debug, CacheKey)]
pub struct Settings {
pub hardcoded_tmp_directory: Vec<String>,
pub check_typed_exception: bool,


@@ -1,5 +1,5 @@
---
source: src/rules/flake8_bandit/mod.rs
source: crates/ruff/src/rules/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
@@ -105,46 +105,46 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 23
column: 16
end_location:
row: 23
column: 24
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 24
column: 12
end_location:
row: 24
column: 20
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 25
column: 14
end_location:
row: 25
column: 22
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 26
row: 22
column: 11
end_location:
row: 26
row: 22
column: 19
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 23
column: 11
end_location:
row: 23
column: 19
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 25
column: 16
end_location:
row: 25
column: 24
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 26
column: 12
end_location:
row: 26
column: 20
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
@@ -161,9 +161,31 @@ expression: diagnostics
string: s3cr3t
location:
row: 28
column: 13
column: 11
end_location:
row: 28
column: 19
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 29
column: 14
end_location:
row: 29
column: 22
fix: ~
parent: ~
- kind:
HardcodedPasswordString:
string: s3cr3t
location:
row: 30
column: 13
end_location:
row: 30
column: 21
fix: ~
parent: ~
@@ -171,10 +193,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 29
row: 31
column: 15
end_location:
row: 29
row: 31
column: 23
fix: ~
parent: ~
@@ -182,10 +204,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 30
row: 32
column: 23
end_location:
row: 30
row: 32
column: 31
fix: ~
parent: ~
@@ -193,10 +215,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 31
row: 33
column: 23
end_location:
row: 31
row: 33
column: 31
fix: ~
parent: ~
@@ -204,10 +226,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 35
row: 37
column: 15
end_location:
row: 35
row: 37
column: 23
fix: ~
parent: ~
@@ -215,10 +237,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 39
row: 41
column: 19
end_location:
row: 39
row: 41
column: 27
fix: ~
parent: ~
@@ -226,10 +248,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 40
row: 42
column: 16
end_location:
row: 40
row: 42
column: 24
fix: ~
parent: ~
@@ -237,10 +259,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 41
row: 43
column: 17
end_location:
row: 41
row: 43
column: 25
fix: ~
parent: ~
@@ -248,10 +270,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 42
row: 44
column: 14
end_location:
row: 42
row: 44
column: 22
fix: ~
parent: ~
@@ -259,10 +281,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 43
row: 45
column: 17
end_location:
row: 43
row: 45
column: 25
fix: ~
parent: ~
@@ -270,10 +292,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 44
row: 46
column: 16
end_location:
row: 44
row: 46
column: 24
fix: ~
parent: ~
@@ -281,10 +303,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 45
row: 47
column: 18
end_location:
row: 45
row: 47
column: 26
fix: ~
parent: ~
@@ -292,10 +314,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 47
row: 49
column: 12
end_location:
row: 47
row: 49
column: 20
fix: ~
parent: ~
@@ -303,10 +325,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 48
row: 50
column: 9
end_location:
row: 48
row: 50
column: 17
fix: ~
parent: ~
@@ -314,10 +336,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 49
row: 51
column: 10
end_location:
row: 49
row: 51
column: 18
fix: ~
parent: ~
@@ -325,10 +347,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 50
row: 52
column: 7
end_location:
row: 50
row: 52
column: 15
fix: ~
parent: ~
@@ -336,10 +358,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 51
row: 53
column: 10
end_location:
row: 51
row: 53
column: 18
fix: ~
parent: ~
@@ -347,10 +369,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 52
row: 54
column: 9
end_location:
row: 52
row: 54
column: 17
fix: ~
parent: ~
@@ -358,10 +380,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 53
row: 55
column: 11
end_location:
row: 53
row: 55
column: 19
fix: ~
parent: ~
@@ -369,10 +391,10 @@ expression: diagnostics
HardcodedPasswordString:
string: s3cr3t
location:
row: 54
row: 56
column: 20
end_location:
row: 54
row: 56
column: 28
fix: ~
parent: ~
@@ -380,10 +402,10 @@ expression: diagnostics
HardcodedPasswordString:
string: "1\n2"
location:
row: 56
row: 58
column: 12
end_location:
row: 56
row: 58
column: 18
fix: ~
parent: ~
@@ -391,10 +413,10 @@ expression: diagnostics
HardcodedPasswordString:
string: "3\t4"
location:
row: 59
row: 61
column: 12
end_location:
row: 59
row: 61
column: 18
fix: ~
parent: ~
@@ -402,10 +424,10 @@ expression: diagnostics
HardcodedPasswordString:
string: "5\r6"
location:
row: 62
row: 64
column: 12
end_location:
row: 62
row: 64
column: 18
fix: ~
parent: ~


@@ -1,10 +1,10 @@
use rustpython_parser::ast::{ExprKind, Stmt};
use ruff_macros::{define_violation, derive_message_formats};
use ruff_python::str::is_lower;
use rustpython_parser::ast::{ExprKind, Stmt, StmtKind};
use crate::ast::types::Range;
use crate::ast::helpers::RaiseStatementVisitor;
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
use crate::violation::Violation;
@@ -16,62 +16,32 @@ impl Violation for RaiseWithoutFromInsideExcept {
#[derive_message_formats]
fn message(&self) -> String {
format!(
"Within an except clause, raise exceptions with `raise ... from err` or `raise ... \
"Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... \
from None` to distinguish them from errors in exception handling"
)
}
}
struct RaiseVisitor {
diagnostics: Vec<Diagnostic>,
}
/// B904
pub fn raise_without_from_inside_except(checker: &mut Checker, body: &[Stmt]) {
let raises = {
let mut visitor = RaiseStatementVisitor::default();
visitor::walk_body(&mut visitor, body);
visitor.raises
};
impl<'a> Visitor<'a> for RaiseVisitor {
fn visit_stmt(&mut self, stmt: &'a Stmt) {
match &stmt.node {
StmtKind::Raise {
exc: Some(exc),
cause: None,
} => match &exc.node {
ExprKind::Name { id, .. } if is_lower(id) => {}
_ => {
self.diagnostics.push(Diagnostic::new(
RaiseWithoutFromInsideExcept,
Range::from_located(stmt),
));
}
},
StmtKind::ClassDef { .. }
| StmtKind::FunctionDef { .. }
| StmtKind::AsyncFunctionDef { .. }
| StmtKind::Try { .. }
| StmtKind::TryStar { .. } => {}
StmtKind::If { body, orelse, .. } => {
visitor::walk_body(self, body);
visitor::walk_body(self, orelse);
}
StmtKind::While { body, .. }
| StmtKind::With { body, .. }
| StmtKind::AsyncWith { body, .. }
| StmtKind::For { body, .. }
| StmtKind::AsyncFor { body, .. } => {
visitor::walk_body(self, body);
}
StmtKind::Match { cases, .. } => {
for case in cases {
visitor::walk_body(self, &case.body);
for (range, exc, cause) in raises {
if cause.is_none() {
if let Some(exc) = exc {
match &exc.node {
ExprKind::Name { id, .. } if is_lower(id) => {}
_ => {
checker
.diagnostics
.push(Diagnostic::new(RaiseWithoutFromInsideExcept, range));
}
}
}
_ => {}
}
}
}
/// B904
pub fn raise_without_from_inside_except(checker: &mut Checker, body: &[Stmt]) {
let mut visitor = RaiseVisitor {
diagnostics: vec![],
};
visitor::walk_body(&mut visitor, body);
checker.diagnostics.extend(visitor.diagnostics);
}

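For context on the rule this visitor implements (B904), the fix it asks for is explicit exception chaining; a minimal Python illustration:

```python
def parse_port(value):
    try:
        return int(value)
    except ValueError as err:
        # Inside an `except` clause, chain explicitly so the traceback
        # distinguishes this error from errors in exception handling.
        raise RuntimeError(f"invalid port: {value!r}") from err


try:
    parse_port("abc")
except RuntimeError as exc:
    # `from err` records the original exception as the cause.
    assert isinstance(exc.__cause__, ValueError)
```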

@@ -1,6 +1,6 @@
//! Settings for the `flake8-bugbear` plugin.
use ruff_macros::ConfigurationOptions;
use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -26,7 +26,7 @@ pub struct Options {
pub extend_immutable_calls: Option<Vec<String>>,
}
#[derive(Debug, Default, Hash)]
#[derive(Debug, Default, CacheKey)]
pub struct Settings {
pub extend_immutable_calls: Vec<String>,
}


@@ -23,7 +23,7 @@ define_violation!(
///
/// ## Options
///
/// * `flake8-builtins.builtins-ignorelist`
/// - `flake8-builtins.builtins-ignorelist`
///
/// ## Example
/// ```python
@@ -45,7 +45,7 @@ define_violation!(
/// return result
/// ```
///
/// * [Why is it a bad idea to name a variable `id` in Python?_](https://stackoverflow.com/questions/77552/id-is-a-bad-variable-name-in-python)
/// - [_Why is it a bad idea to name a variable `id` in Python?_](https://stackoverflow.com/questions/77552/id-is-a-bad-variable-name-in-python)
pub struct BuiltinVariableShadowing {
pub name: String,
}
@@ -73,7 +73,7 @@ define_violation!(
///
/// ## Options
///
/// * `flake8-builtins.builtins-ignorelist`
/// - `flake8-builtins.builtins-ignorelist`
///
/// ## Example
/// ```python
@@ -128,7 +128,7 @@ define_violation!(
///
/// ## Options
///
/// * `flake8-builtins.builtins-ignorelist`
/// - `flake8-builtins.builtins-ignorelist`
///
/// ## Example
/// ```python


@@ -1,6 +1,6 @@
//! Settings for the `flake8-builtins` plugin.
use ruff_macros::ConfigurationOptions;
use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -22,7 +22,7 @@ pub struct Options {
pub builtins_ignorelist: Option<Vec<String>>,
}
#[derive(Debug, Default, Hash)]
#[derive(Debug, Default, CacheKey)]
pub struct Settings {
pub builtins_ignorelist: Vec<String>,
}


@@ -32,19 +32,19 @@ define_violation!(
///
/// This rule applies to a variety of functions, including `list`, `reversed`,
/// `set`, `sorted`, and `tuple`. For example:
/// * Instead of `list(list(iterable))`, use `list(iterable)`.
/// * Instead of `list(tuple(iterable))`, use `list(iterable)`.
/// * Instead of `tuple(list(iterable))`, use `tuple(iterable)`.
/// * Instead of `tuple(tuple(iterable))`, use `tuple(iterable)`.
/// * Instead of `set(set(iterable))`, use `set(iterable)`.
/// * Instead of `set(list(iterable))`, use `set(iterable)`.
/// * Instead of `set(tuple(iterable))`, use `set(iterable)`.
/// * Instead of `set(sorted(iterable))`, use `set(iterable)`.
/// * Instead of `set(reversed(iterable))`, use `set(iterable)`.
/// * Instead of `sorted(list(iterable))`, use `sorted(iterable)`.
/// * Instead of `sorted(tuple(iterable))`, use `sorted(iterable)`.
/// * Instead of `sorted(sorted(iterable))`, use `sorted(iterable)`.
/// * Instead of `sorted(reversed(iterable))`, use `sorted(iterable)`.
/// - Instead of `list(list(iterable))`, use `list(iterable)`.
/// - Instead of `list(tuple(iterable))`, use `list(iterable)`.
/// - Instead of `tuple(list(iterable))`, use `tuple(iterable)`.
/// - Instead of `tuple(tuple(iterable))`, use `tuple(iterable)`.
/// - Instead of `set(set(iterable))`, use `set(iterable)`.
/// - Instead of `set(list(iterable))`, use `set(iterable)`.
/// - Instead of `set(tuple(iterable))`, use `set(iterable)`.
/// - Instead of `set(sorted(iterable))`, use `set(iterable)`.
/// - Instead of `set(reversed(iterable))`, use `set(iterable)`.
/// - Instead of `sorted(list(iterable))`, use `sorted(iterable)`.
/// - Instead of `sorted(tuple(iterable))`, use `sorted(iterable)`.
/// - Instead of `sorted(sorted(iterable))`, use `sorted(iterable)`.
/// - Instead of `sorted(reversed(iterable))`, use `sorted(iterable)`.
pub struct UnnecessaryDoubleCastOrProcess {
pub inner: String,
pub outer: String,

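The equivalences listed in the docstring are easy to check directly; a few of them in Python:

```python
iterable = [3, 1, 2, 3]

# The outer call already normalizes the result, so the inner call is redundant.
assert list(list(iterable)) == list(iterable)
assert tuple(tuple(iterable)) == tuple(iterable)
assert set(list(iterable)) == set(iterable)
assert sorted(reversed(iterable)) == sorted(iterable)
```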

@@ -32,11 +32,11 @@ define_violation!(
///
/// This rule also applies to `map` calls within `list`, `set`, and `dict`
/// calls. For example:
/// * Instead of `list(map(lambda num: num * 2, nums))`, use
/// - Instead of `list(map(lambda num: num * 2, nums))`, use
/// `[num * 2 for num in nums]`.
/// * Instead of `set(map(lambda num: num % 2 == 0, nums))`, use
/// - Instead of `set(map(lambda num: num % 2 == 0, nums))`, use
/// `{num % 2 == 0 for num in nums}`.
/// * Instead of `dict(map(lambda v: (v, v ** 2), values))`, use
/// - Instead of `dict(map(lambda v: (v, v ** 2), values))`, use
/// `{v: v ** 2 for v in values}`.
pub struct UnnecessaryMap {
pub obj_type: String,

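The `map`-to-comprehension rewrites listed above preserve results exactly; for example:

```python
nums = [1, 2, 3, 4]
values = [1, 2, 3]

# Each comprehension produces the same collection as the wrapped `map` call.
assert list(map(lambda num: num * 2, nums)) == [num * 2 for num in nums]
assert set(map(lambda num: num % 2 == 0, nums)) == {num % 2 == 0 for num in nums}
assert dict(map(lambda v: (v, v ** 2), values)) == {v: v ** 2 for v in values}
```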

@@ -1,6 +1,6 @@
//! Settings for the `flake8-comprehensions` plugin.
use ruff_macros::ConfigurationOptions;
use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -22,7 +22,7 @@ pub struct Options {
pub allow_dict_calls_with_keyword_arguments: Option<bool>,
}
#[derive(Debug, Default, Hash)]
#[derive(Debug, Default, CacheKey)]
pub struct Settings {
pub allow_dict_calls_with_keyword_arguments: bool,
}


@@ -20,6 +20,7 @@ define_violation!(
/// ```python
/// from django.forms import ModelForm
///
///
/// class PostForm(ModelForm):
/// class Meta:
/// model = Post
@@ -30,6 +31,7 @@ define_violation!(
/// ```python
/// from django.forms import ModelForm
///
///
/// class PostForm(ModelForm):
/// class Meta:
/// model = Post


@@ -26,19 +26,21 @@ define_violation!(
/// ```python
/// from django.db import models
///
///
/// class MyModel(models.Model):
/// field = models.CharField(max_length=255)
/// field = models.CharField(max_length=255)
/// ```
///
/// Use instead:
/// ```python
/// from django.db import models
///
/// class MyModel(models.Model):
/// field = models.CharField(max_length=255)
///
/// def __str__(self):
/// return f"{self.field}"
/// class MyModel(models.Model):
/// field = models.CharField(max_length=255)
///
/// def __str__(self):
/// return f"{self.field}"
/// ```
pub struct ModelWithoutDunderStr;
);


@@ -22,10 +22,11 @@ define_violation!(
/// from django.dispatch import receiver
/// from django.db.models.signals import post_save
///
///
/// @transaction.atomic
/// @receiver(post_save, sender=MyModel)
/// def my_handler(sender, instance, created, **kwargs):
/// pass
/// pass
/// ```
///
/// Use instead:
@@ -33,6 +34,7 @@ define_violation!(
/// from django.dispatch import receiver
/// from django.db.models.signals import post_save
///
///
/// @receiver(post_save, sender=MyModel)
/// @transaction.atomic
/// def my_handler(sender, instance, created, **kwargs):


@@ -28,14 +28,16 @@ define_violation!(
/// ```python
/// from django.db import models
///
///
/// class MyModel(models.Model):
/// field = models.CharField(max_length=255, null=True)
/// field = models.CharField(max_length=255, null=True)
/// ```
///
/// Use instead:
/// ```python
/// from django.db import models
///
///
/// class MyModel(models.Model):
/// field = models.CharField(max_length=255, default="")
/// ```


@@ -24,7 +24,7 @@ define_violation!(
/// ```
///
/// Python will produce a traceback like:
/// ```python
/// ```console
/// Traceback (most recent call last):
/// File "tmp.py", line 2, in <module>
/// raise RuntimeError("Some value is incorrect")
@@ -38,7 +38,7 @@ define_violation!(
/// ```
///
/// Which will produce a traceback like:
/// ```python
/// ```console
/// Traceback (most recent call last):
/// File "tmp.py", line 3, in <module>
/// raise RuntimeError(msg)
@@ -72,7 +72,7 @@ define_violation!(
/// ```
///
/// Python will produce a traceback like:
/// ```python
/// ```console
/// Traceback (most recent call last):
/// File "tmp.py", line 2, in <module>
/// raise RuntimeError(f"{sub!r} is incorrect")
@@ -87,8 +87,7 @@ define_violation!(
/// ```
///
/// Which will produce a traceback like:
/// ```python
/// Traceback (most recent call last):
/// ```console
/// File "tmp.py", line 3, in <module>
/// raise RuntimeError(msg)
/// RuntimeError: 'Some value' is incorrect
@@ -122,7 +121,7 @@ define_violation!(
/// ```
///
/// Python will produce a traceback like:
/// ```python
/// ```console
/// Traceback (most recent call last):
/// File "tmp.py", line 2, in <module>
/// raise RuntimeError("'{}' is incorrect".format(sub))
@@ -137,7 +136,7 @@ define_violation!(
/// ```
///
/// Which will produce a traceback like:
/// ```python
/// ```console
/// Traceback (most recent call last):
/// File "tmp.py", line 3, in <module>
/// raise RuntimeError(msg)


@@ -1,6 +1,6 @@
//! Settings for the `flake8-errmsg` plugin.
use ruff_macros::ConfigurationOptions;
use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -18,7 +18,7 @@ pub struct Options {
pub max_string_length: Option<usize>,
}
#[derive(Debug, Default, Hash)]
#[derive(Debug, Default, CacheKey)]
pub struct Settings {
pub max_string_length: usize,
}


@@ -56,7 +56,7 @@ define_violation!(
/// to `false`.
///
/// ## Options
/// * `flake8-implicit-str-concat.allow-multiline`
/// - `flake8-implicit-str-concat.allow-multiline`
///
/// ## Example
/// ```python
@@ -73,7 +73,7 @@ define_violation!(
/// ```
///
/// ## References
/// * [PEP 8](https://peps.python.org/pep-0008/#maximum-line-length)
/// - [PEP 8](https://peps.python.org/pep-0008/#maximum-line-length)
pub struct MultiLineImplicitStringConcatenation;
);
impl Violation for MultiLineImplicitStringConcatenation {


@@ -1,6 +1,6 @@
//! Settings for the `flake8-implicit-str-concat` plugin.
use ruff_macros::ConfigurationOptions;
use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -32,7 +32,7 @@ pub struct Options {
pub allow_multiline: Option<bool>,
}
#[derive(Debug, Hash)]
#[derive(Debug, CacheKey)]
pub struct Settings {
pub allow_multiline: bool,
}


@@ -1,14 +1,10 @@
//! Settings for import conventions.
use std::hash::Hash;
use ruff_macros::ConfigurationOptions;
use ruff_macros::{CacheKey, ConfigurationOptions};
use rustc_hash::FxHashMap;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use crate::settings::hashable::HashableHashMap;
const CONVENTIONAL_ALIASES: &[(&str, &str)] = &[
("altair", "alt"),
("matplotlib", "mpl"),
@@ -64,9 +60,9 @@ pub struct Options {
pub extend_aliases: Option<FxHashMap<String, String>>,
}
#[derive(Debug, Hash)]
#[derive(Debug, CacheKey)]
pub struct Settings {
pub aliases: HashableHashMap<String, String>,
pub aliases: FxHashMap<String, String>,
}
fn default_aliases() -> FxHashMap<String, String> {
@@ -90,7 +86,7 @@ fn resolve_aliases(options: Options) -> FxHashMap<String, String> {
impl Default for Settings {
fn default() -> Self {
Self {
aliases: default_aliases().into(),
aliases: default_aliases(),
}
}
}
@@ -98,7 +94,7 @@ impl Default for Settings {
impl From<Options> for Settings {
fn from(options: Options) -> Self {
Self {
aliases: resolve_aliases(options).into(),
aliases: resolve_aliases(options),
}
}
}
@@ -106,7 +102,7 @@ impl From<Options> for Settings {
impl From<Settings> for Options {
fn from(settings: Settings) -> Self {
Self {
aliases: Some(settings.aliases.into()),
aliases: Some(settings.aliases),
extend_aliases: None,
}
}


@@ -25,7 +25,7 @@ define_violation!(
/// the absence of the `__init__.py` file is probably an oversight.
///
/// ## Options
/// * `namespace-packages`
/// - `namespace-packages`
pub struct ImplicitNamespacePackage {
pub filename: String,
}


@@ -72,7 +72,7 @@ define_violation!(
/// For example, compare the performance of `all` with a list comprehension against that
/// of a generator (~40x faster here):
///
/// ```python
/// ```console
/// In [1]: %timeit all([i for i in range(1000)])
/// 8.14 µs ± 25.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
///


@@ -15,6 +15,8 @@ mod tests {
#[test_case(Rule::PrefixTypeParams, Path::new("PYI001.pyi"))]
#[test_case(Rule::PrefixTypeParams, Path::new("PYI001.py"))]
#[test_case(Rule::BadVersionInfoComparison, Path::new("PYI006.pyi"))]
#[test_case(Rule::BadVersionInfoComparison, Path::new("PYI006.py"))]
#[test_case(Rule::UnrecognizedPlatformCheck, Path::new("PYI007.pyi"))]
#[test_case(Rule::UnrecognizedPlatformCheck, Path::new("PYI007.py"))]
#[test_case(Rule::UnrecognizedPlatformName, Path::new("PYI008.pyi"))]
@@ -29,6 +31,8 @@ mod tests {
#[test_case(Rule::ArgumentSimpleDefaults, Path::new("PYI014.pyi"))]
#[test_case(Rule::DocstringInStub, Path::new("PYI021.py"))]
#[test_case(Rule::DocstringInStub, Path::new("PYI021.pyi"))]
#[test_case(Rule::TypeCommentInStub, Path::new("PYI033.py"))]
#[test_case(Rule::TypeCommentInStub, Path::new("PYI033.pyi"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(


@@ -0,0 +1,83 @@
use rustpython_parser::ast::{Cmpop, Expr};
use ruff_macros::{define_violation, derive_message_formats};
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
use crate::violation::Violation;
use crate::Range;
define_violation!(
/// ## What it does
/// Checks for usages of comparators other than `<` and `>=` for
/// `sys.version_info` checks in `.pyi` files. All other comparators, such
/// as `>`, `<=`, and `==`, are banned.
///
/// ## Why is this bad?
/// Comparing `sys.version_info` with `==` or `<=` has unexpected behavior
/// and can lead to bugs.
///
/// For example, `sys.version_info > (3, 8)` will also match `3.8.10`,
/// while `sys.version_info <= (3, 8)` will _not_ match `3.8.10`:
///
/// ```python
/// >>> import sys
/// >>> print(sys.version_info)
/// sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
/// >>> print(sys.version_info > (3, 8))
/// True
/// >>> print(sys.version_info == (3, 8))
/// False
/// >>> print(sys.version_info <= (3, 8))
/// False
/// >>> print(sys.version_info in (3, 8))
/// False
/// ```
///
/// ## Example
/// ```python
/// import sys
///
/// if sys.version_info > (3, 8):
/// ...
/// ```
///
/// Use instead:
/// ```python
/// import sys
///
/// if sys.version_info >= (3, 9):
/// ...
/// ```
pub struct BadVersionInfoComparison;
);
impl Violation for BadVersionInfoComparison {
#[derive_message_formats]
fn message(&self) -> String {
format!("Use `<` or `>=` for version info comparisons")
}
}
/// PYI006
pub fn bad_version_info_comparison(
checker: &mut Checker,
expr: &Expr,
left: &Expr,
ops: &[Cmpop],
comparators: &[Expr],
) {
let ([op], [_right]) = (ops, comparators) else {
return;
};
if !checker.resolve_call_path(left).map_or(false, |call_path| {
call_path.as_slice() == ["sys", "version_info"]
}) {
return;
}
if !matches!(op, Cmpop::Lt | Cmpop::GtE) {
let diagnostic = Diagnostic::new(BadVersionInfoComparison, Range::from_located(expr));
checker.diagnostics.push(diagnostic);
}
}

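The behavior the rule guards against follows from Python's elementwise tuple comparison: a longer tuple with a matching prefix compares strictly greater than the prefix. A quick check with the `3.8.10` example from the docstring:

```python
# Python 3.8.10 reports a version_info beginning (3, 8, 10, ...).
v = (3, 8, 10)

assert v > (3, 8)         # True: 3.8.10 unexpectedly passes a "> 3.8" gate
assert not (v <= (3, 8))  # ...and fails a "<= 3.8" gate
assert v >= (3, 8)        # `>=` (and `<`) treat prefixes as intended
assert v < (3, 9)
```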

@@ -1,3 +1,4 @@
pub use bad_version_info_comparison::{bad_version_info_comparison, BadVersionInfoComparison};
pub use docstring_in_stubs::{docstring_in_stubs, DocstringInStub};
pub use non_empty_stub_body::{non_empty_stub_body, NonEmptyStubBody};
pub use pass_statement_stub_body::{pass_statement_stub_body, PassStatementStubBody};
@@ -6,14 +7,16 @@ pub use simple_defaults::{
argument_simple_defaults, typed_argument_simple_defaults, ArgumentSimpleDefaults,
TypedArgumentSimpleDefaults,
};
pub use type_comment_in_stub::{type_comment_in_stub, TypeCommentInStub};
pub use unrecognized_platform::{
unrecognized_platform, UnrecognizedPlatformCheck, UnrecognizedPlatformName,
};
mod bad_version_info_comparison;
mod docstring_in_stubs;
mod non_empty_stub_body;
mod pass_statement_stub_body;
mod prefix_type_params;
mod unrecognized_platform;
mod simple_defaults;
mod type_comment_in_stub;
mod unrecognized_platform;


@@ -0,0 +1,64 @@
use once_cell::sync::Lazy;
use regex::Regex;
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use ruff_macros::{define_violation, derive_message_formats};
use crate::registry::Diagnostic;
use crate::violation::Violation;
use crate::Range;
define_violation!(
/// ## What it does
/// Checks for the use of type comments (e.g., `x = 1 # type: int`) in stub
/// files.
///
/// ## Why is this bad?
/// Stub (`.pyi`) files should use type annotations directly, rather
/// than type comments, even if they're intended to support Python 2, since
/// stub files are not executed at runtime. The one exception is `# type: ignore`.
///
/// ## Example
/// ```python
/// x = 1 # type: int
/// ```
///
/// Use instead:
/// ```python
/// x: int = 1
/// ```
pub struct TypeCommentInStub;
);
impl Violation for TypeCommentInStub {
#[derive_message_formats]
fn message(&self) -> String {
format!("Don't use type comments in stub file")
}
}
static TYPE_COMMENT_REGEX: Lazy<Regex> =
Lazy::new(|| Regex::new(r"^#\s*type:\s*([^#]+)(\s*#.*?)?$").unwrap());
static TYPE_IGNORE_REGEX: Lazy<Regex> =
Lazy::new(|| Regex::new(r"^#\s*type:\s*ignore([^#]+)?(\s*#.*?)?$").unwrap());
/// PYI033
pub fn type_comment_in_stub(tokens: &[LexResult]) -> Vec<Diagnostic> {
let mut diagnostics = vec![];
for token in tokens.iter().flatten() {
if let (location, Tok::Comment(comment), end_location) = token {
if TYPE_COMMENT_REGEX.is_match(comment) && !TYPE_IGNORE_REGEX.is_match(comment) {
diagnostics.push(Diagnostic::new(
TypeCommentInStub,
Range {
location: *location,
end_location: *end_location,
},
));
}
}
}
diagnostics
}
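The two regexes above carry the whole rule: a comment is flagged when it matches `TYPE_COMMENT_REGEX` but not `TYPE_IGNORE_REGEX`, so `# type: ignore` (including forms like `# type: ignore[assignment]`) is exempt. A minimal Python translation of that logic, for illustration only (this is not ruff's code):

```python
import re

# Python equivalents of the two Rust regexes defined above.
TYPE_COMMENT = re.compile(r"^#\s*type:\s*([^#]+)(\s*#.*?)?$")
TYPE_IGNORE = re.compile(r"^#\s*type:\s*ignore([^#]+)?(\s*#.*?)?$")

def is_flagged(comment: str) -> bool:
    """Mirror PYI033: flag type comments, but never `# type: ignore`."""
    return bool(TYPE_COMMENT.match(comment)) and not TYPE_IGNORE.match(comment)

print(is_flagged("# type: int"))                 # True
print(is_flagged("# type: ignore"))              # False
print(is_flagged("# type: ignore[assignment]"))  # False
print(is_flagged("# plain comment"))             # False
```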


@@ -22,21 +22,25 @@ define_violation!(
/// ## Example
/// ```python
/// if sys.platform.startswith("linux"):
-/// # Linux specific definitions
+/// # Linux specific definitions
/// ...
/// else:
-/// # Posix specific definitions
+/// # Posix specific definitions
/// ...
/// ```
///
/// Instead, use a simple string comparison, such as `==` or `!=`:
/// ```python
/// if sys.platform == "linux":
/// # Linux specific definitions
/// ...
/// else:
/// # Posix specific definitions
/// ...
/// ```
///
/// ## References
-/// * [PEP 484](https://peps.python.org/pep-0484/#version-and-platform-checking)
+/// - [PEP 484](https://peps.python.org/pep-0484/#version-and-platform-checking)
pub struct UnrecognizedPlatformCheck;
);
impl Violation for UnrecognizedPlatformCheck {
@@ -68,11 +72,11 @@ define_violation!(
/// Use instead:
/// ```python
/// if sys.platform == "linux":
-/// ...
+/// ...
/// ```
///
/// ## References
-/// * [PEP 484](https://peps.python.org/pep-0484/#version-and-platform-checking)
+/// - [PEP 484](https://peps.python.org/pep-0484/#version-and-platform-checking)
pub struct UnrecognizedPlatformName {
pub platform: String,
}
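For background on why `startswith` counts as "unrecognized": PEP 484 only requires type checkers to understand literal `==`/`!=` comparisons against `sys.platform`, so anything else silently degrades platform-specific narrowing in stubs. A small illustrative snippet of the accepted form (`platform_kind` is just an example name):

```python
import sys

# Type checkers special-case this exact comparison shape (PEP 484);
# sys.platform.startswith("linux") would not be recognized.
if sys.platform == "linux":
    platform_kind = "linux"
else:
    platform_kind = "other"

print(platform_kind in {"linux", "other"})  # True
```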


@@ -0,0 +1,6 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
expression: diagnostics
---
[]


@@ -0,0 +1,65 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
expression: diagnostics
---
- kind:
BadVersionInfoComparison: ~
location:
row: 8
column: 3
end_location:
row: 8
column: 29
fix: ~
parent: ~
- kind:
BadVersionInfoComparison: ~
location:
row: 10
column: 3
end_location:
row: 10
column: 29
fix: ~
parent: ~
- kind:
BadVersionInfoComparison: ~
location:
row: 12
column: 3
end_location:
row: 12
column: 30
fix: ~
parent: ~
- kind:
BadVersionInfoComparison: ~
location:
row: 14
column: 3
end_location:
row: 14
column: 30
fix: ~
parent: ~
- kind:
BadVersionInfoComparison: ~
location:
row: 16
column: 3
end_location:
row: 16
column: 29
fix: ~
parent: ~
- kind:
BadVersionInfoComparison: ~
location:
row: 18
column: 3
end_location:
row: 18
column: 27
fix: ~
parent: ~


@@ -0,0 +1,6 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
expression: diagnostics
---
[]


@@ -0,0 +1,115 @@
---
source: crates/ruff/src/rules/flake8_pyi/mod.rs
expression: diagnostics
---
- kind:
TypeCommentInStub: ~
location:
row: 6
column: 21
end_location:
row: 6
column: 127
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 7
column: 21
end_location:
row: 7
column: 183
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 8
column: 21
end_location:
row: 8
column: 126
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 9
column: 21
end_location:
row: 9
column: 132
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 10
column: 19
end_location:
row: 10
column: 128
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 11
column: 19
end_location:
row: 11
column: 123
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 14
column: 11
end_location:
row: 14
column: 128
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 15
column: 10
end_location:
row: 15
column: 172
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 19
column: 28
end_location:
row: 19
column: 139
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 29
column: 21
end_location:
row: 29
column: 44
fix: ~
parent: ~
- kind:
TypeCommentInStub: ~
location:
row: 32
column: 25
end_location:
row: 32
column: 55
fix: ~
parent: ~


@@ -39,6 +39,7 @@ define_violation!(
/// def test_foo():
/// assert something and something_else
///
///
/// def test_bar():
/// assert not (something or something_else)
/// ```
@@ -49,6 +50,7 @@ define_violation!(
/// assert something
/// assert something_else
///
///
/// def test_bar():
/// assert not something
/// assert not something_else


@@ -1,6 +1,6 @@
//! Settings for the `flake8-pytest-style` plugin.
-use ruff_macros::ConfigurationOptions;
+use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -48,11 +48,11 @@ pub struct Options {
/// Expected type for multiple argument names in `@pytest.mark.parametrize`.
/// The following values are supported:
///
-/// * `csv` — a comma-separated list, e.g.
+/// - `csv` — a comma-separated list, e.g.
/// `@pytest.mark.parametrize('name1,name2', ...)`
-/// * `tuple` (default) — e.g. `@pytest.mark.parametrize(('name1', 'name2'),
-/// ...)`
-/// * `list` — e.g. `@pytest.mark.parametrize(['name1', 'name2'], ...)`
+/// - `tuple` (default) — e.g.
+/// `@pytest.mark.parametrize(('name1', 'name2'), ...)`
+/// - `list` — e.g. `@pytest.mark.parametrize(['name1', 'name2'], ...)`
pub parametrize_names_type: Option<types::ParametrizeNameType>,
#[option(
default = "list",
@@ -62,8 +62,8 @@ pub struct Options {
/// Expected type for the list of values rows in `@pytest.mark.parametrize`.
/// The following values are supported:
///
-/// * `tuple` — e.g. `@pytest.mark.parametrize('name', (1, 2, 3))`
-/// * `list` (default) — e.g. `@pytest.mark.parametrize('name', [1, 2, 3])`
+/// - `tuple` — e.g. `@pytest.mark.parametrize('name', (1, 2, 3))`
+/// - `list` (default) — e.g. `@pytest.mark.parametrize('name', [1, 2, 3])`
pub parametrize_values_type: Option<types::ParametrizeValuesType>,
#[option(
default = "tuple",
@@ -73,10 +73,10 @@ pub struct Options {
/// Expected type for each row of values in `@pytest.mark.parametrize` in
/// case of multiple parameters. The following values are supported:
///
-/// * `tuple` (default) — e.g. `@pytest.mark.parametrize(('name1', 'name2'),
-/// [(1, 2), (3, 4)])`
-/// * `list` — e.g. `@pytest.mark.parametrize(('name1', 'name2'), [[1, 2],
-/// [3, 4]])`
+/// - `tuple` (default) — e.g.
+/// `@pytest.mark.parametrize(('name1', 'name2'), [(1, 2), (3, 4)])`
+/// - `list` — e.g.
+/// `@pytest.mark.parametrize(('name1', 'name2'), [[1, 2], [3, 4]])`
pub parametrize_values_row_type: Option<types::ParametrizeValuesRowType>,
#[option(
default = r#"["BaseException", "Exception", "ValueError", "OSError", "IOError", "EnvironmentError", "socket.error"]"#,
@@ -113,7 +113,7 @@ pub struct Options {
pub mark_parentheses: Option<bool>,
}
-#[derive(Debug, Hash)]
+#[derive(Debug, CacheKey)]
pub struct Settings {
pub fixture_parentheses: bool,
pub parametrize_names_type: types::ParametrizeNameType,


@@ -1,9 +1,10 @@
use std::fmt::{Display, Formatter};
+use ruff_macros::CacheKey;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
-#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
+#[derive(Clone, Copy, Debug, CacheKey, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
pub enum ParametrizeNameType {
#[serde(rename = "csv")]
Csv,
@@ -29,7 +30,7 @@ impl Display for ParametrizeNameType {
}
}
-#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
+#[derive(Clone, Copy, Debug, CacheKey, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
pub enum ParametrizeValuesType {
#[serde(rename = "tuple")]
Tuple,
@@ -52,7 +53,7 @@ impl Display for ParametrizeValuesType {
}
}
-#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
+#[derive(Clone, Copy, Debug, CacheKey, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
pub enum ParametrizeValuesRowType {
#[serde(rename = "tuple")]
Tuple,


@@ -22,7 +22,7 @@ define_violation!(
/// strings, but be consistent.
///
/// ## Options
-/// * `flake8-quotes.inline-quotes`
+/// - `flake8-quotes.inline-quotes`
///
/// ## Example
/// ```python
@@ -67,7 +67,7 @@ define_violation!(
/// strings, but be consistent.
///
/// ## Options
-/// * `flake8-quotes.multiline-quotes`
+/// - `flake8-quotes.multiline-quotes`
///
/// ## Example
/// ```python
@@ -115,7 +115,7 @@ define_violation!(
/// strings, but be consistent.
///
/// ## Options
-/// * `flake8-quotes.docstring-quotes`
+/// - `flake8-quotes.docstring-quotes`
///
/// ## Example
/// ```python


@@ -1,10 +1,10 @@
//! Settings for the `flake8-quotes` plugin.
-use ruff_macros::ConfigurationOptions;
+use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
-#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Hash, JsonSchema)]
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, CacheKey, JsonSchema)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
pub enum Quote {
/// Use single quotes.
@@ -71,7 +71,7 @@ pub struct Options {
pub avoid_escape: Option<bool>,
}
-#[derive(Debug, Hash)]
+#[derive(Debug, CacheKey)]
pub struct Settings {
pub inline_quotes: Quote,
pub multiline_quotes: Quote,


@@ -3,7 +3,7 @@ use rustpython_parser::ast::{Constant, Expr, ExprKind, Location, Stmt, StmtKind}
use ruff_macros::{define_violation, derive_message_formats};
-use crate::ast::helpers::elif_else_range;
+use crate::ast::helpers::{elif_else_range, end_of_statement};
use crate::ast::types::Range;
use crate::ast::visitor::Visitor;
use crate::ast::whitespace::indentation;
@@ -216,7 +216,12 @@ fn implicit_return(checker: &mut Checker, stmt: &Stmt) {
content.push_str(checker.stylist.line_ending().as_str());
content.push_str(indent);
content.push_str("return None");
-diagnostic.amend(Fix::insertion(content, stmt.end_location.unwrap()));
+// This is the last statement in the function. So it has to be followed by
+// a newline, or comments, or nothing.
+diagnostic.amend(Fix::insertion(
+content,
+end_of_statement(stmt, checker.locator),
+));
}
}
checker.diagnostics.push(diagnostic);
@@ -251,7 +256,10 @@ fn implicit_return(checker: &mut Checker, stmt: &Stmt) {
content.push_str(checker.stylist.line_ending().as_str());
content.push_str(indent);
content.push_str("return None");
-diagnostic.amend(Fix::insertion(content, stmt.end_location.unwrap()));
+diagnostic.amend(Fix::insertion(
+content,
+end_of_statement(stmt, checker.locator),
+));
}
}
checker.diagnostics.push(diagnostic);
@@ -287,7 +295,10 @@ fn implicit_return(checker: &mut Checker, stmt: &Stmt) {
content.push_str(checker.stylist.line_ending().as_str());
content.push_str(indent);
content.push_str("return None");
-diagnostic.amend(Fix::insertion(content, stmt.end_location.unwrap()));
+diagnostic.amend(Fix::insertion(
+content,
+end_of_statement(stmt, checker.locator),
+));
}
}
checker.diagnostics.push(diagnostic);
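All three call sites above switch the fix's insertion point from `stmt.end_location` to `end_of_statement`, so the generated `return None` lands after any trailing comment instead of splitting the comment off its statement. A rough Python illustration of the difference, using a hypothetical source string (this is not ruff's implementation, just string surgery to show the two insertion points):

```python
# A function whose last statement carries a trailing comment (RET503 case).
src = "def f(x):\n    if x:\n        return 1\n    x += 1  # note\n"

# Old behavior: insert at the statement's end location, i.e. before "  # note",
# which detaches the comment from `x += 1`.
old_fix = src.replace("x += 1", "x += 1\n    return None")

# New behavior: insert at the end of the whole statement, comment included.
new_fix = src + "    return None\n"

compile(new_fix, "<example>", "exec")  # the fixed source is valid Python
print("return None  # note" in old_fix, "# note\n    return None" in new_fix)  # True True
```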


@@ -31,10 +31,10 @@ expression: diagnostics
content: "\n return None"
location:
row: 27
-column: 15
+column: 24
end_location:
row: 27
-column: 15
+column: 24
parent: ~
- kind:
ImplicitReturn: ~
@@ -48,10 +48,10 @@ expression: diagnostics
content: "\n return None"
location:
row: 36
-column: 11
+column: 20
end_location:
row: 36
-column: 11
+column: 20
parent: ~
- kind:
ImplicitReturn: ~
@@ -82,10 +82,10 @@ expression: diagnostics
content: "\n return None"
location:
row: 52
-column: 15
+column: 24
end_location:
row: 52
-column: 15
+column: 24
parent: ~
- kind:
ImplicitReturn: ~
@@ -99,10 +99,10 @@ expression: diagnostics
content: "\n return None"
location:
row: 59
-column: 22
+column: 31
end_location:
row: 59
-column: 22
+column: 31
parent: ~
- kind:
ImplicitReturn: ~
@@ -116,10 +116,10 @@ expression: diagnostics
content: "\n return None"
location:
row: 66
-column: 21
+column: 30
end_location:
row: 66
-column: 21
+column: 30
parent: ~
- kind:
ImplicitReturn: ~
@@ -235,9 +235,94 @@ expression: diagnostics
content: "\n return None"
location:
row: 291
-column: 19
+column: 28
end_location:
row: 291
-column: 19
+column: 28
parent: ~
- kind:
ImplicitReturn: ~
location:
row: 300
column: 8
end_location:
row: 301
column: 21
fix:
content: "\n return None"
location:
row: 301
column: 21
end_location:
row: 301
column: 21
parent: ~
- kind:
ImplicitReturn: ~
location:
row: 305
column: 8
end_location:
row: 306
column: 21
fix:
content: "\n return None"
location:
row: 306
column: 21
end_location:
row: 306
column: 21
parent: ~
- kind:
ImplicitReturn: ~
location:
row: 310
column: 8
end_location:
row: 311
column: 21
fix:
content: "\n return None"
location:
row: 311
column: 37
end_location:
row: 311
column: 37
parent: ~
- kind:
ImplicitReturn: ~
location:
row: 315
column: 8
end_location:
row: 316
column: 21
fix:
content: "\n return None"
location:
row: 316
column: 24
end_location:
row: 316
column: 24
parent: ~
- kind:
ImplicitReturn: ~
location:
row: 320
column: 8
end_location:
row: 321
column: 21
fix:
content: "\n return None"
location:
row: 322
column: 33
end_location:
row: 322
column: 33
parent: ~


@@ -23,7 +23,7 @@ define_violation!(
/// behavior. Instead, use the class's public interface.
///
/// ## Options
-/// * `flake8-self.ignore-names`
+/// - `flake8-self.ignore-names`
///
/// ## Example
/// ```python
@@ -31,6 +31,7 @@ define_violation!(
/// def __init__(self):
/// self._private_member = "..."
///
///
/// var = Class()
/// print(var._private_member)
/// ```
@@ -41,12 +42,13 @@ define_violation!(
/// def __init__(self):
/// self.public_member = "..."
///
///
/// var = Class()
/// print(var.public_member)
/// ```
///
/// ## References
-/// * [_What is the meaning of single or double underscores before an object name?_](https://stackoverflow.com/questions/1301346/what-is-the-meaning-of-single-and-double-underscore-before-an-object-name)
+/// - [_What is the meaning of single or double underscores before an object name?_](https://stackoverflow.com/questions/1301346/what-is-the-meaning-of-single-and-double-underscore-before-an-object-name)
pub struct PrivateMemberAccess {
pub access: String,
}


@@ -1,6 +1,6 @@
//! Settings for the `flake8-self` plugin.
-use ruff_macros::ConfigurationOptions;
+use ruff_macros::{CacheKey, ConfigurationOptions};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -28,7 +28,7 @@ pub struct Options {
pub ignore_names: Option<Vec<String>>,
}
-#[derive(Debug, Hash)]
+#[derive(Debug, CacheKey)]
pub struct Settings {
pub ignore_names: Vec<String>,
}


@@ -38,8 +38,7 @@ define_violation!(
/// ```
///
/// ## References
-/// * [Python: "isinstance"](https://docs.python.org/3/library/functions.html#isinstance)
-/// ```
+/// - [Python: "isinstance"](https://docs.python.org/3/library/functions.html#isinstance)
pub struct DuplicateIsinstanceCall {
pub name: String,
}


@@ -76,20 +76,20 @@ impl Violation for NeedlessBool {
}
define_violation!(
-/// ### What it does
+/// ## What it does
/// Checks for three or more consecutive if-statements with direct returns
///
-/// ### Why is this bad?
+/// ## Why is this bad?
/// These can be simplified by using a dictionary
///
-/// ### Example
+/// ## Example
/// ```python
/// if x == 1:
/// return "Hello"
/// elif x == 2:
/// return "Goodbye"
/// else:
-/// return "Goodnight"
+/// return "Goodnight"
/// ```
///
/// Use instead:
@@ -129,14 +129,14 @@ impl Violation for UseTernaryOperator {
}
define_violation!(
-/// ### What it does
+/// ## What it does
/// Checks for `if` branches with identical arm bodies.
///
-/// ### Why is this bad?
+/// ## Why is this bad?
/// If multiple arms of an `if` statement have the same body, using `or`
/// better signals the intent of the statement.
///
-/// ### Example
+/// ## Example
/// ```python
/// if x == 1:
/// print("Hello")
@@ -592,6 +592,7 @@ pub fn manual_dict_lookup(
test: &Expr,
body: &[Stmt],
orelse: &[Stmt],
+parent: Option<&Stmt>,
) {
// Throughout this rule:
// * Each if-statement's test must consist of a constant equality check with the same variable.
@@ -633,6 +634,36 @@ pub fn manual_dict_lookup(
return;
}
+// It's part of a bigger if-elif block:
+// https://github.com/MartinThoma/flake8-simplify/issues/115
+if let Some(StmtKind::If {
+orelse: parent_orelse,
+..
+}) = parent.map(|parent| &parent.node)
+{
+if parent_orelse.len() == 1 && stmt == &parent_orelse[0] {
+// TODO(charlie): These two cases have the same AST:
+//
+// if True:
+//     pass
+// elif a:
+//     b = 1
+// else:
+//     b = 2
+//
+// if True:
+//     pass
+// else:
+//     if a:
+//         b = 1
+//     else:
+//         b = 2
+//
+// We want to flag the latter, but not the former. Right now, we flag neither.
+return;
+}
+}
let mut constants: FxHashSet<ComparableConstant> = FxHashSet::default();
constants.insert(constant.into());
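The rule being extended here (manual dict lookup) rewrites chains of constant equality checks into a dictionary lookup. Using the docstring's own example values, a runnable sketch of the flagged pattern and its suggested rewrite:

```python
def greeting_if(x: int) -> str:
    # Flagged pattern: consecutive constant equality checks on one variable.
    if x == 1:
        return "Hello"
    elif x == 2:
        return "Goodbye"
    else:
        return "Goodnight"

def greeting_dict(x: int) -> str:
    # Suggested rewrite: a dict lookup with a default for the `else` arm.
    return {1: "Hello", 2: "Goodbye"}.get(x, "Goodnight")

print([greeting_if(i) == greeting_dict(i) for i in (1, 2, 3)])  # [True, True, True]
```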


@@ -38,7 +38,7 @@ define_violation!(
/// ```
///
/// ## References
-/// * [Python: "The with statement"](https://docs.python.org/3/reference/compound_stmts.html#the-with-statement)
+/// - [Python: "The with statement"](https://docs.python.org/3/reference/compound_stmts.html#the-with-statement)
pub struct MultipleWithStatements {
pub fixable: bool,
}


@@ -1,7 +1,9 @@
-use ruff_macros::{define_violation, derive_message_formats};
use rustpython_parser::ast::{Excepthandler, ExcepthandlerKind, Located, Stmt, StmtKind};
+use ruff_macros::{define_violation, derive_message_formats};
use crate::ast::helpers;
+use crate::ast::helpers::compose_call_path;
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::Diagnostic;
@@ -52,9 +54,9 @@ pub fn use_contextlib_suppress(
let ExcepthandlerKind::ExceptHandler { body, .. } = &handler.node;
if body.len() == 1 {
if matches!(body[0].node, StmtKind::Pass) {
-let handler_names: Vec<_> = helpers::extract_handler_names(handlers)
+let handler_names: Vec<String> = helpers::extract_handled_exceptions(handlers)
.into_iter()
-.map(|call_path| helpers::format_call_path(&call_path))
+.filter_map(compose_call_path)
.collect();
let exception = if handler_names.is_empty() {
"Exception".to_string()
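The rule this code serves (SIM105) suggests `contextlib.suppress` for a `try`/`except`/`pass` block; the handler-name extraction above is what names the exception in the suggestion, falling back to `Exception` for a bare `except`. The Python before/after the rule describes:

```python
import contextlib
import os

# Flagged pattern: an except handler whose body is just `pass`.
try:
    os.remove("/tmp/does-not-exist-example")
except FileNotFoundError:
    pass

# Suggested rewrite, built from the handled exception's name:
with contextlib.suppress(FileNotFoundError):
    os.remove("/tmp/does-not-exist-example")

print("ok")  # ok
```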

Some files were not shown because too many files have changed in this diff.