Compare commits

...

57 Commits

Author SHA1 Message Date
Charlie Marsh
cf56955ba6 Bump version to 0.0.227 2023-01-19 23:24:52 -05:00
Charlie Marsh
8a8939afd8 Avoid checking row types for single-name @parametrize decorators (#2013)
Closes #2008.
2023-01-19 22:13:17 -05:00
Martin Fischer
6acf2accc6 Improve --explain output
Previous output for `ruff --explain E711`:

    E711 (pycodestyle): Comparison to `None` should be `cond is None`

New output:

    none-comparison

    Code: E711 (pycodestyle)

    Autofix is always available.

    Message formats:

    * Comparison to `None` should be `cond is None`
    * Comparison to `None` should be `cond is not None`
2023-01-19 22:08:00 -05:00
Charlie Marsh
ec0c7647ab Avoid SIM401 in elif blocks (#2012)
For now, we're just gonna avoid flagging this for `elif` blocks, following the same reasoning as for ternaries. We can handle all of these cases, but we'll knock out the TODOs as a pair, and this avoids broken code.
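For reference, SIM401 targets the key-lookup pattern that can collapse to `dict.get`; a minimal sketch of the equivalence (names here are illustrative, not from the codebase):

```python
a_dict = {"k": 1}

# The pattern SIM401 flags:
if "k" in a_dict:
    var = a_dict["k"]
else:
    var = "default"

# The suggested rewrite:
var2 = a_dict.get("k", "default")

assert var == var2
```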

Closes #2007.
2023-01-19 21:57:18 -05:00
Charlie Marsh
045229630e Upgrade RustPython (#2011)
This lets us revert the "manual" fix introduced in #1944.
2023-01-19 21:49:12 -05:00
Martin Fischer
c600991905 Change AsRef<str> impl for Rule to kebab-case
As we surface rule names more to users we want
them to be easier to type than PascalCase.

Prior art:

Pylint and ESLint also use kebab-case for their rule names.
Clippy uses snake_case but only for syntactical reasons
(so that the argument to e.g. #![allow(clippy::some_lint)]
can be parsed as a path[1]).

[1]: https://doc.rust-lang.org/reference/paths.html
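A rough sketch of the conversion in Python (ruff's actual AsRef<str> impl is generated Rust code; this helper is purely illustrative):

```python
import re

def kebab_case(pascal: str) -> str:
    # Insert a hyphen before each uppercase letter that isn't the first
    # character, then lowercase everything.
    return re.sub(r"(?<!^)(?=[A-Z])", "-", pascal).lower()

assert kebab_case("NoneComparison") == "none-comparison"
```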
2023-01-19 21:37:11 -05:00
Charlie Marsh
f6a93a4c3d Enable autofix for FitsOnOneLine (D200) (#2006)
Closes #1965.
2023-01-19 19:24:50 -05:00
Aarni Koskela
de54ff114e Add RUF005 "unpack instead of concatenating" check (#1957)
This PR adds a new check that turns expressions such as `[1, 2, 3] + foo` into `[1, 2, 3, *foo]`, since the latter is easier to read and faster:

```
~ $ python3.11 -m timeit -s 'b = [6, 5, 4]' '[1, 2, 3] + b'
5000000 loops, best of 5: 81.4 nsec per loop
~ $ python3.11 -m timeit -s 'b = [6, 5, 4]' '[1, 2, 3, *b]'
5000000 loops, best of 5: 66.2 nsec per loop
```

However there's a couple of gotchas:

* This felt like a `simplify` rule, so I borrowed an unused `SIM` code even if the upstream `flake8-simplify` doesn't do this transform. If it should be assigned some other code, let me know 😄 
* **More importantly** this transform could be unsafe if the other operand of the `+` operation has overridden `__add__` to do something else. What's the `ruff` policy around potentially unsafe operations? (I think some of the suggestions other ported rules give could be semantically different from the original code, but I'm not sure.)
* I'm not a very established Rustacean, so there's no doubt my code isn't quite idiomatic. (For instance, is there a neater way to write that four-way `match` statement?)
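The `__add__` concern above can be made concrete; `Chatty` here is a made-up class for demonstration:

```python
foo = [6, 5, 4]

# For plain lists, the rewrite is behavior-preserving:
assert [1, 2, 3] + foo == [1, 2, 3, *foo]

# But a right-hand operand that customizes addition breaks the equivalence:
class Chatty(list):
    def __radd__(self, other):
        return "surprise"

assert [1, 2, 3] + Chatty([4]) == "surprise"    # subclass __radd__ wins
assert [1, 2, 3, *Chatty([4])] == [1, 2, 3, 4]  # unpacking ignores __radd__
```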

Thanks for `ruff`, by the way! :)
2023-01-19 17:38:17 -05:00
Charlie Marsh
64b398c72b Tweak some instructions in CONTRIBUTING.md 2023-01-19 17:17:39 -05:00
Aarni Koskela
c99bd3fa60 Split up pydocstyle rules (#2003)
As per @not-my-profile's [comment](https://github.com/charliermarsh/ruff/pull/1999#discussion_r1081579337):

> we actually want to break up such rules.rs files into smaller files

this breaks up `pydocstyle/rules.rs` into a directory.
2023-01-19 13:17:25 -05:00
Martin Fischer
8ac930f886 Fix that --explain panics
This commit fixes a bug accidentally introduced in
6cf770a692,
which caused every `ruff --explain <code>` invocation to fail with:

    thread 'main' panicked at 'Mismatch between definition and access of `explain`.
    Could not downcast to ruff::registry::Rule, need to downcast to &ruff::registry::Rule',
    ruff_cli/src/cli.rs:184:18

We also add an integration test for --explain to prevent such bugs from
going unnoticed in the future.
2023-01-19 12:58:44 -05:00
Charlie Marsh
ad80fdc2cd Avoid SIM201 and SIM202 errors in __ne__ et al (#2001)
Closes #1986.
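A sketch of the rationale (`Point` is an illustrative class, not from the codebase):

```python
a, b = 1, 2

# SIM201 prefers `a != b` over `not a == b`:
assert (not a == b) == (a != b)

# But inside __ne__, negating __eq__ is exactly the point, so such
# dunder methods are exempt:
class Point:
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return self.x == other.x
    def __ne__(self, other):
        return not self == other

assert Point(1) != Point(2)
assert Point(1) == Point(1)
```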
2023-01-19 11:27:27 -05:00
Aarni Koskela
a0ea8fe22f Apply #[derive(Default)] fixes suggested by Clippy (#2000)
These were bugging me every time I ran `clippy` 😁
2023-01-19 11:04:43 -05:00
Martin Fischer
3c3da8a88c derive-msg-formats 5/5: Remove placeholder implementations
# This commit has been generated via the following Python script:
# (followed by `cargo +nightly fmt` and `cargo dev generate-all`)
# For the reasoning see the previous commit(s).

import re
import sys

for path in (
    'src/violations.rs',
    'src/rules/flake8_tidy_imports/banned_api.rs',
    'src/rules/flake8_tidy_imports/relative_imports.rs',
):
    with open(path) as f:
        text = ''

        while line := next(f, None):

            if line.strip() != 'fn message(&self) -> String {':
                text += line
                continue

            text += '    #[derive_message_formats]\n' + line

            body = next(f)
            while (line := next(f)) != '    }\n':
                body += line

            # body = re.sub(r'(?<!code\| |\.push\()format!', 'format!', body)
            body = re.sub(
                r'("[^"]+")\s*\.to_string\(\)', r'format!(\1)', body, flags=re.DOTALL
            )
            body = re.sub(
                r'(r#".+?"#)\s*\.to_string\(\)', r'format!(\1)', body, flags=re.DOTALL
            )

            text += body + '    }\n'

            while (line := next(f)).strip() != 'fn placeholder() -> Self {':
                text += line
            while (line := next(f)) != '    }\n':
                pass

    with open(path, 'w') as f:
        f.write(text)
2023-01-19 11:03:32 -05:00
Martin Fischer
16e79c8db6 derive-msg-formats 4/5: Implement #[derive_message_formats]
The idea is nice and simple: we replace:

    fn placeholder() -> Self;

with

    fn message_formats() -> &'static [&'static str];

So e.g. if a Violation implementation defines:

    fn message(&self) -> String {
        format!("Local variable `{name}` is assigned to but never used")
    }

it would also have to define:

    fn message_formats() -> &'static [&'static str] {
        &["Local variable `{name}` is assigned to but never used"]
    }

Since we however obviously do not want to duplicate all of our format
strings we simply introduce a new procedural macro attribute
#[derive_message_formats] that can be added to the message method
declaration in order to automatically derive the message_formats
implementation.
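As a loose Python analogy (the real implementation is a Rust procedural macro that extracts the format strings from the method body; this toy decorator takes them explicitly, and `UnusedVariable` is a hypothetical name):

```python
def derive_message_formats(*formats):
    # Attach the raw format strings as metadata so they can be listed
    # without interpolated values. Unlike the Rust macro, this toy
    # version still duplicates the string.
    def decorate(func):
        func.message_formats = formats
        return func
    return decorate

class UnusedVariable:
    def __init__(self, name):
        self.name = name

    @derive_message_formats("Local variable `{name}` is assigned to but never used")
    def message(self):
        return f"Local variable `{self.name}` is assigned to but never used"
```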

This commit implements the macro. The following and final commit
updates violations.rs to use the macro. (The changes have been separated
because the next commit is autogenerated via a Python script.)
2023-01-19 11:03:32 -05:00
Martin Fischer
8f6d8e215c derive-msg-formats 3/5: Introduce Violation::AUTOFIX associated constant
ruff_dev::generate_rules_table previously documented which rules are
autofixable via DiagnosticKind::fixable. Since the DiagnosticKind was
obtained via Rule::kind (and Violation::placeholder), both of which we
want to get rid of, we have to obtain the autofixability another way.

This commit implements such a way by adding an AUTOFIX associated
constant to the Violation trait. The constant is of the type
Option<AutofixKind>; AutofixKind is a new struct containing an
Availability enum { Sometimes, Always }, letting us additionally
document that some autofixes are only available sometimes (which
previously wasn't documented). We intentionally introduce this
information as a struct so that we can easily introduce further autofix
metadata in the future, such as autofix applicability[1].

[1]: https://doc.rust-lang.org/stable/nightly-rustc/rustc_errors/enum.Applicability.html
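In Python terms, the new metadata might look roughly like this (a hypothetical sketch mirroring the Rust types described above, not ruff's API):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Availability(Enum):
    SOMETIMES = "sometimes"
    ALWAYS = "always"

@dataclass(frozen=True)
class AutofixKind:
    # A struct (rather than a bare enum) leaves room for future
    # metadata such as autofix applicability.
    available: Availability

# A rule without an autofix would use None; an always-available
# autofix looks like this:
AUTOFIX: Optional[AutofixKind] = AutofixKind(Availability.ALWAYS)
```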
2023-01-19 11:03:32 -05:00
Martin Fischer
8993baab01 derive-msg-formats 2/5: Remove DiagnosticKind::summary
While ruff displays the string returned by Violation::message in its
output for detected violations, the messages displayed in the README
and in the `--explain <code>` output previously used the
DiagnosticKind::summary() function, which for some verbose messages
provided shorter descriptions.

This commit removes DiagnosticKind::summary and moves the more
extensive documentation into doc comments. These are not yet displayed
to the user, but doing so is very much planned.
2023-01-19 11:03:32 -05:00
Martin Fischer
2568627c4c derive-msg-formats 1/5: Remove unnecessary usages of Rule::kind
This commit series removes the following associated
function from the Violation trait:

    fn placeholder() -> Self;

ruff previously used this placeholder approach for the messages it
listed in the README and displayed when invoked with --explain <code>.

This approach is suboptimal for three reasons:

1. The placeholder implementations are completely boring code since they
   just initialize the struct with some dummy values.

2. Displaying concrete error messages with arbitrary interpolated values
   can be confusing for the user since they might not recognize that the
   values are interpolated.

3. Some violations have varying format strings depending on the
   violation, which could not be documented with the previous approach
   (while we could have changed the signature to return Vec<Self>, this
   would still very much suffer from the previous two points).

We therefore drop Violation::placeholder in favor of a new macro-based
approach, explained in commit 4/5.

Violation::placeholder is only invoked via Rule::kind, so we first have
to get rid of all Rule::kind invocations. This commit starts by
removing the trivial cases.
2023-01-19 11:03:32 -05:00
Martin Fischer
9603a024b3 refactor: Move a bunch of pandas-vet logic to rules::pandas_vet 2023-01-19 11:03:32 -05:00
Charlie Marsh
a122d95ef5 Preserve unmatched comparators in SIM109 (#1998)
Closes #1993.
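For context, SIM109 rewrites chained equality checks into a membership test; the core equivalence, sketched for side-effect-free comparators:

```python
a, b, c = 1, 1, 2

# The pattern and its suggested rewrite agree:
assert (a == b or a == c) == (a in (b, c))
```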
2023-01-19 10:23:20 -05:00
Damien Allen
6ddfe50ac4 Added pylint formatter (#1995)
Fixes: #1953

@charliermarsh thank you for the tips in the issue.

I'm not very familiar with Rust, so please excuse if my string formatting syntax is messy.

In terms of testing, I compared the output of `flake8 --format=pylint` and `cargo run --format=pylint` on the same code, and the output syntax seems to check out.
2023-01-19 08:01:27 -05:00
Martin Fischer
26901a78c9 Make define_rule_mapping! set rule code as doc comment of variants
Since the UI still relies on the rule codes, this improves the
developer experience by letting developers view the code of a Rule enum
variant by hovering over it.
2023-01-19 07:37:16 -05:00
Martin Fischer
6649225167 rule 8/8: Automatically rewrite RuleCode to Rule
# This commit was automatically generated by running the following
# script (followed by `cargo +nightly fmt`):

import glob
import re
from typing import NamedTuple

class Rule(NamedTuple):
    code: str
    name: str
    path: str

def rules() -> list[Rule]:
    """Returns all the rules defined in `src/registry.rs`."""
    file = open('src/registry.rs')

    rules = []

    while next(file) != 'ruff_macros::define_rule_mapping!(\n':
        continue

    while (line := next(file)) != ');\n':
        line = line.strip().rstrip(',')
        if line.startswith('//'):
            continue
        code, path = line.split(' => ')
        name = path.rsplit('::')[-1]
        rules.append(Rule(code, name, path))

    return rules

code2name = {r.code: r.name for r in rules()}

for pattern in ('src/**/*.rs', 'ruff_cli/**/*.rs', 'ruff_dev/**/*.rs', 'scripts/add_*.py'):
    for name in glob.glob(pattern, recursive=True):
        with open(name) as f:
            text = f.read()

        text = re.sub('Rule(?:Code)?::([A-Z]\w+)', lambda m: 'Rule::' + code2name[m.group(1)], text)
        text = re.sub(r'(?<!"<FilePattern>:<)RuleCode\b', 'Rule', text)
        text = re.sub('(use crate::registry::{.*, Rule), Rule(.*)', r'\1\2', text) # fix duplicate import

        with open(name, 'w') as f:
            f.write(text)
2023-01-18 23:51:48 -05:00
Martin Fischer
9e3083aa2c rule 7/8: Change Rule enum definition 2023-01-18 23:51:48 -05:00
Martin Fischer
6d11ff3822 rule 6/8: Remove Serialize & Deserialize impls for Rule 2023-01-18 23:51:48 -05:00
Martin Fischer
6cf770a692 rule 5/8: Remove FromStr impl for Rule 2023-01-18 23:51:48 -05:00
Martin Fischer
3534e370e1 rule 4/8: Remove Display impl for Rule 2023-01-18 23:51:48 -05:00
Martin Fischer
dbcab5128c rule 3/8: Remove AsRef<str> impl for Rule 2023-01-18 23:51:48 -05:00
Martin Fischer
3810250bb6 rule 2/8: Rename DiagnosticKind::code to rule 2023-01-18 23:51:48 -05:00
Martin Fischer
3c1c1e1dd3 rule 1/8: Rename RuleCode to Rule (backwards-compatible for now)
This commit series refactors ruff to decouple "rules" from "rule codes",
in order to:

1. Make our code more readable by changing e.g.
   RuleCode::UP004 to Rule::UselessObjectInheritance.

2. Let us cleanly map multiple codes to one rule, for example:

   [UP004] in pyupgrade, [R0205] in pylint and [PIE792] in flake8-pie
   all refer to the rule UselessObjectInheritance but ruff currently
   only associates that rule with the UP004 code (since the
   implementation was initially modeled after pyupgrade).

3. Let us cleanly map one code to multiple rules, for example:

   [C0103] from pylint encompasses N801, N802 and N803 from pep8-naming.

The latter two steps are not yet implemented by this commit series
but this refactoring enables us to introduce such a mapping.  Such a
mapping would also let us expand flake8_to_ruff to support e.g. pylint.
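A hypothetical sketch of the many-to-many mapping this enables (codes taken from the examples above; the rule names for C0103 are invented placeholders):

```python
CODE_TO_RULES = {
    # Many codes, one rule:
    "UP004": ["UselessObjectInheritance"],
    "R0205": ["UselessObjectInheritance"],
    "PIE792": ["UselessObjectInheritance"],
    # One code, many rules (pylint's C0103 covers N801-N803 from
    # pep8-naming; these rule names are placeholders):
    "C0103": ["InvalidClassName", "InvalidFunctionName", "InvalidArgumentName"],
}

def rules_for(code: str) -> list[str]:
    return CODE_TO_RULES.get(code, [])
```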

After the next commit, which just does some renaming, the following
four commits remove all trait derivations from the Rule (previously
RuleCode) enum that depend on the variant names, guaranteeing that they
are no longer used anywhere, so that we can rename all of these
variants in the eighth and final commit without breaking anything.

While the plan very much is to also surface these human-friendly names
more in the user interface, this is not yet done in this commit series,
which does not change anything about the UI: it's purely a refactor.

[UP004]: pyupgrade doesn't actually assign codes to its messages.
[R0205]: https://pylint.pycqa.org/en/latest/user_guide/messages/refactor/useless-object-inheritance.html
[PIE792]: https://github.com/sbdchd/flake8-pie#pie792-no-inherit-object
[C0103]: https://pylint.pycqa.org/en/latest/user_guide/messages/convention/invalid-name.html
2023-01-18 23:51:48 -05:00
Martin Fischer
5b7bd93b91 refactor: Use Self:: in match arms 2023-01-18 23:51:48 -05:00
Martin Fischer
9e096b4a4c refactor: Make define_rule_mapping! generate RuleCodePrefix directly 2023-01-18 23:51:48 -05:00
Charlie Marsh
d8645acd1f Bump version to 0.0.226 2023-01-18 20:54:38 -05:00
Charlie Marsh
92dd073191 Add Pylint settings to lib_wasm.rs 2023-01-18 20:54:03 -05:00
Charlie Marsh
ff96219e62 Exclude None, Bool, and Ellipsis from ConstantType (#1988)
These have no effect, so it's confusing that they're even settable.
2023-01-18 20:50:09 -05:00
Charlie Marsh
d33424ec9d Enable suppression of magic values by type (#1987)
Closes #1949.
2023-01-18 20:44:24 -05:00
Charlie Marsh
34412a0a01 Avoid removing side effects for boolean simplifications (#1984)
Closes #1978.
2023-01-18 19:08:14 -05:00
Charlie Marsh
ceb48d3a32 Use relative paths for INP001 (#1981) 2023-01-18 18:45:47 -05:00
Charlie Marsh
969a6f0d53 Replace misplaced-comparison-constant with SIM300 (#1980)
Closes: #1954.
2023-01-18 18:42:49 -05:00
Charlie Marsh
7628876ff2 Invert order of yoda-conditions message (#1979)
The suggestion was wrong!
2023-01-18 18:27:36 -05:00
Charlie Marsh
ef355e5c2c Remove artificial wraps from GitHub messages (#1977) 2023-01-18 18:20:56 -05:00
Charlie Marsh
97f55b8e97 Convert remaining call path sites to use SmallVec (#1972) 2023-01-18 14:50:33 -05:00
Aarni Koskela
ff2be35f51 Run cargo fmt in pre-commit (#1968)
Since `cargo fmt` is a required CI check, we could just as well run it in `pre-commit`.
2023-01-18 13:14:51 -05:00
Anders Kaseorg
1e803f7108 README: Link “Flake8” for consistency with the rest of the list (#1969)
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
2023-01-18 13:07:21 -05:00
Charlie Marsh
1ab0273aa7 Strip whitespace when injecting D209 newline (#1967)
Closes #1963.
2023-01-18 12:09:17 -05:00
Charlie Marsh
5a7d8c25f4 Treat subscript accesses as unsafe effects for autofix (#1966)
See: #1809.
2023-01-18 11:46:12 -05:00
Charlie Marsh
26d6414558 Fix UP003 check from rebase 2023-01-18 11:39:39 -05:00
Charlie Marsh
dae95626ae Use smallvec for call path representation (#1960)
This provides a ~10% speed-up for large codebases with `--select ALL`:

![Screen Shot 2023-01-18 at 11 28 20 AM](https://user-images.githubusercontent.com/1309177/213236389-cff50840-6e55-47a3-9164-2e40cbc885f6.png)
2023-01-18 11:29:05 -05:00
Maksudul Haque
9a3e525930 [isort] Add no-lines-before Option (#1955)
Closes https://github.com/charliermarsh/ruff/issues/1916.
2023-01-18 11:09:47 -05:00
Anders Kaseorg
b9c6cfc0ab Autofix SIM117 (MultipleWithStatements) (#1961)
This is slightly buggy due to Instagram/LibCST#855; it will complain `[ERROR] Failed to fix nested with: Failed to extract CST from source` when trying to fix nested parenthesized `with` statements lacking trailing commas. But presumably people who write parenthesized `with` statements already knew that they don’t need to nest them.
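The flattening SIM117 performs, sketched with `contextlib.nullcontext` so the snippet is self-contained:

```python
from contextlib import nullcontext

# Nested form flagged by SIM117:
with nullcontext(1) as a:
    with nullcontext(2) as b:
        nested = (a, b)

# Autofixed, flattened form:
with nullcontext(1) as a, nullcontext(2) as b:
    combined = (a, b)

assert nested == combined == (1, 2)
```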

Signed-off-by: Anders Kaseorg <andersk@mit.edu>
2023-01-18 11:06:04 -05:00
Charlie Marsh
b1f10c8339 Confine type-of-primitive checks to builtin type calls (#1962)
Closes #1958.
2023-01-18 10:53:50 -05:00
Anders Kaseorg
83346de6e0 Autofix SIM102 (NestedIfStatements)
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
2023-01-18 07:37:27 -05:00
Anders Kaseorg
b23cc31863 Change ast::helpers::has_comments to accept a Range
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
2023-01-18 07:37:27 -05:00
Anders Kaseorg
462d81beb7 Ensure ast::whitespace::indentation extracts whitespace
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
2023-01-18 07:37:27 -05:00
Anders Kaseorg
715ea2d374 Accept a Locator for ast::whitespace::indentation
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
2023-01-18 07:37:27 -05:00
skykasko
6c7e60b4f9 Fix bad link for flake8-no-pep420 (#1952)
See https://github.com/charliermarsh/ruff/pull/1942.
2023-01-18 07:36:05 -05:00
Maksudul Haque
868d0b3e29 [isort] Add constants and variables Options (#1951)
closes https://github.com/charliermarsh/ruff/issues/1819
2023-01-18 07:30:51 -05:00
269 changed files with 8979 additions and 6768 deletions


@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
-rev: v0.0.225
+rev: v0.0.227
hooks:
- id: ruff
@@ -8,3 +8,11 @@ repos:
rev: v0.10.1
hooks:
- id: validate-pyproject
- repo: local
hooks:
- id: cargo-fmt
name: cargo fmt
entry: cargo fmt --
language: rust
types: [rust]


@@ -1,5 +1,12 @@
# Breaking Changes
## 0.0.226
### `misplaced-comparison-constant` (`PLC2201`) was deprecated in favor of `SIM300` ([#1980](https://github.com/charliermarsh/ruff/pull/1980))
These two rules contain (nearly) identical logic. To deduplicate the rule set, we've upgraded
`SIM300` to handle a few more cases, and deprecated `PLC2201` in favor of `SIM300`.
## 0.0.225
### `@functools.cache` rewrites have been moved to a standalone rule (`UP033`) ([#1938](https://github.com/charliermarsh/ruff/pull/1938))


@@ -54,18 +54,22 @@ prior to merging.
### Example: Adding a new lint rule
-There are four phases to adding a new lint rule:
+At a high level, the steps involved in adding a new lint rule are as follows:
-1. Define the violation struct in `src/violations.rs` (e.g., `ModuleImportNotAtTopOfFile`).
-2. Map the violation struct to a rule code in `src/registry.rs` (e.g., `E402`).
-3. Define the logic for triggering the violation in `src/checkers/ast.rs` (for AST-based checks),
-   `src/checkers/tokens.rs` (for token-based checks), `src/checkers/lines.rs` (for text-based checks) or `src/checkers/filesystem.rs` (for filesystem-based checks).
-4. Add a test fixture.
-5. Update the generated files (documentation and generated code).
+1. Create a file for your rule (e.g., `src/rules/flake8_bugbear/rules/abstract_base_class.rs`).
+2. In that file, define a violation struct. You can grep for `define_violation!` to see examples.
+3. Map the violation struct to a rule code in `src/registry.rs` (e.g., `E402`).
+4. Define the logic for triggering the violation in `src/checkers/ast.rs` (for AST-based checks),
+   `src/checkers/tokens.rs` (for token-based checks), `src/checkers/lines.rs` (for text-based
+   checks), or `src/checkers/filesystem.rs` (for filesystem-based checks).
+5. Add a test fixture.
+6. Update the generated files (documentation and generated code).
-To define the violation, open up `src/violations.rs`, and define a new struct using the
-`define_violation!` macro. There are plenty of examples in that file, so feel free to pattern-match
-against the existing structs.
+To define the violation, start by creating a dedicated file for your rule under the appropriate
+rule origin (e.g., `src/rules/flake8_bugbear/rules/abstract_base_class.rs`). That file should
+contain a struct defined via `define_violation!`, along with a function that creates the violation
+based on any required inputs. (Many of the existing examples live in `src/violations.rs`, but we're
+looking to place new rules in their own files.)
To trigger the violation, you'll likely want to augment the logic in `src/checkers/ast.rs`, which
defines the Python AST visitor, responsible for iterating over the abstract syntax tree and

Cargo.lock (generated)

@@ -735,7 +735,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
-version = "0.0.225"
+version = "0.0.227"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -1906,7 +1906,7 @@ dependencies = [
[[package]]
name = "ruff"
-version = "0.0.225"
+version = "0.0.227"
dependencies = [
"anyhow",
"bitflags",
@@ -1946,10 +1946,12 @@ dependencies = [
"serde",
"serde-wasm-bindgen",
"shellexpand",
"smallvec",
"strum",
"strum_macros",
"test-case",
"textwrap",
"thiserror",
"titlecase",
"toml_edit",
"wasm-bindgen",
@@ -1958,7 +1960,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
-version = "0.0.225"
+version = "0.0.227"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1995,7 +1997,7 @@ dependencies = [
[[package]]
name = "ruff_dev"
-version = "0.0.225"
+version = "0.0.227"
dependencies = [
"anyhow",
"clap 4.0.32",
@@ -2016,7 +2018,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
-version = "0.0.225"
+version = "0.0.227"
dependencies = [
"once_cell",
"proc-macro2",
@@ -2060,7 +2062,7 @@ dependencies = [
[[package]]
name = "rustpython-ast"
version = "0.2.0"
-source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
+source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
dependencies = [
"num-bigint",
"rustpython-common",
@@ -2070,7 +2072,7 @@ dependencies = [
[[package]]
name = "rustpython-common"
version = "0.2.0"
-source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
+source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
dependencies = [
"ascii",
"bitflags",
@@ -2095,7 +2097,7 @@ dependencies = [
[[package]]
name = "rustpython-compiler-core"
version = "0.2.0"
-source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
+source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
dependencies = [
"bincode",
"bitflags",
@@ -2112,7 +2114,7 @@ dependencies = [
[[package]]
name = "rustpython-parser"
version = "0.2.0"
-source = "git+https://github.com/RustPython/RustPython.git?rev=acbc517b55406c76da83d7b2711941d8d3f65b87#acbc517b55406c76da83d7b2711941d8d3f65b87"
+source = "git+https://github.com/RustPython/RustPython.git?rev=ff90fe52eea578c8ebdd9d95e078cc041a5959fa#ff90fe52eea578c8ebdd9d95e078cc041a5959fa"
dependencies = [
"ahash",
"anyhow",


@@ -8,7 +8,7 @@ default-members = [".", "ruff_cli"]
[package]
name = "ruff"
-version = "0.0.225"
+version = "0.0.227"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -46,20 +46,22 @@ once_cell = { version = "1.16.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix_paths_on_wasm"] }
regex = { version = "1.6.0" }
ropey = { version = "1.5.0", features = ["cr_lines", "simd"], default-features = false }
-ruff_macros = { version = "0.0.225", path = "ruff_macros" }
+ruff_macros = { version = "0.0.227", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
-rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
-rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
-rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
+rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
+rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
+rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
schemars = { version = "0.8.11" }
semver = { version = "1.0.16" }
serde = { version = "1.0.147", features = ["derive"] }
shellexpand = { version = "3.0.0" }
smallvec = { version = "1.10.0" }
strum = { version = "0.24.1", features = ["strum_macros"] }
strum_macros = { version = "0.24.3" }
textwrap = { version = "0.16.0" }
titlecase = { version = "2.2.1" }
toml_edit = { version = "0.17.1", features = ["easy"] }
thiserror = { version = "1.0" }
# https://docs.rs/getrandom/0.2.7/getrandom/#webassembly-support
# For (future) wasm-pack support

README.md

File diff suppressed because it is too large.


@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
-version = "0.0.225"
+version = "0.0.227"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
-version = "0.0.225"
+version = "0.0.227"
dependencies = [
"anyhow",
"bincode",


@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
-version = "0.0.225"
+version = "0.0.227"
edition = "2021"
[dependencies]


@@ -7,7 +7,7 @@ build-backend = "maturin"
[project]
name = "ruff"
-version = "0.0.225"
+version = "0.0.227"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },


@@ -53,3 +53,25 @@ def test_list_of_tuples(param1, param2):
)
def test_list_of_lists(param1, param2):
...
@pytest.mark.parametrize(
"param1,param2",
[
[1, 2],
[3, 4],
],
)
def test_csv_name_list_of_lists(param1, param2):
...
@pytest.mark.parametrize(
"param",
[
[1, 2],
[3, 4],
],
)
def test_single_list_of_lists(param):
...


@@ -1,25 +1,88 @@
if a: # SIM102
# SIM102
if a:
if b:
c
# SIM102
if a:
pass
elif b: # SIM102
elif b:
if c:
d
# SIM102
if a:
# Unfixable due to placement of this comment.
if b:
c
# SIM102
if a:
if b:
# Fixable due to placement of this comment.
c
# OK
if a:
if b:
c
else:
d
# OK
if __name__ == "__main__":
if foo():
...
# OK
if a:
d
if b:
c
while True:
# SIM102
if True:
if True:
"""this
is valid"""
"""the indentation on
this line is significant"""
"this is" \
"allowed too"
("so is"
"this for some reason")
# SIM102
if True:
if True:
"""this
is valid"""
"""the indentation on
this line is significant"""
"this is" \
"allowed too"
("so is"
"this for some reason")
while True:
# SIM102
if node.module:
if node.module == "multiprocessing" or node.module.startswith(
"multiprocessing."
):
print("Bad module!")
# SIM102
if node.module:
if node.module == "multiprocessing" or node.module.startswith(
"multiprocessing."
):
print("Bad module!")


@@ -1,7 +1,31 @@
# Bad
# SIM109
if a == b or a == c:
d
# Good
# SIM109
if (a == b or a == c) and None:
d
# SIM109
if a == b or a == c or None:
d
# SIM109
if a == b or None or a == c:
d
# OK
if a in (b, c):
d
d
# OK
if a == b or a == c():
d
# OK
if (
a == b
# This comment prevents us from raising SIM109
or a == c
):
d


@@ -1,30 +1,92 @@
with A() as a: # SIM117
# SIM117
with A() as a:
with B() as b:
print("hello")
with A(): # SIM117
# SIM117
with A():
with B():
with C():
print("hello")
# SIM117
with A() as a:
# Unfixable due to placement of this comment.
with B() as b:
print("hello")
# SIM117
with A() as a:
with B() as b:
# Fixable due to placement of this comment.
print("hello")
# OK
with A() as a:
a()
with B() as b:
print("hello")
# OK
with A() as a:
with B() as b:
print("hello")
a()
# OK
async with A() as a:
with B() as b:
print("hello")
# OK
with A() as a:
async with B() as b:
print("hello")
# OK
async with A() as a:
async with B() as b:
print("hello")
while True:
# SIM117
with A() as a:
with B() as b:
"""this
is valid"""
"""the indentation on
this line is significant"""
"this is" \
"allowed too"
("so is"
"this for some reason")
# SIM117
with (
A() as a,
B() as b,
):
with C() as c:
print("hello")
# SIM117
with A() as a:
with (
B() as b,
C() as c,
):
print("hello")
# SIM117
with (
A() as a,
B() as b,
):
with (
C() as c,
D() as d,
):
print("hello")


@@ -1,17 +1,27 @@
-if not a == b: # SIM201
+# SIM201
+if not a == b:
     pass
-if not a == (b + c): # SIM201
+# SIM201
+if not a == (b + c):
     pass
-if not (a + b) == c: # SIM201
+# SIM201
+if not (a + b) == c:
     pass
-if not a != b: # OK
+# OK
+if not a != b:
     pass
-if a == b: # OK
+# OK
+if a == b:
     pass
-if not a == b: # OK
+# OK
+if not a == b:
     raise ValueError()
+# OK
+def __ne__(self, other):
+    return not self == other


@@ -1,14 +1,27 @@
-if not a != b: # SIM202
+# SIM202
+if not a != b:
     pass
-if not a != (b + c): # SIM202
+# SIM202
+if not a != (b + c):
     pass
-if not (a + b) != c: # SIM202
+# SIM202
+if not (a + b) != c:
     pass
-if not a == b: # OK
+# OK
+if not a == b:
     pass
-if a != b: # OK
+# OK
+if a != b:
     pass
+# OK
+if not a != b:
+    raise ValueError()
+# OK
+def __eq__(self, other):
+    return not self != other


@@ -7,8 +7,12 @@ if (a or b) or True: # SIM223
 if a or (b or True): # SIM223
     pass
-if a and True:
+if a and True: # OK
     pass
-if True:
+if True: # OK
     pass
+def validate(self, value):
+    return json.loads(value) or True # OK


@@ -2,6 +2,9 @@
 "yoda" == compare # SIM300
 'yoda' == compare # SIM300
 42 == age # SIM300
+"yoda" <= compare # SIM300
+'yoda' < compare # SIM300
+42 > age # SIM300
 # OK
 compare == "yoda"


@@ -85,3 +85,26 @@ if key in a_dict:
var = a_dict[key]
else:
var = foo()
# OK (complex default value)
if key in a_dict:
var = a_dict[key]
else:
var = a_dict["fallback"]
# OK (false negative for elif)
if foo():
pass
elif key in a_dict:
vars[idx] = a_dict[key]
else:
vars[idx] = "default"
# OK (false negative for nested else)
if foo():
pass
else:
if key in a_dict:
vars[idx] = a_dict[key]
else:
vars[idx] = "default"


@@ -0,0 +1,9 @@
from __future__ import annotations
from typing import Any
from requests import Session
from my_first_party import my_first_party_object
from . import my_local_folder_object

View File

@@ -0,0 +1,2 @@
from sklearn.svm import XYZ, func, variable, Const, Klass, constant
from subprocess import First, var, func, Class, konst, A_constant, Last, STDOUT

View File

@@ -0,0 +1,2 @@
from sklearn.svm import VAR, Class, MyVar, CONST, abc
from subprocess import utils, var_ABC, Variable, Klass, CONSTANT, exe

View File

@@ -579,3 +579,30 @@ def multiline_trailing_and_leading_space():
"or exclamation point (not '\"')")
def endswith_quote():
"""Whitespace at the end, but also a quote" """
@expect('D209: Multi-line docstring closing quotes should be on a separate '
'line')
@expect('D213: Multi-line docstring summary should start at the second line')
def asdfljdjgf24():
"""Summary.
Description. """
@expect('D200: One-line docstring should fit on one line with quotes '
'(found 3)')
@expect('D212: Multi-line docstring summary should start at the first line')
def one_liner():
"""
Wrong."""
@expect('D200: One-line docstring should fit on one line with quotes '
'(found 3)')
@expect('D212: Multi-line docstring summary should start at the first line')
def one_liner():
r"""Wrong.
"""

View File

@@ -1,71 +1,69 @@
"""Check that magic values are not used in comparisons"""
import cmath
"""Check that magic values are not used in comparisons."""
user_input = 10
if 10 > user_input: # [magic-value-comparison]
if 10 > user_input: # [magic-value-comparison]
pass
if 10 == 100: # [comparison-of-constants] R0133
if 10 == 100: # [comparison-of-constants] R0133
pass
if 1 == 3: # [comparison-of-constants] R0133
if 1 == 3: # [comparison-of-constants] R0133
pass
x = 0
if 4 == 3 == x: # [comparison-of-constants] R0133
if 4 == 3 == x: # [comparison-of-constants] R0133
pass
time_delta = 7224
ONE_HOUR = 3600
if time_delta > ONE_HOUR: # correct
if time_delta > ONE_HOUR: # correct
pass
argc = 1
if argc != -1: # correct
if argc != -1: # correct
pass
if argc != 0: # correct
if argc != 0: # correct
pass
if argc != 1: # correct
if argc != 1: # correct
pass
if argc != 2: # [magic-value-comparison]
if argc != 2: # [magic-value-comparison]
pass
if __name__ == "__main__": # correct
if __name__ == "__main__": # correct
pass
ADMIN_PASSWORD = "SUPERSECRET"
input_password = "password"
if input_password == "": # correct
if input_password == "": # correct
pass
if input_password == ADMIN_PASSWORD: # correct
if input_password == ADMIN_PASSWORD: # correct
pass
if input_password == "Hunter2": # [magic-value-comparison]
if input_password == "Hunter2": # [magic-value-comparison]
pass
PI = 3.141592653589793238
pi_estimation = 3.14
if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
pass
if pi_estimation == PI: # correct
if pi_estimation == PI: # correct
pass
HELLO_WORLD = b"Hello, World!"
user_input = b"Hello, There!"
if user_input == b"something": # [magic-value-comparison]
if user_input == b"something": # [magic-value-comparison]
pass
if user_input == HELLO_WORLD: # correct
if user_input == HELLO_WORLD: # correct
pass
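The fixture above drives the magic-value-comparison check: a bare literal at a comparison site is flagged, while a named constant is accepted. A sketch of the preferred rewrite, reusing the fixture's `ONE_HOUR` example:

```python
time_delta = 7224

# Flagged: the literal 3600 carries no meaning at the comparison site.
flagged = time_delta > 3600

# Preferred: name the constant so the intent is explicit.
ONE_HOUR = 3600
preferred = time_delta > ONE_HOUR

assert flagged == preferred  # the rewrite changes readability, not behavior
```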

View File

@@ -1,5 +1,12 @@
type('')
type(b'')
type("")
type(b"")
type(0)
type(0.)
type(0.0)
type(0j)
# OK
y = x.dtype.type(0.0)
# OK
type = lambda *args, **kwargs: None
type("")

resources/test/fixtures/ruff/RUF005.py (new file)
View File

@@ -0,0 +1,22 @@
class Fun:
words = ("how", "fun!")
def yay(self):
return self.words
yay = Fun().yay
foo = [4, 5, 6]
bar = [1, 2, 3] + foo
zoob = tuple(bar)
quux = (7, 8, 9) + zoob
spam = quux + (10, 11, 12)
spom = list(spam)
eggs = spom + [13, 14, 15]
elatement = ("we all say", ) + yay()
excitement = ("we all think", ) + Fun().yay()
astonishment = ("we all feel", ) + Fun.words
chain = ['a', 'b', 'c'] + eggs + list(('yes', 'no', 'pants') + zoob)
baz = () + zoob
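This new fixture exercises RUF005, which suggests unpacking into a literal instead of concatenating, since the unpacked form reads better and (per the timing in the commit message) is faster. A sketch of the equivalence:

```python
foo = [4, 5, 6]

# Flagged: builds the left-hand literal, then a second list via `+`.
bar_concat = [1, 2, 3] + foo
# Suggested: unpack directly into a single literal.
bar_unpack = [1, 2, 3, *foo]
assert bar_concat == bar_unpack

# The same rewrite applies to tuples:
words = ("how", "fun!")
assert ("we all say",) + words == ("we all say", *words)
```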

View File

@@ -227,7 +227,7 @@
]
},
"format": {
"description": "The style in which violation messages should be formatted: `\"text\"` (default), `\"grouped\"` (group messages by file), `\"json\"` (machine-readable), `\"junit\"` (machine-readable XML), `\"github\"` (GitHub Actions annotations) or `\"gitlab\"` (GitLab CI code quality report).",
"description": "The style in which violation messages should be formatted: `\"text\"` (default), `\"grouped\"` (group messages by file), `\"json\"` (machine-readable), `\"junit\"` (machine-readable XML), `\"github\"` (GitHub Actions annotations), `\"gitlab\"` (GitLab CI code quality report), or `\"pylint\"` (Pylint text format).",
"anyOf": [
{
"$ref": "#/definitions/SerializationFormat"
@@ -341,6 +341,17 @@
}
]
},
"pylint": {
"description": "Options for the `pylint` plugin.",
"anyOf": [
{
"$ref": "#/definitions/PylintOptions"
},
{
"type": "null"
}
]
},
"pyupgrade": {
"description": "Options for the `pyupgrade` plugin.",
"anyOf": [
@@ -461,6 +472,17 @@
},
"additionalProperties": false
},
"ConstantType": {
"type": "string",
"enum": [
"bytes",
"complex",
"float",
"int",
"str",
"tuple"
]
},
"Convention": {
"oneOf": [
{
@@ -762,6 +784,16 @@
},
"additionalProperties": false
},
"ImportType": {
"type": "string",
"enum": [
"future",
"standard-library",
"third-party",
"first-party",
"local-folder"
]
},
"IsortOptions": {
"type": "object",
"properties": {
@@ -782,6 +814,16 @@
"null"
]
},
"constants": {
"description": "An override list of tokens to always recognize as a CONSTANT for `order-by-type` regardless of casing.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"extra-standard-library": {
"description": "A list of modules to consider standard-library, in addition to those known to Ruff in advance.",
"type": [
@@ -833,6 +875,16 @@
"type": "string"
}
},
"no-lines-before": {
"description": "A list of sections that should _not_ be delineated from the previous section via empty lines.",
"type": [
"array",
"null"
],
"items": {
"$ref": "#/definitions/ImportType"
}
},
"order-by-type": {
"description": "Order imports by type, which is determined by case, in addition to alphabetically.",
"type": [
@@ -877,6 +929,16 @@
"boolean",
"null"
]
},
"variables": {
"description": "An override list of tokens to always recognize as a var for `order-by-type` regardless of casing.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
},
"additionalProperties": false
@@ -1006,6 +1068,22 @@
},
"additionalProperties": false
},
"PylintOptions": {
"type": "object",
"properties": {
"allow-magic-value-types": {
"description": "Constant types to ignore when used as \"magic values\".",
"type": [
"array",
"null"
],
"items": {
"$ref": "#/definitions/ConstantType"
}
}
},
"additionalProperties": false
},
"PythonVersion": {
"type": "string",
"enum": [
@@ -1444,10 +1522,6 @@
"PLC04",
"PLC041",
"PLC0414",
"PLC2",
"PLC22",
"PLC220",
"PLC2201",
"PLC3",
"PLC30",
"PLC300",
@@ -1557,6 +1631,7 @@
"RUF002",
"RUF003",
"RUF004",
"RUF005",
"RUF1",
"RUF10",
"RUF100",
@@ -1730,7 +1805,8 @@
"junit",
"grouped",
"github",
"gitlab"
"gitlab",
"pylint"
]
},
"Strictness": {

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.225"
version = "0.0.227"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"

View File

@@ -4,7 +4,7 @@ use clap::{command, Parser};
use regex::Regex;
use ruff::fs;
use ruff::logging::LogLevel;
use ruff::registry::{RuleCode, RuleCodePrefix};
use ruff::registry::{Rule, RuleCodePrefix};
use ruff::resolver::ConfigProcessor;
use ruff::settings::types::{
FilePattern, PatternPrefixPair, PerFileIgnore, PythonVersion, SerializationFormat,
@@ -169,6 +169,7 @@ pub struct Cli {
/// Explain a rule.
#[arg(
long,
value_parser=Rule::from_code,
// Fake subcommands.
conflicts_with = "add_noqa",
conflicts_with = "clean",
@@ -180,7 +181,7 @@ pub struct Cli {
conflicts_with = "stdin_filename",
conflicts_with = "watch",
)]
pub explain: Option<RuleCode>,
pub explain: Option<&'static Rule>,
/// Generate shell completion
#[arg(
long,
@@ -302,7 +303,7 @@ pub struct Arguments {
pub config: Option<PathBuf>,
pub diff: bool,
pub exit_zero: bool,
pub explain: Option<RuleCode>,
pub explain: Option<&'static Rule>,
pub files: Vec<PathBuf>,
pub generate_shell_completion: Option<clap_complete_command::Shell>,
pub isolated: bool,

View File

@@ -15,11 +15,11 @@ use ruff::cache::CACHE_DIR_NAME;
use ruff::linter::add_noqa_to_path;
use ruff::logging::LogLevel;
use ruff::message::{Location, Message};
use ruff::registry::RuleCode;
use ruff::registry::Rule;
use ruff::resolver::{FileDiscovery, PyprojectDiscovery};
use ruff::settings::flags;
use ruff::settings::types::SerializationFormat;
use ruff::{fix, fs, packaging, resolver, warn_user_once, IOError};
use ruff::{fix, fs, packaging, resolver, warn_user_once, AutofixAvailability, IOError};
use serde::Serialize;
use walkdir::WalkDir;
@@ -114,7 +114,7 @@ pub fn run(
.unwrap_or_else(|(path, message)| {
if let Some(path) = &path {
let settings = resolver.resolve(path, pyproject_strategy);
if settings.rules.enabled(&RuleCode::E902) {
if settings.rules.enabled(&Rule::IOError) {
Diagnostics::new(vec![Message {
kind: IOError(message).into(),
location: Location::default(),
@@ -289,24 +289,34 @@ struct Explanation<'a> {
summary: &'a str,
}
/// Explain a `RuleCode` to the user.
pub fn explain(code: &RuleCode, format: SerializationFormat) -> Result<()> {
/// Explain a `Rule` to the user.
pub fn explain(rule: &Rule, format: SerializationFormat) -> Result<()> {
match format {
SerializationFormat::Text | SerializationFormat::Grouped => {
println!(
"{} ({}): {}",
code.as_ref(),
code.origin().name(),
code.kind().summary()
);
println!("{}\n", rule.as_ref());
println!("Code: {} ({})\n", rule.code(), rule.origin().name());
if let Some(autofix) = rule.autofixable() {
println!(
"{}",
match autofix.available {
AutofixAvailability::Sometimes => "Autofix is sometimes available.\n",
AutofixAvailability::Always => "Autofix is always available.\n",
}
);
}
println!("Message formats:\n");
for format in rule.message_formats() {
println!("* {format}");
}
}
SerializationFormat::Json => {
println!(
"{}",
serde_json::to_string_pretty(&Explanation {
code: code.as_ref(),
origin: code.origin().name(),
summary: &code.kind().summary(),
code: rule.code(),
origin: rule.origin().name(),
summary: rule.message_formats()[0],
})?
);
}
@@ -319,6 +329,9 @@ pub fn explain(code: &RuleCode, format: SerializationFormat) -> Result<()> {
SerializationFormat::Gitlab => {
bail!("`--explain` does not support GitLab format")
}
SerializationFormat::Pylint => {
bail!("`--explain` does not support pylint format")
}
};
Ok(())
}

View File

@@ -100,8 +100,7 @@ pub fn main() -> Result<ExitCode> {
https://github.com/charliermarsh/ruff/issues/new?title=%5BPanic%5D
quoting the executed command, along with the relevant file contents and `pyproject.toml` settings,
we'd be very appreciative!
quoting the executed command, along with the relevant file contents and `pyproject.toml` settings, we'd be very appreciative!
"#,
"error".red().bold(),
);
@@ -158,8 +157,8 @@ we'd be very appreciative!
PyprojectDiscovery::Hierarchical(settings) => settings.cli.clone(),
};
if let Some(code) = cli.explain {
commands::explain(&code, format)?;
if let Some(rule) = cli.explain {
commands::explain(rule, format)?;
return Ok(ExitCode::SUCCESS);
}
if cli.show_settings {

View File

@@ -11,7 +11,7 @@ use itertools::iterate;
use ruff::fs::relativize_path;
use ruff::logging::LogLevel;
use ruff::message::{Location, Message};
use ruff::registry::RuleCode;
use ruff::registry::Rule;
use ruff::settings::types::SerializationFormat;
use ruff::{fix, notify_user};
use serde::Serialize;
@@ -35,7 +35,7 @@ struct ExpandedFix<'a> {
#[derive(Serialize)]
struct ExpandedMessage<'a> {
code: &'a RuleCode,
code: SerializeRuleAsCode<'a>,
message: String,
fix: Option<ExpandedFix<'a>>,
location: Location,
@@ -43,6 +43,23 @@ struct ExpandedMessage<'a> {
filename: &'a str,
}
struct SerializeRuleAsCode<'a>(&'a Rule);
impl Serialize for SerializeRuleAsCode<'_> {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(self.0.code())
}
}
impl<'a> From<&'a Rule> for SerializeRuleAsCode<'a> {
fn from(rule: &'a Rule) -> Self {
Self(rule)
}
}
pub struct Printer<'a> {
format: &'a SerializationFormat,
log_level: &'a LogLevel,
@@ -143,7 +160,7 @@ impl<'a> Printer<'a> {
.messages
.iter()
.map(|message| ExpandedMessage {
code: message.kind.code(),
code: message.kind.rule().into(),
message: message.kind.body(),
fix: message.fix.as_ref().map(|fix| ExpandedFix {
content: &fix.content,
@@ -177,8 +194,10 @@ impl<'a> Printer<'a> {
message.location.column(),
message.kind.body()
));
let mut case =
TestCase::new(format!("org.ruff.{}", message.kind.code()), status);
let mut case = TestCase::new(
format!("org.ruff.{}", message.kind.rule().code()),
status,
);
let file_path = Path::new(filename);
let file_stem = file_path.file_stem().unwrap().to_str().unwrap();
let classname = file_path.parent().unwrap().join(file_stem);
@@ -248,14 +267,14 @@ impl<'a> Printer<'a> {
":",
message.location.column(),
":",
message.kind.code().as_ref(),
message.kind.rule().code(),
message.kind.body(),
);
writeln!(
stdout,
"::error title=Ruff \
({}),file={},line={},col={},endLine={},endColumn={}::{}",
message.kind.code(),
message.kind.rule().code(),
message.filename,
message.location.row(),
message.location.column(),
@@ -266,7 +285,7 @@ impl<'a> Printer<'a> {
}
}
SerializationFormat::Gitlab => {
// Generate JSON with errors in GitLab CI format
// Generate JSON with violations in GitLab CI format
// https://docs.gitlab.com/ee/ci/testing/code_quality.html#implementing-a-custom-tool
writeln!(stdout,
"{}",
@@ -276,9 +295,9 @@ impl<'a> Printer<'a> {
.iter()
.map(|message| {
json!({
"description": format!("({}) {}", message.kind.code(), message.kind.body()),
"description": format!("({}) {}", message.kind.rule().code(), message.kind.body()),
"severity": "major",
"fingerprint": message.kind.code(),
"fingerprint": message.kind.rule().code(),
"location": {
"path": message.filename,
"lines": {
@@ -293,6 +312,20 @@ impl<'a> Printer<'a> {
)?
)?;
}
SerializationFormat::Pylint => {
// Generate violations in Pylint format.
// See: https://flake8.pycqa.org/en/latest/internal/formatters.html#pylint-formatter
for message in &diagnostics.messages {
let label = format!(
"{}:{}: [{}] {}",
relativize_path(Path::new(&message.filename)),
message.location.row(),
message.kind.rule().code(),
message.kind.body(),
);
writeln!(stdout, "{label}")?;
}
}
}
stdout.flush()?;
@@ -361,7 +394,7 @@ fn print_message<T: Write>(stdout: &mut T, message: &Message) -> Result<()> {
":".cyan(),
message.location.column(),
":".cyan(),
message.kind.code().as_ref().red().bold(),
message.kind.rule().code().red().bold(),
message.kind.body(),
);
writeln!(stdout, "{label}")?;
@@ -388,7 +421,7 @@ fn print_message<T: Write>(stdout: &mut T, message: &Message) -> Result<()> {
source: &source.contents,
line_start: message.location.row(),
annotations: vec![SourceAnnotation {
label: message.kind.code().as_ref(),
label: message.kind.rule().code(),
annotation_type: AnnotationType::Error,
range: source.range,
}],
@@ -425,7 +458,7 @@ fn print_grouped_message<T: Write>(
":".cyan(),
message.location.column(),
" ".repeat(column_length - num_digits(message.location.column())),
message.kind.code().as_ref().red().bold(),
message.kind.rule().code().red().bold(),
message.kind.body(),
);
writeln!(stdout, "{label}")?;
@@ -452,7 +485,7 @@ fn print_grouped_message<T: Write>(
source: &source.contents,
line_start: message.location.row(),
annotations: vec![SourceAnnotation {
label: message.kind.code().as_ref(),
label: message.kind.rule().code(),
annotation_type: AnnotationType::Error,
range: source.range,
}],
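The printer changes above add a `pylint` serialization format that writes one `path:row: [CODE] body` line per violation. A Python sketch of that output shape (`format_pylint` and the message fields are illustrative, mirroring the Rust `format!` string, not an actual ruff API):

```python
def format_pylint(path, row, code, body):
    # Mirrors the Rust formatter: "{path}:{row}: [{code}] {body}"
    return f"{path}:{row}: [{code}] {body}"


line = format_pylint("app.py", 3, "F401", "`os` imported but unused")
assert line == "app.py:3: [F401] `os` imported but unused"
```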

View File

@@ -151,3 +151,12 @@ fn test_show_source() -> Result<()> {
assert!(str::from_utf8(&output.get_output().stdout)?.contains("l = 1"));
Ok(())
}
#[test]
fn explain_status_codes() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
cmd.args(["-", "--explain", "F401"]).assert().success();
let mut cmd = Command::cargo_bin(BIN_NAME)?;
cmd.args(["-", "--explain", "RUF404"]).assert().failure();
Ok(())
}

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_dev"
version = "0.0.225"
version = "0.0.227"
edition = "2021"
[dependencies]
@@ -11,9 +11,9 @@ libcst = { git = "https://github.com/charliermarsh/LibCST", rev = "f2f0b7a487a87
once_cell = { version = "1.16.0" }
ruff = { path = ".." }
ruff_cli = { path = "../ruff_cli" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "acbc517b55406c76da83d7b2711941d8d3f65b87" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "ff90fe52eea578c8ebdd9d95e078cc041a5959fa" }
schemars = { version = "0.8.11" }
serde_json = { version = "1.0.91" }
strum = { version = "0.24.1", features = ["strum_macros"] }

View File

@@ -25,14 +25,17 @@ fn generate_table(table_out: &mut String, prefix: &RuleCodePrefix) {
table_out.push('\n');
table_out.push_str("| ---- | ---- | ------- | --- |");
table_out.push('\n');
for rule_code in prefix.codes() {
let kind = rule_code.kind();
let fix_token = if kind.fixable() { "🛠" } else { "" };
for rule in prefix.codes() {
let fix_token = match rule.autofixable() {
None => "",
Some(_) => "🛠",
};
table_out.push_str(&format!(
"| {} | {} | {} | {} |",
kind.code().as_ref(),
kind.as_ref(),
kind.summary().replace('|', r"\|"),
rule.code(),
rule.as_ref(),
rule.message_formats()[0].replace('|', r"\|"),
fix_token
));
table_out.push('\n');

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_macros"
version = "0.0.225"
version = "0.0.227"
edition = "2021"
[lib]

View File

@@ -1,35 +1,44 @@
use std::collections::HashMap;
use proc_macro2::Span;
use quote::quote;
use syn::parse::Parse;
use syn::{Ident, Path, Token};
use syn::{Ident, LitStr, Path, Token};
pub fn define_rule_mapping(mapping: Mapping) -> proc_macro2::TokenStream {
let mut rulecode_variants = quote!();
pub fn define_rule_mapping(mapping: &Mapping) -> proc_macro2::TokenStream {
let mut rule_variants = quote!();
let mut diagkind_variants = quote!();
let mut rulecode_kind_match_arms = quote!();
let mut rulecode_origin_match_arms = quote!();
let mut rule_message_formats_match_arms = quote!();
let mut rule_autofixable_match_arms = quote!();
let mut rule_origin_match_arms = quote!();
let mut rule_code_match_arms = quote!();
let mut rule_from_code_match_arms = quote!();
let mut diagkind_code_match_arms = quote!();
let mut diagkind_body_match_arms = quote!();
let mut diagkind_fixable_match_arms = quote!();
let mut diagkind_commit_match_arms = quote!();
let mut from_impls_for_diagkind = quote!();
for (code, path, name) in mapping.entries {
rulecode_variants.extend(quote! {#code,});
for (code, path, name) in &mapping.entries {
let code_str = LitStr::new(&code.to_string(), Span::call_site());
rule_variants.extend(quote! {
#[doc = #code_str]
#name,
});
diagkind_variants.extend(quote! {#name(#path),});
rulecode_kind_match_arms.extend(
quote! {RuleCode::#code => DiagnosticKind::#name(<#path as Violation>::placeholder()),},
);
let origin = get_origin(&code);
rulecode_origin_match_arms.extend(quote! {RuleCode::#code => RuleOrigin::#origin,});
diagkind_code_match_arms.extend(quote! {DiagnosticKind::#name(..) => &RuleCode::#code, });
diagkind_body_match_arms
.extend(quote! {DiagnosticKind::#name(x) => Violation::message(x), });
rule_message_formats_match_arms
.extend(quote! {Self::#name => <#path as Violation>::message_formats(),});
rule_autofixable_match_arms.extend(quote! {Self::#name => <#path as Violation>::AUTOFIX,});
let origin = get_origin(code);
rule_origin_match_arms.extend(quote! {Self::#name => RuleOrigin::#origin,});
rule_code_match_arms.extend(quote! {Self::#name => #code_str,});
rule_from_code_match_arms.extend(quote! {#code_str => Ok(&Rule::#name), });
diagkind_code_match_arms.extend(quote! {Self::#name(..) => &Rule::#name, });
diagkind_body_match_arms.extend(quote! {Self::#name(x) => Violation::message(x), });
diagkind_fixable_match_arms
.extend(quote! {DiagnosticKind::#name(x) => x.autofix_title_formatter().is_some(),});
diagkind_commit_match_arms.extend(
quote! {DiagnosticKind::#name(x) => x.autofix_title_formatter().map(|f| f(x)), },
);
.extend(quote! {Self::#name(x) => x.autofix_title_formatter().is_some(),});
diagkind_commit_match_arms
.extend(quote! {Self::#name(x) => x.autofix_title_formatter().map(|f| f(x)), });
from_impls_for_diagkind.extend(quote! {
impl From<#path> for DiagnosticKind {
fn from(x: #path) -> Self {
@@ -39,44 +48,73 @@ pub fn define_rule_mapping(mapping: Mapping) -> proc_macro2::TokenStream {
});
}
let code_to_name: HashMap<_, _> = mapping
.entries
.iter()
.map(|(code, _, name)| (code.to_string(), name))
.collect();
let rulecodeprefix = super::rule_code_prefix::expand(
&Ident::new("Rule", Span::call_site()),
&Ident::new("RuleCodePrefix", Span::call_site()),
mapping.entries.iter().map(|(code, ..)| code),
|code| code_to_name[code],
);
quote! {
#[derive(
AsRefStr,
RuleCodePrefix,
EnumIter,
EnumString,
Debug,
Display,
PartialEq,
Eq,
Clone,
Serialize,
Deserialize,
Hash,
PartialOrd,
Ord,
AsRefStr,
)]
pub enum RuleCode { #rulecode_variants }
#[strum(serialize_all = "kebab-case")]
pub enum Rule { #rule_variants }
#[derive(AsRefStr, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub enum DiagnosticKind { #diagkind_variants }
#[derive(thiserror::Error, Debug)]
pub enum FromCodeError {
#[error("unknown rule code")]
Unknown,
}
impl RuleCode {
/// A placeholder representation of the `DiagnosticKind` for the diagnostic.
pub fn kind(&self) -> DiagnosticKind {
match self { #rulecode_kind_match_arms }
impl Rule {
/// Returns the format strings used to report violations of this rule.
pub fn message_formats(&self) -> &'static [&'static str] {
match self { #rule_message_formats_match_arms }
}
pub fn autofixable(&self) -> Option<crate::violation::AutofixKind> {
match self { #rule_autofixable_match_arms }
}
pub fn origin(&self) -> RuleOrigin {
match self { #rulecode_origin_match_arms }
match self { #rule_origin_match_arms }
}
pub fn code(&self) -> &'static str {
match self { #rule_code_match_arms }
}
pub fn from_code(code: &str) -> Result<&'static Self, FromCodeError> {
match code {
#rule_from_code_match_arms
_ => Err(FromCodeError::Unknown),
}
}
}
impl DiagnosticKind {
/// A four-letter shorthand code for the diagnostic.
pub fn code(&self) -> &'static RuleCode {
/// The rule of the diagnostic.
pub fn rule(&self) -> &'static Rule {
match self { #diagkind_code_match_arms }
}
@@ -97,6 +135,8 @@ pub fn define_rule_mapping(mapping: Mapping) -> proc_macro2::TokenStream {
}
#from_impls_for_diagkind
#rulecodeprefix
}
}
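The generated `Rule` enum above carries `#[strum(serialize_all = "kebab-case")]`, so a variant like `NoneComparison` serializes as `none-comparison` (the name shown in the improved `--explain` output). A sketch of the same conversion for simple PascalCase names (`pascal_to_kebab` is illustrative; strum's own handling of acronym runs may differ):

```python
import re


def pascal_to_kebab(name):
    # Insert a hyphen before each interior uppercase letter, then lowercase,
    # approximating strum's serialize_all = "kebab-case" behavior.
    return re.sub(r"(?<!^)(?=[A-Z])", "-", name).lower()


assert pascal_to_kebab("NoneComparison") == "none-comparison"
assert pascal_to_kebab("MagicValueComparison") == "magic-value-comparison"
```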

View File

@@ -0,0 +1,55 @@
use proc_macro2::TokenStream;
use quote::{quote, quote_spanned, ToTokens};
use syn::spanned::Spanned;
use syn::{Block, Expr, ItemFn, Stmt};
pub fn derive_message_formats(func: &ItemFn) -> proc_macro2::TokenStream {
let mut strings = quote!();
if let Err(err) = parse_block(&func.block, &mut strings) {
return err;
}
quote! {
#func
fn message_formats() -> &'static [&'static str] {
&[#strings]
}
}
}
fn parse_block(block: &Block, strings: &mut TokenStream) -> Result<(), TokenStream> {
let Some(Stmt::Expr(last)) = block.stmts.last() else {panic!("expected last statement in block to be an expression")};
parse_expr(last, strings)?;
Ok(())
}
fn parse_expr(expr: &Expr, strings: &mut TokenStream) -> Result<(), TokenStream> {
match expr {
Expr::Macro(mac) if mac.mac.path.is_ident("format") => {
let Some(first_token) = mac.mac.tokens.to_token_stream().into_iter().next() else {
return Err(quote_spanned!(expr.span() => compile_error!("expected format! to have an argument")))
};
strings.extend(quote! {#first_token,});
Ok(())
}
Expr::Block(block) => parse_block(&block.block, strings),
Expr::If(expr) => {
parse_block(&expr.then_branch, strings)?;
if let Some((_, then)) = &expr.else_branch {
parse_expr(then, strings)?;
}
Ok(())
}
Expr::Match(block) => {
for arm in &block.arms {
parse_expr(&arm.body, strings)?;
}
Ok(())
}
_ => Err(quote_spanned!(
expr.span() =>
compile_error!("expected last expression to be a format! macro or a match block")
)),
}
}

View File

@@ -13,10 +13,12 @@
)]
#![forbid(unsafe_code)]
use syn::{parse_macro_input, DeriveInput};
use proc_macro::TokenStream;
use syn::{parse_macro_input, DeriveInput, ItemFn};
mod config;
mod define_rule_mapping;
mod derive_message_formats;
mod prefixes;
mod rule_code_prefix;
@@ -29,17 +31,14 @@ pub fn derive_config(input: proc_macro::TokenStream) -> proc_macro::TokenStream
.into()
}
#[proc_macro_derive(RuleCodePrefix)]
pub fn derive_rule_code_prefix(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
let input = parse_macro_input!(input as DeriveInput);
rule_code_prefix::derive_impl(input)
.unwrap_or_else(syn::Error::into_compile_error)
.into()
}
#[proc_macro]
pub fn define_rule_mapping(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let mapping = parse_macro_input!(item as define_rule_mapping::Mapping);
define_rule_mapping::define_rule_mapping(mapping).into()
define_rule_mapping::define_rule_mapping(&mapping).into()
}
#[proc_macro_attribute]
pub fn derive_message_formats(_attr: TokenStream, item: TokenStream) -> TokenStream {
let func = parse_macro_input!(item as ItemFn);
derive_message_formats::derive_message_formats(&func).into()
}

View File

@@ -3,9 +3,7 @@ use std::collections::{BTreeMap, BTreeSet, HashMap};
use once_cell::sync::Lazy;
use proc_macro2::Span;
use quote::quote;
use syn::punctuated::Punctuated;
use syn::token::Comma;
use syn::{DataEnum, DeriveInput, Ident, Variant};
use syn::Ident;
const ALL: &str = "ALL";
@@ -86,43 +84,17 @@ pub static PREFIX_REDIRECTS: Lazy<HashMap<&'static str, &'static str>> = Lazy::n
])
});
pub fn derive_impl(input: DeriveInput) -> syn::Result<proc_macro2::TokenStream> {
let DeriveInput { ident, data, .. } = input;
let syn::Data::Enum(DataEnum { variants, .. }) = data else {
return Err(syn::Error::new(
ident.span(),
"Can only derive `RuleCodePrefix` from enums.",
));
};
let prefix_ident = Ident::new(&format!("{ident}Prefix"), ident.span());
let prefix = expand(&ident, &prefix_ident, &variants);
let expanded = quote! {
#[derive(PartialEq, Eq, PartialOrd, Ord)]
pub enum SuffixLength {
None,
Zero,
One,
Two,
Three,
Four,
}
#prefix
};
Ok(expanded)
}
fn expand(
ident: &Ident,
pub fn expand<'a>(
rule_type: &Ident,
prefix_ident: &Ident,
variants: &Punctuated<Variant, Comma>,
variants: impl Iterator<Item = &'a Ident>,
variant_name: impl Fn(&str) -> &'a Ident,
) -> proc_macro2::TokenStream {
// Build up a map from prefix to matching RuleCodes.
let mut prefix_to_codes: BTreeMap<Ident, BTreeSet<String>> = BTreeMap::default();
for variant in variants {
let span = variant.ident.span();
let code_str = variant.ident.to_string();
let span = variant.span();
let code_str = variant.to_string();
let code_prefix_len = code_str
.chars()
.take_while(|char| char.is_alphabetic())
@@ -158,7 +130,7 @@ fn expand(
}
});
let prefix_impl = generate_impls(ident, prefix_ident, &prefix_to_codes);
let prefix_impl = generate_impls(rule_type, prefix_ident, &prefix_to_codes, variant_name);
let prefix_redirects = PREFIX_REDIRECTS.iter().map(|(alias, rule_code)| {
let code = Ident::new(rule_code, Span::call_site());
@@ -168,6 +140,16 @@ fn expand(
});
quote! {
#[derive(PartialEq, Eq, PartialOrd, Ord)]
pub enum SuffixLength {
None,
Zero,
One,
Two,
Three,
Four,
}
#[derive(
::strum_macros::EnumString,
::strum_macros::AsRefStr,
@@ -196,16 +178,17 @@ fn expand(
}
}
fn generate_impls(
ident: &Ident,
fn generate_impls<'a>(
rule_type: &Ident,
prefix_ident: &Ident,
prefix_to_codes: &BTreeMap<Ident, BTreeSet<String>>,
variant_name: impl Fn(&str) -> &'a Ident,
) -> proc_macro2::TokenStream {
let codes_match_arms = prefix_to_codes.iter().map(|(prefix, codes)| {
let codes = codes.iter().map(|code| {
let code = Ident::new(code, Span::call_site());
let rule_variant = variant_name(code);
quote! {
#ident::#code
#rule_type::#rule_variant
}
});
let prefix_str = prefix.to_string();
@@ -265,7 +248,7 @@ fn generate_impls(
quote! {
impl #prefix_ident {
pub fn codes(&self) -> Vec<#ident> {
pub fn codes(&self) -> Vec<#rule_type> {
use colored::Colorize;
#[allow(clippy::match_same_arms)]

View File

@@ -47,11 +47,11 @@ mod tests {
use anyhow::Result;
use test_case::test_case;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::linter::test_path;
use crate::settings;
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/%s")

View File

@@ -44,9 +44,9 @@ def main(*, name: str, code: str, origin: str) -> None:
with open(mod_rs, "w") as fp:
for line in content.splitlines():
if line.strip() == "fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {":
indent = line.split("fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {")[0]
fp.write(f'{indent}#[test_case(RuleCode::{code}, Path::new("{code}.py"); "{code}")]')
if line.strip() == "fn rules(rule_code: Rule, path: &Path) -> Result<()> {":
indent = line.split("fn rules(rule_code: Rule, path: &Path) -> Result<()> {")[0]
fp.write(f'{indent}#[test_case(Rule::{code}, Path::new("{code}.py"); "{code}")]')
fp.write("\n")
fp.write(line)

View File

@@ -33,7 +33,7 @@ pub fn classify(
checker.resolve_call_path(expr).map_or(false, |call_path| {
METACLASS_BASES
.iter()
.any(|(module, member)| call_path == [*module, *member])
.any(|(module, member)| call_path.as_slice() == [*module, *member])
})
})
|| decorator_list.iter().any(|expr| {

View File

@@ -10,8 +10,9 @@ use rustpython_ast::{
use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use rustpython_parser::token::StringKind;
use smallvec::smallvec;
use crate::ast::types::{Binding, BindingKind, Range};
use crate::ast::types::{Binding, BindingKind, CallPath, Range};
use crate::checkers::ast::Checker;
use crate::source_code::{Generator, Indexer, Locator, Stylist};
@@ -39,7 +40,7 @@ pub fn unparse_stmt(stmt: &Stmt, stylist: &Stylist) -> String {
generator.generate()
}
fn collect_call_path_inner<'a>(expr: &'a Expr, parts: &mut Vec<&'a str>) {
fn collect_call_path_inner<'a>(expr: &'a Expr, parts: &mut CallPath<'a>) {
match &expr.node {
ExprKind::Call { func, .. } => {
collect_call_path_inner(func, parts);
@@ -55,9 +56,9 @@ fn collect_call_path_inner<'a>(expr: &'a Expr, parts: &mut Vec<&'a str>) {
}
}
/// Convert an `Expr` to its call path segments (like ["typing", "List"]).
pub fn collect_call_path(expr: &Expr) -> Vec<&str> {
let mut segments = vec![];
/// Convert an `Expr` to its [`CallPath`] segments (like `["typing", "List"]`).
pub fn collect_call_path(expr: &Expr) -> CallPath {
let mut segments = smallvec![];
collect_call_path_inner(expr, &mut segments);
segments
}
@@ -90,7 +91,7 @@ pub fn contains_call_path(checker: &Checker, expr: &Expr, target: &[&str]) -> bo
any_over_expr(expr, &|expr| {
checker
.resolve_call_path(expr)
.map_or(false, |call_path| call_path == target)
.map_or(false, |call_path| call_path.as_slice() == target)
})
}
@@ -121,12 +122,13 @@ pub fn contains_effect(checker: &Checker, expr: &Expr) -> bool {
// Otherwise, avoid all complex expressions.
matches!(
expr.node,
ExprKind::Call { .. }
| ExprKind::Await { .. }
ExprKind::Await { .. }
| ExprKind::Call { .. }
| ExprKind::DictComp { .. }
| ExprKind::GeneratorExp { .. }
| ExprKind::ListComp { .. }
| ExprKind::SetComp { .. }
| ExprKind::DictComp { .. }
| ExprKind::Subscript { .. }
| ExprKind::Yield { .. }
| ExprKind::YieldFrom { .. }
)
@@ -313,7 +315,9 @@ pub fn has_non_none_keyword(keywords: &[Keyword], keyword: &str) -> bool {
}
/// Extract the names of all handled exceptions.
pub fn extract_handler_names(handlers: &[Excepthandler]) -> Vec<Vec<&str>> {
pub fn extract_handler_names(handlers: &[Excepthandler]) -> Vec<CallPath> {
// TODO(charlie): Use `resolve_call_path` to avoid false positives for
// overridden builtins.
let mut handler_names = vec![];
for handler in handlers {
match &handler.node {
@@ -361,10 +365,9 @@ pub fn collect_arg_names<'a>(arguments: &'a Arguments) -> FxHashSet<&'a str> {
}
/// Returns `true` if a statement or expression includes at least one comment.
pub fn has_comments<T>(located: &Located<T>, locator: &Locator) -> bool {
lexer::make_tokenizer(&locator.slice_source_code_range(&Range::from_located(located)))
.flatten()
.any(|(_, tok, _)| matches!(tok, Tok::Comment(..)))
pub fn has_comments_in(range: Range, locator: &Locator) -> bool {
lexer::make_tokenizer(&locator.slice_source_code_range(&range))
.any(|result| result.map_or(false, |(_, tok, _)| matches!(tok, Tok::Comment(..))))
}
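The hunk above replaces `flatten()` with an explicit `map_or` over the lexer's `Result` items, so an `Err` token can never satisfy the predicate. A std-only sketch of that pattern (token and error types are stand-ins, not the real `rustpython_parser` API):

```rust
// Treat lexer errors as "no match" explicitly: an Err entry contributes
// `false` instead of being silently skipped by `flatten()`.
fn has_comment(tokens: &[Result<&str, String>]) -> bool {
    tokens
        .iter()
        .any(|result| result.as_ref().map_or(false, |tok| tok.starts_with('#')))
}

fn main() {
    let toks: Vec<Result<&str, String>> = vec![Ok("x"), Ok("# hi")];
    assert!(has_comment(&toks));
    let toks: Vec<Result<&str, String>> = vec![Ok("x"), Err("lex error".into())];
    assert!(!has_comment(&toks));
    println!("ok");
}
```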
/// Returns `true` if a call is an argumented `super` invocation.
@@ -416,11 +419,11 @@ pub fn format_import_from_member(
}
/// Split a target string (like `typing.List`) into (`typing`, `List`).
pub fn to_call_path(target: &str) -> Vec<&str> {
pub fn to_call_path(target: &str) -> CallPath {
if target.contains('.') {
target.split('.').collect()
} else {
vec!["", target]
smallvec!["", target]
}
}
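The `to_call_path` helper above splits a dotted target into segments, padding a bare name with an empty leading segment so comparisons stay uniform. A minimal std-only sketch of the pre-`SmallVec` shape (using `Vec` instead of the crate's `CallPath` alias):

```rust
// Split "typing.List" into ["typing", "List"]; a bare name like "print"
// becomes ["", "print"] so both forms have a module slot to compare.
fn to_call_path(target: &str) -> Vec<&str> {
    if target.contains('.') {
        target.split('.').collect()
    } else {
        vec!["", target]
    }
}

fn main() {
    assert_eq!(to_call_path("typing.List"), vec!["typing", "List"]);
    assert_eq!(to_call_path("print"), vec!["", "print"]);
    println!("ok");
}
```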

View File

@@ -242,3 +242,5 @@ impl<'a> From<&RefEquality<'a, Expr>> for &'a Expr {
r.0
}
}
pub type CallPath<'a> = smallvec::SmallVec<[&'a str; 8]>;
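The new `CallPath` alias is a `SmallVec` with 8 inline slots: most call paths are short, so they live on the stack and only long paths allocate. Since `smallvec` is an external crate, here is a hand-rolled, hypothetical std-only illustration of the same inline-then-spill idea:

```rust
// A toy SmallVec: up to 8 elements inline, spilling to a heap Vec beyond that.
enum TinyVec<'a> {
    Inline { buf: [&'a str; 8], len: usize },
    Heap(Vec<&'a str>),
}

impl<'a> TinyVec<'a> {
    fn new() -> Self {
        TinyVec::Inline { buf: [""; 8], len: 0 }
    }
    fn push(&mut self, s: &'a str) {
        match self {
            TinyVec::Inline { buf, len } if *len < buf.len() => {
                buf[*len] = s;
                *len += 1;
            }
            TinyVec::Inline { buf, len } => {
                // Inline capacity exhausted: move contents to the heap.
                let mut v: Vec<&'a str> = buf[..*len].to_vec();
                v.push(s);
                *self = TinyVec::Heap(v);
            }
            TinyVec::Heap(v) => v.push(s),
        }
    }
    fn as_slice(&self) -> &[&'a str] {
        match self {
            TinyVec::Inline { buf, len } => &buf[..*len],
            TinyVec::Heap(v) => v,
        }
    }
}

fn main() {
    let mut p = TinyVec::new();
    for part in ["typing", "List"] {
        p.push(part);
    }
    assert_eq!(p.as_slice(), ["typing", "List"]);
    println!("ok");
}
```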

View File

@@ -4,15 +4,20 @@ use std::str::Lines;
use rustpython_ast::{Located, Location};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::source_code::Locator;
/// Extract the leading indentation from a line.
pub fn indentation<'a, T>(checker: &'a Checker, located: &'a Located<T>) -> Cow<'a, str> {
pub fn indentation<'a, T>(locator: &'a Locator, located: &'a Located<T>) -> Option<Cow<'a, str>> {
let range = Range::from_located(located);
checker.locator.slice_source_code_range(&Range::new(
let indentation = locator.slice_source_code_range(&Range::new(
Location::new(range.location.row(), 0),
Location::new(range.location.row(), range.location.column()),
))
));
if indentation.chars().all(char::is_whitespace) {
Some(indentation)
} else {
None
}
}
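The refactored `indentation` helper now returns `Option`: the slice before the statement's column counts as indentation only if it is pure whitespace. A std-only sketch of that check (operating on a plain line rather than a `Locator`):

```rust
// Return the text before `column` only when it is all whitespace;
// otherwise the prefix contains real code and there is no clean indent.
fn indentation(line: &str, column: usize) -> Option<&str> {
    let prefix = &line[..column];
    if prefix.chars().all(char::is_whitespace) {
        Some(prefix)
    } else {
        None
    }
}

fn main() {
    assert_eq!(indentation("    pass", 4), Some("    "));
    assert_eq!(indentation("x = 1; pass", 7), None);
    println!("ok");
}
```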
/// Extract the leading words from a line of text.

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
use std::path::Path;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::rules::flake8_no_pep420::rules::implicit_namespace_package;
use crate::settings::Settings;
@@ -8,7 +8,7 @@ pub fn check_file_path(path: &Path, settings: &Settings) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
// flake8-no-pep420
if settings.rules.enabled(&RuleCode::INP001) {
if settings.rules.enabled(&Rule::ImplicitNamespacePackage) {
if let Some(diagnostic) = implicit_namespace_package(path) {
diagnostics.push(diagnostic);
}

View File

@@ -6,7 +6,7 @@ use rustpython_parser::ast::Suite;
use crate::ast::visitor::Visitor;
use crate::directives::IsortDirectives;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::rules::isort;
use crate::rules::isort::track::{Block, ImportTracker};
use crate::settings::{flags, Settings};
@@ -36,7 +36,7 @@ pub fn check_imports(
// Enforce import rules.
let mut diagnostics = vec![];
if settings.rules.enabled(&RuleCode::I001) {
if settings.rules.enabled(&Rule::UnsortedImports) {
for block in &blocks {
if !block.imports.is_empty() {
if let Some(diagnostic) = isort::rules::organize_imports(
@@ -47,7 +47,7 @@ pub fn check_imports(
}
}
}
if settings.rules.enabled(&RuleCode::I002) {
if settings.rules.enabled(&Rule::MissingRequiredImport) {
diagnostics.extend(isort::rules::add_required_imports(
&blocks, python_ast, locator, settings, autofix,
));

View File

@@ -1,6 +1,6 @@
//! Lint rules based on checking raw physical lines.
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::rules::pycodestyle::rules::{
doc_line_too_long, line_too_long, no_newline_at_end_of_file,
};
@@ -17,12 +17,14 @@ pub fn check_lines(
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
let enforce_blanket_noqa = settings.rules.enabled(&RuleCode::PGH004);
let enforce_blanket_type_ignore = settings.rules.enabled(&RuleCode::PGH003);
let enforce_doc_line_too_long = settings.rules.enabled(&RuleCode::W505);
let enforce_line_too_long = settings.rules.enabled(&RuleCode::E501);
let enforce_no_newline_at_end_of_file = settings.rules.enabled(&RuleCode::W292);
let enforce_unnecessary_coding_comment = settings.rules.enabled(&RuleCode::UP009);
let enforce_blanket_noqa = settings.rules.enabled(&Rule::BlanketNOQA);
let enforce_blanket_type_ignore = settings.rules.enabled(&Rule::BlanketTypeIgnore);
let enforce_doc_line_too_long = settings.rules.enabled(&Rule::DocLineTooLong);
let enforce_line_too_long = settings.rules.enabled(&Rule::LineTooLong);
let enforce_no_newline_at_end_of_file = settings.rules.enabled(&Rule::NoNewLineAtEndOfFile);
let enforce_unnecessary_coding_comment = settings
.rules
.enabled(&Rule::PEP3120UnnecessaryCodingComment);
let mut commented_lines_iter = commented_lines.iter().peekable();
let mut doc_lines_iter = doc_lines.iter().peekable();
@@ -37,7 +39,9 @@ pub fn check_lines(
index,
line,
matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(&RuleCode::UP009),
&& settings
.rules
.should_fix(&Rule::PEP3120UnnecessaryCodingComment),
) {
diagnostics.push(diagnostic);
}
@@ -79,7 +83,7 @@ pub fn check_lines(
if let Some(diagnostic) = no_newline_at_end_of_file(
contents,
matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(&RuleCode::W292),
&& settings.rules.should_fix(&Rule::NoNewLineAtEndOfFile),
) {
diagnostics.push(diagnostic);
}
@@ -92,7 +96,7 @@ pub fn check_lines(
mod tests {
use super::check_lines;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings::{flags, Settings};
#[test]
@@ -105,7 +109,7 @@ mod tests {
&[],
&Settings {
line_length,
..Settings::for_rule(RuleCode::E501)
..Settings::for_rule(Rule::LineTooLong)
},
flags::Autofix::Enabled,
)

View File

@@ -1,14 +1,12 @@
//! `NoQA` enforcement and validation.
use std::str::FromStr;
use nohash_hasher::IntMap;
use rustpython_parser::ast::Location;
use crate::ast::types::Range;
use crate::fix::Fix;
use crate::noqa::{is_file_exempt, Directive};
use crate::registry::{Diagnostic, DiagnosticKind, RuleCode, CODE_REDIRECTS};
use crate::registry::{Diagnostic, DiagnosticKind, Rule, CODE_REDIRECTS};
use crate::settings::{flags, Settings};
use crate::violations::UnusedCodes;
use crate::{noqa, violations};
@@ -24,7 +22,7 @@ pub fn check_noqa(
let mut noqa_directives: IntMap<usize, (Directive, Vec<&str>)> = IntMap::default();
let mut ignored = vec![];
let enforce_noqa = settings.rules.enabled(&RuleCode::RUF100);
let enforce_noqa = settings.rules.enabled(&Rule::UnusedNOQA);
let lines: Vec<&str> = contents.lines().collect();
for lineno in commented_lines {
@@ -56,13 +54,13 @@ pub fn check_noqa(
});
match noqa {
(Directive::All(..), matches) => {
matches.push(diagnostic.kind.code().as_ref());
matches.push(diagnostic.kind.rule().code());
ignored.push(index);
continue;
}
(Directive::Codes(.., codes), matches) => {
if noqa::includes(diagnostic.kind.code(), codes) {
matches.push(diagnostic.kind.code().as_ref());
if noqa::includes(diagnostic.kind.rule(), codes) {
matches.push(diagnostic.kind.rule().code());
ignored.push(index);
continue;
}
@@ -83,12 +81,12 @@ pub fn check_noqa(
.or_insert_with(|| (noqa::extract_noqa_directive(lines[noqa_lineno - 1]), vec![]));
match noqa {
(Directive::All(..), matches) => {
matches.push(diagnostic.kind.code().as_ref());
matches.push(diagnostic.kind.rule().code());
ignored.push(index);
}
(Directive::Codes(.., codes), matches) => {
if noqa::includes(diagnostic.kind.code(), codes) {
matches.push(diagnostic.kind.code().as_ref());
if noqa::includes(diagnostic.kind.rule(), codes) {
matches.push(diagnostic.kind.rule().code());
ignored.push(index);
}
}
@@ -108,7 +106,7 @@ pub fn check_noqa(
Range::new(Location::new(row + 1, start), Location::new(row + 1, end)),
);
if matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(diagnostic.kind.code())
&& settings.rules.should_fix(diagnostic.kind.rule())
{
diagnostic.amend(Fix::deletion(
Location::new(row + 1, start - spaces),
@@ -125,8 +123,8 @@ pub fn check_noqa(
let mut valid_codes = vec![];
let mut self_ignore = false;
for code in codes {
let code = CODE_REDIRECTS.get(code).map_or(code, AsRef::as_ref);
if code == RuleCode::RUF100.as_ref() {
let code = CODE_REDIRECTS.get(code).map_or(code, |r| r.code());
if code == Rule::UnusedNOQA.code() {
self_ignore = true;
break;
}
@@ -134,8 +132,8 @@ pub fn check_noqa(
if matches.contains(&code) || settings.external.contains(code) {
valid_codes.push(code);
} else {
if let Ok(rule_code) = RuleCode::from_str(code) {
if settings.rules.enabled(&rule_code) {
if let Ok(rule) = Rule::from_code(code) {
if settings.rules.enabled(rule) {
unmatched_codes.push(code);
} else {
disabled_codes.push(code);
@@ -172,7 +170,7 @@ pub fn check_noqa(
Range::new(Location::new(row + 1, start), Location::new(row + 1, end)),
);
if matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(diagnostic.kind.code())
&& settings.rules.should_fix(diagnostic.kind.rule())
{
if valid_codes.is_empty() {
diagnostic.amend(Fix::deletion(

View File

@@ -3,7 +3,7 @@
use rustpython_parser::lexer::{LexResult, Tok};
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::rules::ruff::rules::Context;
use crate::rules::{
eradicate, flake8_commas, flake8_implicit_str_concat, flake8_quotes, pycodestyle, ruff,
@@ -19,20 +19,32 @@ pub fn check_tokens(
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
let enforce_ambiguous_unicode_character = settings.rules.enabled(&RuleCode::RUF001)
|| settings.rules.enabled(&RuleCode::RUF002)
|| settings.rules.enabled(&RuleCode::RUF003);
let enforce_quotes = settings.rules.enabled(&RuleCode::Q000)
|| settings.rules.enabled(&RuleCode::Q001)
|| settings.rules.enabled(&RuleCode::Q002)
|| settings.rules.enabled(&RuleCode::Q003);
let enforce_commented_out_code = settings.rules.enabled(&RuleCode::ERA001);
let enforce_invalid_escape_sequence = settings.rules.enabled(&RuleCode::W605);
let enforce_implicit_string_concatenation =
settings.rules.enabled(&RuleCode::ISC001) || settings.rules.enabled(&RuleCode::ISC002);
let enforce_trailing_comma = settings.rules.enabled(&RuleCode::COM812)
|| settings.rules.enabled(&RuleCode::COM818)
|| settings.rules.enabled(&RuleCode::COM819);
let enforce_ambiguous_unicode_character = settings
.rules
.enabled(&Rule::AmbiguousUnicodeCharacterString)
|| settings
.rules
.enabled(&Rule::AmbiguousUnicodeCharacterDocstring)
|| settings
.rules
.enabled(&Rule::AmbiguousUnicodeCharacterComment);
let enforce_quotes = settings.rules.enabled(&Rule::BadQuotesInlineString)
|| settings.rules.enabled(&Rule::BadQuotesMultilineString)
|| settings.rules.enabled(&Rule::BadQuotesDocstring)
|| settings.rules.enabled(&Rule::AvoidQuoteEscape);
let enforce_commented_out_code = settings.rules.enabled(&Rule::CommentedOutCode);
let enforce_invalid_escape_sequence = settings.rules.enabled(&Rule::InvalidEscapeSequence);
let enforce_implicit_string_concatenation = settings
.rules
.enabled(&Rule::SingleLineImplicitStringConcatenation)
|| settings
.rules
.enabled(&Rule::MultiLineImplicitStringConcatenation);
let enforce_trailing_comma = settings.rules.enabled(&Rule::TrailingCommaMissing)
|| settings
.rules
.enabled(&Rule::TrailingCommaOnBareTupleProhibited)
|| settings.rules.enabled(&Rule::TrailingCommaProhibited);
let mut state_machine = StateMachine::default();
for &(start, ref tok, end) in tokens.iter().flatten() {
@@ -75,7 +87,7 @@ pub fn check_tokens(
settings,
autofix,
) {
if settings.rules.enabled(diagnostic.kind.code()) {
if settings.rules.enabled(diagnostic.kind.rule()) {
diagnostics.push(diagnostic);
}
}
@@ -101,7 +113,7 @@ pub fn check_tokens(
start,
end,
matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(&RuleCode::W605),
&& settings.rules.should_fix(&Rule::InvalidEscapeSequence),
));
}
}
@@ -112,7 +124,7 @@ pub fn check_tokens(
diagnostics.extend(
flake8_implicit_str_concat::rules::implicit(tokens)
.into_iter()
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.code())),
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.rule())),
);
}
@@ -121,7 +133,7 @@ pub fn check_tokens(
diagnostics.extend(
flake8_commas::rules::trailing_commas(tokens, settings, autofix)
.into_iter()
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.code())),
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.rule())),
);
}

View File

@@ -454,6 +454,7 @@ mod tests {
pep8_naming: None,
pycodestyle: None,
pydocstyle: None,
pylint: None,
pyupgrade: None,
});
assert_eq!(actual, expected);
@@ -520,6 +521,7 @@ mod tests {
pep8_naming: None,
pycodestyle: None,
pydocstyle: None,
pylint: None,
pyupgrade: None,
});
assert_eq!(actual, expected);
@@ -586,6 +588,7 @@ mod tests {
pep8_naming: None,
pycodestyle: None,
pydocstyle: None,
pylint: None,
pyupgrade: None,
});
assert_eq!(actual, expected);
@@ -652,6 +655,7 @@ mod tests {
pep8_naming: None,
pycodestyle: None,
pydocstyle: None,
pylint: None,
pyupgrade: None,
});
assert_eq!(actual, expected);
@@ -723,6 +727,7 @@ mod tests {
pep8_naming: None,
pycodestyle: None,
pydocstyle: None,
pylint: None,
pyupgrade: None,
});
assert_eq!(actual, expected);
@@ -795,6 +800,7 @@ mod tests {
pydocstyle: Some(pydocstyle::settings::Options {
convention: Some(Convention::Numpy),
}),
pylint: None,
pyupgrade: None,
});
assert_eq!(actual, expected);
@@ -867,6 +873,7 @@ mod tests {
pep8_naming: None,
pycodestyle: None,
pydocstyle: None,
pylint: None,
pyupgrade: None,
});
assert_eq!(actual, expected);

View File

@@ -7,7 +7,7 @@ use anyhow::{anyhow, Result};
use path_absolutize::{path_dedot, Absolutize};
use rustc_hash::FxHashSet;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings::hashable::{HashableGlobMatcher, HashableHashSet};
/// Extract the absolute path and basename (as strings) from a Path.
@@ -29,9 +29,9 @@ pub(crate) fn ignores_from_path<'a>(
pattern_code_pairs: &'a [(
HashableGlobMatcher,
HashableGlobMatcher,
HashableHashSet<RuleCode>,
HashableHashSet<Rule>,
)],
) -> Result<FxHashSet<&'a RuleCode>> {
) -> Result<FxHashSet<&'a Rule>> {
let (file_path, file_basename) = extract_path_names(path)?;
Ok(pattern_code_pairs
.iter()

View File

@@ -18,8 +18,6 @@
)]
#![forbid(unsafe_code)]
extern crate core;
mod ast;
mod autofix;
pub mod cache;
@@ -49,6 +47,7 @@ mod violations;
mod visibility;
use cfg_if::cfg_if;
pub use violation::{AutofixKind, Availability as AutofixAvailability};
pub use violations::IOError;
cfg_if! {

View File

@@ -7,11 +7,11 @@ use wasm_bindgen::prelude::*;
use crate::directives;
use crate::linter::check_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::rules::{
flake8_annotations, flake8_bandit, flake8_bugbear, flake8_errmsg, flake8_import_conventions,
flake8_pytest_style, flake8_quotes, flake8_tidy_imports, flake8_unused_arguments, isort,
mccabe, pep8_naming, pycodestyle, pydocstyle, pyupgrade,
mccabe, pep8_naming, pycodestyle, pydocstyle, pylint, pyupgrade,
};
use crate::rustpython_helpers::tokenize;
use crate::settings::configuration::Configuration;
@@ -51,13 +51,30 @@ export interface Diagnostic {
#[derive(Serialize)]
struct ExpandedMessage {
code: RuleCode,
code: SerializeRuleAsCode,
message: String,
location: Location,
end_location: Location,
fix: Option<ExpandedFix>,
}
struct SerializeRuleAsCode(Rule);
impl Serialize for SerializeRuleAsCode {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(self.0.code())
}
}
impl From<Rule> for SerializeRuleAsCode {
fn from(rule: Rule) -> Self {
Self(rule)
}
}
#[derive(Serialize)]
struct ExpandedFix {
content: String,
@@ -134,6 +151,7 @@ pub fn defaultSettings() -> Result<JsValue, JsValue> {
pep8_naming: Some(pep8_naming::settings::Settings::default().into()),
pycodestyle: Some(pycodestyle::settings::Settings::default().into()),
pydocstyle: Some(pydocstyle::settings::Settings::default().into()),
pylint: Some(pylint::settings::Settings::default().into()),
pyupgrade: Some(pyupgrade::settings::Settings::default().into()),
})?)
}
@@ -181,7 +199,7 @@ pub fn check(contents: &str, options: JsValue) -> Result<JsValue, JsValue> {
let messages: Vec<ExpandedMessage> = diagnostics
.into_iter()
.map(|diagnostic| ExpandedMessage {
code: diagnostic.kind.code().clone(),
code: diagnostic.kind.rule().clone().into(),
message: diagnostic.kind.body(),
location: diagnostic.location,
end_location: diagnostic.end_location,
@@ -223,7 +241,7 @@ mod test {
"if (1, 2): pass",
r#"{}"#,
[ExpandedMessage {
code: RuleCode::F634,
code: Rule::IfTuple.into(),
message: "If test is a tuple, which is always `True`".to_string(),
location: Location::new(1, 0),
end_location: Location::new(1, 15),
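The `SerializeRuleAsCode` newtype above exists so the WASM API keeps emitting code strings (like `"F634"`) after `RuleCode` became `Rule`. Since `serde` is an external crate, this std-only analogue illustrates the same newtype pattern via `Display` instead of `Serialize`:

```rust
use std::fmt;

// A single hypothetical rule variant is enough to show the wrapper.
#[derive(Clone, Copy)]
enum Rule {
    IfTuple,
}

impl Rule {
    fn code(self) -> &'static str {
        match self {
            Rule::IfTuple => "F634",
        }
    }
}

// Newtype that renders a Rule as its code string at the API boundary.
struct SerializeRuleAsCode(Rule);

impl fmt::Display for SerializeRuleAsCode {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(self.0.code())
    }
}

impl From<Rule> for SerializeRuleAsCode {
    fn from(rule: Rule) -> Self {
        Self(rule)
    }
}

fn main() {
    let wrapped: SerializeRuleAsCode = Rule::IfTuple.into();
    assert_eq!(wrapped.to_string(), "F634");
    println!("ok");
}
```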

View File

@@ -16,7 +16,7 @@ use crate::directives::Directives;
use crate::doc_lines::{doc_lines_from_ast, doc_lines_from_tokens};
use crate::message::{Message, Source};
use crate::noqa::add_noqa;
use crate::registry::{Diagnostic, LintSource, RuleCode};
use crate::registry::{Diagnostic, LintSource, Rule};
use crate::settings::{flags, Settings};
use crate::source_code::{Indexer, Locator, Stylist};
use crate::{directives, fs, rustpython_helpers, violations};
@@ -48,7 +48,7 @@ pub fn check_path(
// Collect doc lines. This requires a rare mix of tokens (for comments) and AST
// (for docstrings), which demands special-casing at this level.
let use_doc_lines = settings.rules.enabled(&RuleCode::W505);
let use_doc_lines = settings.rules.enabled(&Rule::DocLineTooLong);
let mut doc_lines = vec![];
if use_doc_lines {
doc_lines.extend(doc_lines_from_tokens(&tokens));
@@ -116,7 +116,7 @@ pub fn check_path(
}
}
Err(parse_error) => {
if settings.rules.enabled(&RuleCode::E999) {
if settings.rules.enabled(&Rule::SyntaxError) {
diagnostics.push(Diagnostic::new(
violations::SyntaxError(parse_error.error.to_string()),
Range::new(parse_error.location, parse_error.location),
@@ -170,7 +170,7 @@ pub fn check_path(
if !ignores.is_empty() {
return Ok(diagnostics
.into_iter()
.filter(|diagnostic| !ignores.contains(&diagnostic.kind.code()))
.filter(|diagnostic| !ignores.contains(&diagnostic.kind.rule()))
.collect());
}
}
@@ -357,8 +357,7 @@ This likely indicates a bug in `{}`. If you could open an issue at:
{}/issues/new?title=%5BInfinite%20loop%5D
quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be
very appreciative!
quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
"#,
"warning".yellow().bold(),
MAX_ITERATIONS,

View File

@@ -44,13 +44,14 @@ macro_rules! notify_user {
}
}
#[derive(Debug, PartialOrd, Ord, PartialEq, Eq)]
#[derive(Debug, Default, PartialOrd, Ord, PartialEq, Eq)]
pub enum LogLevel {
// No output (+ `log::LevelFilter::Off`).
Silent,
// Only show lint violations, with no decorative output (+ `log::LevelFilter::Off`).
Quiet,
// All user-facing output (+ `log::LevelFilter::Info`).
#[default]
Default,
// All user-facing output (+ `log::LevelFilter::Debug`).
Verbose,
@@ -67,12 +68,6 @@ impl LogLevel {
}
}
impl Default for LogLevel {
fn default() -> Self {
LogLevel::Default
}
}
pub fn set_up_logging(level: &LogLevel) -> Result<()> {
fern::Dispatch::new()
.format(|out, message, record| {
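The `LogLevel` hunk swaps a manual `impl Default` for the derive-with-`#[default]` form stabilized in Rust 1.62. A minimal self-contained version of the change:

```rust
// `#[derive(Default)]` on an enum picks the variant marked `#[default]`,
// replacing the hand-written impl Default block.
#[allow(dead_code)]
#[derive(Debug, Default, PartialEq, Eq)]
enum LogLevel {
    Silent,
    Quiet,
    #[default]
    Default,
    Verbose,
}

fn main() {
    assert_eq!(LogLevel::default(), LogLevel::Default);
    println!("ok");
}
```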

View File

@@ -8,7 +8,7 @@ use once_cell::sync::Lazy;
use regex::Regex;
use rustc_hash::{FxHashMap, FxHashSet};
use crate::registry::{Diagnostic, RuleCode, CODE_REDIRECTS};
use crate::registry::{Diagnostic, Rule, CODE_REDIRECTS};
use crate::settings::hashable::HashableHashSet;
use crate::source_code::LineEnding;
@@ -69,11 +69,11 @@ pub fn extract_noqa_directive(line: &str) -> Directive {
/// Returns `true` if the string list of `codes` includes `code` (or an alias
/// thereof).
pub fn includes(needle: &RuleCode, haystack: &[&str]) -> bool {
let needle: &str = needle.as_ref();
pub fn includes(needle: &Rule, haystack: &[&str]) -> bool {
let needle: &str = needle.code();
haystack.iter().any(|candidate| {
if let Some(candidate) = CODE_REDIRECTS.get(candidate) {
needle == candidate.as_ref()
needle == candidate.code()
} else {
&needle == candidate
}
@@ -101,21 +101,21 @@ fn add_noqa_inner(
external: &HashableHashSet<String>,
line_ending: &LineEnding,
) -> (usize, String) {
let mut matches_by_line: FxHashMap<usize, FxHashSet<&RuleCode>> = FxHashMap::default();
let mut matches_by_line: FxHashMap<usize, FxHashSet<&Rule>> = FxHashMap::default();
for (lineno, line) in contents.lines().enumerate() {
// If we hit an exemption for the entire file, bail.
if is_file_exempt(line) {
return (0, contents.to_string());
}
let mut codes: FxHashSet<&RuleCode> = FxHashSet::default();
let mut codes: FxHashSet<&Rule> = FxHashSet::default();
for diagnostic in diagnostics {
// TODO(charlie): Consider respecting parent `noqa` directives. For now, we'll
// add a `noqa` for every diagnostic, on its own line. This could lead to
// duplication, whereby some parent `noqa` directives become
// redundant.
if diagnostic.location.row() == lineno + 1 {
codes.insert(diagnostic.kind.code());
codes.insert(diagnostic.kind.rule());
}
}
@@ -138,7 +138,7 @@ fn add_noqa_inner(
output.push_str(line);
output.push_str(line_ending);
}
Some(codes) => {
Some(rules) => {
match extract_noqa_directive(line) {
Directive::None => {
// Add existing content.
@@ -148,7 +148,7 @@ fn add_noqa_inner(
output.push_str(" # noqa: ");
// Add codes.
let codes: Vec<&str> = codes.iter().map(AsRef::as_ref).collect();
let codes: Vec<&str> = rules.iter().map(|r| r.code()).collect();
let suffix = codes.join(", ");
output.push_str(&suffix);
output.push_str(line_ending);
@@ -163,7 +163,7 @@ fn add_noqa_inner(
// Add codes.
let codes: Vec<&str> =
codes.iter().map(AsRef::as_ref).sorted_unstable().collect();
rules.iter().map(|r| r.code()).sorted_unstable().collect();
let suffix = codes.join(", ");
output.push_str(&suffix);
output.push_str(line_ending);
@@ -181,9 +181,9 @@ fn add_noqa_inner(
formatted.push_str(" # noqa: ");
// Add codes.
let codes: Vec<&str> = codes
let codes: Vec<&str> = rules
.iter()
.map(AsRef::as_ref)
.map(|r| r.code())
.chain(existing.into_iter().filter(|code| external.contains(*code)))
.sorted_unstable()
.collect();
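The `includes` function above decides whether a `# noqa: ...` code list covers a rule, resolving deprecated aliases through `CODE_REDIRECTS` first. A std-only sketch with plain strings and a hypothetical redirect table:

```rust
// A noqa list matches when any listed code, after alias resolution,
// equals the rule's current code.
fn includes(needle: &str, haystack: &[&str], redirects: &[(&str, &str)]) -> bool {
    haystack.iter().any(|candidate| {
        let resolved = redirects
            .iter()
            .find(|(old, _)| old == candidate)
            .map_or(*candidate, |(_, new)| *new);
        needle == resolved
    })
}

fn main() {
    let redirects = [("U006", "UP006")];
    // The deprecated alias "U006" still suppresses UP006.
    assert!(includes("UP006", &["E501", "U006"], &redirects));
    assert!(!includes("UP006", &["E501"], &redirects));
    println!("ok");
}
```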

View File

@@ -249,3 +249,13 @@ pub fn is_pep585_builtin(checker: &Checker, expr: &Expr) -> bool {
PEP_585_BUILTINS_ELIGIBLE.contains(&call_path.as_slice())
})
}
pub enum Callable {
ForwardRef,
Cast,
NewType,
TypeVar,
NamedTuple,
TypedDict,
MypyExtension,
}

View File

@@ -1,12 +1,11 @@
//! Registry of [`RuleCode`] to [`DiagnosticKind`] mappings.
//! Registry of [`Rule`] to [`DiagnosticKind`] mappings.
use itertools::Itertools;
use once_cell::sync::Lazy;
use ruff_macros::RuleCodePrefix;
use rustc_hash::FxHashMap;
use rustpython_parser::ast::Location;
use serde::{Deserialize, Serialize};
use strum_macros::{AsRefStr, Display, EnumIter, EnumString};
use strum_macros::{AsRefStr, EnumIter};
use crate::ast::types::Range;
use crate::fix::Fix;
@@ -80,7 +79,6 @@ ruff_macros::define_rule_mapping!(
F901 => violations::RaiseNotImplemented,
// pylint
PLC0414 => violations::UselessImportAlias,
PLC2201 => violations::MisplacedComparisonConstant,
PLC3002 => violations::UnnecessaryDirectLambdaCall,
PLE0117 => violations::NonlocalWithoutBinding,
PLE0118 => violations::UsedPriorGlobalDeclaration,
@@ -423,6 +421,7 @@ ruff_macros::define_rule_mapping!(
RUF002 => violations::AmbiguousUnicodeCharacterDocstring,
RUF003 => violations::AmbiguousUnicodeCharacterComment,
RUF004 => violations::KeywordArgumentBeforeStarArgument,
RUF005 => violations::UnpackInsteadOfConcatenatingToCollectionLiteral,
RUF100 => violations::UnusedNOQA,
);
@@ -544,81 +543,40 @@ pub enum LintSource {
Filesystem,
}
impl RuleCode {
impl Rule {
/// The source for the diagnostic (either the AST, the filesystem, or the
/// physical lines).
pub fn lint_source(&self) -> &'static LintSource {
match self {
RuleCode::RUF100 => &LintSource::NoQa,
RuleCode::E501
| RuleCode::W292
| RuleCode::W505
| RuleCode::UP009
| RuleCode::PGH003
| RuleCode::PGH004 => &LintSource::Lines,
RuleCode::ERA001
| RuleCode::ISC001
| RuleCode::ISC002
| RuleCode::Q000
| RuleCode::Q001
| RuleCode::Q002
| RuleCode::Q003
| RuleCode::W605
| RuleCode::COM812
| RuleCode::COM818
| RuleCode::COM819
| RuleCode::RUF001
| RuleCode::RUF002
| RuleCode::RUF003 => &LintSource::Tokens,
RuleCode::E902 => &LintSource::Io,
RuleCode::I001 | RuleCode::I002 => &LintSource::Imports,
RuleCode::INP001 => &LintSource::Filesystem,
Rule::UnusedNOQA => &LintSource::NoQa,
Rule::LineTooLong
| Rule::NoNewLineAtEndOfFile
| Rule::DocLineTooLong
| Rule::PEP3120UnnecessaryCodingComment
| Rule::BlanketTypeIgnore
| Rule::BlanketNOQA => &LintSource::Lines,
Rule::CommentedOutCode
| Rule::SingleLineImplicitStringConcatenation
| Rule::MultiLineImplicitStringConcatenation
| Rule::BadQuotesInlineString
| Rule::BadQuotesMultilineString
| Rule::BadQuotesDocstring
| Rule::AvoidQuoteEscape
| Rule::InvalidEscapeSequence
| Rule::TrailingCommaMissing
| Rule::TrailingCommaOnBareTupleProhibited
| Rule::TrailingCommaProhibited
| Rule::AmbiguousUnicodeCharacterString
| Rule::AmbiguousUnicodeCharacterDocstring
| Rule::AmbiguousUnicodeCharacterComment => &LintSource::Tokens,
Rule::IOError => &LintSource::Io,
Rule::UnsortedImports | Rule::MissingRequiredImport => &LintSource::Imports,
Rule::ImplicitNamespacePackage => &LintSource::Filesystem,
_ => &LintSource::Ast,
}
}
}
impl DiagnosticKind {
/// The summary text for the diagnostic. Typically a truncated form of the
/// body text.
pub fn summary(&self) -> String {
match self {
DiagnosticKind::UnaryPrefixIncrement(..) => {
"Python does not support the unary prefix increment".to_string()
}
DiagnosticKind::UnusedLoopControlVariable(violations::UnusedLoopControlVariable(
name,
)) => {
format!("Loop control variable `{name}` not used within the loop body")
}
DiagnosticKind::NoAssertRaisesException(..) => {
"`assertRaises(Exception)` should be considered evil".to_string()
}
DiagnosticKind::StarArgUnpackingAfterKeywordArg(..) => {
"Star-arg unpacking after a keyword argument is strongly discouraged".to_string()
}
// flake8-datetimez
DiagnosticKind::CallDatetimeToday(..) => {
"The use of `datetime.datetime.today()` is not allowed".to_string()
}
DiagnosticKind::CallDatetimeUtcnow(..) => {
"The use of `datetime.datetime.utcnow()` is not allowed".to_string()
}
DiagnosticKind::CallDatetimeUtcfromtimestamp(..) => {
"The use of `datetime.datetime.utcfromtimestamp()` is not allowed".to_string()
}
DiagnosticKind::CallDateToday(..) => {
"The use of `datetime.date.today()` is not allowed.".to_string()
}
DiagnosticKind::CallDateFromtimestamp(..) => {
"The use of `datetime.date.fromtimestamp()` is not allowed".to_string()
}
_ => self.body(),
}
}
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct Diagnostic {
pub kind: DiagnosticKind,
@@ -651,95 +609,80 @@ impl Diagnostic {
}
/// Pairs of checks that shouldn't be enabled together.
pub const INCOMPATIBLE_CODES: &[(RuleCode, RuleCode, &str)] = &[(
RuleCode::D203,
RuleCode::D211,
pub const INCOMPATIBLE_CODES: &[(Rule, Rule, &str)] = &[(
Rule::OneBlankLineBeforeClass,
Rule::NoBlankLineBeforeClass,
"`D203` (OneBlankLineBeforeClass) and `D211` (NoBlankLinesBeforeClass) are incompatible. \
Consider adding `D203` to `ignore`.",
)];
/// A hash map from deprecated to latest `RuleCode`.
pub static CODE_REDIRECTS: Lazy<FxHashMap<&'static str, RuleCode>> = Lazy::new(|| {
/// A hash map from deprecated to latest `Rule`.
pub static CODE_REDIRECTS: Lazy<FxHashMap<&'static str, Rule>> = Lazy::new(|| {
FxHashMap::from_iter([
// TODO(charlie): Remove by 2023-01-01.
("U001", RuleCode::UP001),
("U003", RuleCode::UP003),
("U004", RuleCode::UP004),
("U005", RuleCode::UP005),
("U006", RuleCode::UP006),
("U007", RuleCode::UP007),
("U008", RuleCode::UP008),
("U009", RuleCode::UP009),
("U010", RuleCode::UP010),
("U011", RuleCode::UP011),
("U012", RuleCode::UP012),
("U013", RuleCode::UP013),
("U014", RuleCode::UP014),
("U015", RuleCode::UP015),
("U016", RuleCode::UP016),
("U017", RuleCode::UP017),
("U019", RuleCode::UP019),
("U001", Rule::UselessMetaclassType),
("U003", Rule::TypeOfPrimitive),
("U004", Rule::UselessObjectInheritance),
("U005", Rule::DeprecatedUnittestAlias),
("U006", Rule::UsePEP585Annotation),
("U007", Rule::UsePEP604Annotation),
("U008", Rule::SuperCallWithParameters),
("U009", Rule::PEP3120UnnecessaryCodingComment),
("U010", Rule::UnnecessaryFutureImport),
("U011", Rule::LRUCacheWithoutParameters),
("U012", Rule::UnnecessaryEncodeUTF8),
("U013", Rule::ConvertTypedDictFunctionalToClass),
("U014", Rule::ConvertNamedTupleFunctionalToClass),
("U015", Rule::RedundantOpenModes),
("U016", Rule::RemoveSixCompat),
("U017", Rule::DatetimeTimezoneUTC),
("U019", Rule::TypingTextStrAlias),
// TODO(charlie): Remove by 2023-02-01.
("I252", RuleCode::TID252),
("M001", RuleCode::RUF100),
("I252", Rule::RelativeImports),
("M001", Rule::UnusedNOQA),
// TODO(charlie): Remove by 2023-02-01.
("PDV002", RuleCode::PD002),
("PDV003", RuleCode::PD003),
("PDV004", RuleCode::PD004),
("PDV007", RuleCode::PD007),
("PDV008", RuleCode::PD008),
("PDV009", RuleCode::PD009),
("PDV010", RuleCode::PD010),
("PDV011", RuleCode::PD011),
("PDV012", RuleCode::PD012),
("PDV013", RuleCode::PD013),
("PDV015", RuleCode::PD015),
("PDV901", RuleCode::PD901),
("PDV002", Rule::UseOfInplaceArgument),
("PDV003", Rule::UseOfDotIsNull),
("PDV004", Rule::UseOfDotNotNull),
("PDV007", Rule::UseOfDotIx),
("PDV008", Rule::UseOfDotAt),
("PDV009", Rule::UseOfDotIat),
("PDV010", Rule::UseOfDotPivotOrUnstack),
("PDV011", Rule::UseOfDotValues),
("PDV012", Rule::UseOfDotReadTable),
("PDV013", Rule::UseOfDotStack),
("PDV015", Rule::UseOfPdMerge),
("PDV901", Rule::DfIsABadVariableName),
// TODO(charlie): Remove by 2023-02-01.
("R501", RuleCode::RET501),
("R502", RuleCode::RET502),
("R503", RuleCode::RET503),
("R504", RuleCode::RET504),
("R505", RuleCode::RET505),
("R506", RuleCode::RET506),
("R507", RuleCode::RET507),
("R508", RuleCode::RET508),
("R501", Rule::UnnecessaryReturnNone),
("R502", Rule::ImplicitReturnValue),
("R503", Rule::ImplicitReturn),
("R504", Rule::UnnecessaryAssign),
("R505", Rule::SuperfluousElseReturn),
("R506", Rule::SuperfluousElseRaise),
("R507", Rule::SuperfluousElseContinue),
("R508", Rule::SuperfluousElseBreak),
// TODO(charlie): Remove by 2023-02-01.
("IC001", RuleCode::ICN001),
("IC002", RuleCode::ICN001),
("IC003", RuleCode::ICN001),
("IC004", RuleCode::ICN001),
("IC001", Rule::ImportAliasIsNotConventional),
("IC002", Rule::ImportAliasIsNotConventional),
("IC003", Rule::ImportAliasIsNotConventional),
("IC004", Rule::ImportAliasIsNotConventional),
])
});
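The `CODE_REDIRECTS` map above keeps deprecated codes (the old `U0xx` pyupgrade prefix, `PDV*`, `R5xx`, `IC00x`) resolving to current rules, so stale `--select` flags and noqa comments keep working. A std-only sketch of the lookup, using a tiny hypothetical subset of rules and a plain `HashMap` instead of `once_cell::Lazy`:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Rule {
    UsePEP585Annotation,
    UnusedNOQA,
}

impl Rule {
    fn code(&self) -> &'static str {
        match self {
            Rule::UsePEP585Annotation => "UP006",
            Rule::UnusedNOQA => "RUF100",
        }
    }
}

// Deprecated code -> current rule.
fn redirects() -> HashMap<&'static str, Rule> {
    HashMap::from_iter([
        ("U006", Rule::UsePEP585Annotation),
        ("M001", Rule::UnusedNOQA),
    ])
}

// Resolve a deprecated code to its current code, if it is a known alias.
fn resolve(code: &str) -> Option<&'static str> {
    redirects().get(code).map(|rule| rule.code())
}

fn main() {
    assert_eq!(resolve("U006"), Some("UP006"));
    assert_eq!(resolve("M001"), Some("RUF100"));
    assert_eq!(resolve("UP006"), None);
    println!("ok");
}
```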
#[cfg(test)]
mod tests {
use std::str::FromStr;
use strum::IntoEnumIterator;
use crate::registry::RuleCode;
use crate::registry::Rule;
#[test]
fn check_code_serialization() {
for check_code in RuleCode::iter() {
for rule in Rule::iter() {
assert!(
RuleCode::from_str(check_code.as_ref()).is_ok(),
"{check_code:?} could not be round-trip serialized."
Rule::from_code(rule.code()).is_ok(),
"{rule:?} could not be round-trip serialized."
);
}
}
#[test]
fn fixable_codes() {
for check_code in RuleCode::iter() {
let kind = check_code.kind();
if kind.fixable() {
assert!(
kind.commit().is_some(),
"{check_code:?} is fixable but has no commit message."
);
}
}
}
}

View File

@@ -4,19 +4,18 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(RuleCode::ERA001, Path::new("ERA001.py"); "ERA001")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::CommentedOutCode, Path::new("ERA001.py"); "ERA001")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/eradicate")
.join(path)

View File

@@ -3,7 +3,7 @@ use rustpython_ast::Location;
use super::detection::comment_contains_code;
use crate::ast::types::Range;
use crate::fix::Fix;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::settings::{flags, Settings};
use crate::source_code::Locator;
use crate::violations;
@@ -35,7 +35,7 @@ pub fn commented_out_code(
if is_standalone_comment(&line) && comment_contains_code(&line, &settings.task_tags[..]) {
let mut diagnostic = Diagnostic::new(violations::CommentedOutCode, Range::new(start, end));
if matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(&RuleCode::ERA001)
&& settings.rules.should_fix(&Rule::CommentedOutCode)
{
diagnostic.amend(Fix::deletion(location, end_location));
}

View File

@@ -3,28 +3,27 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(RuleCode::YTT101, Path::new("YTT101.py"); "YTT101")]
#[test_case(RuleCode::YTT102, Path::new("YTT102.py"); "YTT102")]
#[test_case(RuleCode::YTT103, Path::new("YTT103.py"); "YTT103")]
#[test_case(RuleCode::YTT201, Path::new("YTT201.py"); "YTT201")]
#[test_case(RuleCode::YTT202, Path::new("YTT202.py"); "YTT202")]
#[test_case(RuleCode::YTT203, Path::new("YTT203.py"); "YTT203")]
#[test_case(RuleCode::YTT204, Path::new("YTT204.py"); "YTT204")]
#[test_case(RuleCode::YTT301, Path::new("YTT301.py"); "YTT301")]
#[test_case(RuleCode::YTT302, Path::new("YTT302.py"); "YTT302")]
#[test_case(RuleCode::YTT303, Path::new("YTT303.py"); "YTT303")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::SysVersionSlice3Referenced, Path::new("YTT101.py"); "YTT101")]
#[test_case(Rule::SysVersion2Referenced, Path::new("YTT102.py"); "YTT102")]
#[test_case(Rule::SysVersionCmpStr3, Path::new("YTT103.py"); "YTT103")]
#[test_case(Rule::SysVersionInfo0Eq3Referenced, Path::new("YTT201.py"); "YTT201")]
#[test_case(Rule::SixPY3Referenced, Path::new("YTT202.py"); "YTT202")]
#[test_case(Rule::SysVersionInfo1CmpInt, Path::new("YTT203.py"); "YTT203")]
#[test_case(Rule::SysVersionInfoMinorCmpInt, Path::new("YTT204.py"); "YTT204")]
#[test_case(Rule::SysVersion0Referenced, Path::new("YTT301.py"); "YTT301")]
#[test_case(Rule::SysVersionCmpStr10, Path::new("YTT302.py"); "YTT302")]
#[test_case(Rule::SysVersionSlice1Referenced, Path::new("YTT303.py"); "YTT303")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_2020")
.join(path)

View File

@@ -3,13 +3,13 @@ use rustpython_ast::{Cmpop, Constant, Expr, ExprKind, Located};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::violations;
fn is_sys(checker: &Checker, expr: &Expr, target: &str) -> bool {
checker
.resolve_call_path(expr)
.map_or(false, |path| path == ["sys", target])
.map_or(false, |call_path| call_path.as_slice() == ["sys", target])
}
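The `is_sys` change above is one instance of a pattern repeated throughout this diff: `resolve_call_path` results are now compared via `.as_slice()` against a fully qualified path. A self-contained sketch with a stand-in resolver (the real one walks the AST and import bindings, not a dotted string):

```rust
// Stand-in resolver for illustration only: split a dotted name into
// segments. The linter's resolve_call_path works on expressions.
fn resolve_call_path(expr: &str) -> Option<Vec<&str>> {
    if expr.is_empty() {
        None
    } else {
        Some(expr.split('.').collect())
    }
}

// Mirrors the idiom in the hunk: compare the resolved segments,
// as a slice, against the target path.
fn is_sys(expr: &str, target: &str) -> bool {
    resolve_call_path(expr)
        .map_or(false, |call_path| call_path.as_slice() == ["sys", target])
}

fn main() {
    assert!(is_sys("sys.version", "version"));
    assert!(!is_sys("os.version", "version"));
    assert!(!is_sys("", "version"));
}
```

Comparing `&[&str]` against a `[&str; 2]` array literal works because `std` provides `PartialEq` between slices and fixed-size arrays, which is what makes the `.as_slice() == [...]` form compile.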
/// YTT101, YTT102, YTT301, YTT303
@@ -27,13 +27,21 @@ pub fn subscript(checker: &mut Checker, value: &Expr, slice: &Expr) {
..
} = &upper.node
{
if *i == BigInt::from(1) && checker.settings.rules.enabled(&RuleCode::YTT303) {
if *i == BigInt::from(1)
&& checker
.settings
.rules
.enabled(&Rule::SysVersionSlice1Referenced)
{
checker.diagnostics.push(Diagnostic::new(
violations::SysVersionSlice1Referenced,
Range::from_located(value),
));
} else if *i == BigInt::from(3)
&& checker.settings.rules.enabled(&RuleCode::YTT101)
&& checker
.settings
.rules
.enabled(&Rule::SysVersionSlice3Referenced)
{
checker.diagnostics.push(Diagnostic::new(
violations::SysVersionSlice3Referenced,
@@ -47,12 +55,15 @@ pub fn subscript(checker: &mut Checker, value: &Expr, slice: &Expr) {
value: Constant::Int(i),
..
} => {
if *i == BigInt::from(2) && checker.settings.rules.enabled(&RuleCode::YTT102) {
if *i == BigInt::from(2)
&& checker.settings.rules.enabled(&Rule::SysVersion2Referenced)
{
checker.diagnostics.push(Diagnostic::new(
violations::SysVersion2Referenced,
Range::from_located(value),
));
} else if *i == BigInt::from(0) && checker.settings.rules.enabled(&RuleCode::YTT301)
} else if *i == BigInt::from(0)
&& checker.settings.rules.enabled(&Rule::SysVersion0Referenced)
{
checker.diagnostics.push(Diagnostic::new(
violations::SysVersion0Referenced,
@@ -89,7 +100,10 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
) = (ops, comparators)
{
if *n == BigInt::from(3)
&& checker.settings.rules.enabled(&RuleCode::YTT201)
&& checker
.settings
.rules
.enabled(&Rule::SysVersionInfo0Eq3Referenced)
{
checker.diagnostics.push(Diagnostic::new(
violations::SysVersionInfo0Eq3Referenced,
@@ -110,7 +124,7 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
}],
) = (ops, comparators)
{
if checker.settings.rules.enabled(&RuleCode::YTT203) {
if checker.settings.rules.enabled(&Rule::SysVersionInfo1CmpInt) {
checker.diagnostics.push(Diagnostic::new(
violations::SysVersionInfo1CmpInt,
Range::from_located(left),
@@ -136,7 +150,11 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
}],
) = (ops, comparators)
{
if checker.settings.rules.enabled(&RuleCode::YTT204) {
if checker
.settings
.rules
.enabled(&Rule::SysVersionInfoMinorCmpInt)
{
checker.diagnostics.push(Diagnostic::new(
violations::SysVersionInfoMinorCmpInt,
Range::from_located(left),
@@ -162,13 +180,13 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
) = (ops, comparators)
{
if s.len() == 1 {
if checker.settings.rules.enabled(&RuleCode::YTT302) {
if checker.settings.rules.enabled(&Rule::SysVersionCmpStr10) {
checker.diagnostics.push(Diagnostic::new(
violations::SysVersionCmpStr10,
Range::from_located(left),
));
}
} else if checker.settings.rules.enabled(&RuleCode::YTT103) {
} else if checker.settings.rules.enabled(&Rule::SysVersionCmpStr3) {
checker.diagnostics.push(Diagnostic::new(
violations::SysVersionCmpStr3,
Range::from_located(left),
@@ -182,7 +200,7 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
pub fn name_or_attribute(checker: &mut Checker, expr: &Expr) {
if checker
.resolve_call_path(expr)
.map_or(false, |path| path == ["six", "PY3"])
.map_or(false, |call_path| call_path.as_slice() == ["six", "PY3"])
{
checker.diagnostics.push(Diagnostic::new(
violations::SixPY3Referenced,

View File

@@ -11,7 +11,7 @@ mod tests {
use anyhow::Result;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings::Settings;
#[test]
@@ -20,17 +20,17 @@ mod tests {
Path::new("./resources/test/fixtures/flake8_annotations/annotation_presence.py"),
&Settings {
..Settings::for_rules(vec![
RuleCode::ANN001,
RuleCode::ANN002,
RuleCode::ANN003,
RuleCode::ANN101,
RuleCode::ANN102,
RuleCode::ANN201,
RuleCode::ANN202,
RuleCode::ANN204,
RuleCode::ANN205,
RuleCode::ANN206,
RuleCode::ANN401,
Rule::MissingTypeFunctionArgument,
Rule::MissingTypeArgs,
Rule::MissingTypeKwargs,
Rule::MissingTypeSelf,
Rule::MissingTypeCls,
Rule::MissingReturnTypePublicFunction,
Rule::MissingReturnTypePrivateFunction,
Rule::MissingReturnTypeSpecialMethod,
Rule::MissingReturnTypeStaticMethod,
Rule::MissingReturnTypeClassMethod,
Rule::DynamicallyTypedExpression,
])
},
)?;
@@ -50,11 +50,11 @@ mod tests {
allow_star_arg_any: false,
},
..Settings::for_rules(vec![
RuleCode::ANN001,
RuleCode::ANN002,
RuleCode::ANN003,
RuleCode::ANN101,
RuleCode::ANN102,
Rule::MissingTypeFunctionArgument,
Rule::MissingTypeArgs,
Rule::MissingTypeKwargs,
Rule::MissingTypeSelf,
Rule::MissingTypeCls,
])
},
)?;
@@ -74,11 +74,11 @@ mod tests {
allow_star_arg_any: false,
},
..Settings::for_rules(vec![
RuleCode::ANN201,
RuleCode::ANN202,
RuleCode::ANN204,
RuleCode::ANN205,
RuleCode::ANN206,
Rule::MissingReturnTypePublicFunction,
Rule::MissingReturnTypePrivateFunction,
Rule::MissingReturnTypeSpecialMethod,
Rule::MissingReturnTypeStaticMethod,
Rule::MissingReturnTypeClassMethod,
])
},
)?;
@@ -98,11 +98,11 @@ mod tests {
allow_star_arg_any: false,
},
..Settings::for_rules(vec![
RuleCode::ANN201,
RuleCode::ANN202,
RuleCode::ANN204,
RuleCode::ANN205,
RuleCode::ANN206,
Rule::MissingReturnTypePublicFunction,
Rule::MissingReturnTypePrivateFunction,
Rule::MissingReturnTypeSpecialMethod,
Rule::MissingReturnTypeStaticMethod,
Rule::MissingReturnTypeClassMethod,
])
},
)?;
@@ -121,7 +121,7 @@ mod tests {
suppress_none_returning: false,
allow_star_arg_any: true,
},
..Settings::for_rules(vec![RuleCode::ANN401])
..Settings::for_rules(vec![Rule::DynamicallyTypedExpression])
},
)?;
insta::assert_yaml_snapshot!(diagnostics);
@@ -134,11 +134,11 @@ mod tests {
Path::new("./resources/test/fixtures/flake8_annotations/allow_overload.py"),
&Settings {
..Settings::for_rules(vec![
RuleCode::ANN201,
RuleCode::ANN202,
RuleCode::ANN204,
RuleCode::ANN205,
RuleCode::ANN206,
Rule::MissingReturnTypePublicFunction,
Rule::MissingReturnTypePrivateFunction,
Rule::MissingReturnTypeSpecialMethod,
Rule::MissingReturnTypeStaticMethod,
Rule::MissingReturnTypeClassMethod,
])
},
)?;
@@ -152,11 +152,11 @@ mod tests {
Path::new("./resources/test/fixtures/flake8_annotations/allow_nested_overload.py"),
&Settings {
..Settings::for_rules(vec![
RuleCode::ANN201,
RuleCode::ANN202,
RuleCode::ANN204,
RuleCode::ANN205,
RuleCode::ANN206,
Rule::MissingReturnTypePublicFunction,
Rule::MissingReturnTypePrivateFunction,
Rule::MissingReturnTypeSpecialMethod,
Rule::MissingReturnTypeStaticMethod,
Rule::MissingReturnTypeClassMethod,
])
},
)?;

View File

@@ -8,7 +8,7 @@ use crate::ast::visitor::Visitor;
use crate::ast::{cast, helpers, visitor};
use crate::checkers::ast::Checker;
use crate::docstrings::definition::{Definition, DefinitionKind};
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::visibility::Visibility;
use crate::{violations, visibility};
@@ -85,14 +85,22 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
.chain(args.kwonlyargs.iter())
{
if let Some(expr) = &arg.node.annotation {
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
check_dynamically_typed(checker, expr, || arg.node.arg.to_string());
};
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(&RuleCode::ANN001) {
if checker
.settings
.rules
.enabled(&Rule::MissingTypeFunctionArgument)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeFunctionArgument(arg.node.arg.to_string()),
Range::from_located(arg),
@@ -106,7 +114,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if let Some(arg) = &args.vararg {
if let Some(expr) = &arg.node.annotation {
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
let name = arg.node.arg.to_string();
check_dynamically_typed(checker, expr, || format!("*{name}"));
}
@@ -115,7 +127,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(&RuleCode::ANN002) {
if checker.settings.rules.enabled(&Rule::MissingTypeArgs) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeArgs(arg.node.arg.to_string()),
Range::from_located(arg),
@@ -129,7 +141,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if let Some(arg) = &args.kwarg {
if let Some(expr) = &arg.node.annotation {
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
let name = arg.node.arg.to_string();
check_dynamically_typed(checker, expr, || format!("**{name}"));
}
@@ -138,7 +154,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(&RuleCode::ANN003) {
if checker.settings.rules.enabled(&Rule::MissingTypeKwargs) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeKwargs(arg.node.arg.to_string()),
Range::from_located(arg),
@@ -150,7 +166,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
// ANN201, ANN202, ANN401
if let Some(expr) = &returns {
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
check_dynamically_typed(checker, expr, || name.to_string());
};
} else {
@@ -164,7 +184,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
match visibility {
Visibility::Public => {
if checker.settings.rules.enabled(&RuleCode::ANN201) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypePublicFunction)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypePublicFunction(name.to_string()),
helpers::identifier_range(stmt, checker.locator),
@@ -172,7 +196,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
}
}
Visibility::Private => {
if checker.settings.rules.enabled(&RuleCode::ANN202) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypePrivateFunction)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypePrivateFunction(name.to_string()),
helpers::identifier_range(stmt, checker.locator),
@@ -203,14 +231,22 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
// ANN401 for dynamically typed arguments
if let Some(annotation) = &arg.node.annotation {
has_any_typed_arg = true;
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
check_dynamically_typed(checker, annotation, || arg.node.arg.to_string());
}
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(&RuleCode::ANN001) {
if checker
.settings
.rules
.enabled(&Rule::MissingTypeFunctionArgument)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeFunctionArgument(arg.node.arg.to_string()),
Range::from_located(arg),
@@ -225,7 +261,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
has_any_typed_arg = true;
if let Some(expr) = &arg.node.annotation {
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
let name = arg.node.arg.to_string();
check_dynamically_typed(checker, expr, || format!("*{name}"));
}
@@ -234,7 +274,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(&RuleCode::ANN002) {
if checker.settings.rules.enabled(&Rule::MissingTypeArgs) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeArgs(arg.node.arg.to_string()),
Range::from_located(arg),
@@ -249,7 +289,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
has_any_typed_arg = true;
if let Some(expr) = &arg.node.annotation {
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
let name = arg.node.arg.to_string();
check_dynamically_typed(checker, expr, || format!("**{name}"));
}
@@ -258,7 +302,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(&RuleCode::ANN003) {
if checker.settings.rules.enabled(&Rule::MissingTypeKwargs) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeKwargs(arg.node.arg.to_string()),
Range::from_located(arg),
@@ -273,14 +317,14 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if let Some(arg) = args.args.first() {
if arg.node.annotation.is_none() {
if visibility::is_classmethod(checker, cast::decorator_list(stmt)) {
if checker.settings.rules.enabled(&RuleCode::ANN102) {
if checker.settings.rules.enabled(&Rule::MissingTypeCls) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeCls(arg.node.arg.to_string()),
Range::from_located(arg),
));
}
} else {
if checker.settings.rules.enabled(&RuleCode::ANN101) {
if checker.settings.rules.enabled(&Rule::MissingTypeSelf) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeSelf(arg.node.arg.to_string()),
Range::from_located(arg),
@@ -293,7 +337,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
// ANN201, ANN202
if let Some(expr) = &returns {
if checker.settings.rules.enabled(&RuleCode::ANN401) {
if checker
.settings
.rules
.enabled(&Rule::DynamicallyTypedExpression)
{
check_dynamically_typed(checker, expr, || name.to_string());
}
} else {
@@ -306,14 +354,22 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
}
if visibility::is_classmethod(checker, cast::decorator_list(stmt)) {
if checker.settings.rules.enabled(&RuleCode::ANN206) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypeClassMethod)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeClassMethod(name.to_string()),
helpers::identifier_range(stmt, checker.locator),
));
}
} else if visibility::is_staticmethod(checker, cast::decorator_list(stmt)) {
if checker.settings.rules.enabled(&RuleCode::ANN205) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypeStaticMethod)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeStaticMethod(name.to_string()),
helpers::identifier_range(stmt, checker.locator),
@@ -322,7 +378,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
} else if visibility::is_init(cast::name(stmt)) {
// Allow omission of return annotation in `__init__` functions, as long as at
// least one argument is typed.
if checker.settings.rules.enabled(&RuleCode::ANN204) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypeSpecialMethod)
{
if !(checker.settings.flake8_annotations.mypy_init_return
&& has_any_typed_arg)
{
@@ -330,7 +390,7 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
violations::MissingReturnTypeSpecialMethod(name.to_string()),
helpers::identifier_range(stmt, checker.locator),
);
if checker.patch(diagnostic.kind.code()) {
if checker.patch(diagnostic.kind.rule()) {
match fixes::add_return_none_annotation(checker.locator, stmt) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -342,7 +402,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
}
}
} else if visibility::is_magic(cast::name(stmt)) {
if checker.settings.rules.enabled(&RuleCode::ANN204) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypeSpecialMethod)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeSpecialMethod(name.to_string()),
helpers::identifier_range(stmt, checker.locator),
@@ -351,7 +415,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
} else {
match visibility {
Visibility::Public => {
if checker.settings.rules.enabled(&RuleCode::ANN201) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypePublicFunction)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypePublicFunction(name.to_string()),
helpers::identifier_range(stmt, checker.locator),
@@ -359,7 +427,11 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
}
}
Visibility::Private => {
if checker.settings.rules.enabled(&RuleCode::ANN202) {
if checker
.settings
.rules
.enabled(&Rule::MissingReturnTypePrivateFunction)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypePrivateFunction(name.to_string()),
helpers::identifier_range(stmt, checker.locator),

View File

@@ -11,26 +11,26 @@ mod tests {
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings::Settings;
#[test_case(RuleCode::S101, Path::new("S101.py"); "S101")]
#[test_case(RuleCode::S102, Path::new("S102.py"); "S102")]
#[test_case(RuleCode::S103, Path::new("S103.py"); "S103")]
#[test_case(RuleCode::S104, Path::new("S104.py"); "S104")]
#[test_case(RuleCode::S105, Path::new("S105.py"); "S105")]
#[test_case(RuleCode::S106, Path::new("S106.py"); "S106")]
#[test_case(RuleCode::S107, Path::new("S107.py"); "S107")]
#[test_case(RuleCode::S108, Path::new("S108.py"); "S108")]
#[test_case(RuleCode::S113, Path::new("S113.py"); "S113")]
#[test_case(RuleCode::S324, Path::new("S324.py"); "S324")]
#[test_case(RuleCode::S501, Path::new("S501.py"); "S501")]
#[test_case(RuleCode::S506, Path::new("S506.py"); "S506")]
#[test_case(RuleCode::S508, Path::new("S508.py"); "S508")]
#[test_case(RuleCode::S509, Path::new("S509.py"); "S509")]
#[test_case(RuleCode::S701, Path::new("S701.py"); "S701")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::AssertUsed, Path::new("S101.py"); "S101")]
#[test_case(Rule::ExecUsed, Path::new("S102.py"); "S102")]
#[test_case(Rule::BadFilePermissions, Path::new("S103.py"); "S103")]
#[test_case(Rule::HardcodedBindAllInterfaces, Path::new("S104.py"); "S104")]
#[test_case(Rule::HardcodedPasswordString, Path::new("S105.py"); "S105")]
#[test_case(Rule::HardcodedPasswordFuncArg, Path::new("S106.py"); "S106")]
#[test_case(Rule::HardcodedPasswordDefault, Path::new("S107.py"); "S107")]
#[test_case(Rule::HardcodedTempFile, Path::new("S108.py"); "S108")]
#[test_case(Rule::RequestWithoutTimeout, Path::new("S113.py"); "S113")]
#[test_case(Rule::HashlibInsecureHashFunction, Path::new("S324.py"); "S324")]
#[test_case(Rule::RequestWithNoCertValidation, Path::new("S501.py"); "S501")]
#[test_case(Rule::UnsafeYAMLLoad, Path::new("S506.py"); "S506")]
#[test_case(Rule::SnmpInsecureVersion, Path::new("S508.py"); "S508")]
#[test_case(Rule::SnmpWeakCryptography, Path::new("S509.py"); "S509")]
#[test_case(Rule::Jinja2AutoescapeFalse, Path::new("S701.py"); "S701")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_bandit")
.join(path)
@@ -54,7 +54,7 @@ mod tests {
"/foo".to_string(),
],
},
..Settings::for_rule(RuleCode::S108)
..Settings::for_rule(Rule::HardcodedTempFile)
},
)?;
insta::assert_yaml_snapshot!("S108_extend", diagnostics);

View File

@@ -94,7 +94,7 @@ pub fn bad_file_permissions(
) {
if checker
.resolve_call_path(func)
.map_or(false, |call_path| call_path == ["os", "chmod"])
.map_or(false, |call_path| call_path.as_slice() == ["os", "chmod"])
{
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(mode_arg) = call_args.get_argument("mode", Some(1)) {

View File

@@ -22,6 +22,11 @@ fn is_used_for_security(call_args: &SimpleCallArgs) -> bool {
}
}
enum HashlibCall {
New,
WeakHash(&'static str),
}
/// S324
pub fn hashlib_insecure_hash_functions(
checker: &mut Checker,
@@ -29,39 +34,46 @@ pub fn hashlib_insecure_hash_functions(
args: &[Expr],
keywords: &[Keyword],
) {
if let Some(call_path) = checker.resolve_call_path(func) {
if call_path == ["hashlib", "new"] {
let call_args = SimpleCallArgs::new(args, keywords);
if !is_used_for_security(&call_args) {
return;
}
if let Some(name_arg) = call_args.get_argument("name", Some(0)) {
if let Some(hash_func_name) = string_literal(name_arg) {
if WEAK_HASHES.contains(&hash_func_name.to_lowercase().as_str()) {
checker.diagnostics.push(Diagnostic::new(
violations::HashlibInsecureHashFunction(hash_func_name.to_string()),
Range::from_located(name_arg),
));
}
}
}
if let Some(hashlib_call) = checker.resolve_call_path(func).and_then(|call_path| {
if call_path.as_slice() == ["hashlib", "new"] {
Some(HashlibCall::New)
} else {
for func_name in &WEAK_HASHES {
if call_path == ["hashlib", func_name] {
let call_args = SimpleCallArgs::new(args, keywords);
WEAK_HASHES
.iter()
.find(|hash| call_path.as_slice() == ["hashlib", hash])
.map(|hash| HashlibCall::WeakHash(hash))
}
}) {
match hashlib_call {
HashlibCall::New => {
let call_args = SimpleCallArgs::new(args, keywords);
if !is_used_for_security(&call_args) {
return;
}
checker.diagnostics.push(Diagnostic::new(
violations::HashlibInsecureHashFunction((*func_name).to_string()),
Range::from_located(func),
));
if !is_used_for_security(&call_args) {
return;
}
if let Some(name_arg) = call_args.get_argument("name", Some(0)) {
if let Some(hash_func_name) = string_literal(name_arg) {
if WEAK_HASHES.contains(&hash_func_name.to_lowercase().as_str()) {
checker.diagnostics.push(Diagnostic::new(
violations::HashlibInsecureHashFunction(hash_func_name.to_string()),
Range::from_located(name_arg),
));
}
}
}
}
HashlibCall::WeakHash(func_name) => {
let call_args = SimpleCallArgs::new(args, keywords);
if !is_used_for_security(&call_args) {
return;
}
checker.diagnostics.push(Diagnostic::new(
violations::HashlibInsecureHashFunction((*func_name).to_string()),
Range::from_located(func),
));
}
}
}
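The hunk above restructures `hashlib_insecure_hash_functions` to classify the call once into a `HashlibCall` enum and then dispatch on it, instead of checking path equality inline at each branch. A simplified sketch of that classify-then-match shape (the weak-hash list here is illustrative, not necessarily the linter's exact table):

```rust
// Illustrative list of hash names treated as weak.
const WEAK_HASHES: [&str; 4] = ["md4", "md5", "sha", "sha1"];

#[derive(Debug, PartialEq)]
enum HashlibCall {
    New,
    WeakHash(&'static str),
}

// Classify a resolved call path once; callers then match on the
// result, as the refactored rule does.
fn classify(call_path: &[&str]) -> Option<HashlibCall> {
    match call_path {
        ["hashlib", "new"] => Some(HashlibCall::New),
        ["hashlib", name] => WEAK_HASHES
            .iter()
            .copied()
            .find(|weak| weak == name)
            .map(HashlibCall::WeakHash),
        _ => None,
    }
}

fn main() {
    assert_eq!(classify(&["hashlib", "new"]), Some(HashlibCall::New));
    assert_eq!(classify(&["hashlib", "md5"]), Some(HashlibCall::WeakHash("md5")));
    assert_eq!(classify(&["hashlib", "sha256"]), None);
}
```

Separating classification from reporting lets the shared `is_used_for_security` guard run once per variant, which is the duplication the refactor removes.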

View File

@@ -14,10 +14,9 @@ pub fn jinja2_autoescape_false(
args: &[Expr],
keywords: &[Keyword],
) {
if checker
.resolve_call_path(func)
.map_or(false, |call_path| call_path == ["jinja2", "Environment"])
{
if checker.resolve_call_path(func).map_or(false, |call_path| {
call_path.as_slice() == ["jinja2", "Environment"]
}) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(autoescape_arg) = call_args.get_argument("autoescape", None) {

View File

@@ -29,40 +29,28 @@ pub fn request_with_no_cert_validation(
args: &[Expr],
keywords: &[Keyword],
) {
if let Some(call_path) = checker.resolve_call_path(func) {
let call_args = SimpleCallArgs::new(args, keywords);
for func_name in &REQUESTS_HTTP_VERBS {
if call_path == ["requests", func_name] {
if let Some(verify_arg) = call_args.get_argument("verify", None) {
if let ExprKind::Constant {
value: Constant::Bool(false),
..
} = &verify_arg.node
{
checker.diagnostics.push(Diagnostic::new(
violations::RequestWithNoCertValidation("requests".to_string()),
Range::from_located(verify_arg),
));
}
}
return;
if let Some(target) = checker.resolve_call_path(func).and_then(|call_path| {
if call_path.len() == 2 {
if call_path[0] == "requests" && REQUESTS_HTTP_VERBS.contains(&call_path[1]) {
return Some("requests");
}
if call_path[0] == "httpx" && HTTPX_METHODS.contains(&call_path[1]) {
return Some("httpx");
}
}
for func_name in &HTTPX_METHODS {
if call_path == ["httpx", func_name] {
if let Some(verify_arg) = call_args.get_argument("verify", None) {
if let ExprKind::Constant {
value: Constant::Bool(false),
..
} = &verify_arg.node
{
checker.diagnostics.push(Diagnostic::new(
violations::RequestWithNoCertValidation("httpx".to_string()),
Range::from_located(verify_arg),
));
}
}
return;
None
}) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(verify_arg) = call_args.get_argument("verify", None) {
if let ExprKind::Constant {
value: Constant::Bool(false),
..
} = &verify_arg.node
{
checker.diagnostics.push(Diagnostic::new(
violations::RequestWithNoCertValidation(target.to_string()),
Range::from_located(verify_arg),
));
}
}
}
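The `request_with_no_cert_validation` rewrite above collapses two near-identical loops into a single classification step: resolve the two-segment call path to its library name first, then run one shared `verify=False` check. A hypothetical sketch of that classification, with illustrative verb subsets rather than the linter's full tables:

```rust
// Illustrative subsets of the verb tables used by the rule.
const REQUESTS_HTTP_VERBS: [&str; 3] = ["get", "post", "put"];
const HTTPX_METHODS: [&str; 3] = ["get", "post", "stream"];

// Map a two-segment call path to the library it belongs to, mirroring
// the `len() == 2` / first-segment dispatch in the hunk above.
fn target(call_path: &[&str]) -> Option<&'static str> {
    match call_path {
        ["requests", verb] if REQUESTS_HTTP_VERBS.contains(verb) => Some("requests"),
        ["httpx", verb] if HTTPX_METHODS.contains(verb) => Some("httpx"),
        _ => None,
    }
}

fn main() {
    assert_eq!(target(&["requests", "get"]), Some("requests"));
    assert_eq!(target(&["httpx", "stream"]), Some("httpx"));
    assert_eq!(target(&["urllib", "request"]), None);
}
```

With the library name pinned down up front, the diagnostic message (`RequestWithNoCertValidation(target)`) and the `verify` argument inspection need to be written only once.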

View File

@@ -19,7 +19,7 @@ pub fn request_without_timeout(
if checker.resolve_call_path(func).map_or(false, |call_path| {
HTTP_VERBS
.iter()
.any(|func_name| call_path == ["requests", func_name])
.any(|func_name| call_path.as_slice() == ["requests", func_name])
}) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(timeout_arg) = call_args.get_argument("timeout", None) {

View File

@@ -16,7 +16,7 @@ pub fn snmp_insecure_version(
keywords: &[Keyword],
) {
if checker.resolve_call_path(func).map_or(false, |call_path| {
call_path == ["pysnmp", "hlapi", "CommunityData"]
call_path.as_slice() == ["pysnmp", "hlapi", "CommunityData"]
}) {
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(mp_model_arg) = call_args.get_argument("mpModel", None) {

View File

@@ -14,7 +14,7 @@ pub fn snmp_weak_cryptography(
keywords: &[Keyword],
) {
if checker.resolve_call_path(func).map_or(false, |call_path| {
call_path == ["pysnmp", "hlapi", "UsmUserData"]
call_path.as_slice() == ["pysnmp", "hlapi", "UsmUserData"]
}) {
let call_args = SimpleCallArgs::new(args, keywords);
if call_args.len() < 3 {

View File

@@ -10,14 +10,15 @@ use crate::violations;
pub fn unsafe_yaml_load(checker: &mut Checker, func: &Expr, args: &[Expr], keywords: &[Keyword]) {
if checker
.resolve_call_path(func)
.map_or(false, |call_path| call_path == ["yaml", "load"])
.map_or(false, |call_path| call_path.as_slice() == ["yaml", "load"])
{
let call_args = SimpleCallArgs::new(args, keywords);
if let Some(loader_arg) = call_args.get_argument("Loader", Some(1)) {
if !checker
.resolve_call_path(loader_arg)
.map_or(false, |call_path| {
call_path == ["yaml", "SafeLoader"] || call_path == ["yaml", "CSafeLoader"]
call_path.as_slice() == ["yaml", "SafeLoader"]
|| call_path.as_slice() == ["yaml", "CSafeLoader"]
})
{
let loader = match &loader_arg.node {

View File

@@ -3,19 +3,18 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(RuleCode::BLE001, Path::new("BLE.py"); "BLE001")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::BlindExcept, Path::new("BLE.py"); "BLE001")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_blind_except")
.join(path)

View File

@@ -3,21 +3,20 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(RuleCode::FBT001, Path::new("FBT.py"); "FBT001")]
#[test_case(RuleCode::FBT002, Path::new("FBT.py"); "FBT002")]
#[test_case(RuleCode::FBT003, Path::new("FBT.py"); "FBT003")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::BooleanPositionalArgInFunctionDefinition, Path::new("FBT.py"); "FBT001")]
#[test_case(Rule::BooleanDefaultValueInFunctionDefinition, Path::new("FBT.py"); "FBT002")]
#[test_case(Rule::BooleanPositionalValueInFunctionCall, Path::new("FBT.py"); "FBT003")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_boolean_trap")
.join(path)


@@ -10,39 +10,39 @@ mod tests {
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings::Settings;
#[test_case(RuleCode::B002, Path::new("B002.py"); "B002")]
#[test_case(RuleCode::B003, Path::new("B003.py"); "B003")]
#[test_case(RuleCode::B004, Path::new("B004.py"); "B004")]
#[test_case(RuleCode::B005, Path::new("B005.py"); "B005")]
#[test_case(RuleCode::B006, Path::new("B006_B008.py"); "B006")]
#[test_case(RuleCode::B007, Path::new("B007.py"); "B007")]
#[test_case(RuleCode::B008, Path::new("B006_B008.py"); "B008")]
#[test_case(RuleCode::B009, Path::new("B009_B010.py"); "B009")]
#[test_case(RuleCode::B010, Path::new("B009_B010.py"); "B010")]
#[test_case(RuleCode::B011, Path::new("B011.py"); "B011")]
#[test_case(RuleCode::B012, Path::new("B012.py"); "B012")]
#[test_case(RuleCode::B013, Path::new("B013.py"); "B013")]
#[test_case(RuleCode::B014, Path::new("B014.py"); "B014")]
#[test_case(RuleCode::B015, Path::new("B015.py"); "B015")]
#[test_case(RuleCode::B016, Path::new("B016.py"); "B016")]
#[test_case(RuleCode::B017, Path::new("B017.py"); "B017")]
#[test_case(RuleCode::B018, Path::new("B018.py"); "B018")]
#[test_case(RuleCode::B019, Path::new("B019.py"); "B019")]
#[test_case(RuleCode::B020, Path::new("B020.py"); "B020")]
#[test_case(RuleCode::B021, Path::new("B021.py"); "B021")]
#[test_case(RuleCode::B022, Path::new("B022.py"); "B022")]
#[test_case(RuleCode::B023, Path::new("B023.py"); "B023")]
#[test_case(RuleCode::B024, Path::new("B024.py"); "B024")]
#[test_case(RuleCode::B025, Path::new("B025.py"); "B025")]
#[test_case(RuleCode::B026, Path::new("B026.py"); "B026")]
#[test_case(RuleCode::B027, Path::new("B027.py"); "B027")]
#[test_case(RuleCode::B904, Path::new("B904.py"); "B904")]
#[test_case(RuleCode::B905, Path::new("B905.py"); "B905")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::UnaryPrefixIncrement, Path::new("B002.py"); "B002")]
#[test_case(Rule::AssignmentToOsEnviron, Path::new("B003.py"); "B003")]
#[test_case(Rule::UnreliableCallableCheck, Path::new("B004.py"); "B004")]
#[test_case(Rule::StripWithMultiCharacters, Path::new("B005.py"); "B005")]
#[test_case(Rule::MutableArgumentDefault, Path::new("B006_B008.py"); "B006")]
#[test_case(Rule::UnusedLoopControlVariable, Path::new("B007.py"); "B007")]
#[test_case(Rule::FunctionCallArgumentDefault, Path::new("B006_B008.py"); "B008")]
#[test_case(Rule::GetAttrWithConstant, Path::new("B009_B010.py"); "B009")]
#[test_case(Rule::SetAttrWithConstant, Path::new("B009_B010.py"); "B010")]
#[test_case(Rule::DoNotAssertFalse, Path::new("B011.py"); "B011")]
#[test_case(Rule::JumpStatementInFinally, Path::new("B012.py"); "B012")]
#[test_case(Rule::RedundantTupleInExceptionHandler, Path::new("B013.py"); "B013")]
#[test_case(Rule::DuplicateHandlerException, Path::new("B014.py"); "B014")]
#[test_case(Rule::UselessComparison, Path::new("B015.py"); "B015")]
#[test_case(Rule::CannotRaiseLiteral, Path::new("B016.py"); "B016")]
#[test_case(Rule::NoAssertRaisesException, Path::new("B017.py"); "B017")]
#[test_case(Rule::UselessExpression, Path::new("B018.py"); "B018")]
#[test_case(Rule::CachedInstanceMethod, Path::new("B019.py"); "B019")]
#[test_case(Rule::LoopVariableOverridesIterator, Path::new("B020.py"); "B020")]
#[test_case(Rule::FStringDocstring, Path::new("B021.py"); "B021")]
#[test_case(Rule::UselessContextlibSuppress, Path::new("B022.py"); "B022")]
#[test_case(Rule::FunctionUsesLoopVariable, Path::new("B023.py"); "B023")]
#[test_case(Rule::AbstractBaseClassWithoutAbstractMethod, Path::new("B024.py"); "B024")]
#[test_case(Rule::DuplicateTryBlockException, Path::new("B025.py"); "B025")]
#[test_case(Rule::StarArgUnpackingAfterKeywordArg, Path::new("B026.py"); "B026")]
#[test_case(Rule::EmptyMethodWithoutAbstractDecorator, Path::new("B027.py"); "B027")]
#[test_case(Rule::RaiseWithoutFromInsideExcept, Path::new("B904.py"); "B904")]
#[test_case(Rule::ZipWithoutExplicitStrict, Path::new("B905.py"); "B905")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_bugbear")
.join(path)
@@ -65,7 +65,7 @@ mod tests {
"fastapi.Query".to_string(),
],
},
..Settings::for_rules(vec![RuleCode::B008])
..Settings::for_rules(vec![Rule::FunctionCallArgumentDefault])
},
)?;
insta::assert_yaml_snapshot!(snapshot, diagnostics);


@@ -2,7 +2,7 @@ use rustpython_ast::{Constant, Expr, ExprKind, Keyword, Stmt, StmtKind};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::violations;
use crate::visibility::{is_abstract, is_overload};
@@ -15,11 +15,13 @@ fn is_abc_class(checker: &Checker, bases: &[Expr], keywords: &[Keyword]) -> bool
.map_or(false, |arg| arg == "metaclass")
&& checker
.resolve_call_path(&keyword.node.value)
.map_or(false, |call_path| call_path == ["abc", "ABCMeta"])
.map_or(false, |call_path| {
call_path.as_slice() == ["abc", "ABCMeta"]
})
}) || bases.iter().any(|base| {
checker
.resolve_call_path(base)
.map_or(false, |call_path| call_path == ["abc", "ABC"])
.map_or(false, |call_path| call_path.as_slice() == ["abc", "ABC"])
})
}
@@ -76,7 +78,11 @@ pub fn abstract_base_class(
let has_abstract_decorator = is_abstract(checker, decorator_list);
has_abstract_method |= has_abstract_decorator;
if !checker.settings.rules.enabled(&RuleCode::B027) {
if !checker
.settings
.rules
.enabled(&Rule::EmptyMethodWithoutAbstractDecorator)
{
continue;
}
@@ -87,7 +93,11 @@ pub fn abstract_base_class(
));
}
}
if checker.settings.rules.enabled(&RuleCode::B024) {
if checker
.settings
.rules
.enabled(&Rule::AbstractBaseClassWithoutAbstractMethod)
{
if !has_abstract_method {
checker.diagnostics.push(Diagnostic::new(
violations::AbstractBaseClassWithoutAbstractMethod(name.to_string()),

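The hunks above gate the B024/B027 checks behind the new `Rule::AbstractBaseClassWithoutAbstractMethod` and `Rule::EmptyMethodWithoutAbstractDecorator` variants. A minimal Python sketch of the pattern the B024 name describes (illustrative, inferred from the rule name rather than from ruff's implementation):

```python
from abc import ABC, abstractmethod

# The B024 pattern (inferred from the rule name): an ABC that defines no
# abstract method can be instantiated like any ordinary class.
class Base(ABC):
    def helper(self):
        return 1

# Once a method is marked @abstractmethod, instantiating the base fails.
class Strict(ABC):
    @abstractmethod
    def run(self): ...

assert Base().helper() == 1  # nothing enforces abstractness
try:
    Strict()
except TypeError as exc:
    print(f"TypeError: {exc}")
```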

@@ -47,7 +47,7 @@ pub fn assert_false(checker: &mut Checker, stmt: &Stmt, test: &Expr, msg: Option
};
let mut diagnostic = Diagnostic::new(violations::DoNotAssertFalse, Range::from_located(test));
if checker.patch(diagnostic.kind.code()) {
if checker.patch(diagnostic.kind.rule()) {
let mut generator: Generator = checker.stylist.into();
generator.unparse_stmt(&assertion_error(msg));
diagnostic.amend(Fix::replacement(


@@ -1,3 +1,13 @@
//! Checks for `self.assertRaises(Exception)`.
//!
//! ## Why is this bad?
//!
//! `assertRaises(Exception)` should be considered evil. It can lead to your
//! test passing even if the code being tested is never executed due to a
//! typo. Either assert for a more specific exception (builtin or
//! custom), use `assertRaisesRegex`, or use the context manager form of
//! `assertRaises`.
use rustpython_ast::{ExprKind, Stmt, Withitem};
use crate::ast::types::Range;
@@ -25,7 +35,7 @@ pub fn assert_raises_exception(checker: &mut Checker, stmt: &Stmt, items: &[With
}
if !checker
.resolve_call_path(args.first().unwrap())
.map_or(false, |call_path| call_path == ["", "Exception"])
.map_or(false, |call_path| call_path.as_slice() == ["", "Exception"])
{
return;
}

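The new module doc above explains why `assertRaises(Exception)` is flagged: any exception, even one from a typo, satisfies the assertion. A small `unittest` sketch of the flagged pattern next to the recommended context-manager form with a specific exception:

```python
import unittest

class ExceptionTests(unittest.TestCase):
    def test_too_broad(self):
        # The flagged pattern: any exception satisfies this assertion,
        # so even a typo's NameError would make the test pass.
        with self.assertRaises(Exception):
            int("not a number")

    def test_specific(self):
        # Preferred: assert the exact exception, via the context manager.
        with self.assertRaises(ValueError):
            int("not a number")

# Run both tests programmatically; both pass, but only the second one
# would catch a regression that raises the wrong exception.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ExceptionTests).run(result)
assert result.testsRun == 2 and not result.failures and not result.errors
```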

@@ -7,7 +7,8 @@ use crate::violations;
fn is_cache_func(checker: &Checker, expr: &Expr) -> bool {
checker.resolve_call_path(expr).map_or(false, |call_path| {
call_path == ["functools", "lru_cache"] || call_path == ["functools", "cache"]
call_path.as_slice() == ["functools", "lru_cache"]
|| call_path.as_slice() == ["functools", "cache"]
})
}

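The `is_cache_func` helper above matches `functools.lru_cache` and `functools.cache` for the B019 (`CachedInstanceMethod`) check. A brief Python illustration of why caching a method is risky: the cache keys include `self`, so every instance the method has seen stays referenced by the cache:

```python
import functools

class Widget:
    # The flagged pattern: lru_cache on a method caches on (self, x), so
    # the cache holds a reference to every instance it has seen, keeping
    # those instances alive for the lifetime of the class.
    @functools.lru_cache(maxsize=None)
    def double(self, x):
        return x * 2

w = Widget()
assert w.double(3) == 6
assert Widget.double.cache_info().currsize == 1  # `w` is now cached
```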

@@ -3,10 +3,10 @@ use rustc_hash::{FxHashMap, FxHashSet};
use rustpython_ast::{Excepthandler, ExcepthandlerKind, Expr, ExprContext, ExprKind, Location};
use crate::ast::helpers;
use crate::ast::types::Range;
use crate::ast::types::{CallPath, Range};
use crate::checkers::ast::Checker;
use crate::fix::Fix;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::source_code::Generator;
use crate::violations;
@@ -25,9 +25,9 @@ fn duplicate_handler_exceptions<'a>(
checker: &mut Checker,
expr: &'a Expr,
elts: &'a [Expr],
) -> FxHashMap<Vec<&'a str>, &'a Expr> {
let mut seen: FxHashMap<Vec<&str>, &Expr> = FxHashMap::default();
let mut duplicates: FxHashSet<Vec<&str>> = FxHashSet::default();
) -> FxHashMap<CallPath<'a>, &'a Expr> {
let mut seen: FxHashMap<CallPath, &Expr> = FxHashMap::default();
let mut duplicates: FxHashSet<CallPath> = FxHashSet::default();
let mut unique_elts: Vec<&Expr> = Vec::default();
for type_ in elts {
let call_path = helpers::collect_call_path(type_);
@@ -41,7 +41,11 @@ fn duplicate_handler_exceptions<'a>(
}
}
if checker.settings.rules.enabled(&RuleCode::B014) {
if checker
.settings
.rules
.enabled(&Rule::DuplicateHandlerException)
{
// TODO(charlie): Handle "BaseException" and redundant exception aliases.
if !duplicates.is_empty() {
let mut diagnostic = Diagnostic::new(
@@ -54,7 +58,7 @@ fn duplicate_handler_exceptions<'a>(
),
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
if checker.patch(diagnostic.kind.rule()) {
let mut generator: Generator = checker.stylist.into();
if unique_elts.len() == 1 {
generator.unparse_expr(unique_elts[0], 0);
@@ -75,8 +79,8 @@ fn duplicate_handler_exceptions<'a>(
}
pub fn duplicate_exceptions(checker: &mut Checker, handlers: &[Excepthandler]) {
let mut seen: FxHashSet<Vec<&str>> = FxHashSet::default();
let mut duplicates: FxHashMap<Vec<&str>, Vec<&Expr>> = FxHashMap::default();
let mut seen: FxHashSet<CallPath> = FxHashSet::default();
let mut duplicates: FxHashMap<CallPath, Vec<&Expr>> = FxHashMap::default();
for handler in handlers {
let ExcepthandlerKind::ExceptHandler { type_: Some(type_), .. } = &handler.node else {
continue;
@@ -105,7 +109,11 @@ pub fn duplicate_exceptions(checker: &mut Checker, handlers: &[Excepthandler]) {
}
}
if checker.settings.rules.enabled(&RuleCode::B025) {
if checker
.settings
.rules
.enabled(&Rule::DuplicateTryBlockException)
{
for (name, exprs) in duplicates {
for expr in exprs {
checker.diagnostics.push(Diagnostic::new(


@@ -2,7 +2,7 @@ use rustpython_ast::{Arguments, Constant, Expr, ExprKind};
use super::mutable_argument_default::is_mutable_func;
use crate::ast::helpers::{compose_call_path, to_call_path};
use crate::ast::types::Range;
use crate::ast::types::{CallPath, Range};
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
use crate::checkers::ast::Checker;
@@ -19,9 +19,11 @@ const IMMUTABLE_FUNCS: &[&[&str]] = &[
&["re", "compile"],
];
fn is_immutable_func(checker: &Checker, expr: &Expr, extend_immutable_calls: &[Vec<&str>]) -> bool {
fn is_immutable_func(checker: &Checker, expr: &Expr, extend_immutable_calls: &[CallPath]) -> bool {
checker.resolve_call_path(expr).map_or(false, |call_path| {
IMMUTABLE_FUNCS.iter().any(|target| call_path == *target)
IMMUTABLE_FUNCS
.iter()
.any(|target| call_path.as_slice() == *target)
|| extend_immutable_calls
.iter()
.any(|target| call_path == *target)
@@ -31,7 +33,7 @@ fn is_immutable_func(checker: &Checker, expr: &Expr, extend_immutable_calls: &[V
struct ArgumentDefaultVisitor<'a> {
checker: &'a Checker<'a>,
diagnostics: Vec<(DiagnosticKind, Range)>,
extend_immutable_calls: Vec<Vec<&'a str>>,
extend_immutable_calls: Vec<CallPath<'a>>,
}
impl<'a, 'b> Visitor<'b> for ArgumentDefaultVisitor<'b>
@@ -84,7 +86,7 @@ fn is_nan_or_infinity(expr: &Expr, args: &[Expr]) -> bool {
/// B008
pub fn function_call_argument_default(checker: &mut Checker, arguments: &Arguments) {
// Map immutable calls to (module, member) format.
let extend_immutable_calls: Vec<Vec<&str>> = checker
let extend_immutable_calls: Vec<CallPath> = checker
.settings
.flake8_bugbear
.extend_immutable_calls


@@ -47,7 +47,7 @@ pub fn getattr_with_constant(checker: &mut Checker, expr: &Expr, func: &Expr, ar
let mut diagnostic =
Diagnostic::new(violations::GetAttrWithConstant, Range::from_located(expr));
if checker.patch(diagnostic.kind.code()) {
if checker.patch(diagnostic.kind.rule()) {
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(&attribute(obj, value), 0);
diagnostic.amend(Fix::replacement(


@@ -58,7 +58,9 @@ const IMMUTABLE_GENERIC_TYPES: &[&[&str]] = &[
pub fn is_mutable_func(checker: &Checker, expr: &Expr) -> bool {
checker.resolve_call_path(expr).map_or(false, |call_path| {
MUTABLE_FUNCS.iter().any(|target| call_path == *target)
MUTABLE_FUNCS
.iter()
.any(|target| call_path.as_slice() == *target)
})
}
@@ -82,25 +84,25 @@ fn is_immutable_annotation(checker: &Checker, expr: &Expr) -> bool {
IMMUTABLE_TYPES
.iter()
.chain(IMMUTABLE_GENERIC_TYPES)
.any(|target| call_path == *target)
.any(|target| call_path.as_slice() == *target)
})
}
ExprKind::Subscript { value, slice, .. } => {
checker.resolve_call_path(value).map_or(false, |call_path| {
if IMMUTABLE_GENERIC_TYPES
.iter()
.any(|target| call_path == *target)
.any(|target| call_path.as_slice() == *target)
{
true
} else if call_path == ["typing", "Union"] {
} else if call_path.as_slice() == ["typing", "Union"] {
if let ExprKind::Tuple { elts, .. } = &slice.node {
elts.iter().all(|elt| is_immutable_annotation(checker, elt))
} else {
false
}
} else if call_path == ["typing", "Optional"] {
} else if call_path.as_slice() == ["typing", "Optional"] {
is_immutable_annotation(checker, slice)
} else if call_path == ["typing", "Annotated"] {
} else if call_path.as_slice() == ["typing", "Annotated"] {
if let ExprKind::Tuple { elts, .. } = &slice.node {
elts.first()
.map_or(false, |elt| is_immutable_annotation(checker, elt))

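The mutability analysis above backs the B006 (`MutableArgumentDefault`) check. The classic Python failure mode it guards against, with the usual `None`-sentinel fix:

```python
# The flagged pattern: the default list is created once, at function
# definition time, and shared by every call.
def append_bad(item, bucket=[]):
    bucket.append(item)
    return bucket

# The usual fix: use None as a sentinel and allocate per call.
def append_good(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

assert append_bad(1) == [1]
assert append_bad(2) == [1, 2]  # state leaked from the previous call
assert append_good(1) == [1]
assert append_good(2) == [2]
```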

@@ -23,7 +23,7 @@ pub fn redundant_tuple_in_exception_handler(checker: &mut Checker, handlers: &[E
violations::RedundantTupleInExceptionHandler(elt.to_string()),
Range::from_located(type_),
);
if checker.patch(diagnostic.kind.code()) {
if checker.patch(diagnostic.kind.rule()) {
let mut generator: Generator = checker.stylist.into();
generator.unparse_expr(elt, 0);
diagnostic.amend(Fix::replacement(


@@ -62,7 +62,7 @@ pub fn setattr_with_constant(checker: &mut Checker, expr: &Expr, func: &Expr, ar
if expr == child.as_ref() {
let mut diagnostic =
Diagnostic::new(violations::SetAttrWithConstant, Range::from_located(expr));
if checker.patch(diagnostic.kind.code()) {
if checker.patch(diagnostic.kind.rule()) {
diagnostic.amend(Fix::replacement(
assignment(obj, name, value, checker.stylist),
expr.location,


@@ -1,3 +1,12 @@
//! Checks for `f(x=0, *(1, 2))`.
//!
//! ## Why is this bad?
//!
//! Star-arg unpacking after a keyword argument is strongly discouraged. It only
//! works when the keyword parameter is declared after all parameters supplied
//! by the unpacked sequence, and this change of ordering can surprise and
//! mislead readers.
use rustpython_ast::{Expr, ExprKind, Keyword};
use crate::ast::types::Range;

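The new module doc above notes that star-arg unpacking after a keyword argument only works when the keyword parameter is declared after all parameters the unpacked sequence fills. A short Python demonstration of both the fragile working case and the surprising failure:

```python
def log(a, b, flag=False):
    return (a, b, flag)

# Works only because `flag` is declared after the two parameters that
# the unpacked tuple fills:
assert log(flag=True, *(1, 2)) == (1, 2, True)

# The surprise: when the keyword names a parameter the unpacked sequence
# also fills, the call fails at runtime.
try:
    log(a=0, *(1, 2))
except TypeError as exc:
    print(f"TypeError: {exc}")
```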

@@ -1,3 +1,22 @@
//! Checks for `++n`.
//!
//! ## Why is this bad?
//!
//! Python does not support the unary prefix increment. Writing `++n` is
//! equivalent to `+(+(n))`, which equals `n`.
//!
//! ## Example
//!
//! ```python
//! ++n;
//! ```
//!
//! Use instead:
//!
//! ```python
//! n += 1
//! ```
use rustpython_ast::{Expr, ExprKind, Unaryop};
use crate::ast::types::Range;


@@ -1,3 +1,23 @@
//! Checks for unused loop variables.
//!
//! ## Why is this bad?
//!
//! Unused variables may signal a mistake or unfinished code.
//!
//! ## Example
//!
//! ```python
//! for x in range(10):
//!     method()
//! ```
//!
//! Prefix the variable with an underscore:
//!
//! ```python
//! for _x in range(10):
//!     method()
//! ```
use rustc_hash::FxHashMap;
use rustpython_ast::{Expr, ExprKind, Stmt};
@@ -66,7 +86,7 @@ pub fn unused_loop_control_variable(checker: &mut Checker, target: &Expr, body:
violations::UnusedLoopControlVariable(name.to_string()),
Range::from_located(expr),
);
if checker.patch(diagnostic.kind.code()) {
if checker.patch(diagnostic.kind.rule()) {
// Prefix the variable name with an underscore.
diagnostic.amend(Fix::replacement(
format!("_{name}"),


@@ -8,9 +8,9 @@ use crate::violations;
/// B005
pub fn useless_contextlib_suppress(checker: &mut Checker, expr: &Expr, args: &[Expr]) {
if args.is_empty()
&& checker
.resolve_call_path(expr)
.map_or(false, |call_path| call_path == ["contextlib", "suppress"])
&& checker.resolve_call_path(expr).map_or(false, |call_path| {
call_path.as_slice() == ["contextlib", "suppress"]
})
{
checker.diagnostics.push(Diagnostic::new(
violations::UselessContextlibSuppress,


@@ -4,21 +4,20 @@ pub(crate) mod types;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(RuleCode::A001, Path::new("A001.py"); "A001")]
#[test_case(RuleCode::A002, Path::new("A002.py"); "A002")]
#[test_case(RuleCode::A003, Path::new("A003.py"); "A003")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::BuiltinVariableShadowing, Path::new("A001.py"); "A001")]
#[test_case(Rule::BuiltinArgumentShadowing, Path::new("A002.py"); "A002")]
#[test_case(Rule::BuiltinAttributeShadowing, Path::new("A003.py"); "A003")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_builtins")
.join(path)

View File

@@ -9,7 +9,7 @@ mod tests {
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(Path::new("COM81.py"); "COM81")]
@@ -20,9 +20,9 @@ mod tests {
.join(path)
.as_path(),
&settings::Settings::for_rules(vec![
RuleCode::COM812,
RuleCode::COM818,
RuleCode::COM819,
Rule::TrailingCommaMissing,
Rule::TrailingCommaOnBareTupleProhibited,
Rule::TrailingCommaProhibited,
]),
)?;
insta::assert_yaml_snapshot!(snapshot, diagnostics);


@@ -4,7 +4,7 @@ use rustpython_parser::token::Tok;
use crate::ast::types::Range;
use crate::fix::Fix;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::settings::{flags, Settings};
use crate::violations;
@@ -219,7 +219,7 @@ pub fn trailing_commas(
},
);
if matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(&RuleCode::COM819)
&& settings.rules.should_fix(&Rule::TrailingCommaProhibited)
{
diagnostic.amend(Fix::deletion(comma.0, comma.2));
}
@@ -265,7 +265,7 @@ pub fn trailing_commas(
},
);
if matches!(autofix, flags::Autofix::Enabled)
&& settings.rules.should_fix(&RuleCode::COM812)
&& settings.rules.should_fix(&Rule::TrailingCommaMissing)
{
diagnostic.amend(Fix::insertion(",".to_owned(), missing_comma.2));
}


@@ -4,35 +4,34 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(RuleCode::C400, Path::new("C400.py"); "C400")]
#[test_case(RuleCode::C401, Path::new("C401.py"); "C401")]
#[test_case(RuleCode::C402, Path::new("C402.py"); "C402")]
#[test_case(RuleCode::C403, Path::new("C403.py"); "C403")]
#[test_case(RuleCode::C404, Path::new("C404.py"); "C404")]
#[test_case(RuleCode::C405, Path::new("C405.py"); "C405")]
#[test_case(RuleCode::C406, Path::new("C406.py"); "C406")]
#[test_case(RuleCode::C408, Path::new("C408.py"); "C408")]
#[test_case(RuleCode::C409, Path::new("C409.py"); "C409")]
#[test_case(RuleCode::C410, Path::new("C410.py"); "C410")]
#[test_case(RuleCode::C411, Path::new("C411.py"); "C411")]
#[test_case(RuleCode::C413, Path::new("C413.py"); "C413")]
#[test_case(RuleCode::C414, Path::new("C414.py"); "C414")]
#[test_case(RuleCode::C415, Path::new("C415.py"); "C415")]
#[test_case(RuleCode::C416, Path::new("C416.py"); "C416")]
#[test_case(RuleCode::C417, Path::new("C417.py"); "C417")]
#[test_case(Rule::UnnecessaryGeneratorList, Path::new("C400.py"); "C400")]
#[test_case(Rule::UnnecessaryGeneratorSet, Path::new("C401.py"); "C401")]
#[test_case(Rule::UnnecessaryGeneratorDict, Path::new("C402.py"); "C402")]
#[test_case(Rule::UnnecessaryListComprehensionSet, Path::new("C403.py"); "C403")]
#[test_case(Rule::UnnecessaryListComprehensionDict, Path::new("C404.py"); "C404")]
#[test_case(Rule::UnnecessaryLiteralSet, Path::new("C405.py"); "C405")]
#[test_case(Rule::UnnecessaryLiteralDict, Path::new("C406.py"); "C406")]
#[test_case(Rule::UnnecessaryCollectionCall, Path::new("C408.py"); "C408")]
#[test_case(Rule::UnnecessaryLiteralWithinTupleCall, Path::new("C409.py"); "C409")]
#[test_case(Rule::UnnecessaryLiteralWithinListCall, Path::new("C410.py"); "C410")]
#[test_case(Rule::UnnecessaryListCall, Path::new("C411.py"); "C411")]
#[test_case(Rule::UnnecessaryCallAroundSorted, Path::new("C413.py"); "C413")]
#[test_case(Rule::UnnecessaryDoubleCastOrProcess, Path::new("C414.py"); "C414")]
#[test_case(Rule::UnnecessarySubscriptReversal, Path::new("C415.py"); "C415")]
#[test_case(Rule::UnnecessaryComprehension, Path::new("C416.py"); "C416")]
#[test_case(Rule::UnnecessaryMap, Path::new("C417.py"); "C417")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_comprehensions")
.join(path)


@@ -5,7 +5,7 @@ use rustpython_ast::{Comprehension, Constant, Expr, ExprKind, Keyword, Unaryop};
use super::fixes;
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::registry::{Diagnostic, RuleCode};
use crate::registry::{Diagnostic, Rule};
use crate::violations;
fn function_name(func: &Expr) -> Option<&str> {
@@ -65,7 +65,7 @@ pub fn unnecessary_generator_list(
violations::UnnecessaryGeneratorList,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C400) {
if checker.patch(&Rule::UnnecessaryGeneratorList) {
match fixes::fix_unnecessary_generator_list(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -96,7 +96,7 @@ pub fn unnecessary_generator_set(
violations::UnnecessaryGeneratorSet,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C401) {
if checker.patch(&Rule::UnnecessaryGeneratorSet) {
match fixes::fix_unnecessary_generator_set(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -126,7 +126,7 @@ pub fn unnecessary_generator_dict(
violations::UnnecessaryGeneratorDict,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C402) {
if checker.patch(&Rule::UnnecessaryGeneratorDict) {
match fixes::fix_unnecessary_generator_dict(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -160,7 +160,7 @@ pub fn unnecessary_list_comprehension_set(
violations::UnnecessaryListComprehensionSet,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C403) {
if checker.patch(&Rule::UnnecessaryListComprehensionSet) {
match fixes::fix_unnecessary_list_comprehension_set(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -199,7 +199,7 @@ pub fn unnecessary_list_comprehension_dict(
violations::UnnecessaryListComprehensionDict,
Range::from_located(expr),
);
if checker.patch(&RuleCode::C404) {
if checker.patch(&Rule::UnnecessaryListComprehensionDict) {
match fixes::fix_unnecessary_list_comprehension_dict(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -233,7 +233,7 @@ pub fn unnecessary_literal_set(
violations::UnnecessaryLiteralSet(kind.to_string()),
Range::from_located(expr),
);
if checker.patch(&RuleCode::C405) {
if checker.patch(&Rule::UnnecessaryLiteralSet) {
match fixes::fix_unnecessary_literal_set(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -274,7 +274,7 @@ pub fn unnecessary_literal_dict(
violations::UnnecessaryLiteralDict(kind.to_string()),
Range::from_located(expr),
);
if checker.patch(&RuleCode::C406) {
if checker.patch(&Rule::UnnecessaryLiteralDict) {
match fixes::fix_unnecessary_literal_dict(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -315,7 +315,7 @@ pub fn unnecessary_collection_call(
violations::UnnecessaryCollectionCall(id.to_string()),
Range::from_located(expr),
);
if checker.patch(&RuleCode::C408) {
if checker.patch(&Rule::UnnecessaryCollectionCall) {
match fixes::fix_unnecessary_collection_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -348,7 +348,7 @@ pub fn unnecessary_literal_within_tuple_call(
violations::UnnecessaryLiteralWithinTupleCall(argument_kind.to_string()),
Range::from_located(expr),
);
if checker.patch(&RuleCode::C409) {
if checker.patch(&Rule::UnnecessaryLiteralWithinTupleCall) {
match fixes::fix_unnecessary_literal_within_tuple_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -381,7 +381,7 @@ pub fn unnecessary_literal_within_list_call(
violations::UnnecessaryLiteralWithinListCall(argument_kind.to_string()),
Range::from_located(expr),
);
if checker.patch(&RuleCode::C410) {
if checker.patch(&Rule::UnnecessaryLiteralWithinListCall) {
match fixes::fix_unnecessary_literal_within_list_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -405,7 +405,7 @@ pub fn unnecessary_list_call(checker: &mut Checker, expr: &Expr, func: &Expr, ar
}
let mut diagnostic =
Diagnostic::new(violations::UnnecessaryListCall, Range::from_located(expr));
if checker.patch(&RuleCode::C411) {
if checker.patch(&Rule::UnnecessaryListCall) {
match fixes::fix_unnecessary_list_call(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -448,7 +448,7 @@ pub fn unnecessary_call_around_sorted(
violations::UnnecessaryCallAroundSorted(outer.to_string()),
Range::from_located(expr),
);
if checker.patch(&RuleCode::C413) {
if checker.patch(&Rule::UnnecessaryCallAroundSorted) {
match fixes::fix_unnecessary_call_around_sorted(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);
@@ -611,7 +611,7 @@ pub fn unnecessary_comprehension(
violations::UnnecessaryComprehension(id.to_string()),
Range::from_located(expr),
);
if checker.patch(&RuleCode::C416) {
if checker.patch(&Rule::UnnecessaryComprehension) {
match fixes::fix_unnecessary_comprehension(checker.locator, expr) {
Ok(fix) => {
diagnostic.amend(fix);


@@ -3,27 +3,26 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
use crate::registry::RuleCode;
use crate::registry::Rule;
use crate::settings;
#[test_case(RuleCode::DTZ001, Path::new("DTZ001.py"); "DTZ001")]
#[test_case(RuleCode::DTZ002, Path::new("DTZ002.py"); "DTZ002")]
#[test_case(RuleCode::DTZ003, Path::new("DTZ003.py"); "DTZ003")]
#[test_case(RuleCode::DTZ004, Path::new("DTZ004.py"); "DTZ004")]
#[test_case(RuleCode::DTZ005, Path::new("DTZ005.py"); "DTZ005")]
#[test_case(RuleCode::DTZ006, Path::new("DTZ006.py"); "DTZ006")]
#[test_case(RuleCode::DTZ007, Path::new("DTZ007.py"); "DTZ007")]
#[test_case(RuleCode::DTZ011, Path::new("DTZ011.py"); "DTZ011")]
#[test_case(RuleCode::DTZ012, Path::new("DTZ012.py"); "DTZ012")]
fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
#[test_case(Rule::CallDatetimeWithoutTzinfo, Path::new("DTZ001.py"); "DTZ001")]
#[test_case(Rule::CallDatetimeToday, Path::new("DTZ002.py"); "DTZ002")]
#[test_case(Rule::CallDatetimeUtcnow, Path::new("DTZ003.py"); "DTZ003")]
#[test_case(Rule::CallDatetimeUtcfromtimestamp, Path::new("DTZ004.py"); "DTZ004")]
#[test_case(Rule::CallDatetimeNowWithoutTzinfo, Path::new("DTZ005.py"); "DTZ005")]
#[test_case(Rule::CallDatetimeFromtimestamp, Path::new("DTZ006.py"); "DTZ006")]
#[test_case(Rule::CallDatetimeStrptimeWithoutZone, Path::new("DTZ007.py"); "DTZ007")]
#[test_case(Rule::CallDateToday, Path::new("DTZ011.py"); "DTZ011")]
#[test_case(Rule::CallDateFromtimestamp, Path::new("DTZ012.py"); "DTZ012")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_datetimez")
.join(path)


@@ -13,10 +13,9 @@ pub fn call_datetime_without_tzinfo(
keywords: &[Keyword],
location: Range,
) {
if !checker
.resolve_call_path(func)
.map_or(false, |call_path| call_path == ["datetime", "datetime"])
{
if !checker.resolve_call_path(func).map_or(false, |call_path| {
call_path.as_slice() == ["datetime", "datetime"]
}) {
return;
}
@@ -38,10 +37,15 @@ pub fn call_datetime_without_tzinfo(
}
}
/// DTZ002
/// Checks for `datetime.datetime.today()`. (DTZ002)
///
/// ## Why is this bad?
///
/// It uses the system local timezone.
/// Use `datetime.datetime.now(tz=)` instead.
pub fn call_datetime_today(checker: &mut Checker, func: &Expr, location: Range) {
if checker.resolve_call_path(func).map_or(false, |call_path| {
call_path == ["datetime", "datetime", "today"]
call_path.as_slice() == ["datetime", "datetime", "today"]
}) {
checker
.diagnostics
@@ -49,10 +53,17 @@ pub fn call_datetime_today(checker: &mut Checker, func: &Expr, location: Range)
}
}
/// DTZ003
/// Checks for `datetime.datetime.utcnow()`. (DTZ003)
///
/// ## Why is this bad?
///
/// Because naive `datetime` objects are treated by many `datetime` methods as
/// local times, it is preferred to use aware datetimes to represent times in
/// UTC. As such, the recommended way to create an object representing the
/// current time in UTC is by calling `datetime.now(timezone.utc)`.
pub fn call_datetime_utcnow(checker: &mut Checker, func: &Expr, location: Range) {
if checker.resolve_call_path(func).map_or(false, |call_path| {
call_path == ["datetime", "datetime", "utcnow"]
call_path.as_slice() == ["datetime", "datetime", "utcnow"]
}) {
checker
.diagnostics
@@ -60,10 +71,18 @@ pub fn call_datetime_utcnow(checker: &mut Checker, func: &Expr, location: Range)
}
}
/// DTZ004
/// Checks for `datetime.datetime.utcfromtimestamp()`. (DTZ004)
///
/// ## Why is this bad?
///
/// Because naive `datetime` objects are treated by many `datetime` methods as
/// local times, it is preferred to use aware datetimes to represent times in
/// UTC. As such, the recommended way to create an object representing a
/// specific timestamp in UTC is by calling `datetime.fromtimestamp(timestamp,
/// tz=timezone.utc)`.
pub fn call_datetime_utcfromtimestamp(checker: &mut Checker, func: &Expr, location: Range) {
if checker.resolve_call_path(func).map_or(false, |call_path| {
call_path == ["datetime", "datetime", "utcfromtimestamp"]
call_path.as_slice() == ["datetime", "datetime", "utcfromtimestamp"]
}) {
checker.diagnostics.push(Diagnostic::new(
violations::CallDatetimeUtcfromtimestamp,
@@ -81,7 +100,7 @@ pub fn call_datetime_now_without_tzinfo(
location: Range,
) {
if !checker.resolve_call_path(func).map_or(false, |call_path| {
-call_path == ["datetime", "datetime", "now"]
+call_path.as_slice() == ["datetime", "datetime", "now"]
}) {
return;
}
@@ -122,7 +141,7 @@ pub fn call_datetime_fromtimestamp(
location: Range,
) {
if !checker.resolve_call_path(func).map_or(false, |call_path| {
-call_path == ["datetime", "datetime", "fromtimestamp"]
+call_path.as_slice() == ["datetime", "datetime", "fromtimestamp"]
}) {
return;
}
@@ -162,7 +181,7 @@ pub fn call_datetime_strptime_without_zone(
location: Range,
) {
if !checker.resolve_call_path(func).map_or(false, |call_path| {
-call_path == ["datetime", "datetime", "strptime"]
+call_path.as_slice() == ["datetime", "datetime", "strptime"]
}) {
return;
}
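The strptime check (DTZ007) hinges on whether the format string parses an offset: without `%z` the result is naive. A small Python illustration of the two cases (ours, not from the diff):

```python
from datetime import datetime, timedelta

# Flagged: no %z directive, so the parsed datetime is naive.
naive = datetime.strptime("2023-01-19 12:00", "%Y-%m-%d %H:%M")

# Not flagged: %z parses the offset into an aware datetime.
aware = datetime.strptime("2023-01-19 12:00 +0000", "%Y-%m-%d %H:%M %z")

print(naive.tzinfo)       # None
print(aware.utcoffset())  # 0:00:00
```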
@@ -208,10 +227,15 @@ pub fn call_datetime_strptime_without_zone(
));
}
-/// DTZ011
+/// Checks for `datetime.date.today()`. (DTZ011)
+///
+/// ## Why is this bad?
+///
+/// It uses the system local timezone.
+/// Use `datetime.datetime.now(tz=).date()` instead.
pub fn call_date_today(checker: &mut Checker, func: &Expr, location: Range) {
if checker.resolve_call_path(func).map_or(false, |call_path| {
-call_path == ["datetime", "date", "today"]
+call_path.as_slice() == ["datetime", "date", "today"]
}) {
checker
.diagnostics
@@ -219,10 +243,15 @@ pub fn call_date_today(checker: &mut Checker, func: &Expr, location: Range) {
}
}
-/// DTZ012
+/// Checks for `datetime.date.fromtimestamp()`. (DTZ012)
+///
+/// ## Why is this bad?
+///
+/// It uses the system local timezone.
+/// Use `datetime.datetime.fromtimestamp(, tz=).date()` instead.
pub fn call_date_fromtimestamp(checker: &mut Checker, func: &Expr, location: Range) {
if checker.resolve_call_path(func).map_or(false, |call_path| {
-call_path == ["datetime", "date", "fromtimestamp"]
+call_path.as_slice() == ["datetime", "date", "fromtimestamp"]
}) {
checker
.diagnostics
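DTZ011 and DTZ012 differ from the earlier rules in that the offending calls live on `datetime.date`, which has no tzinfo at all, so both silently resolve against the system local timezone. A Python sketch of the pattern and the replacement the doc comments suggest (our example, not from the diff):

```python
from datetime import date, datetime, timezone

# Flagged by DTZ011 / DTZ012: both implicitly use the local timezone.
local_today = date.today()
local_epoch = date.fromtimestamp(0)  # may be 1969-12-31 west of UTC!

# Preferred: build an aware datetime with an explicit tz, then take its date.
utc_today = datetime.now(timezone.utc).date()
utc_epoch = datetime.fromtimestamp(0, tz=timezone.utc).date()

print(utc_epoch)  # 1970-01-01
```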


@@ -4,19 +4,18 @@ pub(crate) mod types;
#[cfg(test)]
mod tests {
-use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
use test_case::test_case;
use crate::linter::test_path;
-use crate::registry::RuleCode;
+use crate::registry::Rule;
use crate::settings;
-#[test_case(RuleCode::T100, Path::new("T100.py"); "T100")]
-fn rules(rule_code: RuleCode, path: &Path) -> Result<()> {
-let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
+#[test_case(Rule::Debugger, Path::new("T100.py"); "T100")]
+fn rules(rule_code: Rule, path: &Path) -> Result<()> {
+let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_debugger")
.join(path)

Some files were not shown because too many files have changed in this diff.