Compare commits


88 Commits

Author SHA1 Message Date
Charlie Marsh
e8810eae64 Fix version number in BREAKING_CHANGES.md 2023-01-30 18:49:11 -05:00
Charlie Marsh
ba26a60e2a Disable incompatible rules rather than merely warning (#2369)
This is another temporary fix for the problem described in #2289 and #2292. Rather than merely warning, we now disable the incompatible rules (in addition to the warning). I actually think this is quite a reasonable solution, but we can revisit later. I just can't bring myself to ship another release with autofix broken-by-default 😂
2023-01-30 18:47:05 -05:00
Charlie Marsh
42459c35b0 Update BREAKING_CHANGES.md 2023-01-30 17:50:26 -05:00
Charlie Marsh
1cbd929a0a Bump version to 0.0.238 2023-01-30 16:44:19 -05:00
Charlie Marsh
5f07e70762 Recommend disabling explicit-string-concatenation (#2366)
If `allow-multiline = false` is set, then if the user enables `explicit-string-concatenation` (`ISC003`), there's no way for them to create valid multiline strings. This PR notes that they should turn off `ISC003`.

Closes #2362.
2023-01-30 16:42:30 -05:00
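For illustration (example mine, not from the PR): with `allow-multiline = false`, both ways of writing a multiline string are flagged, which is why disabling `ISC003` is the recommended way out.

```python
# Assuming flake8-implicit-str-concat's `allow-multiline = false`:
message = (
    "first line "
    "second line"  # implicit concatenation across lines: flagged (ISC002)
)
message = (
    "first line "
    + "second line"  # explicit concatenation: flagged by ISC003
)
```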
Charlie Marsh
8963a62ec0 Refine criteria for exc_info logger rules (#2364)
We now only trigger `logging-exc-info` and `logging-redundant-exc-info` when in an exception handler, with an `exc_info` that isn't `true` or `sys.exc_info()`.

Closes #2356.
2023-01-30 16:32:00 -05:00
Charlie Marsh
4589daa0bd Ignore magic comparisons to bytes by default (#2365) 2023-01-30 16:31:48 -05:00
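A sketch of what this means in practice (example mine): comparisons against `bytes` literals, such as file signatures, are common and no longer count as magic-value comparisons by default.

```python
def is_png(header: bytes) -> bool:
    # A comparison against a bytes literal (here the PNG file signature)
    # is no longer flagged as a magic-value comparison by default.
    return header[:8] == b"\x89PNG\r\n\x1a\n"
```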
Charlie Marsh
ea0274d22c Use bold for deprecated 2023-01-30 16:28:21 -05:00
Charlie Marsh
ca1129ad27 Document new rule config resolution 2023-01-30 16:26:59 -05:00
Martin Fischer
104c63afc6 Exclude deprecated extend-ignore from the JSON schema
Now that the option is deprecated we no longer
want IDEs to suggest it in their autocompletion.
2023-01-30 16:26:59 -05:00
Martin Fischer
ba457c21b5 Improve rule config resolution
Ruff allows rules to be enabled with `select` and disabled with
`ignore`, where the more specific rule selector takes precedence,
for example:

    `--select ALL --ignore E501` selects all rules except E501
    `--ignore ALL --select E501` selects only E501

(If both selectors have the same specificity ignore selectors
take precedence.)

Ruff has always had two quirks:

* If `pyproject.toml` specified `ignore = ["E501"]` then you could
  previously not override that with `--select E501` on the command-line
  (since the resolution didn't take into account that the select was
  specified after the ignore).

* If `pyproject.toml` specified `select = ["E501"]` then you could
  previously not override that with `--ignore E` on the command-line
  (since the resolution didn't take into account that the ignore was
  specified after the select).

Since d067efe265 (#1245)
`extend-select` and `extend-ignore` always override
`select` and `ignore` and are applied iteratively in pairs,
which introduced another quirk:

* If some `pyproject.toml` file specified `extend-select`
  or `extend-ignore`, `select` and `ignore` became pretty much
  unreliable after that with no way of resetting that.

This commit fixes all of these quirks by making later configuration
sources take precedence over earlier configuration sources.

While this is a breaking change, we expect most ruff configuration
files not to rely on the previous unintuitive behavior.
2023-01-30 16:26:59 -05:00
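The resolution semantics described above can be sketched as follows (a minimal model, example mine, not Ruff's actual Rust implementation; `ALL` is modeled as the empty prefix, which matches every code):

```python
from collections import defaultdict

def resolve(sources, all_codes):
    """Resolve the enabled rule set.

    sources: list of {"select": [...], "ignore": [...]} dicts, ordered from
    earliest (e.g. pyproject.toml) to latest (e.g. the command line).
    The empty prefix "" stands in for ALL, since it matches every code.
    """
    enabled = set()
    for src in sources:  # later configuration sources take precedence
        by_specificity = defaultdict(lambda: ([], []))
        for prefix in src.get("select", []):
            by_specificity[len(prefix)][0].append(prefix)
        for prefix in src.get("ignore", []):
            by_specificity[len(prefix)][1].append(prefix)
        # Apply the least specific selectors first so more specific ones win;
        # at equal specificity, ignore is applied after select.
        for _, (selects, ignores) in sorted(by_specificity.items()):
            for prefix in selects:
                enabled |= {c for c in all_codes if c.startswith(prefix)}
            for prefix in ignores:
                enabled -= {c for c in all_codes if c.startswith(prefix)}
    return enabled
```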
Martin Fischer
a92958f941 Test that more specific select wins over less specific ignore 2023-01-30 16:26:59 -05:00
Martin Fischer
1cd206285e refactor: test impl From<&Configuration> for RuleTable
Previously we tested the resolve_codes helper function directly.
Since we want to rewrite our resolution logic in the next commit,
this commit changes the tests to exercise the higher-level From impl.
2023-01-30 16:26:59 -05:00
Samuel Cormier-Iijima
5ac5b69e9f [I001] fix isort for files with tab-based indentation (#2361)
This PR fixes two related issues with using isort on files using tabs for indentation:

- Multiline imports are never considered correctly formatted, since the comparison with the generated code will always fail.
- Using autofix generates code that can have mixed indentation in the same line, for imports that are within nested blocks.
2023-01-30 15:36:19 -05:00
Charlie Marsh
01fedec1e7 Add SciPy and meson-python (#2363) 2023-01-30 15:34:19 -05:00
Martin Fischer
ef20692149 fix: clap usage for CLI help generation in the README (#2358) 2023-01-30 13:14:40 -05:00
Simon Brugman
50046fbed3 Extend conventional imports defaults to include TensorFlow et al (#2353)
extend conventional imports

Based on configuration from Visual Studio for Python
(https://code.visualstudio.com/docs/python/editing#_quick-fixes)
2023-01-30 11:04:19 -05:00
Charlie Marsh
6798675db1 Avoid removing trailing comments when autofixing (#2352) 2023-01-30 07:44:20 -05:00
Akhil
8e5a944ce1 Implement Pylint's too-many-arguments rule (PLR0913) (#2308) 2023-01-30 07:34:37 -05:00
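For illustration (example mine; the rule's default threshold is assumed to match Pylint's `max-args = 5`):

```python
from dataclasses import dataclass

# Flagged once a function takes more than the configured maximum number of
# arguments (Pylint's default of 5 is assumed here; names are illustrative):
def connect(host, port, user, password, timeout, retries):  # 6 args -> PLR0913
    return (host, port)

# A common remedy is grouping related parameters into one object:
@dataclass
class Credentials:
    user: str
    password: str

def connect_v2(host, port, creds, timeout=30.0):
    return (host, port)
```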
messense
1e325edfb1 Configure automatically generated release notes (#2341) 2023-01-30 07:21:29 -05:00
Simon Brugman
502574797f include tomllib in standard lib (#2345) 2023-01-30 06:59:59 -05:00
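For reference (example mine): `tomllib` joined the standard library in Python 3.11, so standard-library classification needs to know about it.

```python
import sys

if sys.version_info >= (3, 11):
    import tomllib  # in the standard library as of Python 3.11
else:
    import tomli as tomllib  # widely used third-party backport (assumed installed)

config = tomllib.loads("line-length = 88\n")
```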
Charlie Marsh
7a83b65fbe Pre-allocate output contents during autofix application (#2340) 2023-01-29 22:40:27 -05:00
Charlie Marsh
74e3cdfd7c Add a dedicated single-fix helper (#2339) 2023-01-29 22:38:29 -05:00
Simon Brugman
2ef28f217c pandas vet autofix for PD002 and general refactor 2023-01-29 22:30:37 -05:00
Simon Brugman
63fc912ed8 refactor: use remove_argument helper in pyupgrade 2023-01-29 22:30:37 -05:00
Martin Fischer
d76a47d366 Implement ruff linter subcommand
The subcommand lists all supported upstream linters and their prefixes:

    $ ruff linter
       F Pyflakes
     E/W pycodestyle
     C90 mccabe
       I isort
       N pep8-naming
       D pydocstyle
      UP pyupgrade
     YTT flake8-2020
    # etc...

Just like with the `rule` subcommand `--format json` is supported:

    $ ruff linter --format json
    [
      {
        "prefix": "F",
        "name": "Pyflakes"
      },
      {
        "prefix": "",
        "name": "pycodestyle",
        "categories": [
          {
            "prefix": "E",
            "name": "Error"
          },
          {
            "prefix": "W",
            "name": "Warning"
          }
        ]
      },
      # etc...
2023-01-29 21:32:37 -05:00
Martin Fischer
b532fce792 refactor: Change RuleNamespace::prefixes to common_prefix
Previously Linter::parse_code("E401") returned
(Linter::Pycodestyle, "401") ... after this commit it returns
(Linter::Pycodestyle, "E401") instead, which is important
for the future implementation of the many-to-many mapping.
(The second value of the tuple isn't used currently.)
2023-01-29 21:32:37 -05:00
Charlie Marsh
3ee6a90905 Remove remove-six-compat (UP016) (#2332) 2023-01-29 21:19:59 -05:00
Samuel Cormier-Iijima
0a6d2294a7 [TRY201] don't check raise statements in nested exception handlers (#2337) 2023-01-29 21:16:18 -05:00
Simon Brugman
e66fb42d0b refactor: use patch(diagnostic.kind.rule()) (#2336) 2023-01-29 21:15:09 -05:00
Simon Brugman
5165b703d9 Add VS Code to gitignore; fix typos (#2333) 2023-01-29 21:14:38 -05:00
Simon Brugman
b40cd1fabc debug assert for fix usage (#2335) 2023-01-29 21:13:42 -05:00
Charlie Marsh
64fb0bd2cc Include both ruff help and ruff help check in README (#2325) 2023-01-29 17:01:15 -05:00
Charlie Marsh
2ad29089af Allow list comprehensions for __all__ assignment (#2326) 2023-01-29 14:26:54 -05:00
Florian Best
f41796d559 feat: add ruff --statistics (#2284)
Closes #2284.
2023-01-29 13:44:56 -05:00
Samuel Cormier-Iijima
945a9e187c Migrate violations to named fields (#2317)
Fairly mechanical. Did a few of the simple cases manually to make sure things were working, and I think the rest will be easily achievable via a quick `fastmod` command.

ref #1871
2023-01-29 13:29:53 -05:00
Charlie Marsh
546413defb Fix remaining RelativeImportsOrder typo 2023-01-29 11:33:12 -05:00
Charlie Marsh
e371ef9b1a Place star before other member imports (#2320)
I think we've never run into this case, since it's rare to import `*` from a module _and_ import some other member explicitly. But we were deviating from `isort` by placing the `*` after other members, rather than up-top.

Closes #2318.
2023-01-28 22:17:43 -05:00
Charlie Marsh
c9585fe304 Run generate-all 2023-01-28 22:13:07 -05:00
Charlie Marsh
535868f939 Update fixable list (#2316) 2023-01-28 20:18:55 -05:00
Chirag
cec993aaa9 Add ruff . to documentation (#2307) 2023-01-28 14:53:11 -05:00
Samuel Cormier-Iijima
1a32d873f0 Fix regression with line-based rules not being ignored per-file (#2311) 2023-01-28 14:48:32 -05:00
Samuel Cormier-Iijima
f308f9f27e Respect per-file-ignores when checking noqa (#2309)
`RUF100` does not take into account a rule ignored for a file via a `per-file-ignores` configuration. To see this, try the following pyproject.toml:

```toml
[tool.ruff.per-file-ignores]
"test.py" = ["F401"]
```

and this test.py file:

```python
import itertools  # noqa: F401
```

Running `ruff --extend-select RUF100 test.py`, we should expect to get this error:

```
test.py:1:19: RUF100 Unused `noqa` directive (unused: `F401`)
```

The issue is that the per-file-ignores diagnostics are filtered out after the noqa checks, rather than before.
2023-01-28 14:16:30 -05:00
Jonathan Plasse
73dccce5f5 Isolate integration tests (#2306) 2023-01-28 13:32:50 -05:00
Charlie Marsh
fc9fae6579 Remove picture tag from PyPI and MkDocs 2023-01-28 11:49:52 -05:00
Charlie Marsh
add7fefeb5 Bump version to 0.0.237 2023-01-28 10:52:14 -05:00
Charlie Marsh
ec24947865 Fix version shorthand detection to use -V instead of -v (#2301)
Fixes a regression introduced in eda2be6350 (but not yet released to users). (`-v` is a real flag, but it's an alias for `--verbose`, not `--version`.)

Closes #2299.
2023-01-28 10:47:47 -05:00
Charlie Marsh
8038d32649 Deploy under docs subdirectory 2023-01-28 10:28:40 -05:00
Charlie Marsh
0362cc1098 Allow manual trigger for docs 2023-01-28 10:24:52 -05:00
Charlie Marsh
860e3110c0 Serve docs under /docs subdirectory 2023-01-28 10:22:35 -05:00
Jonxslays
0fa8c578cb Fix typo in typing_extensions (#2298) 2023-01-28 10:03:54 -05:00
Charlie Marsh
861df12269 Preserve global binding kind during reassignments (#2297) 2023-01-28 08:40:09 -05:00
Charlie Marsh
071e3fd196 Split MkDocs site into multiple pages (#2296) 2023-01-28 08:31:16 -05:00
Martin Fischer
dd79ec293a Rename new explain subcommand to rule
We probably want to introduce multiple explain-style subcommands, and
overloading `explain` to cover them all seems like a bad idea.
We may want to introduce a subcommand to explain config options and
config options may end up having the same name as their rules, e.g. the
current `banned-api` is both a rule name (although not yet exposed to
the user) and a config option.

The idea is:

* `ruff rule` lists all rules supported by ruff
* `ruff rule <code>` explains a specific rule
* `ruff linter` lists all linters supported by ruff
* `ruff linter <name>` lists all rules/options supported by a specific linter

(After this commit only the 2nd case is implemented.)
2023-01-28 07:26:20 -05:00
Martin Fischer
5d331e43bf fix: help text and env for --format option of explain subcommand
The doc comment and the env attribute were copied by mistake.
2023-01-28 07:26:20 -05:00
Martin Fischer
ff3563b8ce Remove --check alias introduced in eda2be63
We do not want to support the --{subcommand} legacy format for new
subcommands ... only for subcommands that used this format previously.
2023-01-28 07:26:20 -05:00
Matt Morris
caada2f8bb add missing backticks to flake8 plugin urls in README (#2291) 2023-01-28 07:16:23 -05:00
messense
0b4cc5ac12 Add readme field to pyproject.toml (#2293) 2023-01-28 07:14:58 -05:00
Charlie Marsh
8c70247188 Switch to red MkDocs theme 2023-01-27 23:15:49 -05:00
Charlie Marsh
eaac3cae5e Add MkDocs version of README (#2287)
Co-authored-by: Justin Flannery <juftin@juftin.com>
2023-01-27 22:57:42 -05:00
Charlie Marsh
fd56414b2f Re-add ALL disclaimer 2023-01-27 22:18:20 -05:00
Martin Fischer
9731f96fb4 fix: typo in BREAKING_CHANGES.md & improve wrapping 2023-01-27 21:35:41 -05:00
Charlie Marsh
1a0191f1ac Add release to breaking changes 2023-01-27 20:34:24 -05:00
Charlie Marsh
249cf73d4e Tweak some wording in CLI help (#2285) 2023-01-27 20:25:58 -05:00
Martin Fischer
eda2be6350 Use subcommands for CLI instead of incompatible boolean flags
This commit greatly simplifies the implementation of the CLI,
as well as the user experience (since --help no longer lists all
options, many of which were in fact mutually incompatible).

To preserve backwards compatibility as much as possible, aliases have
been added for the new subcommands, so for example the following two
commands are equivalent:

    ruff explain E402 --format json
    ruff --explain E402 --format json

However for this to work the legacy-format double-dash command has to
come first, i.e. the following no longer works:

    ruff --format json --explain E402

Since ruff previously had an implicit default subcommand,
this is preserved for backwards compatibility, i.e. the following two
commands are equivalent:

    ruff .
    ruff check .

Previously ruff didn't complain about several argument combinations that
should never have been allowed, e.g.:

    ruff --explain RUF001 --line-length 33

previously worked but now rightfully fails since the explain command
doesn't support a `--line-length` option.
2023-01-27 19:38:17 -05:00
Charlie Marsh
57a68f7c7d Document the location of the personal config file (#2283) 2023-01-27 19:25:55 -05:00
Charlie Marsh
a19dd9237b Add comparison to type checkers (#2282) 2023-01-27 19:18:40 -05:00
Simon Brugman
4f067d806e add clippy and rust_dev to pre-commit (#2256)
I presume the reasoning for not including clippy in `pre-commit` was that pre-commit passes all filenames to it. This can be turned off with `pass_filenames: false`, in which case the hook only runs once.

`cargo +nightly dev generate-all` is also added (when `target` is excluded, it does not give false positives).

(The overhead of these commands is small once the build is warm. People can always choose to run only certain hooks with `pre-commit run [hook] --all-files`.)
2023-01-27 18:53:44 -05:00
Samuel Cormier-Iijima
dd15c69181 [flake8-bandit] Add Rule S110 (try/except/pass) (#2197) 2023-01-27 18:52:55 -05:00
Charlie Marsh
b692921160 Use rustup show in lieu of actions-rs/toolchain (#2280) 2023-01-27 18:51:41 -05:00
Charlie Marsh
b3e8b1b787 Expand heuristic for detecting logging calls (#2279) 2023-01-27 18:41:16 -05:00
Charlie Marsh
0b34ca7107 Move off nightly Rust for dev workflows (#2278) 2023-01-27 18:35:19 -05:00
Charlie Marsh
df44c5124e Add missing autofix levels to sometimes-fixable rules 2023-01-27 18:25:23 -05:00
Simon Brugman
0e27f78b3f feat: include os.getcwdb (bytes) into flake8-use-pathlib (#2276) 2023-01-27 18:25:02 -05:00
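For illustration (example mine):

```python
import os
from pathlib import Path

cwd_bytes = os.getcwdb()  # bytes-returning variant, now covered by the rule
cwd = Path.cwd()          # the pathlib equivalent; compare via os.fsdecode
```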
Florian Best
cd8ad1df08 mark some fixers as sometimes-fixable (#2271) 2023-01-27 18:23:32 -05:00
Charlie Marsh
d1aaf16e40 Omit typing module from flake8-type-checking by default (#2277) 2023-01-27 18:19:45 -05:00
Ville Skyttä
7320058ce2 Incompatibility warning updates (#2272) 2023-01-27 18:17:23 -05:00
Aarni Koskela
3a8b367b1c flake8-annotations: Move has_any_typed_arg into correct nested if (#2269) 2023-01-27 18:15:46 -05:00
Ville Skyttä
221b87332c feat: add more DTZ fix suggestions in messages (#2274) 2023-01-27 18:14:17 -05:00
Franck Nijhof
8149c8cbc4 Treat attribute constants as constant for yoda-conditions (#2266)
Accessed attributes that are Python constants should be considered for yoda-conditions


```py
## Error
JediOrder.YODA == age  # SIM300

## OK
age == JediOrder.YODA
```

~~PS: This PR will fail CI, as the `main` branch is currently failing.~~
2023-01-27 12:55:12 -05:00
Charlie Marsh
2c415016a6 Update F401 snapshots 2023-01-27 12:43:42 -05:00
Sladyn
bb85119ba8 Convert UnusedImport violation to struct fields (#2141) 2023-01-27 11:31:39 -05:00
Simon Brugman
94551a203e feat: pylint PLE0604 and PLE0605 (#2241) 2023-01-27 11:26:33 -05:00
Charlie Marsh
64c4e4c6c7 Treat constant tuples as constants for yoda-conditions (#2265) 2023-01-27 11:25:57 -05:00
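For illustration (example mine):

```python
origin = (0, 0)
flagged = (0, 0) == origin   # constant tuple on the left: a Yoda condition
preferred = origin == (0, 0)
```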
Charlie Marsh
84e4b7c96f Treat builtins as synthetically used (#2251) 2023-01-27 11:25:45 -05:00
Franck Nijhof
ca26f664ec Fix SIM300 to take Python constants into account (#2255)
SIM300 currently doesn't take Python constants into account when looking for Yoda conditions, this PR fixes that behavior.

```python
# Errors
YODA == age  # SIM300
YODA > age  # SIM300
YODA >= age  # SIM300

# OK
age == YODA
age < YODA
age <= YODA
```

Ref: <https://github.com/home-assistant/core/pull/86793>
2023-01-27 11:20:21 -05:00
Aarni Koskela
779b232db9 Fix typo: RelatveImportsOrder (#2264) 2023-01-27 11:15:43 -05:00
Charlie Marsh
a316b26b49 Rewrite some string-format violation messages (#2242) 2023-01-26 19:42:16 -05:00
462 changed files with 8311 additions and 5346 deletions

.github/release.yml vendored Normal file

@@ -0,0 +1,19 @@
# https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes#configuring-automatically-generated-release-notes
changelog:
categories:
- title: Breaking Changes
labels:
- breaking
- title: Rules
labels:
- rule
- autofix
- title: Settings
labels:
- configuration
- title: Bug Fixes
labels:
- bug
- title: Other Changes
labels:
- "*"


@@ -22,16 +22,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly-2022-11-01
override: true
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@v1
- run: cargo build --all
- run: ./target/debug/ruff_dev generate-all
- run: git diff --quiet README.md || echo "::error file=README.md::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --quiet ruff.schema.json || echo "::error file=ruff.schema.json::This file is outdated. Run 'cargo +nightly dev generate-all'."
- run: git diff --quiet README.md || echo "::error file=README.md::This file is outdated. Run 'cargo dev generate-all'."
- run: git diff --quiet ruff.schema.json || echo "::error file=ruff.schema.json::This file is outdated. Run 'cargo dev generate-all'."
- run: git diff --exit-code -- README.md ruff.schema.json
cargo-fmt:
@@ -39,12 +36,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly-2022-11-01
override: true
components: rustfmt
- name: "Install Rust toolchain"
run: rustup component add rustfmt
- run: cargo fmt --all --check
cargo_clippy:
@@ -52,13 +45,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly-2022-11-01
override: true
components: clippy
target: wasm32-unknown-unknown
- name: "Install Rust toolchain"
run: |
rustup component add clippy
rustup target add wasm32-unknown-unknown
- uses: Swatinem/rust-cache@v1
- run: cargo clippy --workspace --all-targets --all-features -- -D warnings -W clippy::pedantic
- run: cargo clippy -p ruff --target wasm32-unknown-unknown --all-features -- -D warnings -W clippy::pedantic
@@ -71,20 +61,17 @@ jobs:
name: "cargo test | ${{ matrix.os }}"
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.65.0
override: true
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@v1
- run: cargo install cargo-insta
- run: pip install black[d]==22.12.0
- name: Run tests (Ubuntu)
- name: "Run tests (Ubuntu)"
if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
cargo insta test --all --delete-unreferenced-snapshots
git diff --exit-code
- name: Run tests (Windows)
- name: "Run tests (Windows)"
if: ${{ matrix.os == 'windows-latest' }}
shell: bash
run: |
@@ -102,10 +89,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
override: true
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@v1
- run: ./scripts/add_rule.py --name DoTheThing --code PLC999 --linter pylint
- run: cargo check
@@ -114,61 +99,26 @@ jobs:
./scripts/add_rule.py --name FirstRule --code TST001 --linter test
- run: cargo check
# TODO(charlie): Re-enable the `wasm-pack` tests.
# See: https://github.com/charliermarsh/ruff/issues/1425
# wasm-pack-test:
# name: "wasm-pack test"
# runs-on: ubuntu-latest
# env:
# WASM_BINDGEN_TEST_TIMEOUT: 60
# steps:
# - uses: actions/checkout@v3
# - uses: actions-rs/toolchain@v1
# with:
# profile: minimal
# toolchain: nightly-2022-11-01
# override: true
# - uses: actions/cache@v3
# env:
# cache-name: cache-cargo
# with:
# path: |
# ~/.cargo/registry
# ~/.cargo/git
# key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/Cargo.lock') }}
# restore-keys: |
# ${{ runner.os }}-build-${{ env.cache-name }}-
# ${{ runner.os }}-build-
# ${{ runner.os }}-
# - uses: jetli/wasm-pack-action@v0.4.0
# - uses: jetli/wasm-bindgen-action@v0.2.0
# - run: wasm-pack test --node
maturin-build:
name: "maturin build"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.65.0
override: true
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@v1
- uses: actions/setup-python@v4
with:
python-version: "3.11"
- run: pip install maturin
- run: maturin build -b bin
- run: python scripts/transform_readme.py --target pypi
typos:
name: Spell Check with Typos
name: "spell check"
runs-on: ubuntu-latest
steps:
- name: Checkout Actions Repository
uses: actions/checkout@v3
- name: Check spelling of file.txt
uses: crate-ci/typos@master
- uses: actions/checkout@v3
- uses: crate-ci/typos@master
with:
files: .

.github/workflows/docs.yaml vendored Normal file

@@ -0,0 +1,35 @@
name: mkdocs
on:
push:
paths:
- README.md
- mkdocs.template.yml
- .github/workflows/docs.yaml
branches: [ main ]
workflow_dispatch:
jobs:
mkdocs:
runs-on: ubuntu-latest
env:
CF_API_TOKEN_EXISTS: ${{ secrets.CF_API_TOKEN != '' }}
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: "Install dependencies"
run: |
pip install "mkdocs~=1.4.2" "mkdocs-material~=9.0.6"
- name: "Copy README File"
run: |
python scripts/transform_readme.py --target mkdocs
python scripts/generate_mkdocs.py
mkdocs build --strict
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@2.0.0
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}
command: pages publish site --project-name=ruff-docs --branch ${GITHUB_HEAD_REF} --commit-hash ${GITHUB_SHA}


@@ -24,21 +24,17 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
default: true
- name: Build wheels - x86_64
- name: "Install Rust toolchain"
run: rustup show
- name: "Build wheels - x86_64"
uses: messense/maturin-action@v1
with:
target: x86_64
args: --release --out dist --sdist -m ./${{ env.CRATE_NAME }}/Cargo.toml
- name: Install built wheel - x86_64
- name: "Install built wheel - x86_64"
run: |
pip install dist/${{ env.CRATE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -51,20 +47,16 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
default: true
- name: Build wheels - universal2
- name: "Install Rust toolchain"
run: rustup show
- name: "Build wheels - universal2"
uses: messense/maturin-action@v1
with:
args: --release --universal2 --out dist -m ./${{ env.CRATE_NAME }}/Cargo.toml
- name: Install built wheel - universal2
- name: "Install built wheel - universal2"
run: |
pip install dist/${{ env.CRATE_NAME }}-*universal2.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -81,22 +73,18 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: ${{ matrix.target }}
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
default: true
- name: Build wheels
- name: "Install Rust toolchain"
run: rustup show
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
args: --release --out dist -m ./${{ env.CRATE_NAME }}/Cargo.toml
- name: Install built wheel
- name: "Install built wheel"
shell: bash
run: |
python -m pip install dist/${{ env.CRATE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -113,17 +101,17 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Build wheels
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: auto
args: --release --out dist -m ./${{ env.CRATE_NAME }}/Cargo.toml
- name: Install built wheel
- name: "Install built wheel"
if: matrix.target == 'x86_64'
run: |
pip install dist/${{ env.CRATE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -139,7 +127,7 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Build wheels
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
@@ -158,7 +146,7 @@ jobs:
pip3 install -U pip
run: |
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -177,13 +165,13 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Build wheels
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: musllinux_1_2
args: --release --out dist -m ./${{ env.CRATE_NAME }}/Cargo.toml
- name: Install built wheel
- name: "Install built wheel"
if: matrix.target == 'x86_64-unknown-linux-musl'
uses: addnab/docker-run-action@v3
with:
@@ -192,7 +180,7 @@ jobs:
run: |
apk add py3-pip
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links /io/dist/ --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -212,7 +200,7 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Build wheels
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.platform.target }}
@@ -228,7 +216,7 @@ jobs:
apk add py3-pip
run: |
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -252,17 +240,17 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: pypy${{ matrix.python-version }}
- name: Build wheels
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: auto
args: --release --out dist -i pypy${{ matrix.python-version }} -m ./${{ env.CRATE_NAME }}/Cargo.toml
- name: Install built wheel
- name: "Install built wheel"
if: matrix.target == 'x86_64'
run: |
pip install dist/${{ env.CRATE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -285,7 +273,7 @@ jobs:
with:
name: wheels
- uses: actions/setup-python@v4
- name: Publish to PyPi
- name: "Publish to PyPi"
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.FLAKE8_TO_RUFF_TOKEN }}


@@ -18,12 +18,8 @@ jobs:
CF_API_TOKEN_EXISTS: ${{ secrets.CF_API_TOKEN != '' }}
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly-2022-11-01
override: true
target: wasm32-unknown-unknown
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: actions/setup-node@v3
with:
node-version: 18


@@ -26,21 +26,17 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
default: true
- name: Build wheels - x86_64
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels - x86_64"
uses: messense/maturin-action@v1
with:
target: x86_64
args: --release --out dist --sdist
- name: Install built wheel - x86_64
- name: "Install built wheel - x86_64"
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -53,20 +49,16 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
default: true
- name: Build wheels - universal2
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels - universal2"
uses: messense/maturin-action@v1
with:
args: --release --universal2 --out dist
- name: Install built wheel - universal2
- name: "Install built wheel - universal2"
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*universal2.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -83,22 +75,18 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: ${{ matrix.target }}
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
default: true
- name: Build wheels
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
args: --release --out dist
- name: Install built wheel
- name: "Install built wheel"
shell: bash
run: |
python -m pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -115,17 +103,19 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Build wheels
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: auto
args: --release --out dist
- name: Install built wheel
- name: "Install built wheel"
if: matrix.target == 'x86_64'
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -141,7 +131,9 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Build wheels
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
@@ -160,7 +152,7 @@ jobs:
pip3 install -U pip
run: |
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -179,13 +171,15 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: Build wheels
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: musllinux_1_2
args: --release --out dist
- name: Install built wheel
- name: "Install built wheel"
if: matrix.target == 'x86_64-unknown-linux-musl'
uses: addnab/docker-run-action@v3
with:
@@ -194,7 +188,7 @@ jobs:
run: |
apk add py3-pip
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links /io/dist/ --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -214,7 +208,9 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Build wheels
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.platform.target }}
@@ -230,7 +226,7 @@ jobs:
apk add py3-pip
run: |
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -254,17 +250,19 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: pypy${{ matrix.python-version }}
- name: Build wheels
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: messense/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: auto
args: --release --out dist -i pypy${{ matrix.python-version }}
- name: Install built wheel
- name: "Install built wheel"
if: matrix.target == 'x86_64'
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
- name: Upload wheels
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
name: wheels
@@ -288,13 +286,13 @@ jobs:
with:
name: wheels
- uses: actions/setup-python@v4
- name: Publish to PyPi
- name: "Publish to PyPi"
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.RUFF_TOKEN }}
run: |
pip install --upgrade twine
twine upload --skip-existing *
- name: Update pre-commit mirror
- name: "Update pre-commit mirror"
run: |
curl -X POST -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.RUFF_PRE_COMMIT_PAT }}" -H "X-GitHub-Api-Version: 2022-11-28" https://api.github.com/repos/charliermarsh/ruff-pre-commit/dispatches --data '{"event_type": "pypi_release"}'

.gitignore

@@ -1,6 +1,8 @@
# Local cache
.ruff_cache
resources/test/cpython
docs/
mkdocs.yml
###
# Rust.gitignore
@@ -182,3 +184,6 @@ cython_debug/
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/
.vimspector.json
# Visual Studio Code
.vscode/

.pre-commit-config.yaml

@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.236
rev: v0.0.238
hooks:
- id: ruff
args: [--fix]
@@ -15,6 +15,17 @@ repos:
hooks:
- id: cargo-fmt
name: cargo fmt
entry: cargo +nightly fmt --
entry: cargo fmt --
language: rust
types: [rust]
- id: clippy
name: clippy
entry: cargo clippy --workspace --all-targets --all-features
language: rust
pass_filenames: false
- id: dev-generate-all
name: dev-generate-all
entry: cargo dev generate-all
language: rust
pass_filenames: false
exclude: target

BREAKING_CHANGES.md

@@ -1,5 +1,81 @@
# Breaking Changes
## 0.0.238
### `select`, `extend-select`, `ignore`, and `extend-ignore` have new semantics ([#2312](https://github.com/charliermarsh/ruff/pull/2312))
Previously, the interplay between `select` and its related options could lead to unexpected
behavior. For example, `ruff --select E501 --ignore ALL` and `ruff --select E501 --extend-ignore
ALL` behaved differently. (See [#2312](https://github.com/charliermarsh/ruff/pull/2312) for more
examples.)
When Ruff determines the enabled rule set, it has to reconcile `select` and `ignore` from a variety
of sources, including the current `pyproject.toml`, any inherited `pyproject.toml` files, and the
CLI.
The new semantics are such that Ruff uses the "highest-priority" `select` as the basis for the rule
set, and then applies any `extend-select`, `ignore`, and `extend-ignore` adjustments. CLI options
are given higher priority than `pyproject.toml` options, and the current `pyproject.toml` file is
given higher priority than any inherited `pyproject.toml` files.
`extend-select` and `extend-ignore` are no longer given "top priority"; instead, they merely append
to the `select` and `ignore` lists, as in Flake8.
This change is largely backwards compatible -- most users should experience no change in behavior.
However, as an example of a breaking change, consider the following:
```toml
[tool.ruff]
ignore = ["F401"]
```
Running `ruff --select F` would previously have enabled all `F` rules, apart from `F401`. Now, it
will enable all `F` rules, including `F401`, as the command line's `--select` resets the resolution.
### `remove-six-compat` (`UP016`) has been removed ([#2332](https://github.com/charliermarsh/ruff/pull/2332))
The `remove-six-compat` rule has been removed. This rule was only useful for one-time Python 2-to-3
upgrades.
## 0.0.237
### `--explain`, `--clean`, and `--generate-shell-completion` are now subcommands ([#2190](https://github.com/charliermarsh/ruff/pull/2190))
`--explain`, `--clean`, and `--generate-shell-completion` are now implemented as subcommands:
```shell
ruff .                             # Still works! And will always work.
ruff check .                       # New! Also works.

ruff --explain E402                # Still works.
ruff rule E402                     # New! Also works. (And preferred.)

# Oops! The command has to come first.
ruff --format json --explain E402  # No longer works.
ruff --explain E402 --format json  # Still works!
ruff rule E402 --format json       # Works! (And preferred.)
```
This change is largely backwards compatible -- most users should experience
no change in behavior. However, please note the following exceptions:
* Subcommands will now fail when invoked with unsupported arguments, instead
of silently ignoring them. For example, the following will now fail:
ruff --clean --respect-gitignore
(the `clean` command doesn't support `--respect-gitignore`.)
* The semantics of `ruff <arg>` have changed slightly when `<arg>` is a valid subcommand.
For example, prior to this release, running `ruff rule` would run `ruff` over a file or
directory called `rule`. Now, `ruff rule` would invoke the `rule` subcommand. This should
only impact projects with files or directories named `rule`, `check`, `explain`, `clean`,
or `generate-shell-completion`.
* Scripts that invoke ruff should supply `--` before any positional arguments.
(The semantics of `ruff -- <arg>` have not changed.)
* `--explain` previously treated `--format grouped` as a synonym for `--format text`.
This is no longer supported; instead, use `--format text`.
## 0.0.226
### `misplaced-comparison-constant` (`PLC2201`) was deprecated in favor of `SIM300` ([#1980](https://github.com/charliermarsh/ruff/pull/1980))

CONTRIBUTING.md

@@ -39,20 +39,24 @@ cargo run resources/test/fixtures --no-cache
```
Prior to opening a pull request, ensure that your code has been auto-formatted,
and that it passes both the lint and test validation checks.
For rustfmt and Clippy, we use [nightly Rust][nightly], as it is stricter than stable Rust.
(However, tests and builds use stable Rust.)
and that it passes both the lint and test validation checks:
```shell
cargo +nightly fmt --all # Auto-formatting...
cargo +nightly clippy --fix --workspace --all-targets --all-features # Linting...
cargo fmt --all # Auto-formatting...
cargo clippy --fix --workspace --all-targets --all-features # Linting...
cargo test --all # Testing...
```
These checks will run on GitHub Actions when you open your Pull Request, but running them locally
will save you time and expedite the merge process.
If you have `pre-commit` [installed](https://pre-commit.com/#installation) then you can use it to
assist with formatting and linting. The following command will run the `pre-commit` hooks:
```shell
pre-commit run --all-files
```
Your Pull Request will be reviewed by a maintainer, which may involve a few rounds of iteration
prior to merging.
@@ -79,7 +83,7 @@ To trigger the violation, you'll likely want to augment the logic in `src/checke
defines the Python AST visitor, responsible for iterating over the abstract syntax tree and
collecting diagnostics as it goes.
If you need to inspect the AST, you can run `cargo +nightly dev print-ast` with a Python file. Grep
If you need to inspect the AST, you can run `cargo dev print-ast` with a Python file. Grep
for the `Check::new` invocations to understand how other, similar rules are implemented.
To add a test fixture, create a file under `resources/test/fixtures/[linter]`, named to match
@@ -87,7 +91,7 @@ the code you defined earlier (e.g., `resources/test/fixtures/pycodestyle/E402.py
contain a variety of violations and non-violations designed to evaluate and demonstrate the behavior
of your lint rule.
Run `cargo +nightly dev generate-all` to generate the code for your new fixture. Then run Ruff
Run `cargo dev generate-all` to generate the code for your new fixture. Then run Ruff
locally with (e.g.) `cargo run resources/test/fixtures/pycodestyle/E402.py --no-cache --select E402`.
Once you're satisfied with the output, codify the behavior as a snapshot test by adding a new
@@ -95,7 +99,7 @@ Once you're satisfied with the output, codify the behavior as a snapshot test by
Your test will fail, but you'll be prompted to follow up with `cargo insta review`. Accept the
generated snapshot, then commit the snapshot file alongside the rest of your changes.
Finally, regenerate the documentation and generated code with `cargo +nightly dev generate-all`.
Finally, regenerate the documentation and generated code with `cargo dev generate-all`.
### Example: Adding a new configuration option
@@ -121,7 +125,7 @@ You may also want to add the new configuration option to the `flake8-to-ruff` to
responsible for converting `flake8` configuration files to Ruff's TOML format. This logic
lives in `flake8_to_ruff/src/converter.rs`.
Finally, regenerate the documentation and generated code with `cargo +nightly dev generate-all`.
Finally, regenerate the documentation and generated code with `cargo dev generate-all`.
## Release process
@@ -131,5 +135,3 @@ them to [PyPI](https://pypi.org/project/ruff/).
Ruff follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
[nightly]: https://rust-lang.github.io/rustup/concepts/channels.html#working-with-nightly-rust

Cargo.lock

@@ -750,7 +750,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.236"
version = "0.0.238"
dependencies = [
"anyhow",
"clap 4.1.4",
@@ -1922,7 +1922,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.236"
version = "0.0.238"
dependencies = [
"anyhow",
"bitflags",
@@ -1977,7 +1977,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.236"
version = "0.0.238"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2014,7 +2014,7 @@ dependencies = [
[[package]]
name = "ruff_dev"
version = "0.0.236"
version = "0.0.238"
dependencies = [
"anyhow",
"clap 4.1.4",
@@ -2035,7 +2035,7 @@ dependencies = [
[[package]]
name = "ruff_macros"
version = "0.0.236"
version = "0.0.238"
dependencies = [
"once_cell",
"proc-macro2",

Cargo.toml

@@ -8,7 +8,7 @@ default-members = [".", "ruff_cli"]
[package]
name = "ruff"
version = "0.0.236"
version = "0.0.238"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -46,7 +46,7 @@ num-traits = "0.2.15"
once_cell = { version = "1.16.0" }
path-absolutize = { version = "3.0.14", features = ["once_cell_cache", "use_unix_paths_on_wasm"] }
regex = { version = "1.6.0" }
ruff_macros = { version = "0.0.236", path = "ruff_macros" }
ruff_macros = { version = "0.0.238", path = "ruff_macros" }
rustc-hash = { version = "1.1.0" }
rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }

README.md

@@ -1,3 +1,5 @@
<!-- Begin section: Overview -->
# Ruff
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v1.json)](https://github.com/charliermarsh/ruff)
@@ -57,6 +59,7 @@ Ruff is extremely actively developed and used in major open-source projects like
- [Sphinx](https://github.com/sphinx-doc/sphinx)
- [Hatch](https://github.com/pypa/hatch)
- [Jupyter](https://github.com/jupyter-server/jupyter_server)
- [SciPy](https://github.com/scipy/scipy)
- [Great Expectations](https://github.com/great-expectations/great_expectations)
- [Polars](https://github.com/pola-rs/polars)
- [Ibis](https://github.com/ibis-project/ibis)
@@ -70,6 +73,7 @@ Ruff is extremely actively developed and used in major open-source projects like
- [build (PyPA)](https://github.com/pypa/build)
- [Babel](https://github.com/python-babel/babel)
- [featuretools](https://github.com/alteryx/featuretools)
- [meson-python](https://github.com/mesonbuild/meson-python)
Read the [launch blog post](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
@@ -106,6 +110,8 @@ developer of [Zulip](https://github.com/zulip/zulip):
> This is just ridiculously fast... `ruff` is amazing.
<!-- End section: Overview -->
## Table of Contents
1. [Installation and Usage](#installation-and-usage)
@@ -161,6 +167,8 @@ developer of [Zulip](https://github.com/zulip/zulip):
## Installation and Usage
<!-- Begin section: Installation and Usage -->
### Installation
Ruff is available as [`ruff`](https://pypi.org/project/ruff/) on PyPI:
@@ -200,9 +208,10 @@ apk add ruff
To run Ruff, try any of the following:
```shell
ruff path/to/code/to/lint.py # Run Ruff over `lint.py`
ruff path/to/code/ # Run Ruff over all files in `/path/to/code` (and any subdirectories)
ruff path/to/code/*.py # Run Ruff over all `.py` files in `/path/to/code`
ruff . # Lint all files in the current directory (and any subdirectories)
ruff path/to/code/ # Lint all files in `/path/to/code` (and any subdirectories)
ruff path/to/code/*.py # Lint all `.py` files in `/path/to/code`
ruff path/to/code/to/file.py # Lint `file.py`
```
You can run Ruff in `--watch` mode to automatically re-run on-change:
@@ -216,13 +225,17 @@ Ruff also works with [pre-commit](https://pre-commit.com):
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.236'
rev: 'v0.0.238'
hooks:
- id: ruff
```
<!-- End section: Installation and Usage -->
## Configuration
<!-- Begin section: Configuration -->
Ruff is configurable both via `pyproject.toml` and the command line. For a full list of configurable
options, see the [API reference](#reference).
@@ -315,7 +328,10 @@ prefix, followed by three digits (e.g., `F401`). The prefix indicates that "sour
rules is determined by the `select` and `ignore` options, which support both the full code (e.g.,
`F401`) and the prefix (e.g., `F`).
As a special-case, Ruff also supports the `ALL` code, which enables all rules.
As a special-case, Ruff also supports the `ALL` code, which enables all rules. Note that some of the
`pydocstyle` rules conflict (e.g., `D203` and `D211`) as they represent alternative docstring
formats. Enabling `ALL` without further configuration may result in suboptimal behavior, especially
for the `pydocstyle` plugin.
If you're wondering how to configure Ruff, here are some **recommended guidelines**:
@@ -333,15 +349,14 @@ an equivalent schema (though the `[tool.ruff]` hierarchy can be omitted). For ex
`pyproject.toml` described above would be represented via the following `ruff.toml`:
```toml
# Enable Pyflakes and pycodestyle rules.
select = ["E", "F"]
# Enable flake8-bugbear (`B`) rules.
select = ["E", "F", "B"]
# Never enforce `E501` (line length violations).
ignore = ["E501"]
# Always autofix, but never try to fix `F401` (unused imports).
fix = true
unfixable = ["F401"]
# Avoid trying to fix flake8-bugbear (`B`) violations.
unfixable = ["B"]
# Ignore `E402` (import violations) in all `__init__.py` files, and in `path/to/file.py`.
[per-file-ignores]
@@ -351,22 +366,53 @@ unfixable = ["F401"]
For a full list of configurable options, see the [API reference](#reference).
Some common configuration settings can be provided via the command-line:
### Command-line interface
Some configuration settings can be provided via the command-line, such as those related to
rule enablement and disablement, file discovery, logging level, and more:
```shell
ruff path/to/code/ --select F401 --select F403
ruff path/to/code/ --select F401 --select F403 --quiet
```
See `ruff --help` for more:
See `ruff help` for more on Ruff's top-level commands:
<!-- Begin auto-generated cli help. -->
<!-- Begin auto-generated command help. -->
```
Ruff: An extremely fast Python linter.
Usage: ruff [OPTIONS] [FILES]...
Usage: ruff [OPTIONS] <COMMAND>
Commands:
check Run Ruff on the given files or directories (default)
rule Explain a rule
linter List all supported upstream linters
clean Clear any caches in the current directory and any subdirectories
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
-V, --version Print version
Log levels:
-v, --verbose Enable verbose logging
-q, --quiet Print lint violations, but nothing else
-s, --silent Disable all logging (but still exit with status code "1" upon detecting lint violations)
For help with a specific command, see: `ruff help <command>`.
```
<!-- End auto-generated command help. -->
Or `ruff help check` for more on the linting command:
<!-- Begin auto-generated subcommand help. -->
```
Run Ruff on the given files or directories (default)
Usage: ruff check [OPTIONS] [FILES]...
Arguments:
[FILES]...
[FILES]... List of files or directories to check
Options:
--fix Attempt to automatically fix lint violations
@@ -376,11 +422,11 @@ Options:
--fix-only Fix any fixable lint violations, but don't report on leftover violations. Implies `--fix`
--format <FORMAT> Output serialization format for violations [env: RUFF_FORMAT=] [possible values: text, json, junit, grouped, github, gitlab, pylint]
--config <CONFIG> Path to the `pyproject.toml` or `ruff.toml` file to use for configuration
--statistics Show counts for every rule with at least one violation
--add-noqa Enable automatic additions of `noqa` directives to failing lines
--show-files See the files Ruff will be run against with the current settings
--show-settings See the settings Ruff will use to lint a given Python file
-h, --help Print help
-V, --version Print version
Rule selection:
--select <RULE_CODE>
@@ -389,8 +435,6 @@ Rule selection:
Comma-separated list of rule codes to disable
--extend-select <RULE_CODE>
Like --select, but adds additional rule codes on top of the selected ones
--extend-ignore <RULE_CODE>
Like --ignore, but adds additional rule codes on top of the ignored ones
--per-file-ignores <PER_FILE_IGNORES>
List of mappings from file pattern to code to exclude
--fixable <RULE_CODE>
@@ -426,16 +470,12 @@ Miscellaneous:
--update-check
Enable or disable automatic update checks
Subcommands:
--explain <EXPLAIN> Explain a rule
--clean Clear any caches in the current directory or any subdirectories
Log levels:
-v, --verbose Enable verbose logging
-q, --quiet Print lint violations, but nothing else
-s, --silent Disable all logging (but still exit with status code "1" upon detecting lint violations)
```
<!-- End auto-generated cli help. -->
<!-- End auto-generated subcommand help. -->
### `pyproject.toml` discovery
@@ -454,10 +494,9 @@ There are a few exceptions to these rules:
resolved relative to the _current working directory_.
3. If no `pyproject.toml` file is found in the filesystem hierarchy, Ruff will fall back to using
a default configuration. If a user-specific configuration file exists
at `${config_dir}/ruff/pyproject.toml`,
that file will be used instead of the default configuration, with `${config_dir}` being
determined via the [`dirs`](https://docs.rs/dirs/4.0.0/dirs/fn.config_dir.html) crate, and all
relative paths being again resolved relative to the _current working directory_.
at `${config_dir}/ruff/pyproject.toml`, that file will be used instead of the default
configuration, with `${config_dir}` being determined via the [`dirs`](https://docs.rs/dirs/4.0.0/dirs/fn.config_dir.html)
crate, and all relative paths being again resolved relative to the _current working directory_.
4. Any `pyproject.toml`-supported settings that are provided on the command-line (e.g., via
`--select`) will override the settings in _every_ resolved configuration file.
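The discovery order above can be sketched in a few lines. This is a minimal illustration, not Ruff's actual (Rust) implementation: walk up from the linted path looking for a `pyproject.toml`, then fall back to a user-level `${config_dir}/ruff/pyproject.toml`, then to the built-in defaults.

```python
from pathlib import Path
from typing import Optional

def find_config(start: Path, user_config_dir: Path) -> Optional[Path]:
    """Return the config file that would apply to `start`, or None for defaults.

    Hypothetical helper mirroring the discovery rules described above.
    """
    # 1. Look for a `pyproject.toml` in `start` and each of its ancestors.
    for directory in [start, *start.parents]:
        candidate = directory / "pyproject.toml"
        if candidate.is_file():
            return candidate
    # 2. Fall back to the user-level configuration, if one exists.
    fallback = user_config_dir / "ruff" / "pyproject.toml"
    if fallback.is_file():
        return fallback
    # 3. Otherwise, the caller uses the default configuration.
    return None
```

Note that, per exception 4 above, CLI flags like `--select` still override whichever file this search resolves.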
@@ -489,6 +528,33 @@ By default, Ruff will also skip any files that are omitted via `.ignore`, `.giti
Files that are passed to `ruff` directly are always linted, regardless of the above criteria.
For example, `ruff /path/to/excluded/file.py` will always lint `file.py`.
### Rule resolution
The set of enabled rules is controlled via the [`select`](#select) and [`ignore`](#ignore) settings,
along with the [`extend-select`](#extend-select) and [`extend-ignore`](#extend-ignore) modifiers.
To resolve the enabled rule set, Ruff may need to reconcile `select` and `ignore` from a variety
of sources, including the current `pyproject.toml`, any inherited `pyproject.toml` files, and the
CLI (e.g., `--select`).
In those scenarios, Ruff uses the "highest-priority" `select` as the basis for the rule set, and
then applies any `extend-select`, `ignore`, and `extend-ignore` adjustments. CLI options are given
higher priority than `pyproject.toml` options, and the current `pyproject.toml` file is given higher
priority than any inherited `pyproject.toml` files.
For example, given the following `pyproject.toml` file:
```toml
[tool.ruff]
select = ["E", "F"]
ignore = ["F401"]
```
Running `ruff --select F401` would result in Ruff enforcing `F401`, and no other rules.
Running `ruff --extend-select B` would result in Ruff enforcing the `E`, `F`, and `B` rules, with
the exception of `F401`.
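The priority scheme above can be sketched as a small helper. This is a hypothetical simplification, not Ruff's real resolver: it treats each code as atomic (ignoring prefix expansion, so `F` does not expand to `F401` etc.), with the highest-priority `select` forming the basis, `extend-select` appending to it, and `ignore`/`extend-ignore` subtracting from it.

```python
def resolve(select, extend_select=(), ignore=(), extend_ignore=()):
    """Resolve an enabled rule set from already-prioritized selectors."""
    enabled = set(select) | set(extend_select)
    return enabled - set(ignore) - set(extend_ignore)

# CLI `--select F401` resets the basis, so a lower-priority `ignore = ["F401"]`
# no longer wins over it:
cli_result = resolve(select={"F401"})

# `--extend-select B` merely appends to the `pyproject.toml` basis:
extended = resolve(select={"E", "F"}, extend_select={"B"}, ignore={"F401"})
```

Because `extend-select` and `extend-ignore` only append, the outcome depends solely on which `select` and `ignore` lists win the priority contest, as in Flake8.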
### Ignoring errors
To omit a lint rule entirely, add it to the "ignore" list via [`ignore`](#ignore) or
@@ -552,8 +618,12 @@ Third, Ruff can _automatically add_ `noqa` directives to all failing lines. This
migrating a new codebase to Ruff. You can run `ruff /path/to/file.py --add-noqa` to automatically
add `noqa` directives to all failing lines, with the appropriate rule codes.
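The effect of `--add-noqa` can be sketched roughly as follows. The data shapes here are hypothetical (Ruff's implementation differs, and also merges with pre-existing directives): append a `# noqa: <codes>` directive to each line with outstanding violations.

```python
def add_noqa(source: str, violations: dict[int, list[str]]) -> str:
    """Append `# noqa` directives to failing lines.

    `violations` maps 1-based line numbers to the rule codes raised there.
    """
    lines = source.splitlines()
    for lineno, codes in violations.items():
        lines[lineno - 1] += "  # noqa: " + ", ".join(sorted(codes))
    return "\n".join(lines)
```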
<!-- End section: Configuration -->
## Supported Rules
<!-- Begin section: Rules -->
Regardless of the rule's origin, Ruff re-implements every rule in Rust as a first-party feature.
By default, Ruff enables Flake8's `E` and `F` rules. Ruff supports all rules from the `F` category,
@@ -577,20 +647,20 @@ For more, see [Pyflakes](https://pypi.org/project/pyflakes/) on PyPI.
| F405 | import-star-usage | `{name}` may be undefined, or defined from star imports: {sources} | |
| F406 | import-star-not-permitted | `from {name} import *` only allowed at module level | |
| F407 | future-feature-not-defined | Future feature `{name}` is not defined | |
| F501 | percent-format-invalid-format | '...' % ... has invalid format string: {message} | |
| F502 | percent-format-expected-mapping | '...' % ... expected mapping but got sequence | |
| F503 | percent-format-expected-sequence | '...' % ... expected sequence but got mapping | |
| F504 | percent-format-extra-named-arguments | '...' % ... has unused named argument(s): {message} | 🛠 |
| F505 | percent-format-missing-argument | '...' % ... is missing argument(s) for placeholder(s): {message} | |
| F506 | percent-format-mixed-positional-and-named | '...' % ... has mixed positional and named placeholders | |
| F507 | percent-format-positional-count-mismatch | '...' % ... has {wanted} placeholder(s) but {got} substitution(s) | |
| F508 | percent-format-star-requires-sequence | '...' % ... `*` specifier requires sequence | |
| F509 | percent-format-unsupported-format-character | '...' % ... has unsupported format character '{char}' | |
| F521 | string-dot-format-invalid-format | '...'.format(...) has invalid format string: {message} | |
| F522 | string-dot-format-extra-named-arguments | '...'.format(...) has unused named argument(s): {message} | 🛠 |
| F523 | string-dot-format-extra-positional-arguments | '...'.format(...) has unused arguments at position(s): {message} | |
| F524 | string-dot-format-missing-arguments | '...'.format(...) is missing argument(s) for placeholder(s): {message} | |
| F525 | string-dot-format-mixing-automatic | '...'.format(...) mixes automatic and manual numbering | |
| F501 | percent-format-invalid-format | `%`-format string has invalid format string: {message} | |
| F502 | percent-format-expected-mapping | `%`-format string expected mapping but got sequence | |
| F503 | percent-format-expected-sequence | `%`-format string expected sequence but got mapping | |
| F504 | percent-format-extra-named-arguments | `%`-format string has unused named argument(s): {message} | 🛠 |
| F505 | percent-format-missing-argument | `%`-format string is missing argument(s) for placeholder(s): {message} | |
| F506 | percent-format-mixed-positional-and-named | `%`-format string has mixed positional and named placeholders | |
| F507 | percent-format-positional-count-mismatch | `%`-format string has {wanted} placeholder(s) but {got} substitution(s) | |
| F508 | percent-format-star-requires-sequence | `%`-format string `*` specifier requires sequence | |
| F509 | percent-format-unsupported-format-character | `%`-format string has unsupported format character '{char}' | |
| F521 | string-dot-format-invalid-format | `.format` call has invalid format string: {message} | |
| F522 | string-dot-format-extra-named-arguments | `.format` call has unused named argument(s): {message} | 🛠 |
| F523 | string-dot-format-extra-positional-arguments | `.format` call has unused arguments at position(s): {message} | |
| F524 | string-dot-format-missing-arguments | `.format` call is missing argument(s) for placeholder(s): {message} | |
| F525 | string-dot-format-mixing-automatic | `.format` string mixes automatic and manual numbering | |
| F541 | f-string-missing-placeholders | f-string without any placeholders | 🛠 |
| F601 | multi-value-repeated-key-literal | Dictionary key literal `{name}` repeated | 🛠 |
| F602 | multi-value-repeated-key-variable | Dictionary key `{name}` repeated | 🛠 |
@@ -757,7 +827,6 @@ For more, see [pyupgrade](https://pypi.org/project/pyupgrade/) on PyPI.
| UP013 | convert-typed-dict-functional-to-class | Convert `{name}` from `TypedDict` functional to class syntax | 🛠 |
| UP014 | convert-named-tuple-functional-to-class | Convert `{name}` from `NamedTuple` functional to class syntax | 🛠 |
| UP015 | redundant-open-modes | Unnecessary open mode parameters | 🛠 |
| UP016 | remove-six-compat | Unnecessary `six` compatibility usage | 🛠 |
| UP017 | datetime-timezone-utc | Use `datetime.UTC` alias | 🛠 |
| UP018 | native-literals | Unnecessary call to `{literal_type}` | 🛠 |
| UP019 | typing-text-str-alias | `typing.Text` is deprecated, use `str` | 🛠 |
@@ -826,6 +895,7 @@ For more, see [flake8-bandit](https://pypi.org/project/flake8-bandit/) on PyPI.
| S106 | hardcoded-password-func-arg | Possible hardcoded password: "{}" | |
| S107 | hardcoded-password-default | Possible hardcoded password: "{}" | |
| S108 | hardcoded-temp-file | Probable insecure usage of temporary file or directory: "{}" | |
| S110 | try-except-pass | `try`-`except`-`pass` detected, consider logging the exception | |
| S113 | request-without-timeout | Probable use of requests call with timeout set to `{value}` | |
| S324 | hashlib-insecure-hash-function | Probable use of insecure hash functions in `hashlib`: "{}" | |
| S501 | request-with-no-cert-validation | Probable use of `{string}` call with `verify=False` disabling SSL certificate checks | |
@@ -864,12 +934,12 @@ For more, see [flake8-bugbear](https://pypi.org/project/flake8-bugbear/) on PyPI
| B004 | unreliable-callable-check | Using `hasattr(x, '__call__')` to test if x is callable is unreliable. Use `callable(x)` for consistent results. | |
| B005 | strip-with-multi-characters | Using `.strip()` with multi-character strings is misleading the reader | |
| B006 | mutable-argument-default | Do not use mutable data structures for argument defaults | |
| B007 | unused-loop-control-variable | Loop control variable `{name}` not used within loop body | |
| B007 | unused-loop-control-variable | Loop control variable `{name}` not used within loop body | 🛠 |
| B008 | function-call-argument-default | Do not perform function call `{name}` in argument defaults | |
| B009 | get-attr-with-constant | Do not call `getattr` with a constant attribute value. It is not any safer than normal property access. | 🛠 |
| B010 | set-attr-with-constant | Do not call `setattr` with a constant attribute value. It is not any safer than normal property access. | 🛠 |
| B011 | do-not-assert-false | Do not `assert False` (`python -O` removes these calls), raise `AssertionError()` | 🛠 |
| B012 | jump-statement-in-finally | `{name}` inside finally blocks cause exceptions to be silenced | |
| B012 | jump-statement-in-finally | `{name}` inside `finally` blocks cause exceptions to be silenced | |
| B013 | redundant-tuple-in-exception-handler | A length-one tuple literal is redundant. Write `except {name}` instead of `except ({name},)`. | 🛠 |
| B014 | duplicate-handler-exception | Exception handler with duplicate exception: `{name}` | 🛠 |
| B015 | useless-comparison | Pointless comparison. This comparison does nothing but waste CPU instructions. Either prepend `assert` or remove it. | |
@@ -938,14 +1008,14 @@ For more, see [flake8-datetimez](https://pypi.org/project/flake8-datetimez/) on
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| DTZ001 | call-datetime-without-tzinfo | The use of `datetime.datetime()` without `tzinfo` argument is not allowed | |
| DTZ002 | call-datetime-today | The use of `datetime.datetime.today()` is not allowed | |
| DTZ003 | call-datetime-utcnow | The use of `datetime.datetime.utcnow()` is not allowed | |
| DTZ004 | call-datetime-utcfromtimestamp | The use of `datetime.datetime.utcfromtimestamp()` is not allowed | |
| DTZ002 | call-datetime-today | The use of `datetime.datetime.today()` is not allowed, use `datetime.datetime.now(tz=)` instead | |
| DTZ003 | call-datetime-utcnow | The use of `datetime.datetime.utcnow()` is not allowed, use `datetime.datetime.now(tz=)` instead | |
| DTZ004 | call-datetime-utcfromtimestamp | The use of `datetime.datetime.utcfromtimestamp()` is not allowed, use `datetime.datetime.fromtimestamp(ts, tz=)` instead | |
| DTZ005 | call-datetime-now-without-tzinfo | The use of `datetime.datetime.now()` without `tz` argument is not allowed | |
| DTZ006 | call-datetime-fromtimestamp | The use of `datetime.datetime.fromtimestamp()` without `tz` argument is not allowed | |
| DTZ007 | call-datetime-strptime-without-zone | The use of `datetime.datetime.strptime()` without %z must be followed by `.replace(tzinfo=)` or `.astimezone()` | |
| DTZ011 | call-date-today | The use of `datetime.date.today()` is not allowed. | |
| DTZ012 | call-date-fromtimestamp | The use of `datetime.date.fromtimestamp()` is not allowed | |
| DTZ011 | call-date-today | The use of `datetime.date.today()` is not allowed, use `datetime.datetime.now(tz=).date()` instead | |
| DTZ012 | call-date-fromtimestamp | The use of `datetime.date.fromtimestamp()` is not allowed, use `datetime.datetime.fromtimestamp(ts, tz=).date()` instead | |
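The replacements suggested in the DTZ messages above all amount to passing an explicit timezone. A standard-library sketch (using UTC as the example zone):

```python
import datetime

# DTZ003: prefer `now(tz=...)` over `utcnow()`, which returns a naive datetime.
now = datetime.datetime.now(tz=datetime.timezone.utc)

# DTZ004: prefer `fromtimestamp(ts, tz=...)` over `utcfromtimestamp(ts)`.
stamp = datetime.datetime.fromtimestamp(0, tz=datetime.timezone.utc)

# DTZ011: prefer `now(tz=...).date()` over `date.today()`.
today = datetime.datetime.now(tz=datetime.timezone.utc).date()
```

The tz-aware variants carry a `tzinfo` attribute, so comparisons and arithmetic between them are unambiguous.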
### flake8-debugger (T10)
@@ -984,7 +1054,7 @@ For more, see [flake8-implicit-str-concat](https://pypi.org/project/flake8-impli
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| ISC001 | single-line-implicit-string-concatenation | Implicitly concatenated string literals on one line | |
| ISC002 | multi-line-implicit-string-concatenation | Implicitly concatenated string literals over continuation line | |
| ISC002 | multi-line-implicit-string-concatenation | Implicitly concatenated string literals over multiple lines | |
| ISC003 | explicit-string-concatenation | Explicitly concatenated string should be implicitly concatenated | |
### flake8-import-conventions (ICN)
@@ -1048,7 +1118,7 @@ For more, see [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style
| ---- | ---- | ------- | --- |
| PT001 | incorrect-fixture-parentheses-style | Use `@pytest.fixture{expected_parens}` over `@pytest.fixture{actual_parens}` | 🛠 |
| PT002 | fixture-positional-args | Configuration for fixture `{function}` specified via positional args, use kwargs | |
| PT003 | extraneous-scope-function | `scope='function'` is implied in `@pytest.fixture()` | |
| PT003 | extraneous-scope-function | `scope='function'` is implied in `@pytest.fixture()` | 🛠 |
| PT004 | missing-fixture-name-underscore | Fixture `{function}` does not return anything, add leading underscore | |
| PT005 | incorrect-fixture-name-underscore | Fixture `{function}` returns a value, remove leading underscore | |
| PT006 | parametrize-names-wrong-type | Wrong name(s) type in `@pytest.mark.parametrize`, expected `{expected}` | 🛠 |
@@ -1178,13 +1248,13 @@ For more, see [flake8-use-pathlib](https://pypi.org/project/flake8-use-pathlib/)
| PTH106 | pathlib-rmdir | `os.rmdir` should be replaced by `.rmdir()` | |
| PTH107 | pathlib-remove | `os.remove` should be replaced by `.unlink()` | |
| PTH108 | pathlib-unlink | `os.unlink` should be replaced by `.unlink()` | |
| PTH109 | pathlib-getcwd | `os.getcwd()` should be replaced by `Path.cwd()` | |
| PTH109 | pathlib-getcwd | `os.getcwd` should be replaced by `Path.cwd()` | |
| PTH110 | pathlib-exists | `os.path.exists` should be replaced by `.exists()` | |
| PTH111 | pathlib-expanduser | `os.path.expanduser` should be replaced by `.expanduser()` | |
| PTH112 | pathlib-is-dir | `os.path.isdir` should be replaced by `.is_dir()` | |
| PTH113 | pathlib-is-file | `os.path.isfile` should be replaced by `.is_file()` | |
| PTH114 | pathlib-is-link | `os.path.islink` should be replaced by `.is_symlink()` | |
| PTH115 | pathlib-readlink | `os.readlink(` should be replaced by `.readlink()` | |
| PTH115 | pathlib-readlink | `os.readlink` should be replaced by `.readlink()` | |
| PTH116 | pathlib-stat | `os.stat` should be replaced by `.stat()` or `.owner()` or `.group()` | |
| PTH117 | pathlib-is-abs | `os.path.isabs` should be replaced by `.is_absolute()` | |
| PTH118 | pathlib-join | `os.path.join` should be replaced by foo_path / "bar" | |
@@ -1209,7 +1279,7 @@ For more, see [pandas-vet](https://pypi.org/project/pandas-vet/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| PD002 | use-of-inplace-argument | `inplace=True` should be avoided; it has inconsistent behavior | |
| PD002 | use-of-inplace-argument | `inplace=True` should be avoided; it has inconsistent behavior | 🛠 |
| PD003 | use-of-dot-is-null | `.isna` is preferred to `.isnull`; functionality is equivalent | |
| PD004 | use-of-dot-not-null | `.notna` is preferred to `.notnull`; functionality is equivalent | |
| PD007 | use-of-dot-ix | `.ix` is deprecated; use more explicit `.loc` or `.iloc` | |
@@ -1248,6 +1318,8 @@ For more, see [Pylint](https://pypi.org/project/pylint/) on PyPI.
| ---- | ---- | ------- | --- |
| PLE0117 | nonlocal-without-binding | Nonlocal name `{name}` found without binding | |
| PLE0118 | used-prior-global-declaration | Name `{name}` is used prior to global declaration on line {line} | |
| PLE0604 | invalid-all-object | Invalid object in `__all__`, must contain only strings | |
| PLE0605 | invalid-all-format | Invalid format for `__all__`, must be `tuple` or `list` | |
| PLE1142 | await-outside-async | `await` should be used within an async function | |
#### Refactor (PLR)
@@ -1256,6 +1328,7 @@ For more, see [Pylint](https://pypi.org/project/pylint/) on PyPI.
| PLR0133 | constant-comparison | Two constants compared in a comparison, consider replacing `{left_constant} {op} {right_constant}` | |
| PLR0206 | property-with-parameters | Cannot have defined parameters for properties | |
| PLR0402 | consider-using-from-import | Use `from {module} import {name}` in lieu of alias | |
| PLR0913 | too-many-args | Too many arguments to function call ({c_args}/{max_args}) | |
| PLR1701 | consider-merging-isinstance | Merge these isinstance calls: `isinstance({obj}, ({types}))` | |
| PLR1722 | use-sys-exit | Use `sys.exit()` instead of `{name}` | 🛠 |
| PLR2004 | magic-value-comparison | Magic value used in comparison, consider replacing {value} with a constant variable | |
@@ -1294,8 +1367,12 @@ For more, see [tryceratops](https://pypi.org/project/tryceratops/1.1.0/) on PyPI
<!-- End auto-generated sections. -->
<!-- End section: Rules -->
## Editor Integrations
<!-- Begin section: Editor Integrations -->
### VS Code (Official)
Download the [Ruff VS Code extension](https://marketplace.visualstudio.com/items?itemName=charliermarsh.ruff),
@@ -1534,8 +1611,12 @@ jobs:
run: ruff --format=github .
```
<!-- End section: Editor Integrations -->
## FAQ
<!-- Begin section: FAQ -->
### Is Ruff compatible with Black?
Yes. Ruff is compatible with [Black](https://github.com/psf/black) out-of-the-box, as long as
@@ -1578,9 +1659,9 @@ natively, including:
- [`flake8-executable`](https://pypi.org/project/flake8-executable/)
- [`flake8-implicit-str-concat`](https://pypi.org/project/flake8-implicit-str-concat/)
- [`flake8-import-conventions`](https://github.com/joaopalmeiro/flake8-import-conventions)
- [`flake8-logging-format](https://pypi.org/project/flake8-logging-format/)
- [`flake8-logging-format`](https://pypi.org/project/flake8-logging-format/)
- [`flake8-no-pep420`](https://pypi.org/project/flake8-no-pep420)
- [`flake8-pie`](https://pypi.org/project/flake8-pie/) ([#1543](https://github.com/charliermarsh/ruff/issues/1543))
- [`flake8-pie`](https://pypi.org/project/flake8-pie/)
- [`flake8-print`](https://pypi.org/project/flake8-print/)
- [`flake8-pytest-style`](https://pypi.org/project/flake8-pytest-style/)
- [`flake8-quotes`](https://pypi.org/project/flake8-quotes/)
@@ -1588,7 +1669,7 @@ natively, including:
- [`flake8-simplify`](https://pypi.org/project/flake8-simplify/) ([#998](https://github.com/charliermarsh/ruff/issues/998))
- [`flake8-super`](https://pypi.org/project/flake8-super/)
- [`flake8-tidy-imports`](https://pypi.org/project/flake8-tidy-imports/)
- [`flake8-type-checking](https://pypi.org/project/flake8-type-checking/)
- [`flake8-type-checking`](https://pypi.org/project/flake8-type-checking/)
- [`flake8-use-pathlib`](https://pypi.org/project/flake8-use-pathlib/)
- [`isort`](https://pypi.org/project/isort/)
- [`mccabe`](https://pypi.org/project/mccabe/)
@@ -1630,6 +1711,21 @@ Unlike Pylint, Ruff is capable of automatically fixing its own lint violations.
Pylint parity is being tracked in [#970](https://github.com/charliermarsh/ruff/issues/970).
### How does Ruff compare to Mypy, or Pyright, or Pyre?
Ruff is a linter, not a type checker. It can detect some of the same problems that a type checker
can, but a type checker will catch certain errors that Ruff would miss. The opposite is also true:
Ruff will catch certain errors that a type checker would typically ignore.
For example, unlike a type checker, Ruff will notify you if an import is unused, by looking for
references to that import in the source code; on the other hand, a type checker could flag that you
passed an integer argument to a function that expects a string, which Ruff would miss. The
tools are complementary.
It's recommended that you use Ruff in conjunction with a type checker, like Mypy, Pyright, or Pyre,
with Ruff providing faster feedback on lint violations and the type checker providing more detailed
feedback on type errors.
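As a sketch of that division of labor (hypothetical snippet):

```python
import os  # F401: Ruff reports this unused import by scanning for references.


def greet(name: str) -> str:
    return "Hello, " + name


# A type checker would flag the call below (an int where a str is expected);
# Ruff, being a linter rather than a type checker, would not.
# greet(42)
```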
### Which tools does Ruff replace?
Today, Ruff can be used to replace Flake8 when used with any of the following plugins:
@@ -1651,9 +1747,9 @@ Today, Ruff can be used to replace Flake8 when used with any of the following pl
- [`flake8-executable`](https://pypi.org/project/flake8-executable/)
- [`flake8-implicit-str-concat`](https://pypi.org/project/flake8-implicit-str-concat/)
- [`flake8-import-conventions`](https://github.com/joaopalmeiro/flake8-import-conventions)
- [`flake8-logging-format](https://pypi.org/project/flake8-logging-format/)
- [`flake8-logging-format`](https://pypi.org/project/flake8-logging-format/)
- [`flake8-no-pep420`](https://pypi.org/project/flake8-no-pep420)
- [`flake8-pie`](https://pypi.org/project/flake8-pie/) ([#1543](https://github.com/charliermarsh/ruff/issues/1543))
- [`flake8-pie`](https://pypi.org/project/flake8-pie/)
- [`flake8-print`](https://pypi.org/project/flake8-print/)
- [`flake8-pytest-style`](https://pypi.org/project/flake8-pytest-style/)
- [`flake8-quotes`](https://pypi.org/project/flake8-quotes/)
@@ -1661,7 +1757,7 @@ Today, Ruff can be used to replace Flake8 when used with any of the following pl
- [`flake8-simplify`](https://pypi.org/project/flake8-simplify/) ([#998](https://github.com/charliermarsh/ruff/issues/998))
- [`flake8-super`](https://pypi.org/project/flake8-super/)
- [`flake8-tidy-imports`](https://pypi.org/project/flake8-tidy-imports/)
- [`flake8-type-checking](https://pypi.org/project/flake8-type-checking/)
- [`flake8-type-checking`](https://pypi.org/project/flake8-type-checking/)
- [`flake8-use-pathlib`](https://pypi.org/project/flake8-use-pathlib/)
- [`mccabe`](https://pypi.org/project/mccabe/)
- [`pandas-vet`](https://pypi.org/project/pandas-vet/)
@@ -1769,6 +1865,46 @@ matter how they're provided, which avoids accidental incompatibilities and simpl
Run `ruff /path/to/code.py --show-settings` to view the resolved settings for a given file.
### I want to use Ruff, but I don't want to use `pyproject.toml`. Is that possible?
Yes! In lieu of a `pyproject.toml` file, you can use a `ruff.toml` file for configuration. The two
files are functionally equivalent and have an identical schema, with the exception that a `ruff.toml`
file can omit the `[tool.ruff]` section header.
For example, given this `pyproject.toml`:
```toml
[tool.ruff]
line-length = 88
[tool.ruff.pydocstyle]
convention = "google"
```
You could instead use a `ruff.toml` file like so:
```toml
line-length = 88
[pydocstyle]
convention = "google"
```
Ruff doesn't currently support INI files, like `setup.cfg` or `tox.ini`.
### How can I change Ruff's default configuration?
When no configuration file is found, Ruff will look for a user-specific `pyproject.toml` or
`ruff.toml` file as a last resort. This behavior is similar to Flake8's `~/.config/flake8`.
On macOS, Ruff expects that file to be located at `/Users/Alice/Library/Application Support/ruff/ruff.toml`.
On Linux, Ruff expects that file to be located at `/home/alice/.config/ruff/ruff.toml`.
On Windows, Ruff expects that file to be located at `C:\Users\Alice\AppData\Roaming\ruff\ruff.toml`.
For more, see the [`dirs`](https://docs.rs/dirs/4.0.0/dirs/fn.config_dir.html) crate.
### Ruff tried to fix something, but it broke my code. What should I do?
Ruff's autofix is a best-effort mechanism. Given the dynamic nature of Python, it's difficult to
@@ -1787,6 +1923,8 @@ unfixable = ["B", "SIM", "TRY", "RUF"]
If you find a case where Ruff's autofix breaks your code, please file an Issue!
<!-- End section: FAQ -->
## Contributing
Contributions are welcome and hugely appreciated. To get started, check out the
@@ -1923,7 +2061,9 @@ Benchmark 1: find . -type f -name "*.py" | xargs -P 0 pyupgrade --py311-plus
## Reference
### Options
<!-- Begin section: Settings -->
### Top-level
<!-- Sections automatically generated by `cargo dev generate-options`. -->
<!-- Begin auto-generated options sections. -->
@@ -2105,11 +2245,8 @@ extend-exclude = ["tests", "src/bad.py"]
A list of rule codes or prefixes to ignore, in addition to those
specified by `ignore`.
Note that `extend-ignore` is applied after resolving rules from
`ignore`/`select` and a less specific rule in `extend-ignore`
would overwrite a more specific rule in `select`. It is
recommended to only use `extend-ignore` when extending a
`pyproject.toml` file via `extend`.
This option has been **deprecated** in favor of `ignore`
since its usage is now interchangeable with `ignore`.
**Default value**: `[]`
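Since the option is now interchangeable with `ignore`, an existing `extend-ignore` entry can simply be folded into `ignore` (a sketch):

```toml
[tool.ruff]
# Previously: extend-ignore = ["F841"]. With the deprecation, the code can
# be listed directly under `ignore`, which now behaves identically.
ignore = ["F841"]
```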
@@ -2130,12 +2267,6 @@ extend-ignore = ["F841"]
A list of rule codes or prefixes to enable, in addition to those
specified by `select`.
Note that `extend-select` is applied after resolving rules from
`ignore`/`select` and a less specific rule in `extend-select`
would overwrite a more specific rule in `ignore`. It is
recommended to only use `extend-select` when extending a
`pyproject.toml` file via `extend`.
**Default value**: `[]`
**Type**: `Vec<RuleSelector>`
@@ -2209,9 +2340,10 @@ fix-only = true
#### [`fixable`](#fixable)
A list of rule codes or prefixes to consider autofixable.
A list of rule codes or prefixes to consider autofixable. By default, all rules are
considered autofixable.
**Default value**: `["A", "ANN", "ARG", "B", "BLE", "C", "D", "E", "ERA", "F", "FBT", "I", "ICN", "N", "PGH", "PLC", "PLE", "PLR", "PLW", "Q", "RET", "RUF", "S", "T", "TID", "UP", "W", "YTT"]`
**Default value**: `["A", "ANN", "ARG", "B", "BLE", "C", "COM", "D", "DTZ", "E", "EM", "ERA", "EXE", "F", "FBT", "G", "I", "ICN", "INP", "ISC", "N", "PD", "PGH", "PIE", "PL", "PT", "PTH", "Q", "RET", "RUF", "S", "SIM", "T", "TCH", "TID", "TRY", "UP", "W", "YTT"]`
**Type**: `Vec<RuleSelector>`
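To restrict autofixes to a subset of rules, `fixable` can be narrowed explicitly, optionally alongside `unfixable` (a sketch; the rule prefixes are illustrative):

```toml
[tool.ruff]
# Only apply autofixes for pycodestyle and Pyflakes rules...
fixable = ["E", "F"]
# ...and never autofix flake8-bugbear findings.
unfixable = ["B"]
```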
@@ -2680,6 +2812,24 @@ suppress-none-returning = true
### `flake8-bandit`
#### [`check-typed-exception`](#check-typed-exception)
Whether to disallow `try`-`except`-`pass` (`S110`) for specific exception types. By default,
`try`-`except`-`pass` is only disallowed for `Exception` and `BaseException`.
**Default value**: `false`
**Type**: `bool`
**Example usage**:
```toml
[tool.ruff.flake8-bandit]
check-typed-exception = true
```
---
#### [`hardcoded-tmp-directory`](#hardcoded-tmp-directory)
A list of directories to consider temporary.
@@ -2783,6 +2933,11 @@ By default, implicit concatenations of multiline strings are
allowed (but continuation lines, delimited with a backslash, are
prohibited).
Note that setting `allow-multiline = false` should typically be coupled
with disabling `explicit-string-concatenation` (`ISC003`). Otherwise,
both explicit and implicit multiline string concatenations will be seen
as violations.
**Default value**: `true`
**Type**: `bool`
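Putting the note above into practice, a configuration that disables multiline implicit concatenation would typically also ignore `ISC003` (a sketch):

```toml
[tool.ruff]
# With multiline implicit concatenation disallowed, ISC003 would otherwise
# flag the only remaining way to build multiline strings.
ignore = ["ISC003"]

[tool.ruff.flake8-implicit-str-concat]
allow-multiline = false
```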
@@ -2803,7 +2958,7 @@ allow-multiline = false
The conventional aliases for imports. These aliases can be extended by
the `extend_aliases` option.
**Default value**: `{"altair": "alt", "matplotlib.pyplot": "plt", "numpy": "np", "pandas": "pd", "seaborn": "sns"}`
**Default value**: `{"altair": "alt", "matplotlib": "mpl", "matplotlib.pyplot": "plt", "numpy": "np", "pandas": "pd", "seaborn": "sns", "tensorflow": "tf", "holoviews": "hv", "panel": "pn", "plotly.express": "px", "polars": "pl", "pyarrow": "pa"}`
**Type**: `FxHashMap<String, String>`
@@ -2818,6 +2973,7 @@ altair = "alt"
numpy = "np"
pandas = "pd"
seaborn = "sns"
scipy = "sp"
```
---
@@ -3118,7 +3274,7 @@ and can be circumvented via `eval` or `importlib`.
Exempt certain modules from needing to be moved into type-checking
blocks.
**Default value**: `[]`
**Default value**: `["typing"]`
**Type**: `Vec<String>`
@@ -3126,7 +3282,7 @@ blocks.
```toml
[tool.ruff.flake8-type-checking]
exempt-modules = ["typing_extensions"]
exempt-modules = ["typing", "typing_extensions"]
```
---
@@ -3397,7 +3553,7 @@ this to "closest-to-furthest" is equivalent to isort's `reverse-relative
**Default value**: `furthest-to-closest`
**Type**: `RelatveImportsOrder`
**Type**: `RelativeImportsOrder`
**Example usage**:
@@ -3625,9 +3781,9 @@ convention = "google"
#### [`allow-magic-value-types`](#allow-magic-value-types)
Constant types to ignore when used as "magic values".
Constant types to ignore when used as "magic values" (see: `PLR2004`).
**Default value**: `["str"]`
**Default value**: `["str", "bytes"]`
**Type**: `Vec<ConstantType>`
@@ -3640,6 +3796,23 @@ allow-magic-value-types = ["int"]
---
#### [`max-args`](#max-args)
Maximum number of arguments allowed for a function definition (see: `PLR0913`).
**Default value**: `5`
**Type**: `usize`
**Example usage**:
```toml
[tool.ruff.pylint]
max-args = 5
```
---
### `pyupgrade`
#### [`keep-runtime-typing`](#keep-runtime-typing)
@@ -3666,6 +3839,8 @@ keep-runtime-typing = true
<!-- End auto-generated options sections. -->
<!-- End section: Settings -->
## License
MIT


@@ -771,7 +771,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8_to_ruff"
version = "0.0.236"
version = "0.0.238"
dependencies = [
"anyhow",
"clap",
@@ -1975,7 +1975,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.236"
version = "0.0.238"
dependencies = [
"anyhow",
"bincode",


@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.236"
version = "0.0.238"
edition = "2021"
[dependencies]

mkdocs.template.yml

@@ -0,0 +1,49 @@
site_name: Ruff
theme:
name: material
features:
- navigation.instant
- navigation.tracking
- content.code.annotate
- toc.integrate
- toc.follow
- navigation.path
- navigation.top
- content.code.copy
palette:
- media: "(prefers-color-scheme: light)"
scheme: default
primary: red
toggle:
icon: material/weather-sunny
name: Switch to dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
primary: red
toggle:
icon: material/weather-night
name: Switch to light mode
repo_url: https://github.com/charliermarsh/ruff
repo_name: ruff
site_author: charliermarsh
site_url: https://beta.ruff.rs/docs/
site_dir: site/docs
markdown_extensions:
- toc:
permalink: "#"
- pymdownx.snippets:
- pymdownx.magiclink:
- attr_list:
- md_in_html:
- pymdownx.highlight:
anchor_linenums: true
- pymdownx.inlinehilite:
- pymdownx.superfences:
- markdown.extensions.attr_list:
- pymdownx.keys:
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.highlight:
anchor_linenums: true
plugins:
- search


@@ -7,7 +7,7 @@ build-backend = "maturin"
[project]
name = "ruff"
version = "0.0.236"
version = "0.0.238"
description = "An extremely fast Python linter, written in Rust."
authors = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },
@@ -15,6 +15,7 @@ authors = [
maintainers = [
{ name = "Charlie Marsh", email = "charlie.r.marsh@gmail.com" },
]
readme = "README.md"
requires-python = ">=3.7"
license = { file = "LICENSE" }
keywords = ["automation", "flake8", "pycodestyle", "pyflakes", "pylint", "clippy"]


@@ -39,3 +39,10 @@ class Foo:
# Error
def __init__(self, foo: int):
...
# Error: this was briefly not flagged, since the mere presence of a
# vararg falsely indicated that the function had a typed argument.
class Foo:
def __init__(self, *arg):
...


@@ -0,0 +1,14 @@
try:
pass
except Exception:
pass
try:
pass
except:
pass
try:
pass
except ValueError:
pass


@@ -7,6 +7,13 @@ import matplotlib.pyplot # unconventional
import numpy # unconventional
import pandas # unconventional
import seaborn # unconventional
import tensorflow # unconventional
import holoviews # unconventional
import panel # unconventional
import plotly.express # unconventional
import matplotlib # unconventional
import polars # unconventional
import pyarrow # unconventional
import altair as altr # unconventional
import matplotlib.pyplot as plot # unconventional
@@ -15,6 +22,13 @@ import dask.dataframe as ddf # unconventional
import numpy as nmp # unconventional
import pandas as pdas # unconventional
import seaborn as sbrn # unconventional
import tensorflow as tfz # unconventional
import holoviews as hsv # unconventional
import panel as pns # unconventional
import plotly.express as pltx # unconventional
import matplotlib as ml # unconventional
import polars as ps # unconventional
import pyarrow as arr # unconventional
import altair as alt # conventional
import dask.array as da # conventional
@@ -23,3 +37,12 @@ import matplotlib.pyplot as plt # conventional
import numpy as np # conventional
import pandas as pd # conventional
import seaborn as sns # conventional
import tensorflow as tf # conventional
import holoviews as hv # conventional
import panel as pn # conventional
import plotly.express as px # conventional
import matplotlib as mpl # conventional
import polars as pl # conventional
import pyarrow as pa # conventional
from tensorflow.keras import Model # conventional


@@ -1,3 +1,21 @@
import logging
import sys
logging.error('Hello World', exc_info=True)
# G201
try:
pass
except:
logging.error("Hello World", exc_info=True)
try:
pass
except:
logging.error("Hello World", exc_info=sys.exc_info())
# OK
try:
pass
except:
logging.error("Hello World", exc_info=False)
logging.error("Hello World", exc_info=sys.exc_info())


@@ -1,3 +1,21 @@
import logging
import sys
logging.exception('Hello World', exc_info=True)
# G202
try:
pass
except:
logging.exception("Hello World", exc_info=True)
try:
pass
except:
logging.exception("Hello World", exc_info=sys.exc_info())
# OK
try:
pass
except:
logging.exception("Hello World", exc_info=False)
logging.exception("Hello World", exc_info=True)


@@ -14,3 +14,60 @@ def ok_other_scope():
@pytest.fixture(scope="function")
def error():
...
@pytest.fixture(scope="function", name="my_fixture")
def error_multiple_args():
...
@pytest.fixture(name="my_fixture", scope="function")
def error_multiple_args():
...
@pytest.fixture(name="my_fixture", scope="function", **kwargs)
def error_second_arg():
...
# pytest.fixture does not take positional arguments; however, this
# exercises the general case, since we use a helper function that
# should work for all cases.
@pytest.fixture("my_fixture", scope="function")
def error_arg():
...
@pytest.fixture(
scope="function",
name="my_fixture",
)
def error_multiple_args():
...
@pytest.fixture(
name="my_fixture",
scope="function",
)
def error_multiple_args():
...
@pytest.fixture(
"hello",
name,
*args
,
# another comment ,)
scope=\
"function" # some comment ),
,
name2=name, name3="my_fixture", **kwargs
)
def error_multiple_args():
...


@@ -91,3 +91,10 @@ if True:
b = cccccccccccccccccccccccccccccccccccc
else:
b = ddddddddddddddddddddddddddddddddddddd
# OK (trailing comments)
if True:
exitcode = 0
else:
exitcode = 1 # Trailing comment


@@ -1,15 +1,26 @@
# Errors
"yoda" == compare # SIM300
'yoda' == compare # SIM300
"yoda" == compare # SIM300
42 == age # SIM300
("a", "b") == compare # SIM300
"yoda" <= compare # SIM300
'yoda' < compare # SIM300
"yoda" < compare # SIM300
42 > age # SIM300
YODA == age # SIM300
YODA > age # SIM300
YODA >= age # SIM300
JediOrder.YODA == age # SIM300
# OK
compare == "yoda"
age == 42
compare == ("a", "b")
x == y
"yoda" == compare == 1
"yoda" == compare == someothervar
"yoda" == "yoda"
age == YODA
age < YODA
age <= YODA
YODA == YODA
age == JediOrder.YODA


@@ -29,3 +29,4 @@ os.path.splitext(p)
with open(p) as fp:
fp.read()
open(p).close()
os.getcwdb(p)


@@ -0,0 +1,29 @@
from numpy import (
cos,
int8,
int16,
int32,
int64,
sin,
tan,
uint8,
uint16,
uint32,
uint64,
)
if True:
# inside nested block
from numpy import (
cos,
int8,
int16,
int32,
int64,
sin,
tan,
uint8,
uint16,
uint32,
uint64,
)


@@ -0,0 +1,3 @@
from .logging import config_logging
from .settings import ENV
from .settings import *


@@ -0,0 +1,20 @@
import pandas as pd
x = pd.DataFrame()
x.drop(["a"], axis=1, inplace=True)
x.drop(["a"], axis=1, inplace=True)
x.drop(
inplace=True,
columns=["a"],
axis=1,
)
if True:
x.drop(
inplace=True,
columns=["a"],
axis=1,
)


@@ -6,7 +6,7 @@ from typing import NewType
GLOBAL: str = "foo"
def f():
def assign():
global GLOBAL
GLOBAL = "bar"
lower = 0
@@ -18,4 +18,18 @@ def f():
MyObj2 = namedtuple("MyObj12", ["a", "b"])
T = TypeVar("T")
UserId = NewType('UserId', int)
UserId = NewType("UserId", int)
def aug_assign(rank, world_size):
global CURRENT_PORT
CURRENT_PORT += 1
if CURRENT_PORT > MAX_PORT:
CURRENT_PORT = START_PORT
def loop_assign():
global CURRENT_PORT
for CURRENT_PORT in range(5):
pass


@@ -0,0 +1,6 @@
"""Test: explicit re-export of shadowed builtins."""
from concurrent.futures import (
CancelledError as CancelledError,
TimeoutError as TimeoutError,
)


@@ -2,31 +2,37 @@
# Errors.
###
def f():
global x
global X
def f():
global x
global X
print(x)
print(X)
###
# Non-errors.
###
def f():
global x
global X
x = 1
X = 1
def f():
global x
global X
(x, y) = (1, 2)
(X, y) = (1, 2)
def f():
global x
global X
del x
del X
def f():
global X
X += 1


@@ -0,0 +1,19 @@
__all__ = "CONST" # [invalid-all-format]
__all__ = ["Hello"] + {"world"} # [invalid-all-format]
__all__ += {"world"} # [invalid-all-format]
__all__ = {"world"} + ["Hello"] # [invalid-all-format]
__all__ = (x for x in ["Hello", "world"]) # [invalid-all-format]
__all__ = {x for x in ["Hello", "world"]} # [invalid-all-format]
__all__ = ["Hello"]
__all__ = ("Hello",)
__all__ = ["Hello"] + ("world",)
__all__ = [x for x in ["Hello", "world"]]


@@ -0,0 +1,11 @@
__all__ = (
None, # [invalid-all-object]
Fruit,
Worm,
)
class Fruit:
pass
class Worm:
pass


@@ -47,7 +47,7 @@ if input_password == "": # correct
if input_password == ADMIN_PASSWORD: # correct
pass
if input_password == "Hunter2": # [magic-value-comparison]
if input_password == "Hunter2": # correct
pass
PI = 3.141592653589793238
@@ -62,7 +62,7 @@ if pi_estimation == PI: # correct
HELLO_WORLD = b"Hello, World!"
user_input = b"Hello, There!"
if user_input == b"something": # [magic-value-comparison]
if user_input == b"something": # correct
pass
if user_input == HELLO_WORLD: # correct


@@ -0,0 +1,34 @@
def f(x, y, z, t, u, v, w, r): # Too many arguments (8/5)
pass
def f(x, y, z, t, u): # OK
pass
def f(x): # OK
pass
def f(x, y, z, _t, _u, _v, _w, r):  # OK (underscore-prefixed names are ignored)
pass
def f(x, y, z, u=1, v=1, r=1): # Too many arguments (6/5)
pass
def f(x=1, y=1, z=1): # OK
pass
def f(x, y, z, /, u, v, w): # OK
pass
def f(x, y, z, *, u, v, w): # OK
pass
def f(x, y, z, a, b, c, *, u, v, w): # Too many arguments (6/5)
pass


@@ -0,0 +1,10 @@
# Too many args (6/4) for max_args=4
# OK for dummy_variable_rgx ~ "skip_.*"
def f(x, y, z, skip_t, skip_u, skip_v):
pass
# Too many args (6/4) for max_args=4
# Too many args (6/5) for dummy_variable_rgx ~ "skip_.*"
def f(x, y, z, t, u, v):
pass


@@ -1,87 +0,0 @@
# Replace names by built-in names, whether namespaced or not
# https://github.com/search?q=%22from+six+import%22&type=code
import six
from six.moves import map # No need
from six import text_type
six.text_type # str
six.binary_type # bytes
six.class_types # (type,)
six.string_types # (str,)
six.integer_types # (int,)
six.unichr # chr
six.iterbytes # iter
six.print_(...) # print(...)
six.exec_(c, g, l) # exec(c, g, l)
six.advance_iterator(it) # next(it)
six.next(it) # next(it)
six.callable(x) # callable(x)
six.moves.range(x) # range(x)
six.moves.xrange(x) # range(x)
isinstance(..., six.class_types) # isinstance(..., type)
issubclass(..., six.integer_types) # issubclass(..., int)
isinstance(..., six.string_types) # isinstance(..., str)
# Replace call on arg by method call on arg
six.iteritems(dct) # dct.items()
six.iterkeys(dct) # dct.keys()
six.itervalues(dct) # dct.values()
six.viewitems(dct) # dct.items()
six.viewkeys(dct) # dct.keys()
six.viewvalues(dct) # dct.values()
six.assertCountEqual(self, a1, a2) # self.assertCountEqual(a1, a2)
six.assertRaisesRegex(self, e, r, fn) # self.assertRaisesRegex(e, r, fn)
six.assertRegex(self, s, r) # self.assertRegex(s, r)
# Replace call on arg by arg attribute
six.get_method_function(meth) # meth.__func__
six.get_method_self(meth) # meth.__self__
six.get_function_closure(fn) # fn.__closure__
six.get_function_code(fn) # fn.__code__
six.get_function_defaults(fn) # fn.__defaults__
six.get_function_globals(fn) # fn.__globals__
# Replace by string literal
six.b("...") # b'...'
six.u("...") # '...'
six.ensure_binary("...") # b'...'
six.ensure_str("...") # '...'
six.ensure_text("...") # '...'
six.b(string) # no change
# Replace by simple expression
six.get_unbound_function(meth) # meth
six.create_unbound_method(fn, cls) # fn
# Raise exception
six.raise_from(exc, exc_from) # raise exc from exc_from
six.reraise(tp, exc, tb) # raise exc.with_traceback(tb)
six.reraise(*sys.exc_info()) # raise
# Int / Bytes conversion
six.byte2int(bs) # bs[0]
six.indexbytes(bs, i) # bs[i]
six.int2byte(i) # bytes((i, ))
# Special cases for next calls
next(six.iteritems(dct)) # next(iter(dct.items()))
next(six.iterkeys(dct)) # next(iter(dct.keys()))
next(six.itervalues(dct)) # next(iter(dct.values()))
# TODO: To implement
# Rewrite classes
@six.python_2_unicode_compatible # Remove
class C(six.Iterator):
pass # class C: pass
class C(six.with_metaclass(M, B)):
pass # class C(B, metaclass=M): pass
# class C(B, metaclass=M): pass
@six.add_metaclass(M)
class C(B):
pass


@@ -0,0 +1 @@
import itertools # noqa: F401


@@ -35,6 +35,16 @@ def still_good():
raise
def still_actually_good():
try:
process()
except MyException as e:
try:
pass
except TypeError:
raise e
def bad_that_needs_recursion():
try:
process()


@@ -28,6 +28,16 @@ def bad():
logger.error("Context message here")
def bad():
try:
a = 1
except Exception:
log.error("Context message here")
if True:
log.error("Context message here")
def bad():
try:
a = 1


@@ -66,18 +66,8 @@
"type": "string"
}
},
"extend-ignore": {
"description": "A list of rule codes or prefixes to ignore, in addition to those specified by `ignore`.\n\nNote that `extend-ignore` is applied after resolving rules from `ignore`/`select` and a less specific rule in `extend-ignore` would overwrite a more specific rule in `select`. It is recommended to only use `extend-ignore` when extending a `pyproject.toml` file via `extend`.",
"type": [
"array",
"null"
],
"items": {
"$ref": "#/definitions/RuleSelector"
}
},
"extend-select": {
"description": "A list of rule codes or prefixes to enable, in addition to those specified by `select`.\n\nNote that `extend-select` is applied after resolving rules from `ignore`/`select` and a less specific rule in `extend-select` would overwrite a more specific rule in `ignore`. It is recommended to only use `extend-select` when extending a `pyproject.toml` file via `extend`.",
"description": "A list of rule codes or prefixes to enable, in addition to those specified by `select`.",
"type": [
"array",
"null"
@@ -111,7 +101,7 @@
]
},
"fixable": {
"description": "A list of rule codes or prefixes to consider autofixable.",
"description": "A list of rule codes or prefixes to consider autofixable. By default, all rules are considered autofixable.",
"type": [
"array",
"null"
@@ -578,6 +568,13 @@
"Flake8BanditOptions": {
"type": "object",
"properties": {
"check-typed-exception": {
"description": "Whether to disallow `try`-`except`-`pass` (`S110`) for specific exception types. By default, `try`-`except`-`pass` is only disallowed for `Exception` and `BaseException`.",
"type": [
"boolean",
"null"
]
},
"hardcoded-tmp-directory": {
"description": "A list of directories to consider temporary.",
"type": [
@@ -652,7 +649,7 @@
"type": "object",
"properties": {
"allow-multiline": {
"description": "Whether to allow implicit string concatenations for multiline strings. By default, implicit concatenations of multiline strings are allowed (but continuation lines, delimited with a backslash, are prohibited).",
"description": "Whether to allow implicit string concatenations for multiline strings. By default, implicit concatenations of multiline strings are allowed (but continuation lines, delimited with a backslash, are prohibited).\n\nNote that setting `allow-multiline = false` should typically be coupled with disabling `explicit-string-concatenation` (`ISC003`). Otherwise, both explicit and implicit multiline string concatenations will be seen as violations.",
"type": [
"boolean",
"null"
@@ -981,7 +978,7 @@
"description": "Whether to place \"closer\" imports (fewer `.` characters, most local) before \"further\" imports (more `.` characters, least local), or vice versa.\n\nThe default (\"furthest-to-closest\") is equivalent to isort's `reverse-relative` default (`reverse-relative = false`); setting this to \"closest-to-furthest\" is equivalent to isort's `reverse-relative = true`.",
"anyOf": [
{
"$ref": "#/definitions/RelatveImportsOrder"
"$ref": "#/definitions/RelativeImportsOrder"
},
{
"type": "null"
@@ -1157,7 +1154,7 @@
"type": "object",
"properties": {
"allow-magic-value-types": {
"description": "Constant types to ignore when used as \"magic values\".",
"description": "Constant types to ignore when used as \"magic values\" (see: `PLR2004`).",
"type": [
"array",
"null"
@@ -1165,6 +1162,15 @@
"items": {
"$ref": "#/definitions/ConstantType"
}
},
"max-args": {
"description": "Maximum number of arguments allowed for a function definition (see: `PLR0913`).",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
},
"additionalProperties": false
@@ -1197,7 +1203,7 @@
}
]
},
"RelatveImportsOrder": {
"RelativeImportsOrder": {
"oneOf": [
{
"description": "Place \"closer\" imports (fewer `.` characters, most local) before \"further\" imports (more `.` characters, least local).",
@@ -1615,6 +1621,10 @@
"PLE011",
"PLE0117",
"PLE0118",
"PLE06",
"PLE060",
"PLE0604",
"PLE0605",
"PLE1",
"PLE11",
"PLE114",
@@ -1630,6 +1640,9 @@
"PLR04",
"PLR040",
"PLR0402",
"PLR09",
"PLR091",
"PLR0913",
"PLR1",
"PLR17",
"PLR170",
@@ -1749,6 +1762,7 @@
"S107",
"S108",
"S11",
"S110",
"S113",
"S3",
"S32",
@@ -1858,7 +1872,6 @@
"UP013",
"UP014",
"UP015",
"UP016",
"UP017",
"UP018",
"UP019",


@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.236"
version = "0.0.238"
authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
edition = "2021"
rust-version = "1.65.0"
@@ -53,6 +53,7 @@ similar = { version = "2.2.1" }
textwrap = { version = "0.16.0" }
update-informer = { version = "0.6.0", default-features = false, features = ["pypi"], optional = true }
walkdir = { version = "2.3.2" }
strum = "0.24.1"
[dev-dependencies]
assert_cmd = { version = "2.0.4" }


@@ -5,6 +5,7 @@ use regex::Regex;
use ruff::logging::LogLevel;
use ruff::registry::Rule;
use ruff::resolver::ConfigProcessor;
use ruff::settings::configuration::RuleSelection;
use ruff::settings::types::{
FilePattern, PatternPrefixPair, PerFileIgnore, PythonVersion, SerializationFormat,
};
@@ -15,12 +16,50 @@ use rustc_hash::FxHashMap;
#[command(
author,
name = "ruff",
about = "Ruff: An extremely fast Python linter."
about = "Ruff: An extremely fast Python linter.",
after_help = "For help with a specific command, see: `ruff help <command>`."
)]
#[command(version)]
#[allow(clippy::struct_excessive_bools)]
pub struct Args {
#[arg(required_unless_present_any = ["clean", "explain", "generate_shell_completion"])]
#[command(subcommand)]
pub command: Command,
#[clap(flatten)]
pub log_level_args: LogLevelArgs,
}
#[allow(clippy::large_enum_variant)]
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Run Ruff on the given files or directories (default).
Check(CheckArgs),
/// Explain a rule.
#[clap(alias = "--explain")]
Rule {
#[arg(value_parser=Rule::from_code)]
rule: &'static Rule,
/// Output format
#[arg(long, value_enum, default_value = "text")]
format: HelpFormat,
},
/// List all supported upstream linters
Linter {
/// Output format
#[arg(long, value_enum, default_value = "text")]
format: HelpFormat,
},
/// Clear any caches in the current directory and any subdirectories.
#[clap(alias = "--clean")]
Clean,
/// Generate shell completion.
#[clap(alias = "--generate-shell-completion", hide = true)]
GenerateShellCompletion { shell: clap_complete_command::Shell },
}
#[derive(Debug, clap::Args)]
#[allow(clippy::struct_excessive_bools, clippy::module_name_repetitions)]
pub struct CheckArgs {
/// List of files or directories to check.
pub files: Vec<PathBuf>,
/// Attempt to automatically fix lint violations.
#[arg(long, overrides_with("no_fix"))]
@@ -78,13 +117,13 @@ pub struct Args {
help_heading = "Rule selection"
)]
pub extend_select: Option<Vec<RuleSelector>>,
/// Like --ignore, but adds additional rule codes on top of the ignored
/// ones.
/// Like --ignore. (Deprecated: You can just use --ignore instead.)
#[arg(
long,
value_delimiter = ',',
value_name = "RULE_CODE",
help_heading = "Rule selection"
help_heading = "Rule selection",
hide = true
)]
pub extend_ignore: Option<Vec<RuleSelector>>,
/// List of mappings from file pattern to code to exclude
@@ -179,81 +218,36 @@ pub struct Args {
update_check: bool,
#[clap(long, overrides_with("update_check"), hide = true)]
no_update_check: bool,
/// Show counts for every rule with at least one violation.
#[arg(
long,
// Unsupported default-command arguments.
conflicts_with = "diff",
conflicts_with = "show_source",
conflicts_with = "watch",
)]
pub statistics: bool,
/// Enable automatic additions of `noqa` directives to failing lines.
#[arg(
long,
// conflicts_with = "add_noqa",
conflicts_with = "clean",
conflicts_with = "explain",
conflicts_with = "generate_shell_completion",
conflicts_with = "show_files",
conflicts_with = "show_settings",
// Unsupported default-command arguments.
conflicts_with = "statistics",
conflicts_with = "stdin_filename",
conflicts_with = "watch",
)]
pub add_noqa: bool,
/// Explain a rule.
#[arg(
long,
value_parser=Rule::from_code,
help_heading="Subcommands",
// Fake subcommands.
conflicts_with = "add_noqa",
conflicts_with = "clean",
// conflicts_with = "explain",
conflicts_with = "generate_shell_completion",
conflicts_with = "show_files",
conflicts_with = "show_settings",
// Unsupported default-command arguments.
conflicts_with = "stdin_filename",
conflicts_with = "watch",
)]
pub explain: Option<&'static Rule>,
/// Clear any caches in the current directory or any subdirectories.
#[arg(
long,
help_heading="Subcommands",
// Fake subcommands.
conflicts_with = "add_noqa",
// conflicts_with = "clean",
conflicts_with = "explain",
conflicts_with = "generate_shell_completion",
conflicts_with = "show_files",
conflicts_with = "show_settings",
// Unsupported default-command arguments.
conflicts_with = "stdin_filename",
conflicts_with = "watch",
)]
pub clean: bool,
/// Generate shell completion
#[arg(
long,
hide = true,
value_name = "SHELL",
// Fake subcommands.
conflicts_with = "add_noqa",
conflicts_with = "clean",
conflicts_with = "explain",
// conflicts_with = "generate_shell_completion",
conflicts_with = "show_files",
conflicts_with = "show_settings",
// Unsupported default-command arguments.
conflicts_with = "stdin_filename",
conflicts_with = "watch",
)]
pub generate_shell_completion: Option<clap_complete_command::Shell>,
/// See the files Ruff will be run against with the current settings.
#[arg(
long,
// Fake subcommands.
conflicts_with = "add_noqa",
conflicts_with = "clean",
conflicts_with = "explain",
conflicts_with = "generate_shell_completion",
// conflicts_with = "show_files",
conflicts_with = "show_settings",
// Unsupported default-command arguments.
conflicts_with = "statistics",
conflicts_with = "stdin_filename",
conflicts_with = "watch",
)]
@@ -263,32 +257,52 @@ pub struct Args {
long,
// Fake subcommands.
conflicts_with = "add_noqa",
conflicts_with = "clean",
conflicts_with = "explain",
conflicts_with = "generate_shell_completion",
conflicts_with = "show_files",
// conflicts_with = "show_settings",
// Unsupported default-command arguments.
conflicts_with = "statistics",
conflicts_with = "stdin_filename",
conflicts_with = "watch",
)]
pub show_settings: bool,
#[clap(flatten)]
pub log_level_args: LogLevelArgs,
}
#[derive(Debug, Clone, Copy, clap::ValueEnum)]
pub enum HelpFormat {
Text,
Json,
}
#[allow(clippy::module_name_repetitions)]
#[derive(Debug, clap::Args)]
pub struct LogLevelArgs {
/// Enable verbose logging.
#[arg(short, long, group = "verbosity", help_heading = "Log levels")]
#[arg(
short,
long,
global = true,
group = "verbosity",
help_heading = "Log levels"
)]
pub verbose: bool,
/// Print lint violations, but nothing else.
#[arg(short, long, group = "verbosity", help_heading = "Log levels")]
#[arg(
short,
long,
global = true,
group = "verbosity",
help_heading = "Log levels"
)]
pub quiet: bool,
/// Disable all logging (but still exit with status code "1" upon detecting
/// lint violations).
#[arg(short, long, group = "verbosity", help_heading = "Log levels")]
#[arg(
short,
long,
global = true,
group = "verbosity",
help_heading = "Log levels"
)]
pub silent: bool,
}
@@ -306,24 +320,22 @@ impl From<&LogLevelArgs> for LogLevel {
}
}
impl Args {
impl CheckArgs {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> (Arguments, Overrides) {
(
Arguments {
add_noqa: self.add_noqa,
clean: self.clean,
config: self.config,
diff: self.diff,
exit_zero: self.exit_zero,
explain: self.explain,
files: self.files,
generate_shell_completion: self.generate_shell_completion,
isolated: self.isolated,
no_cache: self.no_cache,
show_files: self.show_files,
show_settings: self.show_settings,
statistics: self.statistics,
stdin_filename: self.stdin_filename,
watch: self.watch,
},
@@ -371,17 +383,15 @@ fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
#[allow(clippy::struct_excessive_bools)]
pub struct Arguments {
pub add_noqa: bool,
pub clean: bool,
pub config: Option<PathBuf>,
pub diff: bool,
pub exit_zero: bool,
pub explain: Option<&'static Rule>,
pub files: Vec<PathBuf>,
pub generate_shell_completion: Option<clap_complete_command::Shell>,
pub isolated: bool,
pub no_cache: bool,
pub show_files: bool,
pub show_settings: bool,
pub statistics: bool,
pub stdin_filename: Option<PathBuf>,
pub watch: bool,
}
@@ -433,18 +443,25 @@ impl ConfigProcessor for &Overrides {
if let Some(fix_only) = &self.fix_only {
config.fix_only = Some(*fix_only);
}
if let Some(fixable) = &self.fixable {
config.fixable = Some(fixable.clone());
}
config.rule_selections.push(RuleSelection {
select: self.select.clone(),
ignore: self
.ignore
.iter()
.cloned()
.chain(self.extend_ignore.iter().cloned().into_iter())
.flatten()
.collect(),
extend_select: self.extend_select.clone().unwrap_or_default(),
fixable: self.fixable.clone(),
unfixable: self.unfixable.clone().unwrap_or_default(),
});
if let Some(format) = &self.format {
config.format = Some(*format);
}
if let Some(force_exclude) = &self.force_exclude {
config.force_exclude = Some(*force_exclude);
}
if let Some(ignore) = &self.ignore {
config.ignore = Some(ignore.clone());
}
if let Some(line_length) = &self.line_length {
config.line_length = Some(*line_length);
}
@@ -454,38 +471,15 @@ impl ConfigProcessor for &Overrides {
if let Some(respect_gitignore) = &self.respect_gitignore {
config.respect_gitignore = Some(*respect_gitignore);
}
if let Some(select) = &self.select {
config.select = Some(select.clone());
}
if let Some(show_source) = &self.show_source {
config.show_source = Some(*show_source);
}
if let Some(target_version) = &self.target_version {
config.target_version = Some(*target_version);
}
if let Some(unfixable) = &self.unfixable {
config.unfixable = Some(unfixable.clone());
}
if let Some(update_check) = &self.update_check {
config.update_check = Some(*update_check);
}
// Special-case: `extend_ignore` and `extend_select` are parallel arrays, so
// push an empty array if only one of the two is provided.
match (&self.extend_ignore, &self.extend_select) {
(Some(extend_ignore), Some(extend_select)) => {
config.extend_ignore.push(extend_ignore.clone());
config.extend_select.push(extend_select.clone());
}
(Some(extend_ignore), None) => {
config.extend_ignore.push(extend_ignore.clone());
config.extend_select.push(Vec::new());
}
(None, Some(extend_select)) => {
config.extend_ignore.push(Vec::new());
config.extend_select.push(extend_select.clone());
}
(None, None) => {}
}
}
}
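The `RuleSelection` pushed above feeds the rule-config resolution described in the commit message ("the more specific rule selector takes precedence"). A std-only sketch of that longest-prefix-wins idea, using plain strings instead of Ruff's actual `RuleSelector` type — the helper names and the tie-breaking rule (ignore wins at equal specificity) are assumptions, not Ruff's confirmed behavior:

```rust
// Find the length of the most specific (longest) selector matching `code`.
fn best(code: &str, selectors: &[&str]) -> Option<usize> {
    selectors
        .iter()
        .filter(|prefix| code.starts_with(**prefix))
        .map(|prefix| prefix.len())
        .max()
}

// Apply ordered (select, ignore) pairs; the longest matching prefix wins.
fn is_enabled(code: &str, selections: &[(Vec<&str>, Vec<&str>)]) -> bool {
    let mut enabled = false;
    for (select, ignore) in selections {
        match (best(code, select), best(code, ignore)) {
            // Assumption: at equal specificity, `ignore` wins.
            (Some(s), Some(i)) => enabled = s > i,
            (Some(_), None) => enabled = true,
            (None, Some(_)) => enabled = false,
            (None, None) => {}
        }
    }
    enabled
}

fn main() {
    // `select = ["E"]` with `ignore = ["E501"]`: the more specific
    // `E501` selector wins, so E501 is off while other E rules stay on.
    let selections = vec![(vec!["E"], vec!["E501"])];
    assert!(is_enabled("E401", &selections));
    assert!(!is_enabled("E501", &selections));
    println!("ok");
}
```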


@@ -18,16 +18,17 @@ use ruff::message::{Location, Message};
use ruff::registry::{Linter, Rule, RuleNamespace};
use ruff::resolver::PyprojectDiscovery;
use ruff::settings::flags;
use ruff::settings::types::SerializationFormat;
use ruff::{fix, fs, packaging, resolver, warn_user_once, AutofixAvailability, IOError};
use serde::Serialize;
use walkdir::WalkDir;
use crate::args::Overrides;
use crate::args::{HelpFormat, Overrides};
use crate::cache;
use crate::diagnostics::{lint_path, lint_stdin, Diagnostics};
use crate::iterators::par_iter;
pub mod linter;
/// Run the linter over a collection of files.
pub fn run(
files: &[PathBuf],
@@ -111,7 +112,7 @@ pub fn run(
let settings = resolver.resolve(path, pyproject_strategy);
if settings.rules.enabled(&Rule::IOError) {
Diagnostics::new(vec![Message {
kind: IOError(message).into(),
kind: IOError { message }.into(),
location: Location::default(),
end_location: Location::default(),
fix: None,
@@ -269,10 +270,10 @@ struct Explanation<'a> {
}
/// Explain a `Rule` to the user.
pub fn explain(rule: &Rule, format: SerializationFormat) -> Result<()> {
pub fn rule(rule: &Rule, format: HelpFormat) -> Result<()> {
let (linter, _) = Linter::parse_code(rule.code()).unwrap();
match format {
SerializationFormat::Text | SerializationFormat::Grouped => {
HelpFormat::Text => {
println!("{}\n", rule.as_ref());
println!("Code: {} ({})\n", rule.code(), linter.name());
@@ -290,7 +291,7 @@ pub fn explain(rule: &Rule, format: SerializationFormat) -> Result<()> {
println!("* {format}");
}
}
SerializationFormat::Json => {
HelpFormat::Json => {
println!(
"{}",
serde_json::to_string_pretty(&Explanation {
@@ -300,24 +301,12 @@ pub fn explain(rule: &Rule, format: SerializationFormat) -> Result<()> {
})?
);
}
SerializationFormat::Junit => {
bail!("`--explain` does not support junit format")
}
SerializationFormat::Github => {
bail!("`--explain` does not support GitHub format")
}
SerializationFormat::Gitlab => {
bail!("`--explain` does not support GitLab format")
}
SerializationFormat::Pylint => {
bail!("`--explain` does not support pylint format")
}
};
Ok(())
}
/// Clear any caches in the current directory or any subdirectories.
pub fn clean(level: &LogLevel) -> Result<()> {
pub fn clean(level: LogLevel) -> Result<()> {
for entry in WalkDir::new(&*path_dedot::CWD)
.into_iter()
.filter_map(Result::ok)
@@ -325,7 +314,7 @@ pub fn clean(level: &LogLevel) -> Result<()> {
{
let cache = entry.path().join(CACHE_DIR_NAME);
if cache.is_dir() {
if level >= &LogLevel::Default {
if level >= LogLevel::Default {
eprintln!("Removing cache at: {}", fs::relativize_path(&cache).bold());
}
remove_dir_all(&cache)?;
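The `clean` command above walks the tree with the `walkdir` crate and removes every cache directory. A std-only sketch of the same traversal — the recursion strategy is a simplification, and the real code also respects Ruff's log levels and path-relativization helpers:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Recursively remove every directory named `cache_dir_name` under `root`,
// returning how many were removed. Mirrors what `ruff clean` does.
fn clean(root: &Path, cache_dir_name: &str) -> io::Result<usize> {
    let mut removed = 0;
    for entry in fs::read_dir(root)? {
        let path = entry?.path();
        if path.is_dir() {
            if path.file_name().map_or(false, |name| name == cache_dir_name) {
                eprintln!("Removing cache at: {}", path.display());
                fs::remove_dir_all(&path)?;
                removed += 1;
            } else {
                removed += clean(&path, cache_dir_name)?;
            }
        }
    }
    Ok(removed)
}

fn main() -> io::Result<()> {
    // Demonstrate against a throwaway tree in the temp directory.
    let root = std::env::temp_dir().join("ruff_clean_demo");
    let _ = fs::remove_dir_all(&root);
    fs::create_dir_all(root.join("pkg").join(".ruff_cache"))?;
    let removed = clean(&root, ".ruff_cache")?;
    assert_eq!(removed, 1);
    assert!(!root.join("pkg").join(".ruff_cache").exists());
    fs::remove_dir_all(&root)?;
    println!("ok");
    Ok(())
}
```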


@@ -0,0 +1,59 @@
use itertools::Itertools;
use serde::Serialize;
use strum::IntoEnumIterator;
use ruff::registry::{Linter, LinterCategory, RuleNamespace};
use crate::args::HelpFormat;
pub fn linter(format: HelpFormat) {
match format {
HelpFormat::Text => {
for linter in Linter::iter() {
let prefix = match linter.common_prefix() {
"" => linter
.categories()
.unwrap()
.iter()
.map(|LinterCategory(prefix, ..)| prefix)
.join("/"),
prefix => prefix.to_string(),
};
println!("{:>4} {}", prefix, linter.name());
}
}
HelpFormat::Json => {
let linters: Vec<_> = Linter::iter()
.map(|linter_info| LinterInfo {
prefix: linter_info.common_prefix(),
name: linter_info.name(),
categories: linter_info.categories().map(|cats| {
cats.iter()
.map(|LinterCategory(prefix, name, ..)| LinterCategoryInfo {
prefix,
name,
})
.collect()
}),
})
.collect();
println!("{}", serde_json::to_string_pretty(&linters).unwrap());
}
}
}
#[derive(Serialize)]
struct LinterInfo {
prefix: &'static str,
name: &'static str,
#[serde(skip_serializing_if = "Option::is_none")]
categories: Option<Vec<LinterCategoryInfo>>,
}
#[derive(Serialize)]
struct LinterCategoryInfo {
prefix: &'static str,
name: &'static str,
}


@@ -1,5 +1,5 @@
//! This library only exists to enable the Ruff internal tooling (`ruff_dev`)
//! to automatically update the `ruff --help` output in the `README.md`.
//! to automatically update the `ruff help` output in the `README.md`.
//!
//! For the actual Ruff library, see [`ruff`].
#![forbid(unsafe_code)]
@@ -10,7 +10,21 @@ mod args;
use clap::CommandFactory;
/// Returns the output of `ruff --help`.
pub fn help() -> String {
/// Returns the output of `ruff help`.
pub fn command_help() -> String {
args::Args::command().render_help().to_string()
}
/// Returns the output of `ruff help check`.
pub fn subcommand_help() -> String {
let mut cmd = args::Args::command();
// The `build` call is necessary for the help output to contain `Usage: ruff check` instead of `Usage: check`;
// see https://github.com/clap-rs/clap/issues/4685
cmd.build();
cmd.find_subcommand_mut("check")
.expect("`check` subcommand not found")
.render_help()
.to_string()
}


@@ -17,8 +17,8 @@ use ::ruff::resolver::PyprojectDiscovery;
use ::ruff::settings::types::SerializationFormat;
use ::ruff::{fix, fs, warn_user_once};
use anyhow::Result;
use args::Args;
use clap::{CommandFactory, Parser};
use args::{Args, CheckArgs, Command};
use clap::{CommandFactory, Parser, Subcommand};
use colored::Colorize;
use notify::{recommended_watcher, RecursiveMode, Watcher};
use printer::{Printer, Violations};
@@ -35,8 +35,28 @@ mod resolve;
pub mod updates;
fn inner_main() -> Result<ExitCode> {
let mut args: Vec<_> = std::env::args_os().collect();
// Clap doesn't support default subcommands but we want to run `check` by
// default for convenience and backwards-compatibility, so we just
// preprocess the arguments accordingly before passing them to Clap.
if let Some(arg) = args.get(1).and_then(|s| s.to_str()) {
if !Command::has_subcommand(rewrite_legacy_subcommand(arg))
&& arg != "-h"
&& arg != "--help"
&& arg != "-V"
&& arg != "--version"
&& arg != "help"
{
args.insert(1, "check".into());
}
}
// Extract command-line arguments.
let args = Args::parse();
let Args {
command,
log_level_args,
} = Args::parse_from(args);
let default_panic_hook = std::panic::take_hook();
std::panic::set_hook(Box::new(move |info| {
@@ -53,20 +73,26 @@ quoting the executed command, along with the relevant file contents and `pyproje
default_panic_hook(info);
}));
let log_level: LogLevel = (&args.log_level_args).into();
let log_level: LogLevel = (&log_level_args).into();
set_up_logging(&log_level)?;
let (cli, overrides) = args.partition();
match command {
Command::Rule { rule, format } => commands::rule(rule, format)?,
Command::Linter { format } => commands::linter::linter(format),
Command::Clean => commands::clean(log_level)?,
Command::GenerateShellCompletion { shell } => {
shell.generate(&mut Args::command(), &mut io::stdout());
}
if let Some(shell) = cli.generate_shell_completion {
shell.generate(&mut Args::command(), &mut io::stdout());
return Ok(ExitCode::SUCCESS);
}
if cli.clean {
commands::clean(&log_level)?;
return Ok(ExitCode::SUCCESS);
Command::Check(args) => return check(args, log_level),
}
Ok(ExitCode::SUCCESS)
}
fn check(args: CheckArgs, log_level: LogLevel) -> Result<ExitCode> {
let (cli, overrides) = args.partition();
// Construct the "default" settings. These are used when no `pyproject.toml`
// files are present, or files are injected from outside of the hierarchy.
let pyproject_strategy = resolve::resolve(
@@ -76,6 +102,14 @@ quoting the executed command, along with the relevant file contents and `pyproje
cli.stdin_filename.as_deref(),
)?;
if cli.show_settings {
commands::show_settings(&cli.files, &pyproject_strategy, &overrides)?;
return Ok(ExitCode::SUCCESS);
}
if cli.show_files {
commands::show_files(&cli.files, &pyproject_strategy, &overrides)?;
}
// Extract options that are included in `Settings`, but only apply at the top
// level.
let CliSettings {
@@ -89,19 +123,6 @@ quoting the executed command, along with the relevant file contents and `pyproje
PyprojectDiscovery::Hierarchical(settings) => settings.cli.clone(),
};
if let Some(rule) = cli.explain {
commands::explain(rule, format)?;
return Ok(ExitCode::SUCCESS);
}
if cli.show_settings {
commands::show_settings(&cli.files, &pyproject_strategy, &overrides)?;
return Ok(ExitCode::SUCCESS);
}
if cli.show_files {
commands::show_files(&cli.files, &pyproject_strategy, &overrides)?;
return Ok(ExitCode::SUCCESS);
}
// Autofix rules are as follows:
// - If `--fix` or `--fix-only` is set, always apply fixes to the filesystem (or
// print them to stdout, if we're reading from stdin).
@@ -135,14 +156,17 @@ quoting the executed command, along with the relevant file contents and `pyproje
warn_user_once!("Detected debug build without --no-cache.");
}
let printer = Printer::new(&format, &log_level, &autofix, &violations);
if cli.add_noqa {
let modifications = commands::add_noqa(&cli.files, &pyproject_strategy, &overrides)?;
if modifications > 0 && log_level >= LogLevel::Default {
println!("Added {modifications} noqa directives.");
}
} else if cli.watch {
return Ok(ExitCode::SUCCESS);
}
let printer = Printer::new(&format, &log_level, &autofix, &violations);
if cli.watch {
if !matches!(autofix, fix::FixMode::None) {
warn_user_once!("--fix is not enabled in watch mode.");
}
@@ -221,7 +245,11 @@ quoting the executed command, along with the relevant file contents and `pyproje
// unless we're writing fixes via stdin (in which case, the transformed
// source code goes to stdout).
if !(is_stdin && matches!(autofix, fix::FixMode::Apply | fix::FixMode::Diff)) {
printer.write_once(&diagnostics)?;
if cli.statistics {
printer.write_statistics(&diagnostics)?;
} else {
printer.write_once(&diagnostics)?;
}
}
// Check for updates if we're in a non-silent log level.
@@ -244,10 +272,18 @@ quoting the executed command, along with the relevant file contents and `pyproje
}
}
}
Ok(ExitCode::SUCCESS)
}
fn rewrite_legacy_subcommand(cmd: &str) -> &str {
match cmd {
"--explain" => "rule",
"--clean" => "clean",
"--generate-shell-completion" => "generate-shell-completion",
cmd => cmd,
}
}
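The argument preprocessing in `inner_main` can be sketched as a pure function. This is a simplified model: the real subcommand check goes through Clap's `Command::has_subcommand`, and the hard-coded subcommand list below is an assumption made for the sketch:

```rust
// Map legacy flag spellings to their new subcommand names, as in the diff.
fn rewrite_legacy_subcommand(cmd: &str) -> &str {
    match cmd {
        "--explain" => "rule",
        "--clean" => "clean",
        "--generate-shell-completion" => "generate-shell-completion",
        cmd => cmd,
    }
}

// Clap has no default subcommand, so when the first argument is neither a
// known subcommand nor a help/version flag, insert `check`.
fn preprocess(mut args: Vec<String>) -> Vec<String> {
    const SUBCOMMANDS: &[&str] = &["check", "rule", "linter", "clean", "generate-shell-completion"];
    if let Some(arg) = args.get(1) {
        let rewritten = rewrite_legacy_subcommand(arg);
        if !SUBCOMMANDS.contains(&rewritten)
            && !matches!(arg.as_str(), "-h" | "--help" | "-V" | "--version" | "help")
        {
            args.insert(1, "check".to_string());
        }
    }
    args
}

fn main() {
    // A bare file argument gets the implicit `check` subcommand...
    let args = preprocess(vec!["ruff".into(), "foo.py".into()]);
    assert_eq!(args, vec!["ruff", "check", "foo.py"]);
    // ...while a legacy alias like `--explain` passes through untouched,
    // because it rewrites to the known `rule` subcommand.
    let args = preprocess(vec!["ruff".into(), "--explain".into()]);
    assert_eq!(args, vec!["ruff", "--explain"]);
    println!("ok");
}
```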
#[must_use]
pub fn main() -> ExitCode {
match inner_main() {


@@ -43,6 +43,13 @@ struct ExpandedMessage<'a> {
filename: &'a str,
}
#[derive(Serialize)]
struct ExpandedStatistics<'a> {
count: usize,
code: &'a str,
message: String,
}
struct SerializeRuleAsCode<'a>(&'a Rule);
impl Serialize for SerializeRuleAsCode<'_> {
@@ -336,6 +343,77 @@ impl<'a> Printer<'a> {
Ok(())
}
pub fn write_statistics(&self, diagnostics: &Diagnostics) -> Result<()> {
let mut violations = diagnostics
.messages
.iter()
.map(|message| message.kind.rule())
.collect::<Vec<_>>();
violations.sort();
violations.dedup();
let statistics = violations
.iter()
.map(|rule| ExpandedStatistics {
code: rule.code(),
count: diagnostics
.messages
.iter()
.filter(|message| message.kind.rule() == *rule)
.count(),
message: diagnostics
.messages
.iter()
.find(|message| message.kind.rule() == *rule)
.map(|message| message.kind.body())
.unwrap(),
})
.collect::<Vec<_>>();
let mut stdout = BufWriter::new(io::stdout().lock());
match self.format {
SerializationFormat::Text => {
// Compute the maximum number of digits in the count and code, for all messages, to enable
// pretty-printing.
let count_width = num_digits(
statistics
.iter()
.map(|statistic| statistic.count)
.max()
.unwrap(),
);
let code_width = statistics
.iter()
.map(|statistic| statistic.code.len())
.max()
.unwrap();
// By default, we mimic Flake8's `--statistics` format.
for msg in statistics {
writeln!(
stdout,
"{:>count_width$}\t{:<code_width$}\t{}",
msg.count, msg.code, msg.message
)?;
}
return Ok(());
}
SerializationFormat::Json => {
writeln!(stdout, "{}", serde_json::to_string_pretty(&statistics)?)?;
}
_ => {
anyhow::bail!(
"Unsupported serialization format for statistics: {:?}",
self.format
)
}
}
stdout.flush()?;
Ok(())
}
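The aggregation in `write_statistics` boils down to: dedupe rules, count messages per rule, keep one representative message body, and pad the count column. A self-contained sketch — the tab-separated layout mirrors the Flake8-style output above, but the `BTreeMap` grouping is a simplification of the real sort/dedup/filter passes, and the rule/message pairs are hypothetical:

```rust
use std::collections::BTreeMap;

// Count violations per rule code and render them in Flake8's
// `count<TAB>code<TAB>message` layout, right-aligning the count column.
fn write_statistics(messages: &[(&str, &str)]) -> String {
    let mut counts: BTreeMap<&str, (usize, &str)> = BTreeMap::new();
    for &(code, body) in messages {
        // Keep the first message body seen per rule, like the
        // `find(...)` in the real implementation.
        counts.entry(code).or_insert((0, body)).0 += 1;
    }
    // Width of the widest count, for pretty-printing.
    let count_width = counts
        .values()
        .map(|(count, _)| count.to_string().len())
        .max()
        .unwrap_or(1);
    let mut out = String::new();
    for (code, (count, body)) in counts {
        out.push_str(&format!("{count:>count_width$}\t{code}\t{body}\n"));
    }
    out
}

fn main() {
    let messages = [
        ("F401", "`os` imported but unused"),
        ("F401", "`sys` imported but unused"),
        ("E501", "Line too long"),
    ];
    print!("{}", write_statistics(&messages));
}
```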
pub fn write_continuously(&self, diagnostics: &Diagnostics) -> Result<()> {
if matches!(self.log_level, LogLevel::Silent) {
return Ok(());


@@ -14,7 +14,7 @@ const BIN_NAME: &str = "ruff";
#[test]
fn test_stdin_success() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
cmd.args(["-", "--format", "text"])
cmd.args(["-", "--format", "text", "--isolated"])
.write_stdin("")
.assert()
.success();
@@ -25,7 +25,7 @@ fn test_stdin_success() -> Result<()> {
fn test_stdin_error() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text"])
.args(["-", "--format", "text", "--isolated"])
.write_stdin("import os\n")
.assert()
.failure();
@@ -41,7 +41,14 @@ fn test_stdin_error() -> Result<()> {
fn test_stdin_filename() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--stdin-filename", "F401.py"])
.args([
"-",
"--format",
"text",
"--stdin-filename",
"F401.py",
"--isolated",
])
.write_stdin("import os\n")
.assert()
.failure();
@@ -58,7 +65,14 @@ fn test_stdin_filename() -> Result<()> {
fn test_stdin_json() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "json", "--stdin-filename", "F401.py"])
.args([
"-",
"--format",
"json",
"--stdin-filename",
"F401.py",
"--isolated",
])
.write_stdin("import os\n")
.assert()
.failure();
@@ -107,7 +121,7 @@ fn test_stdin_json() -> Result<()> {
fn test_stdin_autofix() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.args(["-", "--format", "text", "--fix", "--isolated"])
.write_stdin("import os\nimport sys\n\nprint(sys.version)\n")
.assert()
.success();
@@ -122,7 +136,7 @@ fn test_stdin_autofix() -> Result<()> {
fn test_stdin_autofix_when_not_fixable_should_still_print_contents() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.args(["-", "--format", "text", "--fix", "--isolated"])
.write_stdin("import os\nimport sys\n\nif (1, 2):\n print(sys.version)\n")
.assert()
.failure();
@@ -137,7 +151,7 @@ fn test_stdin_autofix_when_not_fixable_should_still_print_contents() -> Result<(
fn test_stdin_autofix_when_no_issues_should_still_print_contents() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--fix"])
.args(["-", "--format", "text", "--fix", "--isolated"])
.write_stdin("import sys\n\nprint(sys.version)\n")
.assert()
.success();
@@ -152,7 +166,7 @@ fn test_stdin_autofix_when_no_issues_should_still_print_contents() -> Result<()>
fn test_show_source() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--show-source"])
.args(["-", "--format", "text", "--show-source", "--isolated"])
.write_stdin("l = 1")
.assert()
.failure();
@@ -163,8 +177,26 @@ fn test_show_source() -> Result<()> {
#[test]
fn explain_status_codes() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
cmd.args(["-", "--explain", "F401"]).assert().success();
cmd.args(["--explain", "F401"]).assert().success();
let mut cmd = Command::cargo_bin(BIN_NAME)?;
cmd.args(["-", "--explain", "RUF404"]).assert().failure();
cmd.args(["--explain", "RUF404"]).assert().failure();
Ok(())
}
#[test]
fn show_statistics() -> Result<()> {
let mut cmd = Command::cargo_bin(BIN_NAME)?;
let output = cmd
.args(["-", "--format", "text", "--select", "F401", "--statistics"])
.write_stdin("import sys\nimport os\n\nprint(os.getuid())\n")
.assert()
.failure();
assert_eq!(
str::from_utf8(&output.get_output().stdout)?
.lines()
.last()
.unwrap(),
"1\tF401\t`sys` imported but unused"
);
Ok(())
}


@@ -1,6 +1,6 @@
[package]
name = "ruff_dev"
version = "0.0.236"
version = "0.0.238"
edition = "2021"
[dependencies]
@@ -15,7 +15,7 @@ rustpython-ast = { features = ["unparse"], git = "https://github.com/RustPython/
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
rustpython-parser = { features = ["lalrpop"], git = "https://github.com/RustPython/RustPython.git", rev = "4f38cb68e4a97aeea9eb19673803a0bd5f655383" }
schemars = { version = "0.8.11" }
serde_json = {version="1.0.91"}
serde_json = { version = "1.0.91" }
strum = { version = "0.24.1", features = ["strum_macros"] }
strum_macros = { version = "0.24.3" }
textwrap = { version = "0.16.0" }


@@ -1,11 +1,14 @@
//! Generate CLI help.
use anyhow::Result;
use crate::utils::replace_readme_section;
use anyhow::Result;
use std::str;
const HELP_BEGIN_PRAGMA: &str = "<!-- Begin auto-generated cli help. -->";
const HELP_END_PRAGMA: &str = "<!-- End auto-generated cli help. -->";
const COMMAND_HELP_BEGIN_PRAGMA: &str = "<!-- Begin auto-generated command help. -->";
const COMMAND_HELP_END_PRAGMA: &str = "<!-- End auto-generated command help. -->";
const SUBCOMMAND_HELP_BEGIN_PRAGMA: &str = "<!-- Begin auto-generated subcommand help. -->";
const SUBCOMMAND_HELP_END_PRAGMA: &str = "<!-- End auto-generated subcommand help. -->";
#[derive(clap::Args)]
pub struct Args {
@@ -19,15 +22,25 @@ fn trim_lines(s: &str) -> String {
}
pub fn main(args: &Args) -> Result<()> {
let output = trim_lines(ruff_cli::help().trim());
// Generate `ruff help`.
let command_help = trim_lines(ruff_cli::command_help().trim());
// Generate `ruff help check`.
let subcommand_help = trim_lines(ruff_cli::subcommand_help().trim());
if args.dry_run {
print!("{output}");
print!("{command_help}");
print!("{subcommand_help}");
} else {
replace_readme_section(
&format!("```\n{output}\n```\n"),
HELP_BEGIN_PRAGMA,
HELP_END_PRAGMA,
&format!("```\n{command_help}\n```\n"),
COMMAND_HELP_BEGIN_PRAGMA,
COMMAND_HELP_END_PRAGMA,
)?;
replace_readme_section(
&format!("```\n{subcommand_help}\n```\n"),
SUBCOMMAND_HELP_BEGIN_PRAGMA,
SUBCOMMAND_HELP_END_PRAGMA,
)?;
}


@@ -1,6 +1,7 @@
//! Generate a Markdown-compatible table of supported lint rules.
use anyhow::Result;
use itertools::Itertools;
use ruff::registry::{Linter, LinterCategory, Rule, RuleNamespace};
use strum::IntoEnumIterator;
@@ -47,7 +48,15 @@ pub fn main(args: &Args) -> Result<()> {
let mut table_out = String::new();
let mut toc_out = String::new();
for linter in Linter::iter() {
let codes_csv: String = linter.prefixes().join(", ");
let codes_csv: String = match linter.common_prefix() {
"" => linter
.categories()
.unwrap()
.iter()
.map(|LinterCategory(prefix, ..)| prefix)
.join(", "),
prefix => prefix.to_string(),
};
table_out.push_str(&format!("### {} ({codes_csv})", linter.name()));
table_out.push('\n');
table_out.push('\n');


@@ -1,6 +1,6 @@
[package]
name = "ruff_macros"
version = "0.0.236"
version = "0.0.238"
edition = "2021"
[lib]


@@ -15,13 +15,15 @@ pub fn derive_impl(input: DeriveInput) -> syn::Result<proc_macro2::TokenStream>
let mut parsed = Vec::new();
let mut prefix_match_arms = quote!();
let mut common_prefix_match_arms = quote!();
let mut name_match_arms = quote!(Self::Ruff => "Ruff-specific rules",);
let mut url_match_arms = quote!(Self::Ruff => None,);
let mut into_iter_match_arms = quote!();
let mut all_prefixes = HashSet::new();
for variant in variants {
let mut first_chars = HashSet::new();
let prefixes: Result<Vec<_>, _> = variant
.attrs
.iter()
@@ -33,7 +35,9 @@ pub fn derive_impl(input: DeriveInput) -> syn::Result<proc_macro2::TokenStream>
let str = lit.value();
match str.chars().next() {
None => return Err(Error::new(lit.span(), "expected prefix string to be non-empty")),
Some(_) => {},
Some(c) => if !first_chars.insert(c) {
return Err(Error::new(lit.span(), format!("this variant already has another prefix starting with the character '{c}'")))
}
}
if !all_prefixes.insert(str.clone()) {
return Err(Error::new(lit.span(), "prefix has already been defined before"));
@@ -63,31 +67,43 @@ pub fn derive_impl(input: DeriveInput) -> syn::Result<proc_macro2::TokenStream>
}
for lit in &prefixes {
parsed.push((lit.clone(), variant_ident.clone()));
parsed.push((
lit.clone(),
variant_ident.clone(),
match prefixes.len() {
1 => ParseStrategy::SinglePrefix,
_ => ParseStrategy::MultiplePrefixes,
},
));
}
prefix_match_arms.extend(quote! {
Self::#variant_ident => &[#(#prefixes),*],
});
if let [prefix] = &prefixes[..] {
common_prefix_match_arms.extend(quote! { Self::#variant_ident => #prefix, });
let prefix_ident = Ident::new(prefix, Span::call_site());
into_iter_match_arms.extend(quote! {
#ident::#variant_ident => RuleCodePrefix::#prefix_ident.into_iter(),
});
} else {
// There is more than one prefix. We already previously asserted
// that prefixes of the same variant don't start with the same character
// so the common prefix for this variant is the empty string.
common_prefix_match_arms.extend(quote! { Self::#variant_ident => "", });
}
}
parsed.sort_by_key(|(prefix, _)| Reverse(prefix.len()));
parsed.sort_by_key(|(prefix, ..)| Reverse(prefix.len()));
let mut if_statements = quote!();
let mut into_iter_match_arms = quote!();
for (prefix, field) in parsed {
for (prefix, field, strategy) in parsed {
let ret_str = match strategy {
ParseStrategy::SinglePrefix => quote!(rest),
ParseStrategy::MultiplePrefixes => quote!(code),
};
if_statements.extend(quote! {if let Some(rest) = code.strip_prefix(#prefix) {
return Some((#ident::#field, rest));
return Some((#ident::#field, #ret_str));
}});
let prefix_ident = Ident::new(&prefix, Span::call_site());
if field != "Pycodestyle" {
into_iter_match_arms.extend(quote! {
#ident::#field => RuleCodePrefix::#prefix_ident.into_iter(),
});
}
}
into_iter_match_arms.extend(quote! {
@@ -104,9 +120,8 @@ pub fn derive_impl(input: DeriveInput) -> syn::Result<proc_macro2::TokenStream>
None
}
fn prefixes(&self) -> &'static [&'static str] {
match self { #prefix_match_arms }
fn common_prefix(&self) -> &'static str {
match self { #common_prefix_match_arms }
}
fn name(&self) -> &'static str {
@@ -152,3 +167,8 @@ fn parse_doc_attr(doc_attr: &Attribute) -> syn::Result<(String, String)> {
fn parse_markdown_link(link: &str) -> Option<(&str, &str)> {
link.strip_prefix('[')?.strip_suffix(')')?.split_once("](")
}
enum ParseStrategy {
SinglePrefix,
MultiplePrefixes,
}
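The `ParseStrategy` distinction above determines what the generated parser returns: single-prefix variants yield only the remainder after the prefix (`rest`), while multi-prefix variants yield the full code, since the caller still needs to know which prefix matched. A minimal standalone sketch of that dispatch, using hypothetical variant names and prefixes rather than the real rule categories:

```rust
// Standalone sketch of the generated prefix dispatch; the variants and
// prefixes here are hypothetical, not the real rule categories.
#[derive(Debug, PartialEq)]
enum Linter {
    Pyflakes, // single prefix: "F"
    Pylint,   // multiple prefixes: "PLC", "PLE"
}

/// Longer prefixes are tried first, as in the generated `if` chain
/// (hence the `Reverse(prefix.len())` sort in the macro).
fn parse_code(code: &str) -> Option<(Linter, &str)> {
    if code.starts_with("PLC") || code.starts_with("PLE") {
        // MultiplePrefixes: return the full code, so the caller can still
        // tell which prefix matched.
        return Some((Linter::Pylint, code));
    }
    if let Some(rest) = code.strip_prefix("F") {
        // SinglePrefix: return only the part after the prefix.
        return Some((Linter::Pyflakes, rest));
    }
    None
}

fn main() {
    assert_eq!(parse_code("F401"), Some((Linter::Pyflakes, "401")));
    assert_eq!(parse_code("PLC0414"), Some((Linter::Pylint, "PLC0414")));
    assert_eq!(parse_code("X1"), None);
    println!("ok");
}
```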


@@ -1,15 +1,4 @@
condense_wildcard_suffixes = true
edition = "2021"
format_strings = true
group_imports = "StdExternalCrate"
hex_literal_case = "Lower"
imports_granularity = "Module"
max_width = 100
normalize_comments = true
normalize_doc_attributes = true
reorder_impl_items = true
reorder_imports = true
reorder_modules = true
unstable_features = true
use_field_init_shorthand = true
wrap_comments = true


@@ -0,0 +1,67 @@
"""Generate an MkDocs-compatible `docs` and `mkdocs.yml` from the README.md."""
import argparse
import shutil
from pathlib import Path
import yaml
SECTIONS: list[tuple[str, str]] = [
("Overview", "index.md"),
("Installation and Usage", "installation-and-usage.md"),
("Configuration", "configuration.md"),
("Rules", "rules.md"),
("Settings", "settings.md"),
("Editor Integrations", "editor-integrations.md"),
("FAQ", "faq.md"),
]
def main() -> None:
"""Generate an MkDocs-compatible `docs` and `mkdocs.yml`."""
with Path("README.md").open(encoding="utf8") as fp:
content = fp.read()
Path("docs").mkdir(parents=True, exist_ok=True)
# Split the README.md into sections.
for (title, filename) in SECTIONS:
with Path(f"docs/{filename}").open("w+") as f:
block = content.split(f"<!-- Begin section: {title} -->")
if len(block) != 2:
msg = f"Section {title} not found in README.md"
raise ValueError(msg)
block = block[1].split(f"<!-- End section: {title} -->")
if len(block) != 2:
msg = f"Section {title} not found in README.md"
raise ValueError(msg)
f.write(block[0])
# Copy the CONTRIBUTING.md.
shutil.copy("CONTRIBUTING.md", "docs/contributing.md")
# Add the nav section to mkdocs.yml.
with Path("mkdocs.template.yml").open(encoding="utf8") as fp:
config = yaml.safe_load(fp)
config["nav"] = [
{"Overview": "index.md"},
{"Installation and Usage": "installation-and-usage.md"},
{"Configuration": "configuration.md"},
{"Rules": "rules.md"},
{"Settings": "settings.md"},
{"Editor Integrations": "editor-integrations.md"},
{"FAQ": "faq.md"},
{"Contributing": "contributing.md"},
]
with Path("mkdocs.yml").open("w+") as fp:
yaml.safe_dump(config, fp)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate an MkDocs-compatible `docs` and `mkdocs.yml`.",
)
args = parser.parse_args()
main()


@@ -5,6 +5,7 @@ ignore = [
"INP001", # implicit-namespace-package
"PLR2004", # magic-value-comparison
"S101", # assert-used
"EM",
]
[tool.ruff.pydocstyle]


@@ -0,0 +1,71 @@
"""Transform the README.md to support a specific deployment target.
By default, we assume that our README.md will be rendered on GitHub. However, different
targets have different strategies for rendering light- and dark-mode images. This script
adjusts the images in the README.md to support the given target.
"""
import argparse
from pathlib import Path
# https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#specifying-the-theme-an-image-is-shown-to
GITHUB = """
<p align="center">
<picture align="center">
<source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/1309177/212613422-7faaf278-706b-4294-ad92-236ffcab3430.svg">
<source media="(prefers-color-scheme: light)" srcset="https://user-images.githubusercontent.com/1309177/212613257-5f4bca12-6d6b-4c79-9bac-51a4c6d08928.svg">
<img alt="Shows a bar chart with benchmark results." src="https://user-images.githubusercontent.com/1309177/212613257-5f4bca12-6d6b-4c79-9bac-51a4c6d08928.svg">
</picture>
</p>
"""
# https://github.com/pypi/warehouse/issues/11251
PYPI = """
<p align="center">
<img alt="Shows a bar chart with benchmark results." src="https://user-images.githubusercontent.com/1309177/212613257-5f4bca12-6d6b-4c79-9bac-51a4c6d08928.svg">
</p>
"""
# https://squidfunk.github.io/mkdocs-material/reference/images/#light-and-dark-mode
MK_DOCS = """
<p align="center">
<img alt="Shows a bar chart with benchmark results." src="https://user-images.githubusercontent.com/1309177/212613257-5f4bca12-6d6b-4c79-9bac-51a4c6d08928.svg#only-light">
</p>
<p align="center">
<img alt="Shows a bar chart with benchmark results." src="https://user-images.githubusercontent.com/1309177/212613422-7faaf278-706b-4294-ad92-236ffcab3430.svg#only-dark">
</p>
"""
def main(target: str) -> None:
"""Modify the README.md to support the given target."""
with Path("README.md").open(encoding="utf8") as fp:
content = fp.read()
if GITHUB not in content:
msg = "README.md is not in the expected format."
raise ValueError(msg)
if target == "pypi":
with Path("README.md").open("w", encoding="utf8") as fp:
fp.write(content.replace(GITHUB, PYPI))
elif target == "mkdocs":
with Path("README.md").open("w", encoding="utf8") as fp:
fp.write(content.replace(GITHUB, MK_DOCS))
else:
msg = f"Unknown target: {target}"
raise ValueError(msg)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Modify the README.md to support a specific deployment target.",
)
parser.add_argument(
"--target",
type=str,
required=True,
choices=("pypi", "mkdocs"),
)
args = parser.parse_args()
main(target=args.target)


@@ -559,6 +559,17 @@ pub fn collect_arg_names<'a>(arguments: &'a Arguments) -> FxHashSet<&'a str> {
}
/// Returns `true` if a statement or expression includes at least one comment.
pub fn has_comments<T>(located: &Located<T>, locator: &Locator) -> bool {
has_comments_in(
Range::new(
Location::new(located.location.row(), 0),
Location::new(located.end_location.unwrap().row() + 1, 0),
),
locator,
)
}
/// Returns `true` if a [`Range`] includes at least one comment.
pub fn has_comments_in(range: Range, locator: &Locator) -> bool {
lexer::make_tokenizer(locator.slice_source_code_range(&range))
.any(|result| result.map_or(false, |(_, tok, _)| matches!(tok, Tok::Comment(..))))
@@ -651,6 +662,17 @@ pub fn to_absolute(relative: Location, base: Location) -> Location {
}
}
pub fn to_relative(absolute: Location, base: Location) -> Location {
if absolute.row() == base.row() {
Location::new(
absolute.row() - base.row() + 1,
absolute.column() - base.column(),
)
} else {
Location::new(absolute.row() - base.row() + 1, absolute.column())
}
}
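`to_relative` only re-bases the column when the two locations share a row; on any later row the column is already measured from the start of its own line, and rows stay one-indexed (hence the `+ 1`). A self-contained sketch of the same arithmetic, with `(row, column)` tuples standing in for `Location`:

```rust
// (row, column) tuples stand in for rustpython's `Location`; rows are
// one-indexed, columns zero-indexed.
fn to_relative(absolute: (usize, usize), base: (usize, usize)) -> (usize, usize) {
    if absolute.0 == base.0 {
        // Same row: the column must be re-based as well.
        (absolute.0 - base.0 + 1, absolute.1 - base.1)
    } else {
        // Later row: the column is already relative to its own line start.
        (absolute.0 - base.0 + 1, absolute.1)
    }
}

fn main() {
    // On the base row, columns shift by the base column.
    assert_eq!(to_relative((1, 10), (1, 4)), (1, 6));
    // On later rows, only the row is re-based.
    assert_eq!(to_relative((3, 5), (1, 4)), (3, 5));
    println!("ok");
}
```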
/// Return `true` if a `Stmt` has leading content.
pub fn match_leading_content(stmt: &Stmt, locator: &Locator) -> bool {
let range = Range::new(Location::new(stmt.location.row(), 0), stmt.location);
@@ -1010,7 +1032,7 @@ pub fn is_logger_candidate(func: &Expr) -> bool {
if let ExprKind::Attribute { value, .. } = &func.node {
let call_path = collect_call_path(value);
if let Some(tail) = call_path.last() {
if *tail == "logging" || tail.ends_with("logger") {
if tail.starts_with("log") || tail.ends_with("logger") || tail.ends_with("logging") {
return true;
}
}


@@ -1,3 +1,4 @@
use bitflags::bitflags;
use rustc_hash::FxHashMap;
use rustpython_ast::{Cmpop, Located};
use rustpython_parser::ast::{Constant, Expr, ExprKind, Stmt, StmtKind};
@@ -9,9 +10,21 @@ use crate::ast::types::{Binding, BindingKind, Scope};
use crate::ast::visitor;
use crate::ast::visitor::Visitor;
bitflags! {
#[derive(Default)]
pub struct AllNamesFlags: u32 {
const INVALID_FORMAT = 0b0000_0001;
const INVALID_OBJECT = 0b0000_0010;
}
}
/// Extract the names bound to a given __all__ assignment.
pub fn extract_all_names(stmt: &Stmt, scope: &Scope, bindings: &[Binding]) -> Vec<String> {
fn add_to_names(names: &mut Vec<String>, elts: &[Expr]) {
pub fn extract_all_names(
stmt: &Stmt,
scope: &Scope,
bindings: &[Binding],
) -> (Vec<String>, AllNamesFlags) {
fn add_to_names(names: &mut Vec<String>, elts: &[Expr], flags: &mut AllNamesFlags) {
for elt in elts {
if let ExprKind::Constant {
value: Constant::Str(value),
@@ -19,11 +32,14 @@ pub fn extract_all_names(stmt: &Stmt, scope: &Scope, bindings: &[Binding]) -> Ve
} = &elt.node
{
names.push(value.to_string());
} else {
*flags |= AllNamesFlags::INVALID_OBJECT;
}
}
}
let mut names: Vec<String> = vec![];
let mut flags = AllNamesFlags::empty();
// Grab the existing bound __all__ values.
if let StmtKind::AugAssign { .. } = &stmt.node {
@@ -42,7 +58,7 @@ pub fn extract_all_names(stmt: &Stmt, scope: &Scope, bindings: &[Binding]) -> Ve
} {
match &value.node {
ExprKind::List { elts, .. } | ExprKind::Tuple { elts, .. } => {
add_to_names(&mut names, elts);
add_to_names(&mut names, elts, &mut flags);
}
ExprKind::BinOp { left, right, .. } => {
let mut current_left = left;
@@ -50,27 +66,40 @@ pub fn extract_all_names(stmt: &Stmt, scope: &Scope, bindings: &[Binding]) -> Ve
while let Some(elts) = match &current_right.node {
ExprKind::List { elts, .. } => Some(elts),
ExprKind::Tuple { elts, .. } => Some(elts),
_ => None,
_ => {
flags |= AllNamesFlags::INVALID_FORMAT;
None
}
} {
add_to_names(&mut names, elts);
add_to_names(&mut names, elts, &mut flags);
match &current_left.node {
ExprKind::BinOp { left, right, .. } => {
current_left = left;
current_right = right;
}
ExprKind::List { elts, .. } | ExprKind::Tuple { elts, .. } => {
add_to_names(&mut names, elts);
add_to_names(&mut names, elts, &mut flags);
break;
}
_ => {
flags |= AllNamesFlags::INVALID_FORMAT;
break;
}
_ => break,
}
}
}
_ => {}
ExprKind::ListComp { .. } => {
// Allow list comprehensions, even though we can't statically analyze them.
// TODO(charlie): Allow `list()` and `tuple()` calls too, and extract the members
// from them (even if, e.g., it's `list({...})`).
}
_ => {
flags |= AllNamesFlags::INVALID_FORMAT;
}
}
}
names
(names, flags)
}
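Threading `AllNamesFlags` through the extraction lets one pass both collect the string members and record why extraction was partial. A standalone sketch of that shape, with plain `u32` masks standing in for the `bitflags!`-generated type and a toy `Value` enum standing in for `ExprKind`:

```rust
// Plain u32 masks stand in for the `bitflags!`-generated `AllNamesFlags`.
const INVALID_FORMAT: u32 = 0b01; // set by the real code for non-list shapes
const INVALID_OBJECT: u32 = 0b10;

enum Value {
    Str(String),
    Other, // e.g. a number or call inside `__all__`
}

/// Collect string members in one pass, flagging any non-string entries.
fn extract_names(values: &[Value]) -> (Vec<String>, u32) {
    let mut names = Vec::new();
    let mut flags = 0u32;
    for value in values {
        match value {
            Value::Str(s) => names.push(s.clone()),
            Value::Other => flags |= INVALID_OBJECT,
        }
    }
    (names, flags)
}

fn main() {
    let values = [Value::Str("a".into()), Value::Other, Value::Str("b".into())];
    let (names, flags) = extract_names(&values);
    assert_eq!(names, vec!["a".to_string(), "b".to_string()]);
    assert_eq!(flags & INVALID_OBJECT, INVALID_OBJECT);
    assert_eq!(flags & INVALID_FORMAT, 0);
    println!("ok");
}
```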
#[derive(Default)]


@@ -140,7 +140,7 @@ pub enum BindingKind<'a> {
SubmoduleImportation(&'a str, &'a str),
}
#[derive(Clone)]
#[derive(Debug, Clone)]
pub struct Binding<'a> {
pub kind: BindingKind<'a>,
pub range: Range,
@@ -161,7 +161,7 @@ pub struct Binding<'a> {
pub synthetic_usage: Option<(usize, Range)>,
}
#[derive(Copy, Clone)]
#[derive(Copy, Debug, Clone)]
pub enum ExecutionContext {
Runtime,
Typing,


@@ -1 +0,0 @@


@@ -3,7 +3,9 @@ use itertools::Itertools;
use libcst_native::{
Codegen, CodegenState, ImportNames, ParenthesizableWhitespace, SmallStatement, Statement,
};
use rustpython_parser::ast::{ExcepthandlerKind, Location, Stmt, StmtKind};
use rustpython_parser::ast::{ExcepthandlerKind, Expr, Keyword, Location, Stmt, StmtKind};
use rustpython_parser::lexer;
use rustpython_parser::lexer::Tok;
use crate::ast::helpers;
use crate::ast::helpers::to_absolute;
@@ -321,6 +323,109 @@ pub fn remove_unused_imports<'a>(
}
}
/// Generic function to remove arguments or keyword arguments in function
/// calls and class definitions. (For classes, `args` should be considered
/// `bases`.)
///
/// Supports the removal of parentheses when this is the only (kw)arg left.
/// For this behavior, set `remove_parentheses` to `true`.
pub fn remove_argument(
locator: &Locator,
stmt_at: Location,
expr_at: Location,
expr_end: Location,
args: &[Expr],
keywords: &[Keyword],
remove_parentheses: bool,
) -> Result<Fix> {
// TODO(sbrugman): Preserve trailing comments.
let contents = locator.slice_source_code_at(stmt_at);
let mut fix_start = None;
let mut fix_end = None;
let n_arguments = keywords.len() + args.len();
if n_arguments == 0 {
bail!("No arguments or keywords to remove");
}
if n_arguments == 1 {
// Case 1: there is only one argument.
let mut count: usize = 0;
for (start, tok, end) in lexer::make_tokenizer_located(contents, stmt_at).flatten() {
if matches!(tok, Tok::Lpar) {
if count == 0 {
fix_start = Some(if remove_parentheses {
start
} else {
Location::new(start.row(), start.column() + 1)
});
}
count += 1;
}
if matches!(tok, Tok::Rpar) {
count -= 1;
if count == 0 {
fix_end = Some(if remove_parentheses {
end
} else {
Location::new(end.row(), end.column() - 1)
});
break;
}
}
}
} else if args
.iter()
.map(|node| node.location)
.chain(keywords.iter().map(|node| node.location))
.any(|location| location > expr_at)
{
// Case 2: argument or keyword is _not_ the last node.
let mut seen_comma = false;
for (start, tok, end) in lexer::make_tokenizer_located(contents, stmt_at).flatten() {
if seen_comma {
if matches!(tok, Tok::NonLogicalNewline) {
// Also delete any non-logical newlines after the comma.
continue;
}
fix_end = Some(if matches!(tok, Tok::Newline) {
end
} else {
start
});
break;
}
if start == expr_at {
fix_start = Some(start);
}
if fix_start.is_some() && matches!(tok, Tok::Comma) {
seen_comma = true;
}
}
} else {
// Case 3: argument or keyword is the last node, so we have to find the last
// comma in the stmt.
for (start, tok, _) in lexer::make_tokenizer_located(contents, stmt_at).flatten() {
if start == expr_at {
fix_end = Some(expr_end);
break;
}
if matches!(tok, Tok::Comma) {
fix_start = Some(start);
}
}
}
match (fix_start, fix_end) {
(Some(start), Some(end)) => Ok(Fix::deletion(start, end)),
_ => {
bail!("No fix could be constructed")
}
}
}
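The three cases above reduce to a choice of deletion range around the commas: an inner argument is deleted together with its trailing comma, while the last argument is deleted together with its leading comma. A standalone sketch of that choice, with arguments given as byte ranges into the call text (an assumption for the sketch; the real function walks lexer tokens and `Location`s instead):

```rust
// Sketch of the comma handling in `remove_argument`; `args` holds the
// (start, end) byte ranges of each argument within `call`.
fn remove_arg(call: &str, args: &[(usize, usize)], i: usize) -> String {
    let (start, end) = args[i];
    let (del_start, del_end) = if i + 1 < args.len() {
        // Not the last argument: delete up to the start of the next one,
        // taking the trailing comma and whitespace with it.
        (start, args[i + 1].0)
    } else if i > 0 {
        // Last argument: delete from the end of the previous one,
        // taking the leading comma with it.
        (args[i - 1].1, end)
    } else {
        // Only argument: delete just the argument itself.
        (start, end)
    };
    format!("{}{}", &call[..del_start], &call[del_end..])
}

fn main() {
    let call = "f(a, b, c)";
    // Byte ranges of `a`, `b`, `c` in the string above.
    let args = [(2, 3), (5, 6), (8, 9)];
    assert_eq!(remove_arg(call, &args, 1), "f(a, c)");
    assert_eq!(remove_arg(call, &args, 2), "f(a, b)");
    assert_eq!(remove_arg(call, &args, 0), "f(b, c)");
    println!("ok");
}
```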
#[cfg(test)]
mod tests {
use anyhow::Result;


@@ -8,7 +8,6 @@ use crate::fix::Fix;
use crate::registry::Diagnostic;
use crate::source_code::Locator;
pub mod fixer;
pub mod helpers;
/// Auto-fix errors in a file, and write the fixed source code to disk.
@@ -28,7 +27,7 @@ fn apply_fixes<'a>(
fixes: impl Iterator<Item = &'a Fix>,
locator: &'a Locator<'a>,
) -> (String, usize) {
let mut output = String::new();
let mut output = String::with_capacity(locator.len());
let mut last_pos: Location = Location::new(1, 0);
let mut applied: BTreeSet<&Fix> = BTreeSet::default();
let mut num_fixed: usize = 0;
@@ -67,11 +66,29 @@ fn apply_fixes<'a>(
(output, num_fixed)
}
/// Apply a single fix.
pub(crate) fn apply_fix(fix: &Fix, locator: &Locator) -> String {
let mut output = String::with_capacity(locator.len());
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice_source_code_range(&Range::new(Location::new(1, 0), fix.location));
output.push_str(slice);
// Add the patch itself.
output.push_str(&fix.content);
// Add the remaining content.
let slice = locator.slice_source_code_at(fix.end_location);
output.push_str(slice);
output
}
#[cfg(test)]
mod tests {
use rustpython_parser::ast::Location;
use crate::autofix::apply_fixes;
use crate::autofix::{apply_fix, apply_fixes};
use crate::fix::Fix;
use crate::source_code::Locator;
@@ -85,7 +102,7 @@ mod tests {
}
#[test]
fn apply_single_replacement() {
fn apply_one_replacement() {
let fixes = vec![Fix {
content: "Bar".to_string(),
location: Location::new(1, 8),
@@ -111,7 +128,7 @@ class A(Bar):
}
#[test]
fn apply_single_removal() {
fn apply_one_removal() {
let fixes = vec![Fix {
content: String::new(),
location: Location::new(1, 7),
@@ -137,7 +154,7 @@ class A:
}
#[test]
fn apply_double_removal() {
fn apply_two_removals() {
let fixes = vec![
Fix {
content: String::new(),
@@ -202,4 +219,31 @@ class A:
);
assert_eq!(fixed, 1);
}
#[test]
fn apply_single_fix() {
let locator = Locator::new(
r#"
class A(object):
...
"#
.trim(),
);
let contents = apply_fix(
&Fix {
content: String::new(),
location: Location::new(1, 7),
end_location: Location::new(1, 15),
},
&locator,
);
assert_eq!(
contents,
r#"
class A:
...
"#
.trim()
);
}
}
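The `apply_fix` helper above is a three-part splice: everything before the fix, the patch content, everything after. The same shape with byte offsets standing in for `Location`s (an assumption for the sketch):

```rust
// Sketch of `apply_fix` with byte offsets standing in for `Location`s.
fn apply_fix(source: &str, start: usize, end: usize, content: &str) -> String {
    let mut output = String::with_capacity(source.len());
    output.push_str(&source[..start]); // everything before the fix
    output.push_str(content);          // the patch itself
    output.push_str(&source[end..]);   // everything after the fix
    output
}

fn main() {
    // Delete `(object)` from a class definition, mirroring the
    // `apply_single_fix` test's (1, 7)..(1, 15) deletion.
    assert_eq!(apply_fix("class A(object):", 7, 15, ""), "class A:");
    println!("ok");
}
```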


@@ -17,7 +17,7 @@ use rustpython_parser::parser;
use smallvec::smallvec;
use crate::ast::helpers::{binding_range, collect_call_path, extract_handler_names};
use crate::ast::operations::extract_all_names;
use crate::ast::operations::{extract_all_names, AllNamesFlags};
use crate::ast::relocate::relocate_expr;
use crate::ast::types::{
Binding, BindingKind, CallPath, ClassDef, ExecutionContext, FunctionDef, Lambda, Node, Range,
@@ -93,6 +93,7 @@ pub struct Checker<'a> {
in_type_definition: bool,
in_deferred_string_type_definition: bool,
in_deferred_type_definition: bool,
in_exception_handler: bool,
in_literal: bool,
in_subscript: bool,
in_type_checking_block: bool,
@@ -153,6 +154,7 @@ impl<'a> Checker<'a> {
in_type_definition: false,
in_deferred_string_type_definition: false,
in_deferred_type_definition: false,
in_exception_handler: false,
in_literal: false,
in_subscript: false,
in_type_checking_block: false,
@@ -392,7 +394,9 @@ where
if !exists {
if self.settings.rules.enabled(&Rule::NonlocalWithoutBinding) {
self.diagnostics.push(Diagnostic::new(
violations::NonlocalWithoutBinding(name.to_string()),
violations::NonlocalWithoutBinding {
name: name.to_string(),
},
*range,
));
}
@@ -692,6 +696,9 @@ where
context,
},
);
if self.settings.rules.enabled(&Rule::TooManyArgs) {
pylint::rules::too_many_args(self, args, stmt);
}
}
StmtKind::Return { .. } => {
if self.settings.rules.enabled(&Rule::ReturnOutsideFunction) {
@@ -1138,9 +1145,9 @@ where
if self.settings.rules.enabled(&Rule::FutureFeatureNotDefined) {
if !ALL_FEATURE_NAMES.contains(&&*alias.node.name) {
self.diagnostics.push(Diagnostic::new(
violations::FutureFeatureNotDefined(
alias.node.name.to_string(),
),
violations::FutureFeatureNotDefined {
name: alias.node.name.to_string(),
},
Range::from_located(alias),
));
}
@@ -1173,12 +1180,12 @@ where
[*(self.scope_stack.last().expect("No current scope found"))];
if !matches!(scope.kind, ScopeKind::Module) {
self.diagnostics.push(Diagnostic::new(
violations::ImportStarNotPermitted(
helpers::format_import_from(
violations::ImportStarNotPermitted {
name: helpers::format_import_from(
level.as_ref(),
module.as_deref(),
),
),
},
Range::from_located(stmt),
));
}
@@ -1186,10 +1193,12 @@ where
if self.settings.rules.enabled(&Rule::ImportStarUsed) {
self.diagnostics.push(Diagnostic::new(
violations::ImportStarUsed(helpers::format_import_from(
level.as_ref(),
module.as_deref(),
)),
violations::ImportStarUsed {
name: helpers::format_import_from(
level.as_ref(),
module.as_deref(),
),
},
Range::from_located(stmt),
));
}
@@ -1707,6 +1716,7 @@ where
}
// Recurse.
let prev_in_exception_handler = self.in_exception_handler;
let prev_visible_scope = self.visible_scope.clone();
match &stmt.node {
StmtKind::FunctionDef {
@@ -1858,9 +1868,13 @@ where
}
self.visit_body(body);
self.except_handlers.pop();
self.in_exception_handler = true;
for excepthandler in handlers {
self.visit_excepthandler(excepthandler);
}
self.in_exception_handler = prev_in_exception_handler;
self.visit_body(orelse);
self.visit_body(finalbody);
}
@@ -2085,11 +2099,6 @@ where
{
pyupgrade::rules::use_pep585_annotation(self, expr);
}
if self.settings.rules.enabled(&Rule::RemoveSixCompat) {
pyupgrade::rules::remove_six_compat(self, expr);
}
if self.settings.rules.enabled(&Rule::DatetimeTimezoneUTC)
&& self.settings.target_version >= PythonVersion::Py311
{
@@ -2101,16 +2110,13 @@ where
if self.settings.rules.enabled(&Rule::RewriteMockImport) {
pyupgrade::rules::rewrite_mock_attribute(self, expr);
}
if self.settings.rules.enabled(&Rule::SixPY3Referenced) {
flake8_2020::rules::name_or_attribute(self, expr);
}
pandas_vet::rules::check_attr(self, attr, value, expr);
if self.settings.rules.enabled(&Rule::BannedApi) {
flake8_tidy_imports::banned_api::banned_attribute_access(self, expr);
}
pandas_vet::rules::check_attr(self, attr, value, expr);
}
ExprKind::Call {
func,
@@ -2144,9 +2150,9 @@ where
.enabled(&Rule::StringDotFormatInvalidFormat)
{
self.diagnostics.push(Diagnostic::new(
violations::StringDotFormatInvalidFormat(
pyflakes::format::error_to_string(&e),
),
violations::StringDotFormatInvalidFormat {
message: pyflakes::format::error_to_string(&e),
},
location,
));
}
@@ -2223,9 +2229,6 @@ where
if self.settings.rules.enabled(&Rule::RedundantOpenModes) {
pyupgrade::rules::redundant_open_modes(self, expr);
}
if self.settings.rules.enabled(&Rule::RemoveSixCompat) {
pyupgrade::rules::remove_six_compat(self, expr);
}
if self.settings.rules.enabled(&Rule::NativeLiterals) {
pyupgrade::rules::native_literals(self, expr, func, args, keywords);
}
@@ -2475,8 +2478,9 @@ where
// pandas-vet
if self.settings.rules.enabled(&Rule::UseOfInplaceArgument) {
self.diagnostics
.extend(pandas_vet::rules::inplace_argument(keywords).into_iter());
self.diagnostics.extend(
pandas_vet::rules::inplace_argument(self, expr, args, keywords).into_iter(),
);
}
pandas_vet::rules::check_call(self, func);
@@ -2705,7 +2709,9 @@ where
let scope = self.current_scope();
if matches!(scope.kind, ScopeKind::Class(_) | ScopeKind::Module) {
self.diagnostics.push(Diagnostic::new(
violations::YieldOutsideFunction(DeferralKeyword::Yield),
violations::YieldOutsideFunction {
keyword: DeferralKeyword::Yield,
},
Range::from_located(expr),
));
}
@@ -2716,7 +2722,9 @@ where
let scope = self.current_scope();
if matches!(scope.kind, ScopeKind::Class(_) | ScopeKind::Module) {
self.diagnostics.push(Diagnostic::new(
violations::YieldOutsideFunction(DeferralKeyword::YieldFrom),
violations::YieldOutsideFunction {
keyword: DeferralKeyword::YieldFrom,
},
Range::from_located(expr),
));
}
@@ -2727,7 +2735,9 @@ where
let scope = self.current_scope();
if matches!(scope.kind, ScopeKind::Class(_) | ScopeKind::Module) {
self.diagnostics.push(Diagnostic::new(
violations::YieldOutsideFunction(DeferralKeyword::Await),
violations::YieldOutsideFunction {
keyword: DeferralKeyword::Await,
},
Range::from_located(expr),
));
}
@@ -2813,7 +2823,9 @@ where
.enabled(&Rule::PercentFormatUnsupportedFormatCharacter)
{
self.diagnostics.push(Diagnostic::new(
violations::PercentFormatUnsupportedFormatCharacter(c),
violations::PercentFormatUnsupportedFormatCharacter {
char: c,
},
location,
));
}
@@ -2825,7 +2837,9 @@ where
.enabled(&Rule::PercentFormatInvalidFormat)
{
self.diagnostics.push(Diagnostic::new(
violations::PercentFormatInvalidFormat(e.to_string()),
violations::PercentFormatInvalidFormat {
message: e.to_string(),
},
location,
));
}
@@ -3447,6 +3461,15 @@ where
body,
);
}
if self.settings.rules.enabled(&Rule::TryExceptPass) {
flake8_bandit::rules::try_except_pass(
self,
type_.as_deref(),
name.as_deref(),
body,
self.settings.flake8_bandit.check_typed_exception,
);
}
if self.settings.rules.enabled(&Rule::ReraiseNoCause) {
tryceratops::rules::reraise_no_cause(self, body);
}
@@ -3504,7 +3527,9 @@ where
if !self.bindings[*index].used() {
if self.settings.rules.enabled(&Rule::UnusedVariable) {
let mut diagnostic = Diagnostic::new(
violations::UnusedVariable(name.to_string()),
violations::UnusedVariable {
name: name.to_string(),
},
name_range,
);
if self.patch(&Rule::UnusedVariable) {
@@ -3723,7 +3748,7 @@ impl<'a> Checker<'a> {
kind: BindingKind::Builtin,
range: Range::default(),
runtime_usage: None,
synthetic_usage: None,
synthetic_usage: Some((0, Range::default())),
typing_usage: None,
source: None,
context: ExecutionContext::Runtime,
@@ -3763,6 +3788,10 @@ impl<'a> Checker<'a> {
.map(|index| &self.scopes[*index])
}
pub fn in_exception_handler(&self) -> bool {
self.in_exception_handler
}
pub fn execution_context(&self) -> ExecutionContext {
if self.in_type_checking_block
|| self.in_annotation
@@ -3814,10 +3843,10 @@ impl<'a> Checker<'a> {
if matches!(binding.kind, BindingKind::LoopVar) && existing_is_import {
if self.settings.rules.enabled(&Rule::ImportShadowedByLoopVar) {
self.diagnostics.push(Diagnostic::new(
violations::ImportShadowedByLoopVar(
name.to_string(),
existing.range.location.row(),
),
violations::ImportShadowedByLoopVar {
name: name.to_string(),
line: existing.range.location.row(),
},
binding.range,
));
}
@@ -3833,10 +3862,10 @@ impl<'a> Checker<'a> {
{
if self.settings.rules.enabled(&Rule::RedefinedWhileUnused) {
self.diagnostics.push(Diagnostic::new(
violations::RedefinedWhileUnused(
name.to_string(),
existing.range.location.row(),
),
violations::RedefinedWhileUnused {
name: name.to_string(),
line: existing.range.location.row(),
},
binding_range(&binding, self.locator),
));
}
@@ -3850,16 +3879,41 @@ impl<'a> Checker<'a> {
}
}
// Assume the rebound name is used as a global or within a loop.
let scope = self.current_scope();
let binding = match scope.values.get(&name) {
None => binding,
Some(index) => Binding {
runtime_usage: self.bindings[*index].runtime_usage,
synthetic_usage: self.bindings[*index].synthetic_usage,
typing_usage: self.bindings[*index].typing_usage,
..binding
},
let binding = if let Some(index) = scope.values.get(&name) {
if matches!(self.bindings[*index].kind, BindingKind::Builtin) {
// Avoid overriding builtins.
binding
} else if matches!(self.bindings[*index].kind, BindingKind::Global) {
// If the original binding was a global, and the new binding conflicts within the
// current scope, then the new binding is also a global.
Binding {
runtime_usage: self.bindings[*index].runtime_usage,
synthetic_usage: self.bindings[*index].synthetic_usage,
typing_usage: self.bindings[*index].typing_usage,
kind: BindingKind::Global,
..binding
}
} else if matches!(self.bindings[*index].kind, BindingKind::Nonlocal) {
// If the original binding was a nonlocal, and the new binding conflicts within the
// current scope, then the new binding is also a nonlocal.
Binding {
runtime_usage: self.bindings[*index].runtime_usage,
synthetic_usage: self.bindings[*index].synthetic_usage,
typing_usage: self.bindings[*index].typing_usage,
kind: BindingKind::Nonlocal,
..binding
}
} else {
Binding {
runtime_usage: self.bindings[*index].runtime_usage,
synthetic_usage: self.bindings[*index].synthetic_usage,
typing_usage: self.bindings[*index].typing_usage,
..binding
}
}
} else {
binding
};
// Don't treat annotations as assignments if there is an existing value
@@ -3975,7 +4029,10 @@ impl<'a> Checker<'a> {
from_list.sort();
self.diagnostics.push(Diagnostic::new(
violations::ImportStarUsage(id.to_string(), from_list),
violations::ImportStarUsage {
name: id.to_string(),
sources: from_list,
},
Range::from_located(expr),
));
}
@@ -4006,7 +4063,7 @@ impl<'a> Checker<'a> {
}
self.diagnostics.push(Diagnostic::new(
violations::UndefinedName(id.clone()),
violations::UndefinedName { name: id.clone() },
Range::from_located(expr),
));
}
@@ -4152,14 +4209,25 @@ impl<'a> Checker<'a> {
}
_ => false,
} {
let (all_names, all_names_flags) =
extract_all_names(parent, current, &self.bindings);
if self.settings.rules.enabled(&Rule::InvalidAllFormat)
&& matches!(all_names_flags, AllNamesFlags::INVALID_FORMAT)
{
pylint::rules::invalid_all_format(self, expr);
}
if self.settings.rules.enabled(&Rule::InvalidAllObject)
&& matches!(all_names_flags, AllNamesFlags::INVALID_OBJECT)
{
pylint::rules::invalid_all_object(self, expr);
}
self.add_binding(
id,
Binding {
kind: BindingKind::Export(extract_all_names(
parent,
current,
&self.bindings,
)),
kind: BindingKind::Export(all_names),
runtime_usage: None,
synthetic_usage: None,
typing_usage: None,
@@ -4203,7 +4271,9 @@ impl<'a> Checker<'a> {
&& self.settings.rules.enabled(&Rule::UndefinedName)
{
self.diagnostics.push(Diagnostic::new(
violations::UndefinedName(id.to_string()),
violations::UndefinedName {
name: id.to_string(),
},
Range::from_located(expr),
));
}
@@ -4269,7 +4339,9 @@ impl<'a> Checker<'a> {
.enabled(&Rule::ForwardAnnotationSyntaxError)
{
self.diagnostics.push(Diagnostic::new(
violations::ForwardAnnotationSyntaxError(expression.to_string()),
violations::ForwardAnnotationSyntaxError {
body: expression.to_string(),
},
range,
));
}
@@ -4438,10 +4510,16 @@ impl<'a> Checker<'a> {
for (name, index) in &scope.values {
let binding = &self.bindings[*index];
if matches!(binding.kind, BindingKind::Global) {
diagnostics.push(Diagnostic::new(
violations::GlobalVariableNotAssigned((*name).to_string()),
binding.range,
));
if let Some(stmt) = &binding.source {
if matches!(stmt.node, StmtKind::Global { .. }) {
diagnostics.push(Diagnostic::new(
violations::GlobalVariableNotAssigned {
name: (*name).to_string(),
},
binding.range,
));
}
}
}
}
}
@@ -4468,7 +4546,9 @@ impl<'a> Checker<'a> {
for &name in names {
if !scope.values.contains_key(name) {
diagnostics.push(Diagnostic::new(
violations::UndefinedExport(name.to_string()),
violations::UndefinedExport {
name: name.to_string(),
},
all_binding.range,
));
}
@@ -4506,10 +4586,10 @@ impl<'a> Checker<'a> {
if let Some(indices) = self.redefinitions.get(index) {
for index in indices {
diagnostics.push(Diagnostic::new(
violations::RedefinedWhileUnused(
(*name).to_string(),
binding.range.location.row(),
),
violations::RedefinedWhileUnused {
name: (*name).to_string(),
line: binding.range.location.row(),
},
binding_range(&self.bindings[*index], self.locator),
));
}
@@ -4537,10 +4617,10 @@ impl<'a> Checker<'a> {
for &name in names {
if !scope.values.contains_key(name) {
diagnostics.push(Diagnostic::new(
violations::ImportStarUsage(
name.to_string(),
from_list.clone(),
),
violations::ImportStarUsage {
name: name.to_string(),
sources: from_list.clone(),
},
all_binding.range,
));
}
@@ -4703,7 +4783,11 @@ impl<'a> Checker<'a> {
let multiple = unused_imports.len() > 1;
for (full_name, range) in unused_imports {
let mut diagnostic = Diagnostic::new(
violations::UnusedImport(full_name.to_string(), ignore_init, multiple),
violations::UnusedImport {
name: full_name.to_string(),
ignore_init,
multiple,
},
*range,
);
if matches!(child.node, StmtKind::ImportFrom { .. })
@@ -4725,7 +4809,11 @@ impl<'a> Checker<'a> {
let multiple = unused_imports.len() > 1;
for (full_name, range) in unused_imports {
let mut diagnostic = Diagnostic::new(
violations::UnusedImport(full_name.to_string(), ignore_init, multiple),
violations::UnusedImport {
name: full_name.to_string(),
ignore_init,
multiple,
},
*range,
);
if matches!(child.node, StmtKind::ImportFrom { .. })


@@ -103,7 +103,7 @@ pub fn check_noqa(
Directive::All(spaces, start, end) => {
if matches.is_empty() {
let mut diagnostic = Diagnostic::new(
violations::UnusedNOQA(None),
violations::UnusedNOQA { codes: None },
Range::new(Location::new(row + 1, start), Location::new(row + 1, end)),
);
if matches!(autofix, flags::Autofix::Enabled)
@@ -154,20 +154,22 @@ pub fn check_noqa(
&& unmatched_codes.is_empty())
{
let mut diagnostic = Diagnostic::new(
violations::UnusedNOQA(Some(UnusedCodes {
disabled: disabled_codes
.iter()
.map(|code| (*code).to_string())
.collect(),
unknown: unknown_codes
.iter()
.map(|code| (*code).to_string())
.collect(),
unmatched: unmatched_codes
.iter()
.map(|code| (*code).to_string())
.collect(),
})),
violations::UnusedNOQA {
codes: Some(UnusedCodes {
disabled: disabled_codes
.iter()
.map(|code| (*code).to_string())
.collect(),
unknown: unknown_codes
.iter()
.map(|code| (*code).to_string())
.collect(),
unmatched: unmatched_codes
.iter()
.map(|code| (*code).to_string())
.collect(),
}),
},
Range::new(Location::new(row + 1, start), Location::new(row + 1, end)),
);
if matches!(autofix, flags::Autofix::Enabled)


@@ -36,6 +36,8 @@ impl Fix {
}
pub fn replacement(content: String, start: Location, end: Location) -> Self {
debug_assert!(!content.is_empty(), "Prefer `Fix::deletion`");
Self {
content,
location: start,
@@ -44,6 +46,8 @@ impl Fix {
}
pub fn insertion(content: String, at: Location) -> Self {
debug_assert!(!content.is_empty(), "Insert content is empty");
Self {
content,
location: at,


@@ -118,7 +118,9 @@ pub fn check_path(
Err(parse_error) => {
if settings.rules.enabled(&Rule::SyntaxError) {
diagnostics.push(Diagnostic::new(
violations::SyntaxError(parse_error.error.to_string()),
violations::SyntaxError {
message: parse_error.error.to_string(),
},
Range::new(parse_error.location, parse_error.location),
));
}
@@ -149,6 +151,15 @@ pub fn check_path(
));
}
// Ignore diagnostics based on per-file-ignores.
if !diagnostics.is_empty() && !settings.per_file_ignores.is_empty() {
let ignores = fs::ignores_from_path(path, &settings.per_file_ignores)?;
if !ignores.is_empty() {
diagnostics.retain(|diagnostic| !ignores.contains(&diagnostic.kind.rule()));
}
};
// Enforce `noqa` directives.
if (matches!(noqa, flags::Noqa::Enabled) && !diagnostics.is_empty())
|| settings
@@ -166,17 +177,6 @@ pub fn check_path(
);
}
// Create path ignores.
if !diagnostics.is_empty() && !settings.per_file_ignores.is_empty() {
let ignores = fs::ignores_from_path(path, &settings.per_file_ignores)?;
if !ignores.is_empty() {
return Ok(diagnostics
.into_iter()
.filter(|diagnostic| !ignores.contains(&diagnostic.kind.rule()))
.collect());
}
}
Ok(diagnostics)
}
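The hunks above move the per-file-ignores filtering earlier in `check_path`, applying it in place with `retain` before `noqa` enforcement runs instead of filtering the diagnostics at the very end. A minimal sketch of that filtering step, with `Rule` simplified to a string code (the real types live in Ruff's registry):

```rust
use std::collections::HashSet;

// Drop any diagnostic whose rule is ignored for this file. Mutating in
// place means later stages (e.g. `noqa` enforcement) never see them.
fn apply_per_file_ignores(diagnostics: &mut Vec<&'static str>, ignores: &HashSet<&'static str>) {
    if !diagnostics.is_empty() && !ignores.is_empty() {
        diagnostics.retain(|rule| !ignores.contains(rule));
    }
}

fn main() {
    let ignores: HashSet<&'static str> = ["E501"].into_iter().collect();
    let mut diagnostics = vec!["E501", "F401"];
    apply_per_file_ignores(&mut diagnostics, &ignores);
    assert_eq!(diagnostics, vec!["F401"]);
}
```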


@@ -46,7 +46,7 @@ macro_rules! notify_user {
}
}
#[derive(Debug, Default, PartialOrd, Ord, PartialEq, Eq)]
#[derive(Debug, Default, PartialOrd, Ord, PartialEq, Eq, Copy, Clone)]
pub enum LogLevel {
// No output (+ `log::LevelFilter::Off`).
Silent,
@@ -60,6 +60,7 @@ pub enum LogLevel {
}
impl LogLevel {
#[allow(clippy::trivially_copy_pass_by_ref)]
fn level_filter(&self) -> log::LevelFilter {
match self {
LogLevel::Default => log::LevelFilter::Info,


@@ -247,7 +247,9 @@ mod tests {
assert_eq!(output, format!("{contents}\n"));
let diagnostics = vec![Diagnostic::new(
violations::UnusedVariable("x".to_string()),
violations::UnusedVariable {
name: "x".to_string(),
},
Range::new(Location::new(1, 0), Location::new(1, 0)),
)];
let contents = "x = 1";
@@ -269,7 +271,9 @@ mod tests {
Range::new(Location::new(1, 0), Location::new(1, 0)),
),
Diagnostic::new(
violations::UnusedVariable("x".to_string()),
violations::UnusedVariable {
name: "x".to_string(),
},
Range::new(Location::new(1, 0), Location::new(1, 0)),
),
];
@@ -292,7 +296,9 @@ mod tests {
Range::new(Location::new(1, 0), Location::new(1, 0)),
),
Diagnostic::new(
violations::UnusedVariable("x".to_string()),
violations::UnusedVariable {
name: "x".to_string(),
},
Range::new(Location::new(1, 0), Location::new(1, 0)),
),
];


@@ -194,6 +194,7 @@ pub static KNOWN_STANDARD_LIBRARY: Lazy<FxHashSet<&'static str>> = Lazy::new(||
"tkinter",
"token",
"tokenize",
"tomllib",
"trace",
"traceback",
"tracemalloc",


@@ -49,7 +49,7 @@ pub static TYPING_EXTENSIONS: Lazy<FxHashSet<&'static str>> = Lazy::new(|| {
"assert_type",
"clear_overloads",
"final",
"get_Type_hints",
"get_type_hints",
"get_args",
"get_origin",
"get_overloads",


@@ -78,6 +78,8 @@ ruff_macros::define_rule_mapping!(
F842 => violations::UnusedAnnotation,
F901 => violations::RaiseNotImplemented,
// pylint
PLE0604 => rules::pylint::rules::InvalidAllObject,
PLE0605 => rules::pylint::rules::InvalidAllFormat,
PLC0414 => violations::UselessImportAlias,
PLC3002 => violations::UnnecessaryDirectLambdaCall,
PLE0117 => violations::NonlocalWithoutBinding,
@@ -91,6 +93,7 @@ ruff_macros::define_rule_mapping!(
PLR2004 => violations::MagicValueComparison,
PLW0120 => violations::UselessElseOnLoop,
PLW0602 => violations::GlobalVariableNotAssigned,
PLR0913 => rules::pylint::rules::TooManyArgs,
// flake8-builtins
A001 => violations::BuiltinVariableShadowing,
A002 => violations::BuiltinArgumentShadowing,
@@ -235,7 +238,6 @@ ruff_macros::define_rule_mapping!(
UP013 => violations::ConvertTypedDictFunctionalToClass,
UP014 => violations::ConvertNamedTupleFunctionalToClass,
UP015 => violations::RedundantOpenModes,
UP016 => violations::RemoveSixCompat,
UP017 => violations::DatetimeTimezoneUTC,
UP018 => violations::NativeLiterals,
UP019 => violations::TypingTextStrAlias,
@@ -282,7 +284,7 @@ ruff_macros::define_rule_mapping!(
D300 => violations::UsesTripleQuotes,
D301 => violations::UsesRPrefixForBackslashedContent,
D400 => violations::EndsInPeriod,
D401 => crate::rules::pydocstyle::rules::non_imperative_mood::NonImperativeMood,
D401 => rules::pydocstyle::rules::non_imperative_mood::NonImperativeMood,
D402 => violations::NoSignature,
D403 => violations::FirstLineCapitalized,
D404 => violations::NoThisPrefix,
@@ -331,6 +333,7 @@ ruff_macros::define_rule_mapping!(
S106 => violations::HardcodedPasswordFuncArg,
S107 => violations::HardcodedPasswordDefault,
S108 => violations::HardcodedTempFile,
S110 => rules::flake8_bandit::rules::TryExceptPass,
S113 => violations::RequestWithoutTimeout,
S324 => violations::HashlibInsecureHashFunction,
S501 => violations::RequestWithNoCertValidation,
@@ -367,28 +370,28 @@ ruff_macros::define_rule_mapping!(
PGH003 => violations::BlanketTypeIgnore,
PGH004 => violations::BlanketNOQA,
// pandas-vet
PD002 => violations::UseOfInplaceArgument,
PD003 => violations::UseOfDotIsNull,
PD004 => violations::UseOfDotNotNull,
PD007 => violations::UseOfDotIx,
PD008 => violations::UseOfDotAt,
PD009 => violations::UseOfDotIat,
PD010 => violations::UseOfDotPivotOrUnstack,
PD011 => violations::UseOfDotValues,
PD012 => violations::UseOfDotReadTable,
PD013 => violations::UseOfDotStack,
PD015 => violations::UseOfPdMerge,
PD901 => violations::DfIsABadVariableName,
PD002 => rules::pandas_vet::rules::UseOfInplaceArgument,
PD003 => rules::pandas_vet::rules::UseOfDotIsNull,
PD004 => rules::pandas_vet::rules::UseOfDotNotNull,
PD007 => rules::pandas_vet::rules::UseOfDotIx,
PD008 => rules::pandas_vet::rules::UseOfDotAt,
PD009 => rules::pandas_vet::rules::UseOfDotIat,
PD010 => rules::pandas_vet::rules::UseOfDotPivotOrUnstack,
PD011 => rules::pandas_vet::rules::UseOfDotValues,
PD012 => rules::pandas_vet::rules::UseOfDotReadTable,
PD013 => rules::pandas_vet::rules::UseOfDotStack,
PD015 => rules::pandas_vet::rules::UseOfPdMerge,
PD901 => rules::pandas_vet::rules::DfIsABadVariableName,
// flake8-errmsg
EM101 => violations::RawStringInException,
EM102 => violations::FStringInException,
EM103 => violations::DotFormatInException,
// flake8-pytest-style
PT001 => violations::IncorrectFixtureParenthesesStyle,
PT002 => violations::FixturePositionalArgs,
PT003 => violations::ExtraneousScopeFunction,
PT004 => violations::MissingFixtureNameUnderscore,
PT005 => violations::IncorrectFixtureNameUnderscore,
PT001 => rules::flake8_pytest_style::rules::IncorrectFixtureParenthesesStyle,
PT002 => rules::flake8_pytest_style::rules::FixturePositionalArgs,
PT003 => rules::flake8_pytest_style::rules::ExtraneousScopeFunction,
PT004 => rules::flake8_pytest_style::rules::MissingFixtureNameUnderscore,
PT005 => rules::flake8_pytest_style::rules::IncorrectFixtureNameUnderscore,
PT006 => violations::ParametrizeNamesWrongType,
PT007 => violations::ParametrizeValuesWrongType,
PT008 => violations::PatchWithLambda,
@@ -401,13 +404,13 @@ ruff_macros::define_rule_mapping!(
PT016 => violations::FailWithoutMessage,
PT017 => violations::AssertInExcept,
PT018 => violations::CompositeAssertion,
PT019 => violations::FixtureParamWithoutValue,
PT020 => violations::DeprecatedYieldFixture,
PT021 => violations::FixtureFinalizerCallback,
PT022 => violations::UselessYieldFixture,
PT019 => rules::flake8_pytest_style::rules::FixtureParamWithoutValue,
PT020 => rules::flake8_pytest_style::rules::DeprecatedYieldFixture,
PT021 => rules::flake8_pytest_style::rules::FixtureFinalizerCallback,
PT022 => rules::flake8_pytest_style::rules::UselessYieldFixture,
PT023 => violations::IncorrectMarkParenthesesStyle,
PT024 => violations::UnnecessaryAsyncioMarkOnFixture,
PT025 => violations::ErroneousUseFixturesOnFixture,
PT024 => rules::flake8_pytest_style::rules::UnnecessaryAsyncioMarkOnFixture,
PT025 => rules::flake8_pytest_style::rules::ErroneousUseFixturesOnFixture,
PT026 => violations::UseFixturesWithoutParameters,
// flake8-pie
PIE790 => rules::flake8_pie::rules::NoUnnecessaryPass,
@@ -613,9 +616,16 @@ pub enum Linter {
}
pub trait RuleNamespace: Sized {
fn parse_code(code: &str) -> Option<(Self, &str)>;
/// Returns the prefix that every single code that ruff uses to identify
/// rules from this linter starts with. In the case that multiple
/// `#[prefix]`es are configured for the variant in the `Linter` enum
/// definition this is the empty string.
fn common_prefix(&self) -> &'static str;
fn prefixes(&self) -> &'static [&'static str];
/// Attempts to parse the given rule code. If the prefix is recognized
/// returns the respective variant along with the code with the common
/// prefix stripped.
fn parse_code(code: &str) -> Option<(Self, &str)>;
fn name(&self) -> &'static str;
@@ -732,12 +742,20 @@ impl Diagnostic {
}
/// Pairs of checks that shouldn't be enabled together.
pub const INCOMPATIBLE_CODES: &[(Rule, Rule, &str)] = &[(
Rule::OneBlankLineBeforeClass,
Rule::NoBlankLineBeforeClass,
"`D203` (OneBlankLineBeforeClass) and `D211` (NoBlankLinesBeforeClass) are incompatible. \
Consider adding `D203` to `ignore`.",
)];
pub const INCOMPATIBLE_CODES: &[(Rule, Rule, &str); 2] = &[
(
Rule::NoBlankLineBeforeClass,
Rule::OneBlankLineBeforeClass,
"`one-blank-line-before-class` (D203) and `no-blank-line-before-class` (D211) are \
incompatible. Ignoring `one-blank-line-before-class`.",
),
(
Rule::MultiLineSummaryFirstLine,
Rule::MultiLineSummarySecondLine,
"`multi-line-summary-first-line` (D212) and `multi-line-summary-second-line` (D213) are \
incompatible. Ignoring `multi-line-summary-second-line`.",
),
];
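Per the commit message, incompatible pairs are now disabled rather than merely warned about. A sketch of how such a resolver could consume the table above, with rules reduced to string codes and a simplified resolution function (illustrative names, not Ruff's actual resolution code):

```rust
use std::collections::HashSet;

type Rule = &'static str;

// Mirrors the D211/D203 entry above: when both are enabled, the second
// element of the pair is the one that gets disabled.
const INCOMPATIBLE: &[(Rule, Rule, &str)] = &[(
    "D211",
    "D203",
    "`one-blank-line-before-class` (D203) and `no-blank-line-before-class` (D211) are \
     incompatible. Ignoring `one-blank-line-before-class`.",
)];

fn resolve(mut enabled: HashSet<Rule>) -> (HashSet<Rule>, Vec<&'static str>) {
    let mut warnings = Vec::new();
    for (first, second, message) in INCOMPATIBLE {
        if enabled.contains(first) && enabled.contains(second) {
            enabled.remove(second); // disable, rather than merely warn
            warnings.push(*message);
        }
    }
    (enabled, warnings)
}

fn main() {
    let enabled: HashSet<Rule> = ["D203", "D211"].into_iter().collect();
    let (resolved, warnings) = resolve(enabled);
    assert!(!resolved.contains("D203"));
    assert!(resolved.contains("D211"));
    assert_eq!(warnings.len(), 1);
}
```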
#[cfg(test)]
mod tests {
@@ -756,10 +774,12 @@ mod tests {
}
#[test]
fn test_linter_prefixes() {
fn test_linter_parse_code() {
for rule in Rule::iter() {
Linter::parse_code(rule.code())
.unwrap_or_else(|| panic!("couldn't parse {:?}", rule.code()));
let code = rule.code();
let (linter, rest) =
Linter::parse_code(code).unwrap_or_else(|| panic!("couldn't parse {:?}", code));
assert_eq!(code, format!("{}{rest}", linter.common_prefix()));
}
}
}
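The `RuleNamespace` contract documented above (every code starts with `common_prefix()`, and `parse_code` returns the variant plus the stripped remainder, exactly what the `test_linter_parse_code` test asserts) can be sketched with a hypothetical two-variant enum; the prefixes here are illustrative, not Ruff's real set:

```rust
#[derive(Debug, PartialEq)]
enum Linter {
    Pyflakes,
    PandasVet,
}

impl Linter {
    // Every code a linter owns starts with this prefix.
    fn common_prefix(&self) -> &'static str {
        match self {
            Linter::Pyflakes => "F",
            Linter::PandasVet => "PD",
        }
    }

    // Try longer prefixes first so "PD901" matches PandasVet, not a
    // hypothetical single-letter namespace.
    fn parse_code(code: &str) -> Option<(Linter, &str)> {
        for linter in [Linter::PandasVet, Linter::Pyflakes] {
            if let Some(rest) = code.strip_prefix(linter.common_prefix()) {
                return Some((linter, rest));
            }
        }
        None
    }
}

fn main() {
    // The invariant the real test checks: code == common_prefix + rest.
    assert_eq!(Linter::parse_code("PD901"), Some((Linter::PandasVet, "901")));
    assert_eq!(Linter::parse_code("F401"), Some((Linter::Pyflakes, "401")));
    assert_eq!(Linter::parse_code("X123"), None);
}
```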


@@ -6,6 +6,7 @@ use schemars::JsonSchema;
use serde::de::{self, Visitor};
use serde::{Deserialize, Serialize};
use strum::IntoEnumIterator;
use strum_macros::EnumIter;
use crate::registry::{Rule, RuleCodePrefix, RuleIter};
use crate::rule_redirects::get_redirect;
@@ -171,7 +172,7 @@ impl RuleSelector {
}
}
#[derive(PartialEq, Eq, PartialOrd, Ord)]
#[derive(EnumIter, PartialEq, Eq, PartialOrd, Ord)]
pub(crate) enum Specificity {
All,
Linter,


@@ -58,7 +58,7 @@ where
{
if checker.match_typing_expr(annotation, "Any") {
checker.diagnostics.push(Diagnostic::new(
violations::DynamicallyTypedExpression(func()),
violations::DynamicallyTypedExpression { name: func() },
Range::from_located(annotation),
));
};
@@ -115,7 +115,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
.enabled(&Rule::MissingTypeFunctionArgument)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeFunctionArgument(arg.node.arg.to_string()),
violations::MissingTypeFunctionArgument {
name: arg.node.arg.to_string(),
},
Range::from_located(arg),
));
}
@@ -125,8 +127,8 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
// ANN002, ANN401
if let Some(arg) = &args.vararg {
has_any_typed_arg = true;
if let Some(expr) = &arg.node.annotation {
has_any_typed_arg = true;
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker
.settings
@@ -143,7 +145,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
{
if checker.settings.rules.enabled(&Rule::MissingTypeArgs) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeArgs(arg.node.arg.to_string()),
violations::MissingTypeArgs {
name: arg.node.arg.to_string(),
},
Range::from_located(arg),
));
}
@@ -153,8 +157,8 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
// ANN003, ANN401
if let Some(arg) = &args.kwarg {
has_any_typed_arg = true;
if let Some(expr) = &arg.node.annotation {
has_any_typed_arg = true;
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker
.settings
@@ -171,7 +175,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
{
if checker.settings.rules.enabled(&Rule::MissingTypeKwargs) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeKwargs(arg.node.arg.to_string()),
violations::MissingTypeKwargs {
name: arg.node.arg.to_string(),
},
Range::from_located(arg),
));
}
@@ -186,14 +192,18 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
if visibility::is_classmethod(checker, cast::decorator_list(stmt)) {
if checker.settings.rules.enabled(&Rule::MissingTypeCls) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeCls(arg.node.arg.to_string()),
violations::MissingTypeCls {
name: arg.node.arg.to_string(),
},
Range::from_located(arg),
));
}
} else {
if checker.settings.rules.enabled(&Rule::MissingTypeSelf) {
checker.diagnostics.push(Diagnostic::new(
violations::MissingTypeSelf(arg.node.arg.to_string()),
violations::MissingTypeSelf {
name: arg.node.arg.to_string(),
},
Range::from_located(arg),
));
}
@@ -227,7 +237,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
.enabled(&Rule::MissingReturnTypeClassMethod)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeClassMethod(name.to_string()),
violations::MissingReturnTypeClassMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
@@ -240,7 +252,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
.enabled(&Rule::MissingReturnTypeStaticMethod)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeStaticMethod(name.to_string()),
violations::MissingReturnTypeStaticMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
@@ -256,7 +270,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
&& has_any_typed_arg)
{
let mut diagnostic = Diagnostic::new(
violations::MissingReturnTypeSpecialMethod(name.to_string()),
violations::MissingReturnTypeSpecialMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
);
if checker.patch(diagnostic.kind.rule()) {
@@ -277,7 +293,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
.enabled(&Rule::MissingReturnTypeSpecialMethod)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypeSpecialMethod(name.to_string()),
violations::MissingReturnTypeSpecialMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
@@ -290,7 +308,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
.enabled(&Rule::MissingReturnTypePublicFunction)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypePublicFunction(name.to_string()),
violations::MissingReturnTypePublicFunction {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
@@ -302,7 +322,9 @@ pub fn definition(checker: &mut Checker, definition: &Definition, visibility: &V
.enabled(&Rule::MissingReturnTypePrivateFunction)
{
checker.diagnostics.push(Diagnostic::new(
violations::MissingReturnTypePrivateFunction(name.to_string()),
violations::MissingReturnTypePrivateFunction {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}


@@ -3,7 +3,8 @@ source: src/rules/flake8_annotations/mod.rs
expression: diagnostics
---
- kind:
MissingReturnTypePublicFunction: bar
MissingReturnTypePublicFunction:
name: bar
location:
row: 29
column: 8


@@ -3,7 +3,8 @@ source: src/rules/flake8_annotations/mod.rs
expression: diagnostics
---
- kind:
DynamicallyTypedExpression: a
DynamicallyTypedExpression:
name: a
location:
row: 10
column: 11
@@ -13,7 +14,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: foo
DynamicallyTypedExpression:
name: foo
location:
row: 15
column: 46
@@ -23,7 +25,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: a
DynamicallyTypedExpression:
name: a
location:
row: 40
column: 28
@@ -33,7 +36,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: foo_method
DynamicallyTypedExpression:
name: foo_method
location:
row: 44
column: 66


@@ -3,7 +3,8 @@ source: src/rules/flake8_annotations/mod.rs
expression: diagnostics
---
- kind:
MissingReturnTypePublicFunction: foo
MissingReturnTypePublicFunction:
name: foo
location:
row: 4
column: 4
@@ -13,7 +14,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingTypeFunctionArgument: a
MissingTypeFunctionArgument:
name: a
location:
row: 4
column: 8
@@ -23,7 +25,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingTypeFunctionArgument: b
MissingTypeFunctionArgument:
name: b
location:
row: 4
column: 11
@@ -33,7 +36,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingReturnTypePublicFunction: foo
MissingReturnTypePublicFunction:
name: foo
location:
row: 9
column: 4
@@ -43,7 +47,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingTypeFunctionArgument: b
MissingTypeFunctionArgument:
name: b
location:
row: 9
column: 16
@@ -53,7 +58,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingTypeFunctionArgument: b
MissingTypeFunctionArgument:
name: b
location:
row: 14
column: 16
@@ -63,7 +69,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingReturnTypePublicFunction: foo
MissingReturnTypePublicFunction:
name: foo
location:
row: 19
column: 4
@@ -73,7 +80,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingReturnTypePublicFunction: foo
MissingReturnTypePublicFunction:
name: foo
location:
row: 24
column: 4
@@ -83,7 +91,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: a
DynamicallyTypedExpression:
name: a
location:
row: 44
column: 11
@@ -93,7 +102,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: foo
DynamicallyTypedExpression:
name: foo
location:
row: 49
column: 46
@@ -103,7 +113,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "*args"
DynamicallyTypedExpression:
name: "*args"
location:
row: 54
column: 23
@@ -113,7 +124,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "**kwargs"
DynamicallyTypedExpression:
name: "**kwargs"
location:
row: 54
column: 38
@@ -123,7 +135,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "*args"
DynamicallyTypedExpression:
name: "*args"
location:
row: 59
column: 23
@@ -133,7 +146,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "**kwargs"
DynamicallyTypedExpression:
name: "**kwargs"
location:
row: 64
column: 38
@@ -143,7 +157,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingTypeSelf: self
MissingTypeSelf:
name: self
location:
row: 74
column: 12
@@ -153,7 +168,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: a
DynamicallyTypedExpression:
name: a
location:
row: 78
column: 28
@@ -163,7 +179,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: foo
DynamicallyTypedExpression:
name: foo
location:
row: 82
column: 66
@@ -173,7 +190,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "*params"
DynamicallyTypedExpression:
name: "*params"
location:
row: 86
column: 42
@@ -183,7 +201,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "**options"
DynamicallyTypedExpression:
name: "**options"
location:
row: 86
column: 58
@@ -193,7 +212,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "*params"
DynamicallyTypedExpression:
name: "*params"
location:
row: 90
column: 42
@@ -203,7 +223,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
DynamicallyTypedExpression: "**options"
DynamicallyTypedExpression:
name: "**options"
location:
row: 94
column: 58
@@ -213,7 +234,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingTypeCls: cls
MissingTypeCls:
name: cls
location:
row: 104
column: 12


@@ -3,7 +3,8 @@ source: src/rules/flake8_annotations/mod.rs
expression: diagnostics
---
- kind:
MissingReturnTypeSpecialMethod: __init__
MissingReturnTypeSpecialMethod:
name: __init__
location:
row: 5
column: 8
@@ -21,7 +22,8 @@ expression: diagnostics
column: 22
parent: ~
- kind:
MissingReturnTypeSpecialMethod: __init__
MissingReturnTypeSpecialMethod:
name: __init__
location:
row: 11
column: 8
@@ -39,7 +41,8 @@ expression: diagnostics
column: 27
parent: ~
- kind:
MissingReturnTypePrivateFunction: __init__
MissingReturnTypePrivateFunction:
name: __init__
location:
row: 40
column: 4
@@ -48,4 +51,23 @@ expression: diagnostics
column: 12
fix: ~
parent: ~
- kind:
MissingReturnTypeSpecialMethod:
name: __init__
location:
row: 47
column: 8
end_location:
row: 47
column: 16
fix:
content:
- " -> None"
location:
row: 47
column: 28
end_location:
row: 47
column: 28
parent: ~


@@ -3,7 +3,8 @@ source: src/rules/flake8_annotations/mod.rs
expression: diagnostics
---
- kind:
MissingReturnTypePublicFunction: foo
MissingReturnTypePublicFunction:
name: foo
location:
row: 45
column: 4
@@ -13,7 +14,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
MissingReturnTypePublicFunction: foo
MissingReturnTypePublicFunction:
name: foo
location:
row: 50
column: 4


@@ -31,6 +31,7 @@ mod tests {
#[test_case(Rule::SnmpWeakCryptography, Path::new("S509.py"); "S509")]
#[test_case(Rule::LoggingConfigInsecureListen, Path::new("S612.py"); "S612")]
#[test_case(Rule::Jinja2AutoescapeFalse, Path::new("S701.py"); "S701")]
#[test_case(Rule::TryExceptPass, Path::new("S110.py"); "S110")]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.code(), path.to_string_lossy());
let diagnostics = test_path(
@@ -55,6 +56,7 @@ mod tests {
"/dev/shm".to_string(),
"/foo".to_string(),
],
check_typed_exception: false,
},
..Settings::for_rule(Rule::HardcodedTempFile)
},
@@ -62,4 +64,20 @@ mod tests {
assert_yaml_snapshot!("S108_extend", diagnostics);
Ok(())
}
#[test]
fn check_typed_exception() -> Result<()> {
let diagnostics = test_path(
Path::new("./resources/test/fixtures/flake8_bandit/S110.py"),
&Settings {
flake8_bandit: super::settings::Settings {
check_typed_exception: true,
..Default::default()
},
..Settings::for_rule(Rule::TryExceptPass)
},
)?;
assert_yaml_snapshot!("S110_typed", diagnostics);
Ok(())
}
}


@@ -101,7 +101,7 @@ pub fn bad_file_permissions(
if let Some(int_value) = get_int_value(mode_arg) {
if (int_value & WRITE_WORLD > 0) || (int_value & EXECUTE_GROUP > 0) {
checker.diagnostics.push(Diagnostic::new(
violations::BadFilePermissions(int_value),
violations::BadFilePermissions { mask: int_value },
Range::from_located(mode_arg),
));
}


@@ -12,7 +12,9 @@ fn check_password_kwarg(arg: &Located<ArgData>, default: &Expr) -> Option<Diagno
return None;
}
Some(Diagnostic::new(
violations::HardcodedPasswordDefault(string.to_string()),
violations::HardcodedPasswordDefault {
string: string.to_string(),
},
Range::from_located(default),
))
}

View File

@@ -16,7 +16,9 @@ pub fn hardcoded_password_func_arg(keywords: &[Keyword]) -> Vec<Diagnostic> {
return None;
}
Some(Diagnostic::new(
violations::HardcodedPasswordFuncArg(string.to_string()),
violations::HardcodedPasswordFuncArg {
string: string.to_string(),
},
Range::from_located(keyword),
))
})


@@ -35,7 +35,9 @@ pub fn compare_to_hardcoded_password_string(left: &Expr, comparators: &[Expr]) -
return None;
}
Some(Diagnostic::new(
violations::HardcodedPasswordString(string.to_string()),
violations::HardcodedPasswordString {
string: string.to_string(),
},
Range::from_located(comp),
))
})
@@ -48,7 +50,9 @@ pub fn assign_hardcoded_password_string(value: &Expr, targets: &[Expr]) -> Optio
for target in targets {
if is_password_target(target) {
return Some(Diagnostic::new(
violations::HardcodedPasswordString(string.to_string()),
violations::HardcodedPasswordString {
string: string.to_string(),
},
Range::from_located(value),
));
}


@@ -12,7 +12,9 @@ pub fn hardcoded_tmp_directory(
) -> Option<Diagnostic> {
if prefixes.iter().any(|prefix| value.starts_with(prefix)) {
Some(Diagnostic::new(
violations::HardcodedTempFile(value.to_string()),
violations::HardcodedTempFile {
string: value.to_string(),
},
Range::from_located(expr),
))
} else {


@@ -56,7 +56,9 @@ pub fn hashlib_insecure_hash_functions(
if let Some(hash_func_name) = string_literal(name_arg) {
if WEAK_HASHES.contains(&hash_func_name.to_lowercase().as_str()) {
checker.diagnostics.push(Diagnostic::new(
violations::HashlibInsecureHashFunction(hash_func_name.to_string()),
violations::HashlibInsecureHashFunction {
string: hash_func_name.to_string(),
},
Range::from_located(name_arg),
));
}
@@ -71,7 +73,9 @@ pub fn hashlib_insecure_hash_functions(
}
checker.diagnostics.push(Diagnostic::new(
violations::HashlibInsecureHashFunction((*func_name).to_string()),
violations::HashlibInsecureHashFunction {
string: (*func_name).to_string(),
},
Range::from_located(func),
));
}


@@ -29,20 +29,20 @@ pub fn jinja2_autoescape_false(
if let ExprKind::Name { id, .. } = &func.node {
if id.as_str() != "select_autoescape" {
checker.diagnostics.push(Diagnostic::new(
violations::Jinja2AutoescapeFalse(true),
violations::Jinja2AutoescapeFalse { value: true },
Range::from_located(autoescape_arg),
));
}
}
}
_ => checker.diagnostics.push(Diagnostic::new(
violations::Jinja2AutoescapeFalse(true),
violations::Jinja2AutoescapeFalse { value: true },
Range::from_located(autoescape_arg),
)),
}
} else {
checker.diagnostics.push(Diagnostic::new(
violations::Jinja2AutoescapeFalse(false),
violations::Jinja2AutoescapeFalse { value: false },
Range::from_located(func),
));
}


@@ -17,6 +17,7 @@ pub use request_with_no_cert_validation::request_with_no_cert_validation;
pub use request_without_timeout::request_without_timeout;
pub use snmp_insecure_version::snmp_insecure_version;
pub use snmp_weak_cryptography::snmp_weak_cryptography;
pub use try_except_pass::{try_except_pass, TryExceptPass};
pub use unsafe_yaml_load::unsafe_yaml_load;
mod assert_used;
@@ -34,4 +35,5 @@ mod request_with_no_cert_validation;
mod request_without_timeout;
mod snmp_insecure_version;
mod snmp_weak_cryptography;
mod try_except_pass;
mod unsafe_yaml_load;


@@ -48,7 +48,9 @@ pub fn request_with_no_cert_validation(
} = &verify_arg.node
{
checker.diagnostics.push(Diagnostic::new(
violations::RequestWithNoCertValidation(target.to_string()),
violations::RequestWithNoCertValidation {
string: target.to_string(),
},
Range::from_located(verify_arg),
));
}


@@ -31,13 +31,15 @@ pub fn request_without_timeout(
_ => None,
} {
checker.diagnostics.push(Diagnostic::new(
violations::RequestWithoutTimeout(Some(timeout)),
violations::RequestWithoutTimeout {
timeout: Some(timeout),
},
Range::from_located(timeout_arg),
));
}
} else {
checker.diagnostics.push(Diagnostic::new(
violations::RequestWithoutTimeout(None),
violations::RequestWithoutTimeout { timeout: None },
Range::from_located(func),
));
}


@@ -0,0 +1,43 @@
use ruff_macros::derive_message_formats;
use rustpython_ast::{Expr, Stmt, StmtKind};
use crate::ast::types::Range;
use crate::checkers::ast::Checker;
use crate::define_violation;
use crate::registry::Diagnostic;
use crate::violation::Violation;
define_violation!(
pub struct TryExceptPass;
);
impl Violation for TryExceptPass {
#[derive_message_formats]
fn message(&self) -> String {
format!("`try`-`except`-`pass` detected, consider logging the exception")
}
}
/// S110
pub fn try_except_pass(
checker: &mut Checker,
type_: Option<&Expr>,
_name: Option<&str>,
body: &[Stmt],
check_typed_exception: bool,
) {
if body.len() == 1
&& body[0].node == StmtKind::Pass
&& (check_typed_exception
|| type_.map_or(true, |type_| {
checker.resolve_call_path(type_).map_or(true, |call_path| {
call_path.as_slice() == ["", "Exception"]
|| call_path.as_slice() == ["", "BaseException"]
})
}))
{
checker.diagnostics.push(Diagnostic::new(
TryExceptPass,
Range::from_located(&body[0]),
));
}
}
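The boolean condition in `try_except_pass` above can be isolated into a small decision function: flag a handler whose body is a single `pass` when either `check_typed_exception` is enabled or the caught type is bare or resolves to `Exception`/`BaseException`. This sketch substitutes plain flags and strings for the real AST and call-path types:

```rust
// `caught_type: None` models a bare `except:` clause.
fn should_flag(
    body_is_single_pass: bool,
    caught_type: Option<&str>,
    check_typed_exception: bool,
) -> bool {
    body_is_single_pass
        && (check_typed_exception
            || caught_type.map_or(true, |t| t == "Exception" || t == "BaseException"))
}

fn main() {
    assert!(should_flag(true, None, false)); // bare `except: pass`
    assert!(should_flag(true, Some("Exception"), false)); // broad exception
    assert!(!should_flag(true, Some("ValueError"), false)); // typed: allowed by default
    assert!(should_flag(true, Some("ValueError"), true)); // typed, but opted in
    assert!(!should_flag(false, None, false)); // body does more than `pass`
}
```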


@@ -27,13 +27,13 @@ pub fn unsafe_yaml_load(checker: &mut Checker, func: &Expr, args: &[Expr], keywo
_ => None,
};
checker.diagnostics.push(Diagnostic::new(
violations::UnsafeYAMLLoad(loader),
violations::UnsafeYAMLLoad { loader },
Range::from_located(loader_arg),
));
}
} else {
checker.diagnostics.push(Diagnostic::new(
violations::UnsafeYAMLLoad(None),
violations::UnsafeYAMLLoad { loader: None },
Range::from_located(func),
));
}


@@ -34,11 +34,20 @@ pub struct Options {
/// A list of directories to consider temporary, in addition to those
/// specified by `hardcoded-tmp-directory`.
pub hardcoded_tmp_directory_extend: Option<Vec<String>>,
#[option(
default = "false",
value_type = "bool",
example = "check-typed-exception = true"
)]
/// Whether to disallow `try`-`except`-`pass` (`S110`) for specific exception types. By default,
/// `try`-`except`-`pass` is only disallowed for `Exception` and `BaseException`.
pub check_typed_exception: Option<bool>,
}
#[derive(Debug, Hash)]
pub struct Settings {
pub hardcoded_tmp_directory: Vec<String>,
pub check_typed_exception: bool,
}
impl From<Options> for Settings {
@@ -55,6 +64,7 @@ impl From<Options> for Settings {
.into_iter(),
)
.collect(),
check_typed_exception: options.check_typed_exception.unwrap_or(false),
}
}
}
@@ -64,6 +74,7 @@ impl From<Settings> for Options {
Self {
hardcoded_tmp_directory: Some(settings.hardcoded_tmp_directory),
hardcoded_tmp_directory_extend: None,
check_typed_exception: Some(settings.check_typed_exception),
}
}
}
@@ -72,6 +83,7 @@ impl Default for Settings {
fn default() -> Self {
Self {
hardcoded_tmp_directory: default_tmp_dirs(),
check_typed_exception: false,
}
}
}
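The `Options` → `Settings` conversion above resolves an unset `check-typed-exception` to `false` via `unwrap_or`. A stripped-down sketch of just that field (the real structs carry the temp-directory settings as well):

```rust
// User-facing, everything optional so unset fields fall back to defaults.
struct Options {
    check_typed_exception: Option<bool>,
}

// Resolved, fully-defaulted settings used by the linter.
struct Settings {
    check_typed_exception: bool,
}

impl From<Options> for Settings {
    fn from(options: Options) -> Self {
        Self {
            check_typed_exception: options.check_typed_exception.unwrap_or(false),
        }
    }
}

fn main() {
    assert!(!Settings::from(Options { check_typed_exception: None }).check_typed_exception);
    assert!(Settings::from(Options { check_typed_exception: Some(true) }).check_typed_exception);
}
```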


@@ -3,7 +3,8 @@ source: src/rules/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
BadFilePermissions: 151
BadFilePermissions:
mask: 151
location:
row: 6
column: 24
@@ -13,7 +14,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 7
BadFilePermissions:
mask: 7
location:
row: 7
column: 24
@@ -23,7 +25,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 511
BadFilePermissions:
mask: 511
location:
row: 9
column: 24
@@ -33,7 +36,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 504
BadFilePermissions:
mask: 504
location:
row: 10
column: 24
@@ -43,7 +47,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 510
BadFilePermissions:
mask: 510
location:
row: 11
column: 24
@@ -53,7 +58,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 511
BadFilePermissions:
mask: 511
location:
row: 13
column: 22
@@ -63,7 +69,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 511
BadFilePermissions:
mask: 511
location:
row: 14
column: 23
@@ -73,7 +80,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 511
BadFilePermissions:
mask: 511
location:
row: 15
column: 24
@@ -83,7 +91,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 511
BadFilePermissions:
mask: 511
location:
row: 17
column: 18
@@ -93,7 +102,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 511
BadFilePermissions:
mask: 511
location:
row: 18
column: 18
@@ -103,7 +113,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 511
BadFilePermissions:
mask: 511
location:
row: 19
column: 18
@@ -113,7 +124,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 8
BadFilePermissions:
mask: 8
location:
row: 20
column: 26
@@ -123,7 +135,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
BadFilePermissions: 2
BadFilePermissions:
mask: 2
location:
row: 22
column: 24


@@ -3,7 +3,8 @@ source: src/rules/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 12
column: 11
@@ -13,7 +14,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 13
column: 8
@@ -23,7 +25,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 14
column: 9
@@ -33,7 +36,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 15
column: 6
@@ -43,7 +47,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 16
column: 9
@@ -53,7 +58,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 17
column: 8
@@ -63,7 +69,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 18
column: 10
@@ -73,7 +80,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 19
column: 18
@@ -83,7 +91,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 20
column: 18
@@ -93,7 +102,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 22
column: 16
@@ -103,7 +113,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 23
column: 12
@@ -113,7 +124,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 24
column: 14
@@ -123,7 +135,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 25
column: 11
@@ -133,7 +146,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 26
column: 14
@@ -143,7 +157,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 27
column: 13
@@ -153,7 +168,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 28
column: 15
@@ -163,7 +179,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 29
column: 23
@@ -173,7 +190,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 30
column: 23
@@ -183,7 +201,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 34
column: 15
@@ -193,7 +212,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 38
column: 19
@@ -203,7 +223,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 39
column: 16
@@ -213,7 +234,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 40
column: 17
@@ -223,7 +245,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 41
column: 14
@@ -233,7 +256,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 42
column: 17
@@ -243,7 +267,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 43
column: 16
@@ -253,7 +278,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 44
column: 18
@@ -263,7 +289,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 46
column: 12
@@ -273,7 +300,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 47
column: 9
@@ -283,7 +311,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 48
column: 10
@@ -293,7 +322,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 49
column: 7
@@ -303,7 +333,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 50
column: 10
@@ -313,7 +344,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 51
column: 9
@@ -323,7 +355,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 52
column: 11
@@ -333,7 +366,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: s3cr3t
HardcodedPasswordString:
string: s3cr3t
location:
row: 53
column: 20
@@ -343,7 +377,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: "1\n2"
HardcodedPasswordString:
string: "1\n2"
location:
row: 55
column: 12
@@ -353,7 +388,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: "3\t4"
HardcodedPasswordString:
string: "3\t4"
location:
row: 58
column: 12
@@ -363,7 +399,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordString: "5\r6"
HardcodedPasswordString:
string: "5\r6"
location:
row: 61
column: 12


@@ -3,7 +3,8 @@ source: src/rules/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
HardcodedPasswordFuncArg: s3cr3t
HardcodedPasswordFuncArg:
string: s3cr3t
location:
row: 13
column: 8


@@ -3,7 +3,8 @@ source: src/rules/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
HardcodedPasswordDefault: default
HardcodedPasswordDefault:
string: default
location:
row: 5
column: 28
@@ -13,7 +14,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordDefault: posonly
HardcodedPasswordDefault:
string: posonly
location:
row: 13
column: 44
@@ -23,7 +25,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordDefault: kwonly
HardcodedPasswordDefault:
string: kwonly
location:
row: 21
column: 38
@@ -33,7 +36,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordDefault: posonly
HardcodedPasswordDefault:
string: posonly
location:
row: 29
column: 38
@@ -43,7 +47,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedPasswordDefault: kwonly
HardcodedPasswordDefault:
string: kwonly
location:
row: 29
column: 61


@@ -3,7 +3,8 @@ source: src/rules/flake8_bandit/mod.rs
expression: diagnostics
---
- kind:
HardcodedTempFile: /tmp/abc
HardcodedTempFile:
string: /tmp/abc
location:
row: 5
column: 10
@@ -13,7 +14,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedTempFile: /var/tmp/123
HardcodedTempFile:
string: /var/tmp/123
location:
row: 8
column: 10
@@ -23,7 +25,8 @@ expression: diagnostics
fix: ~
parent: ~
- kind:
HardcodedTempFile: /dev/shm/unit/test
HardcodedTempFile:
string: /dev/shm/unit/test
location:
row: 11
column: 10

Some files were not shown because too many files have changed in this diff.