Compare commits


23 Commits

Author SHA1 Message Date
Zanie
b75864c5f9 WIP: Nightmare scenario warning on redirected deprecated rules 2024-01-31 10:59:11 -06:00
Zanie
4a5d711a6e Fix bug where selection included deprecated rules during preview 2024-01-30 16:04:05 -06:00
Zanie
204c000e20 Track deprecated rules during selection 2024-01-30 16:01:58 -06:00
Zanie
27f2b3af5d Additional cleanup 2024-01-30 15:39:18 -06:00
Zanie
4d5d889055 Change phrasing 2024-01-30 15:36:18 -06:00
Zanie
427a05b1b5 Refactor 2024-01-30 15:33:55 -06:00
Zanie
7e763cf84d Lint 2024-01-30 13:51:18 -06:00
Zanie
e5bf2301bb Add handling for rules that are both deprecated and redirected 2024-01-30 13:11:31 -06:00
Zanie
c7e40e6dc1 Mark PGH rules as deprecated 2024-01-30 12:12:39 -06:00
Zanie Blue
761e148308 Add rule removal infrastructure (#9691)
Similar to https://github.com/astral-sh/ruff/pull/9689 — retains removed
rules for better error messages and documentation, but removed rules
_cannot_ be used in any context.

Removes PLR1706, which serves as a useful test case for this infrastructure
and is something we want to accomplish in #9680 anyway. The rule was in
preview, so we do not need to deprecate it first.
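
For context, `PLR1706` (`and-or-ternary`) flagged the legacy `and`/`or`
conditional idiom. A minimal, hypothetical illustration (not taken from the
rule's fixtures) of the pattern it targeted and why the explicit ternary is
preferred:

```python
# The legacy `and`/`or` idiom that PLR1706 flagged: it silently misbehaves
# when the "true" branch is falsy.
cond = True
legacy = cond and 0 or 1    # evaluates to 1, even though cond is True
ternary = 0 if cond else 1  # evaluates to 0, as intended

print(legacy, ternary)  # 1 0
```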

Closes https://github.com/astral-sh/ruff/issues/9007

## Test plan

<img width="1110" alt="Rules table"
src="https://github.com/astral-sh/ruff/assets/2586601/ac9fa682-623c-44aa-8e51-d8ab0d308355">

<img width="1110" alt="Rule page"
src="https://github.com/astral-sh/ruff/assets/2586601/05850b2d-7ca5-49bb-8df8-bb931bab25cd">
2024-01-30 12:04:06 -06:00
Zanie Blue
5c75da980f Add rule deprecation infrastructure (#9689)
Adds a new `Deprecated` rule group in addition to `Stable` and
`Preview`.

Deprecated rules:
- Warn on explicit selection without preview
- Error on explicit selection with preview
- Are excluded when selected by prefix with preview

Deprecates `TRY200`, `ANN101`, and `ANN102` as a proof of concept. We
can consider deprecating them separately.
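
A minimal sketch of those three behaviors, written as hypothetical Python for
illustration rather than ruff's actual Rust selection code:

```python
def resolve_deprecated_selection(explicit: bool, preview: bool) -> str:
    """What happens when a rule in the `Deprecated` group is selected."""
    if explicit and not preview:
        return "selected, with a deprecation warning"
    if explicit and preview:
        return "error: deprecated rule selected explicitly under preview"
    if preview:
        return "excluded from the prefix expansion"
    # Prefix selection without preview: assumed unchanged (the rule still runs).
    return "selected"


for explicit in (True, False):
    for preview in (False, True):
        print(explicit, preview, "->", resolve_deprecated_selection(explicit, preview))
```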
2024-01-30 12:04:06 -06:00
Zanie Blue
747728beca Remove the NURSERY selector from the json schema (#9695) 2024-01-30 12:04:06 -06:00
Zanie Blue
897b7f75b9 Always request the concise output format during ecosystem checks (#9708)
Fixes a regression in the ecosystem checks from
https://github.com/astral-sh/ruff/pull/9687, which was causing them to
run for multiple hours due to the size of the output.

We need the concise format for comparisons.

We should probably update the ecosystem checks to actually diff the full
output in the future because that'd be nice.
2024-01-30 12:04:06 -06:00
Zanie Blue
07c6a3e672 Error if nursery rules are selected without preview (#9683)
Extends #9682 to error if the nursery selector is used or nursery rules
are selected without preview.

Part of #7992 — we will remove this in 0.3.0 instead so we can provide
nice errors in 0.2.0.
2024-01-30 12:04:06 -06:00
Jane Lewis
57576fa139 Replace --show-source and --no-show-source with --output-format=<full|concise> (#9687)
Fixes #7350

## Summary

* `--show-source` and `--no-show-source` are now deprecated.
* `output-format` supports two new variants, `full` and `concise`.
`text` is now a deprecated variant, and any use of it is treated as the
default serialization format.
* `--output-format` now defaults to `concise`
* In preview mode, `--output-format` defaults to `full`
* `--show-source` will still set `--output-format` to `full` if the
output format is not otherwise specified.
* Likewise, `--no-show-source` can override an output format that was
set in a file-based configuration, though it will also be overridden by
`--output-format`.
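
A hedged sketch of that precedence, written as illustrative Python rather than
the actual resolution code; the flag and setting names are taken from the
bullets above:

```python
def resolve_output_format(cli_format, show_source, file_format, preview):
    """Resolve the effective output format.

    cli_format:  value of `--output-format` on the CLI, or None
    show_source: True for `--show-source`, False for `--no-show-source`, None if unset
    file_format: `output-format` from file-based configuration, or None
    preview:     whether preview mode is enabled
    """
    if cli_format is not None:        # `--output-format` always wins
        return cli_format
    if show_source is not None:       # deprecated flags still act as a fallback
        return "full" if show_source else "concise"
    if file_format is not None:
        return file_format
    return "full" if preview else "concise"  # new defaults


assert resolve_output_format(None, None, None, preview=False) == "concise"
assert resolve_output_format(None, None, None, preview=True) == "full"
assert resolve_output_format(None, True, None, preview=False) == "full"
assert resolve_output_format("concise", True, None, preview=False) == "concise"
assert resolve_output_format(None, False, "full", preview=False) == "concise"
```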

## Test Plan

A lot of tests were updated to use `--output-format=full`. Additional
tests were added to ensure the correct deprecation warnings appeared,
and that deprecated options behaved as intended.
2024-01-30 12:04:06 -06:00
Charlie Marsh
b1c793e2ad Remove preview gating for flake8-simplify rules (#9686)
## Summary

Un-gates detecting `dict.get` rewrites in `if` expressions (rather than
just `if` statements).
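
For example, the conditional-expression form of the pattern is now covered as
well (a minimal illustration, not taken from the test fixtures):

```python
d = {"key": 1}

# `if` statement form (already handled): rewritten to `d.get(...)`.
if "key" in d:
    value = d["key"]
else:
    value = 0

# `if` expression form (newly un-gated): also rewritten to `d.get("key", 0)`.
value = d["key"] if "key" in d else 0
value = d.get("key", 0)
```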
2024-01-30 12:04:06 -06:00
Charlie Marsh
9fca4d830e Remove preview gating for flake8-pie rules (#9684)
## Summary

Both of the preview behaviors gated here seem like improvements, so
let's make them stable in v0.2.0
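
Per the 0.2.0 changelog further down, the two ungated behaviors are
`reimplemented-container-builtin` (`PIE807`) covering `dict` lambdas and
`unnecessary-placeholder` (`PIE790`) covering ellipses; a minimal illustration:

```python
from dataclasses import dataclass, field


@dataclass
class Settings:
    # PIE807: `lambda: {}` is now flagged like `lambda: []` and can be
    # replaced with the builtin.
    headers: dict = field(default_factory=lambda: {})  # flagged
    headers_fixed: dict = field(default_factory=dict)


def stub() -> None:
    """Do nothing."""
    # PIE790: the rule now also flags an unnecessary `...` placeholder when
    # the docstring already serves as the body.
    ...
```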
2024-01-30 12:04:06 -06:00
Charlie Marsh
d318a27b0f Remove preview gating for pycodestyle rules (#9685)
## Summary

Un-gates the behavior to allow `sys.path` modifications between imports,
which removed a bunch of false positives in the ecosystem CI at the
time.
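
A minimal illustration of the now-stable allowance (the vendored path is
hypothetical):

```python
import os
import sys

# A `sys.path` modification between imports no longer counts as "code before
# imports" for E402 (`module-import-not-at-top-of-file`).
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "_vendor"))

import json  # previously flagged as E402; now allowed without a `# noqa`

print(json.dumps({"ok": True}))
```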
2024-01-30 12:04:06 -06:00
Zanie
002d040c41 Remove preview gating for newly-added stable fixes (#9681)
## Summary

At present, our versioning policy forbids the addition of safe fixes to
stable rules outside of a minor release, so we've accumulated a bunch of
new fixes that are behind `--preview`, and can be ungated in v0.2.0.

To find these, I just grepped for `preview.is_enabled()` and identified
all such cases. I then audited the `preview_rules` test fixtures and
removed any tests that existed only to test this autofix behavior.
# Conflicts:
#	crates/ruff_linter/src/rules/flake8_simplify/snapshots/ruff_linter__rules__flake8_simplify__tests__SIM114_SIM114.py.snap
#	crates/ruff_linter/src/rules/flake8_simplify/snapshots/ruff_linter__rules__flake8_simplify__tests__preview__SIM114_SIM114.py.snap
2024-01-30 12:04:02 -06:00
Charlie Marsh
432b4da27b Recategorize static-key-dict-comprehension from RUF011 to B035 (#9428)
## Summary

This rule was added to flake8-bugbear. In general, we tend to prefer
redirecting to prominent plugins when our own rules are reimplemented
(since more projects have `B` activated than `RUF`).
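
The pattern the rule targets, as a minimal illustration:

```python
data = [("a", 1), ("b", 2)]

# B035 (previously RUF011): the comprehension key is a constant, so every
# iteration overwrites the same entry.
collapsed = {"constant": value for _, value in data}  # flagged
print(collapsed)  # {'constant': 2}
```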

## Test Plan

`cargo test`
2024-01-30 12:00:45 -06:00
Charlie Marsh
7992caed37 [flake8-pyi] Mark unaliased-collections-abc-set-import fix as safe (#9679)
## Summary

Prompted by
https://github.com/astral-sh/ruff/issues/8482#issuecomment-1859299411.
The rename is only unsafe when the symbol is exported, so we can narrow
the conditions.
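
A minimal before/after illustration of the rename the fix applies; per the
description above, it stays unsafe only when the symbol is re-exported:

```python
# Before: `Set` is easily confused with `typing.Set` and the builtin `set`.
from collections.abc import Set

# After the (now safe) fix: the import is aliased.
from collections.abc import Set as AbstractSet

# The fix remains unsafe when the symbol is exported (e.g. listed in __all__),
# since renaming it would change the module's public interface.
```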
2024-01-30 12:00:45 -06:00
Micha Reiser
4c8d1fe0e4 Add deprecation message for top-level lint settings (#9582) 2024-01-30 12:00:45 -06:00
Micha Reiser
8e8c632409 Promote `lint.` settings over top-level settings (#9476) 2024-01-30 12:00:45 -06:00
553 changed files with 7270 additions and 20169 deletions


@@ -1,8 +0,0 @@
[profile.ci]
# Print out output for failing tests as soon as they fail, and also at the end
# of the run (for easy scrollability).
failure-output = "immediate-final"
# Do not cancel the test run on the first failure.
fail-fast = false
status-level = "skip"


@@ -9,10 +9,6 @@ updates:
actions:
patterns:
- "*"
ignore:
# The latest versions of these are not compatible with our release workflow
- dependency-name: "actions/upload-artifact"
- dependency-name: "actions/download-artifact"
- package-ecosystem: "cargo"
directory: "/"


@@ -35,7 +35,7 @@ jobs:
with:
fetch-depth: 0
- uses: tj-actions/changed-files@v42
- uses: tj-actions/changed-files@v41
id: changed
with:
files_yaml: |
@@ -111,23 +111,13 @@ jobs:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
env:
NEXTEST_PROFILE: "ci"
run: cargo insta test --all-features --unreferenced reject --test-runner nextest
run: cargo insta test --all --all-features --unreferenced reject
# Check for broken links in the documentation.
- run: cargo doc --all --no-deps
env:
@@ -148,16 +138,15 @@ jobs:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
run: |
cargo nextest run --all-features --profile ci
cargo test --all-features --doc
# We can't reject unreferenced snapshots on windows because flake8_executable can't run on windows
run: cargo insta test --all --all-features
cargo-test-wasm:
name: "cargo test (wasm)"
@@ -393,7 +382,7 @@ jobs:
- name: "Install pre-commit"
run: pip install pre-commit
- name: "Cache pre-commit"
uses: actions/cache@v4
uses: actions/cache@v3
with:
path: ~/.cache/pre-commit
key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
@@ -418,7 +407,7 @@ jobs:
- uses: actions/setup-python@v5
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@v0.9.0
uses: webfactory/ssh-agent@v0.8.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"


@@ -23,7 +23,7 @@ jobs:
- uses: actions/setup-python@v5
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@v0.9.0
uses: webfactory/ssh-agent@v0.8.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"


@@ -61,7 +61,7 @@ jobs:
echo 'EOF' >> $GITHUB_OUTPUT
- name: Find existing comment
uses: peter-evans/find-comment@v3
uses: peter-evans/find-comment@v2
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:
@@ -71,7 +71,7 @@ jobs:
- name: Create or update comment
if: steps.find-comment.outcome == 'success'
uses: peter-evans/create-or-update-comment@v4
uses: peter-evans/create-or-update-comment@v3
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.pr-number.outputs.pr-number }}


@@ -1,215 +1,5 @@
# Changelog
## 0.2.1
This release includes support for range formatting (i.e., the ability to format specific lines
within a source file).
### Preview features
- \[`refurb`\] Implement `missing-f-string-syntax` (`RUF027`) ([#9728](https://github.com/astral-sh/ruff/pull/9728))
- Format module-level docstrings ([#9725](https://github.com/astral-sh/ruff/pull/9725))
### Formatter
- Add `--range` option to `ruff format` ([#9733](https://github.com/astral-sh/ruff/pull/9733))
- Don't trim last empty line in docstrings ([#9813](https://github.com/astral-sh/ruff/pull/9813))
### Bug fixes
- Skip empty lines when determining base indentation ([#9795](https://github.com/astral-sh/ruff/pull/9795))
- Drop `__get__` and `__set__` from `unnecessary-dunder-call` ([#9791](https://github.com/astral-sh/ruff/pull/9791))
- Respect generic `Protocol` in ellipsis removal ([#9841](https://github.com/astral-sh/ruff/pull/9841))
- Revert "Use publicly available Apple Silicon runners (#9726)" ([#9834](https://github.com/astral-sh/ruff/pull/9834))
### Performance
- Skip LibCST parsing for standard dedent adjustments ([#9769](https://github.com/astral-sh/ruff/pull/9769))
- Remove CST-based fixer for `C408` ([#9822](https://github.com/astral-sh/ruff/pull/9822))
- Add our own ignored-names abstractions ([#9802](https://github.com/astral-sh/ruff/pull/9802))
- Remove CST-based fixers for `C400`, `C401`, `C410`, and `C418` ([#9819](https://github.com/astral-sh/ruff/pull/9819))
- Use `AhoCorasick` to speed up quote match ([#9773](https://github.com/astral-sh/ruff/pull/9773))
- Remove CST-based fixers for `C405` and `C409` ([#9821](https://github.com/astral-sh/ruff/pull/9821))
- Add fast-path for comment detection ([#9808](https://github.com/astral-sh/ruff/pull/9808))
- Invert order of checks in `zero-sleep-call` ([#9766](https://github.com/astral-sh/ruff/pull/9766))
- Short-circuit typing matches based on imports ([#9800](https://github.com/astral-sh/ruff/pull/9800))
- Run dunder method rule on methods directly ([#9815](https://github.com/astral-sh/ruff/pull/9815))
- Track top-level module imports in the semantic model ([#9775](https://github.com/astral-sh/ruff/pull/9775))
- Slight speed-up for lowercase and uppercase identifier checks ([#9798](https://github.com/astral-sh/ruff/pull/9798))
- Remove LibCST-based fixer for `C403` ([#9818](https://github.com/astral-sh/ruff/pull/9818))
### Documentation
- Update `max-pos-args` example to `max-positional-args` ([#9797](https://github.com/astral-sh/ruff/pull/9797))
- Fixed example code in `weak_cryptographic_key.rs` ([#9774](https://github.com/astral-sh/ruff/pull/9774))
- Fix references to deprecated `ANN` rules in changelog ([#9771](https://github.com/astral-sh/ruff/pull/9771))
- Fix default for `max-positional-args` ([#9838](https://github.com/astral-sh/ruff/pull/9838))
## 0.2.0
### Breaking changes
- The `NURSERY` selector cannot be used anymore
- Legacy selection of nursery rules by exact codes is no longer allowed without preview enabled
See also the "Remapped rules" section, which may result in disabled rules.
### Deprecations
The following rules are now deprecated:
- [`missing-type-self`](https://docs.astral.sh/ruff/rules/missing-type-self/) (`ANN101`)
- [`missing-type-cls`](https://docs.astral.sh/ruff/rules/missing-type-cls/) (`ANN102`)
The following command line options are now deprecated:
- `--show-source`; use `--output-format full` instead
- `--no-show-source`; use `--output-format concise` instead
- `--output-format text`; use `full` or `concise` instead
The following settings have moved and the previous name is deprecated:
- `ruff.allowed-confusables` → [`ruff.lint.allowed-confusables`](https://docs.astral.sh/ruff/settings/#lint_allowed-confusables)
- `ruff.dummy-variable-rgx` → [`ruff.lint.dummy-variable-rgx`](https://docs.astral.sh/ruff/settings/#lint_dummy-variable-rgx)
- `ruff.explicit-preview-rules` → [`ruff.lint.explicit-preview-rules`](https://docs.astral.sh/ruff/settings/#lint_explicit-preview-rules)
- `ruff.extend-fixable` → [`ruff.lint.extend-fixable`](https://docs.astral.sh/ruff/settings/#lint_extend-fixable)
- `ruff.extend-ignore` → [`ruff.lint.extend-ignore`](https://docs.astral.sh/ruff/settings/#lint_extend-ignore)
- `ruff.extend-per-file-ignores` → [`ruff.lint.extend-per-file-ignores`](https://docs.astral.sh/ruff/settings/#lint_extend-per-file-ignores)
- `ruff.extend-safe-fixes` → [`ruff.lint.extend-safe-fixes`](https://docs.astral.sh/ruff/settings/#lint_extend-safe-fixes)
- `ruff.extend-select` → [`ruff.lint.extend-select`](https://docs.astral.sh/ruff/settings/#lint_extend-select)
- `ruff.extend-unfixable` → [`ruff.lint.extend-unfixable`](https://docs.astral.sh/ruff/settings/#lint_extend-unfixable)
- `ruff.extend-unsafe-fixes` → [`ruff.lint.extend-unsafe-fixes`](https://docs.astral.sh/ruff/settings/#lint_extend-unsafe-fixes)
- `ruff.external` → [`ruff.lint.external`](https://docs.astral.sh/ruff/settings/#lint_external)
- `ruff.fixable` → [`ruff.lint.fixable`](https://docs.astral.sh/ruff/settings/#lint_fixable)
- `ruff.flake8-annotations` → [`ruff.lint.flake8-annotations`](https://docs.astral.sh/ruff/settings/#lint_flake8-annotations)
- `ruff.flake8-bandit` → [`ruff.lint.flake8-bandit`](https://docs.astral.sh/ruff/settings/#lint_flake8-bandit)
- `ruff.flake8-bugbear` → [`ruff.lint.flake8-bugbear`](https://docs.astral.sh/ruff/settings/#lint_flake8-bugbear)
- `ruff.flake8-builtins` → [`ruff.lint.flake8-builtins`](https://docs.astral.sh/ruff/settings/#lint_flake8-builtins)
- `ruff.flake8-comprehensions` → [`ruff.lint.flake8-comprehensions`](https://docs.astral.sh/ruff/settings/#lint_flake8-comprehensions)
- `ruff.flake8-copyright` → [`ruff.lint.flake8-copyright`](https://docs.astral.sh/ruff/settings/#lint_flake8-copyright)
- `ruff.flake8-errmsg` → [`ruff.lint.flake8-errmsg`](https://docs.astral.sh/ruff/settings/#lint_flake8-errmsg)
- `ruff.flake8-gettext` → [`ruff.lint.flake8-gettext`](https://docs.astral.sh/ruff/settings/#lint_flake8-gettext)
- `ruff.flake8-implicit-str-concat` → [`ruff.lint.flake8-implicit-str-concat`](https://docs.astral.sh/ruff/settings/#lint_flake8-implicit-str-concat)
- `ruff.flake8-import-conventions` → [`ruff.lint.flake8-import-conventions`](https://docs.astral.sh/ruff/settings/#lint_flake8-import-conventions)
- `ruff.flake8-pytest-style` → [`ruff.lint.flake8-pytest-style`](https://docs.astral.sh/ruff/settings/#lint_flake8-pytest-style)
- `ruff.flake8-quotes` → [`ruff.lint.flake8-quotes`](https://docs.astral.sh/ruff/settings/#lint_flake8-quotes)
- `ruff.flake8-self` → [`ruff.lint.flake8-self`](https://docs.astral.sh/ruff/settings/#lint_flake8-self)
- `ruff.flake8-tidy-imports` → [`ruff.lint.flake8-tidy-imports`](https://docs.astral.sh/ruff/settings/#lint_flake8-tidy-imports)
- `ruff.flake8-type-checking` → [`ruff.lint.flake8-type-checking`](https://docs.astral.sh/ruff/settings/#lint_flake8-type-checking)
- `ruff.flake8-unused-arguments` → [`ruff.lint.flake8-unused-arguments`](https://docs.astral.sh/ruff/settings/#lint_flake8-unused-arguments)
- `ruff.ignore` → [`ruff.lint.ignore`](https://docs.astral.sh/ruff/settings/#lint_ignore)
- `ruff.ignore-init-module-imports` → [`ruff.lint.ignore-init-module-imports`](https://docs.astral.sh/ruff/settings/#lint_ignore-init-module-imports)
- `ruff.isort` → [`ruff.lint.isort`](https://docs.astral.sh/ruff/settings/#lint_isort)
- `ruff.logger-objects` → [`ruff.lint.logger-objects`](https://docs.astral.sh/ruff/settings/#lint_logger-objects)
- `ruff.mccabe` → [`ruff.lint.mccabe`](https://docs.astral.sh/ruff/settings/#lint_mccabe)
- `ruff.pep8-naming` → [`ruff.lint.pep8-naming`](https://docs.astral.sh/ruff/settings/#lint_pep8-naming)
- `ruff.per-file-ignores` → [`ruff.lint.per-file-ignores`](https://docs.astral.sh/ruff/settings/#lint_per-file-ignores)
- `ruff.pycodestyle` → [`ruff.lint.pycodestyle`](https://docs.astral.sh/ruff/settings/#lint_pycodestyle)
- `ruff.pydocstyle` → [`ruff.lint.pydocstyle`](https://docs.astral.sh/ruff/settings/#lint_pydocstyle)
- `ruff.pyflakes` → [`ruff.lint.pyflakes`](https://docs.astral.sh/ruff/settings/#lint_pyflakes)
- `ruff.pylint` → [`ruff.lint.pylint`](https://docs.astral.sh/ruff/settings/#lint_pylint)
- `ruff.pyupgrade` → [`ruff.lint.pyupgrade`](https://docs.astral.sh/ruff/settings/#lint_pyupgrade)
- `ruff.select` → [`ruff.lint.select`](https://docs.astral.sh/ruff/settings/#lint_select)
- `ruff.task-tags` → [`ruff.lint.task-tags`](https://docs.astral.sh/ruff/settings/#lint_task-tags)
- `ruff.typing-modules` → [`ruff.lint.typing-modules`](https://docs.astral.sh/ruff/settings/#lint_typing-modules)
- `ruff.unfixable` → [`ruff.lint.unfixable`](https://docs.astral.sh/ruff/settings/#lint_unfixable)
### Remapped rules
The following rules have been remapped to new codes:
- [`raise-without-from-inside-except`](https://docs.astral.sh/ruff/rules/raise-without-from-inside-except/): `TRY200` to `B904`
- [`suspicious-eval-usage`](https://docs.astral.sh/ruff/rules/suspicious-eval-usage/): `PGH001` to `S307`
- [`logging-warn`](https://docs.astral.sh/ruff/rules/logging-warn/): `PGH002` to `G010`
- [`static-key-dict-comprehension`](https://docs.astral.sh/ruff/rules/static-key-dict-comprehension): `RUF011` to `B035`
- [`runtime-string-union`](https://docs.astral.sh/ruff/rules/runtime-string-union): `TCH006` to `TCH010`
### Stabilizations
The following rules have been stabilized and are no longer in preview:
- [`trio-timeout-without-await`](https://docs.astral.sh/ruff/rules/trio-timeout-without-await) (`TRIO100`)
- [`trio-sync-call`](https://docs.astral.sh/ruff/rules/trio-sync-call) (`TRIO105`)
- [`trio-async-function-with-timeout`](https://docs.astral.sh/ruff/rules/trio-async-function-with-timeout) (`TRIO109`)
- [`trio-unneeded-sleep`](https://docs.astral.sh/ruff/rules/trio-unneeded-sleep) (`TRIO110`)
- [`trio-zero-sleep-call`](https://docs.astral.sh/ruff/rules/trio-zero-sleep-call) (`TRIO115`)
- [`unnecessary-escaped-quote`](https://docs.astral.sh/ruff/rules/unnecessary-escaped-quote) (`Q004`)
- [`enumerate-for-loop`](https://docs.astral.sh/ruff/rules/enumerate-for-loop) (`SIM113`)
- [`zip-dict-keys-and-values`](https://docs.astral.sh/ruff/rules/zip-dict-keys-and-values) (`SIM911`)
- [`timeout-error-alias`](https://docs.astral.sh/ruff/rules/timeout-error-alias) (`UP041`)
- [`flask-debug-true`](https://docs.astral.sh/ruff/rules/flask-debug-true) (`S201`)
- [`tarfile-unsafe-members`](https://docs.astral.sh/ruff/rules/tarfile-unsafe-members) (`S202`)
- [`ssl-insecure-version`](https://docs.astral.sh/ruff/rules/ssl-insecure-version) (`S502`)
- [`ssl-with-bad-defaults`](https://docs.astral.sh/ruff/rules/ssl-with-bad-defaults) (`S503`)
- [`ssl-with-no-version`](https://docs.astral.sh/ruff/rules/ssl-with-no-version) (`S504`)
- [`weak-cryptographic-key`](https://docs.astral.sh/ruff/rules/weak-cryptographic-key) (`S505`)
- [`ssh-no-host-key-verification`](https://docs.astral.sh/ruff/rules/ssh-no-host-key-verification) (`S507`)
- [`django-raw-sql`](https://docs.astral.sh/ruff/rules/django-raw-sql) (`S611`)
- [`mako-templates`](https://docs.astral.sh/ruff/rules/mako-templates) (`S702`)
- [`generator-return-from-iter-method`](https://docs.astral.sh/ruff/rules/generator-return-from-iter-method) (`PYI058`)
- [`runtime-string-union`](https://docs.astral.sh/ruff/rules/runtime-string-union) (`TCH006`)
- [`numpy2-deprecation`](https://docs.astral.sh/ruff/rules/numpy2-deprecation) (`NPY201`)
- [`quadratic-list-summation`](https://docs.astral.sh/ruff/rules/quadratic-list-summation) (`RUF017`)
- [`assignment-in-assert`](https://docs.astral.sh/ruff/rules/assignment-in-assert) (`RUF018`)
- [`unnecessary-key-check`](https://docs.astral.sh/ruff/rules/unnecessary-key-check) (`RUF019`)
- [`never-union`](https://docs.astral.sh/ruff/rules/never-union) (`RUF020`)
- [`direct-logger-instantiation`](https://docs.astral.sh/ruff/rules/direct-logger-instantiation) (`LOG001`)
- [`invalid-get-logger-argument`](https://docs.astral.sh/ruff/rules/invalid-get-logger-argument) (`LOG002`)
- [`exception-without-exc-info`](https://docs.astral.sh/ruff/rules/exception-without-exc-info) (`LOG007`)
- [`undocumented-warn`](https://docs.astral.sh/ruff/rules/undocumented-warn) (`LOG009`)
Fixes for the following rules have been stabilized and are now available without preview:
- [`triple-single-quotes`](https://docs.astral.sh/ruff/rules/triple-single-quotes) (`D300`)
- [`non-pep604-annotation`](https://docs.astral.sh/ruff/rules/non-pep604-annotation) (`UP007`)
- [`dict-get-with-none-default`](https://docs.astral.sh/ruff/rules/dict-get-with-none-default) (`SIM910`)
- [`in-dict-keys`](https://docs.astral.sh/ruff/rules/in-dict-keys) (`SIM118`)
- [`collapsible-else-if`](https://docs.astral.sh/ruff/rules/collapsible-else-if) (`PLR5501`)
- [`if-with-same-arms`](https://docs.astral.sh/ruff/rules/if-with-same-arms) (`SIM114`)
- [`useless-else-on-loop`](https://docs.astral.sh/ruff/rules/useless-else-on-loop) (`PLW0120`)
- [`unnecessary-literal-union`](https://docs.astral.sh/ruff/rules/unnecessary-literal-union) (`PYI030`)
- [`unnecessary-spread`](https://docs.astral.sh/ruff/rules/unnecessary-spread) (`PIE800`)
- [`error-instead-of-exception`](https://docs.astral.sh/ruff/rules/error-instead-of-exception) (`TRY400`)
- [`redefined-while-unused`](https://docs.astral.sh/ruff/rules/redefined-while-unused) (`F811`)
- [`duplicate-value`](https://docs.astral.sh/ruff/rules/duplicate-value) (`B033`)
- [`multiple-imports-on-one-line`](https://docs.astral.sh/ruff/rules/multiple-imports-on-one-line) (`E401`)
- [`non-pep585-annotation`](https://docs.astral.sh/ruff/rules/non-pep585-annotation) (`UP006`)
Fixes for the following rules have been promoted from unsafe to safe:
- [`unaliased-collections-abc-set-import`](https://docs.astral.sh/ruff/rules/unaliased-collections-abc-set-import) (`PYI025`)
The following behaviors have been stabilized:
- [`module-import-not-at-top-of-file`](https://docs.astral.sh/ruff/rules/module-import-not-at-top-of-file/) (`E402`) allows `sys.path` modifications between imports
- [`reimplemented-container-builtin`](https://docs.astral.sh/ruff/rules/reimplemented-container-builtin/) (`PIE807`) includes lambdas that can be replaced with `dict`
- [`unnecessary-placeholder`](https://docs.astral.sh/ruff/rules/unnecessary-placeholder/) (`PIE790`) applies to unnecessary ellipses (`...`)
- [`if-else-block-instead-of-dict-get`](https://docs.astral.sh/ruff/rules/if-else-block-instead-of-dict-get/) (`SIM401`) applies to `if-else` expressions
### Preview features
- \[`refurb`\] Implement `metaclass_abcmeta` (`FURB180`) ([#9658](https://github.com/astral-sh/ruff/pull/9658))
- Implement `blank_line_after_nested_stub_class` preview style ([#9155](https://github.com/astral-sh/ruff/pull/9155))
- The preview rule [`and-or-ternary`](https://docs.astral.sh/ruff/rules/and-or-ternary) (`PLR1706`) was removed
### Bug fixes
- \[`flake8-async`\] Take `pathlib.Path` into account when analyzing async functions ([#9703](https://github.com/astral-sh/ruff/pull/9703))
- \[`flake8-return`\] - fix indentation syntax error (`RET505`) ([#9705](https://github.com/astral-sh/ruff/pull/9705))
- Detect multi-statement lines in else removal ([#9748](https://github.com/astral-sh/ruff/pull/9748))
- `RUF022`, `RUF023`: never add two trailing commas to the end of a sequence ([#9698](https://github.com/astral-sh/ruff/pull/9698))
- `RUF023`: Don't sort `__match_args__`, only `__slots__` ([#9724](https://github.com/astral-sh/ruff/pull/9724))
- \[`flake8-simplify`\] - Fix syntax error in autofix (`SIM114`) ([#9704](https://github.com/astral-sh/ruff/pull/9704))
- \[`pylint`\] Show verbatim constant in `magic-value-comparison` (`PLR2004`) ([#9694](https://github.com/astral-sh/ruff/pull/9694))
- Removing trailing whitespace inside multiline strings is unsafe ([#9744](https://github.com/astral-sh/ruff/pull/9744))
- Support `IfExp` with dual string arms in `invalid-envvar-default` ([#9734](https://github.com/astral-sh/ruff/pull/9734))
- \[`pylint`\] Add `__mro_entries__` to known dunder methods (`PLW3201`) ([#9706](https://github.com/astral-sh/ruff/pull/9706))
### Documentation
- Removed rules are now retained in the documentation ([#9691](https://github.com/astral-sh/ruff/pull/9691))
- Deprecated rules are now indicated in the documentation ([#9689](https://github.com/astral-sh/ruff/pull/9689))
## 0.1.15
### Preview features


@@ -26,10 +26,6 @@ Welcome! We're happy to have you here. Thank you in advance for your contributio
- [`cargo dev`](#cargo-dev)
- [Subsystems](#subsystems)
- [Compilation Pipeline](#compilation-pipeline)
- [Import Categorization](#import-categorization)
- [Project root](#project-root)
- [Package root](#package-root)
- [Import categorization](#import-categorization-1)
## The Basics
@@ -67,7 +63,7 @@ You'll also need [Insta](https://insta.rs/docs/) to update snapshot tests:
cargo install cargo-insta
```
And you'll need pre-commit to run some validation checks:
and pre-commit to run some validation checks:
```shell
pipx install pre-commit # or `pip install pre-commit` if you have a virtualenv
@@ -80,16 +76,6 @@ when making a commit:
pre-commit install
```
We recommend [nextest](https://nexte.st/) to run Ruff's test suite (via `cargo nextest run`),
though it's not strictly necessary:
```shell
cargo install cargo-nextest --locked
```
Throughout this guide, any usages of `cargo test` can be replaced with `cargo nextest run`,
if you choose to install `nextest`.
### Development
After cloning the repository, run Ruff locally from the repository root with:
@@ -245,7 +231,7 @@ Once you've completed the code for the rule itself, you can define tests with th
For example, if you're adding a new rule named `E402`, you would run:
```shell
cargo run -p ruff -- check crates/ruff_linter/resources/test/fixtures/pycodestyle/E402.py --no-cache --preview --select E402
cargo run -p ruff -- check crates/ruff_linter/resources/test/fixtures/pycodestyle/E402.py --no-cache --select E402
```
**Note:** Only a subset of rules are enabled by default. When testing a new rule, ensure that
@@ -387,11 +373,6 @@ We have several ways of benchmarking and profiling Ruff:
- Microbenchmarks which run the linter or the formatter on individual files. These run on pull requests.
- Profiling the linter on either the microbenchmarks or entire projects
> \[!NOTE\]
> When running benchmarks, ensure that your CPU is otherwise idle (e.g., close any background
> applications, like web browsers). You may also want to switch your CPU to a "performance"
> mode, if it exists, especially when benchmarking short-lived processes.
### CPython Benchmark
First, clone [CPython](https://github.com/python/cpython). It's a large and diverse Python codebase,

Cargo.lock (generated, 760 changed lines)

File diff suppressed because it is too large


@@ -19,10 +19,9 @@ argfile = { version = "0.1.6" }
assert_cmd = { version = "2.0.13" }
bincode = { version = "1.3.3" }
bitflags = { version = "2.4.1" }
bstr = { version = "1.9.0" }
cachedir = { version = "0.3.1" }
chrono = { version = "0.4.34", default-features = false, features = ["clock"] }
clap = { version = "4.4.18", features = ["derive"] }
chrono = { version = "0.4.33", default-features = false, features = ["clock"] }
clap = { version = "4.4.13", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clearscreen = { version = "2.0.0" }
codspeed-criterion-compat = { version = "2.3.3", default-features = false }
@@ -44,19 +43,19 @@ hexf-parse = { version ="0.2.1"}
ignore = { version = "0.4.22" }
imara-diff ={ version = "0.1.5"}
imperative = { version = "1.0.4" }
indicatif ={ version = "0.17.8"}
indicatif ={ version = "0.17.7"}
indoc ={ version = "2.0.4"}
insta = { version = "1.34.0", feature = ["filters", "glob"] }
insta-cmd = { version = "0.4.0" }
is-macro = { version = "0.3.5" }
is-macro = { version = "0.3.4" }
is-wsl = { version = "0.4.0" }
itertools = { version = "0.12.1" }
itertools = { version = "0.12.0" }
js-sys = { version = "0.3.67" }
lalrpop-util = { version = "0.20.0", default-features = false }
lexical-parse-float = { version = "0.8.0", features = ["format"] }
libcst = { version = "1.1.0", default-features = false }
log = { version = "0.4.17" }
memchr = { version = "2.7.1" }
memchr = { version = "2.6.4" }
mimalloc = { version ="0.1.39"}
natord = { version = "1.0.9" }
notify = { version = "6.1.1" }
@@ -65,8 +64,8 @@ path-absolutize = { version = "3.1.1" }
pathdiff = { version = "0.2.1" }
pep440_rs = { version = "0.4.0", features = ["serde"] }
pretty_assertions = "1.3.0"
proc-macro2 = { version = "1.0.78" }
pyproject-toml = { version = "0.9.0" }
proc-macro2 = { version = "1.0.73" }
pyproject-toml = { version = "0.8.1" }
quick-junit = { version = "0.3.5" }
quote = { version = "1.0.23" }
rand = { version = "0.8.5" }
@@ -77,11 +76,11 @@ rustc-hash = { version = "1.1.0" }
schemars = { version = "0.8.16" }
seahash = { version ="4.1.0"}
semver = { version = "1.0.21" }
serde = { version = "1.0.196", features = ["derive"] }
serde = { version = "1.0.195", features = ["derive"] }
serde-wasm-bindgen = { version = "0.6.3" }
serde_json = { version = "1.0.113" }
serde_test = { version = "1.0.152" }
serde_with = { version = "3.6.0", default-features = false, features = ["macros"] }
serde_with = { version = "3.5.1", default-features = false, features = ["macros"] }
shellexpand = { version = "3.0.0" }
shlex = { version ="1.3.0"}
similar = { version = "2.4.0", features = ["inline"] }
@@ -92,9 +91,9 @@ strum_macros = { version = "0.25.3" }
syn = { version = "2.0.40" }
tempfile = { version ="3.9.0"}
test-case = { version = "3.3.1" }
thiserror = { version = "1.0.57" }
thiserror = { version = "1.0.51" }
tikv-jemallocator = { version ="0.5.0"}
toml = { version = "0.8.9" }
toml = { version = "0.8.8" }
tracing = { version = "0.1.40" }
tracing-indicatif = { version = "0.3.6" }
tracing-subscriber = { version = "0.3.18", features = ["env-filter"] }


@@ -150,7 +150,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.2.1
rev: v0.1.15
hooks:
# Run the linter.
- id: ruff
@@ -402,7 +402,6 @@ Ruff is used by a number of major open-source projects and companies, including:
[Diffusers](https://github.com/huggingface/diffusers))
- ING Bank ([popmon](https://github.com/ing-bank/popmon), [probatus](https://github.com/ing-bank/probatus))
- [Ibis](https://github.com/ibis-project/ibis)
- [ivy](https://github.com/unifyai/ivy)
- [Jupyter](https://github.com/jupyter-server/jupyter_server)
- [LangChain](https://github.com/hwchase17/langchain)
- [Litestar](https://litestar.dev/)
@@ -433,7 +432,6 @@ Ruff is used by a number of major open-source projects and companies, including:
- [PyInstaller](https://github.com/pyinstaller/pyinstaller)
- [PyMC](https://github.com/pymc-devs/pymc/)
- [PyMC-Marketing](https://github.com/pymc-labs/pymc-marketing)
- [pytest](https://github.com/pytest-dev/pytest)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Pydantic](https://github.com/pydantic/pydantic)
- [Pylint](https://github.com/PyCQA/pylint)
@@ -464,7 +462,7 @@ Ruff is used by a number of major open-source projects and companies, including:
### Show Your Support
If you're using Ruff, consider adding the Ruff badge to your project's `README.md`:
If you're using Ruff, consider adding the Ruff badge to project's `README.md`:
```md
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
@@ -490,6 +488,6 @@ MIT
<div align="center">
<a target="_blank" href="https://astral.sh" style="background:none">
<img src="https://raw.githubusercontent.com/astral-sh/ruff/main/assets/svg/Astral.svg" alt="Made by Astral">
<img src="https://raw.githubusercontent.com/astral-sh/ruff/main/assets/svg/Astral.svg">
</a>
</div>


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.2.1"
version = "0.1.15"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -48,16 +48,12 @@ serde = { workspace = true }
serde_json = { workspace = true }
shellexpand = { workspace = true }
strum = { workspace = true, features = [] }
tempfile = { workspace = true }
thiserror = { workspace = true }
toml = { workspace = true }
tracing = { workspace = true, features = ["log"] }
walkdir = { workspace = true }
wild = { workspace = true }
[dev-dependencies]
# Enable test rules during development
ruff_linter = { path = "../ruff_linter", features = ["clap", "test-rules"] }
assert_cmd = { workspace = true }
# Avoid writing colored snapshots when running tests from the terminal
colored = { workspace = true, features = ["no-color"]}


@@ -1,18 +1,8 @@
use std::cmp::Ordering;
use std::fmt::Formatter;
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::sync::Arc;
use std::path::PathBuf;
use anyhow::bail;
use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{command, Parser};
use colored::Colorize;
use path_absolutize::path_dedot;
use regex::Regex;
use rustc_hash::FxHashMap;
use toml;
use ruff_linter::line_width::LineLength;
use ruff_linter::logging::LogLevel;
@@ -22,10 +12,8 @@ use ruff_linter::settings::types::{
SerializationFormat, UnsafeFixes,
};
use ruff_linter::{warn_user, RuleParser, RuleSelector, RuleSelectorParser};
use ruff_source_file::{LineIndex, OneIndexed};
use ruff_text_size::TextRange;
use ruff_workspace::configuration::{Configuration, RuleSelection};
use ruff_workspace::options::{Options, PycodestyleOptions};
use ruff_workspace::options::PycodestyleOptions;
use ruff_workspace::resolver::ConfigurationTransformer;
#[derive(Debug, Parser)]
@@ -161,20 +149,10 @@ pub struct CheckCommand {
preview: bool,
#[clap(long, overrides_with("preview"), hide = true)]
no_preview: bool,
/// Either a path to a TOML configuration file (`pyproject.toml` or `ruff.toml`),
/// or a TOML `<KEY> = <VALUE>` pair
/// (such as you might find in a `ruff.toml` configuration file)
/// overriding a specific configuration option.
/// Overrides of individual settings using this option always take precedence
/// over all configuration files, including configuration files that were also
/// specified using `--config`.
#[arg(
long,
action = clap::ArgAction::Append,
value_name = "CONFIG_OPTION",
value_parser = ConfigArgumentParser,
)]
pub config: Vec<SingleConfigArgument>,
/// Path to the `pyproject.toml` or `ruff.toml` file to use for
/// configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Comma-separated list of rule codes to enable (or ALL, to enable all rules).
#[arg(
long,
@@ -307,15 +285,7 @@ pub struct CheckCommand {
#[arg(short, long, env = "RUFF_NO_CACHE", help_heading = "Miscellaneous")]
pub no_cache: bool,
/// Ignore all configuration files.
//
// Note: We can't mark this as conflicting with `--config` here
// as `--config` can be used for specifying configuration overrides
// as well as configuration files.
// Specifying a configuration file conflicts with `--isolated`;
// specifying a configuration override does not.
// If a user specifies `ruff check --isolated --config=ruff.toml`,
// we emit an error later on, after the initial parsing by clap.
#[arg(long, help_heading = "Miscellaneous")]
#[arg(long, conflicts_with = "config", help_heading = "Miscellaneous")]
pub isolated: bool,
/// Path to the cache directory.
#[arg(long, env = "RUFF_CACHE_DIR", help_heading = "Miscellaneous")]
@@ -408,20 +378,9 @@ pub struct FormatCommand {
/// difference between the current file and how the formatted file would look like.
#[arg(long)]
pub diff: bool,
/// Either a path to a TOML configuration file (`pyproject.toml` or `ruff.toml`),
/// or a TOML `<KEY> = <VALUE>` pair
/// (such as you might find in a `ruff.toml` configuration file)
/// overriding a specific configuration option.
/// Overrides of individual settings using this option always take precedence
/// over all configuration files, including configuration files that were also
/// specified using `--config`.
#[arg(
long,
action = clap::ArgAction::Append,
value_name = "CONFIG_OPTION",
value_parser = ConfigArgumentParser,
)]
pub config: Vec<SingleConfigArgument>,
/// Path to the `pyproject.toml` or `ruff.toml` file to use for configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Disable cache reads.
#[arg(short, long, env = "RUFF_NO_CACHE", help_heading = "Miscellaneous")]
@@ -463,15 +422,7 @@ pub struct FormatCommand {
#[arg(long, help_heading = "Format configuration")]
pub line_length: Option<LineLength>,
/// Ignore all configuration files.
//
// Note: We can't mark this as conflicting with `--config` here
// as `--config` can be used for specifying configuration overrides
// as well as configuration files.
// Specifying a configuration file conflicts with `--isolated`;
// specifying a configuration override does not.
// If a user specifies `ruff check --isolated --config=ruff.toml`,
// we emit an error later on, after the initial parsing by clap.
#[arg(long, help_heading = "Miscellaneous")]
#[arg(long, conflicts_with = "config", help_heading = "Miscellaneous")]
pub isolated: bool,
/// The name of the file when passing it through stdin.
#[arg(long, help_heading = "Miscellaneous")]
@@ -489,21 +440,6 @@ pub struct FormatCommand {
preview: bool,
#[clap(long, overrides_with("preview"), hide = true)]
no_preview: bool,
/// When specified, Ruff will try to only format the code in the given range.
/// It might be necessary to extend the start backwards or the end forwards, to fully enclose a logical line.
/// The `<RANGE>` uses the format `<start_line>:<start_column>-<end_line>:<end_column>`.
///
/// - The line and column numbers are 1 based.
/// - The column specifies the nth-unicode codepoint on that line.
/// - The end offset is exclusive.
/// - The column numbers are optional. You can write `--range=1-2` instead of `--range=1:1-2:1`.
/// - The end position is optional. You can write `--range=2` to format the entire document starting from the second line.
/// - The start position is optional. You can write `--range=-3` to format the first three lines of the document.
///
/// The option can only be used when formatting a single file. Range formatting of notebooks is unsupported.
#[clap(long, help_heading = "Editor options", verbatim_doc_comment)]
pub range: Option<FormatRange>,
}
#[derive(Debug, Clone, Copy, clap::ValueEnum)]
@@ -558,181 +494,100 @@ impl From<&LogLevelArgs> for LogLevel {
}
}
/// Configuration-related arguments passed via the CLI.
#[derive(Default)]
pub struct ConfigArguments {
/// Path to a pyproject.toml or ruff.toml configuration file (etc.).
/// Either 0 or 1 configuration file paths may be provided on the command line.
config_file: Option<PathBuf>,
/// Overrides provided via the `--config "KEY=VALUE"` option.
/// An arbitrary number of these overrides may be provided on the command line.
/// These overrides take precedence over all configuration files,
/// even configuration files that were also specified using `--config`.
overrides: Configuration,
/// Overrides provided via dedicated flags such as `--line-length` etc.
/// These overrides take precedence over all configuration files,
/// and also over all overrides specified using any `--config "KEY=VALUE"` flags.
per_flag_overrides: ExplicitConfigOverrides,
}
impl ConfigArguments {
pub fn config_file(&self) -> Option<&Path> {
self.config_file.as_deref()
}
fn from_cli_arguments(
config_options: Vec<SingleConfigArgument>,
per_flag_overrides: ExplicitConfigOverrides,
isolated: bool,
) -> anyhow::Result<Self> {
let mut new = Self {
per_flag_overrides,
..Self::default()
};
for option in config_options {
match option {
SingleConfigArgument::SettingsOverride(overridden_option) => {
let overridden_option = Arc::try_unwrap(overridden_option)
.unwrap_or_else(|option| option.deref().clone());
new.overrides = new.overrides.combine(Configuration::from_options(
overridden_option,
None,
&path_dedot::CWD,
)?);
}
SingleConfigArgument::FilePath(path) => {
if isolated {
bail!(
"\
The argument `--config={}` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
",
path.display()
);
}
if let Some(ref config_file) = new.config_file {
let (first, second) = (config_file.display(), path.display());
bail!(
"\
You cannot specify more than one configuration file on the command line.
tip: remove either `--config={first}` or `--config={second}`.
For more information, try `--help`.
"
);
}
new.config_file = Some(path);
}
}
}
Ok(new)
}
}
impl ConfigurationTransformer for ConfigArguments {
fn transform(&self, config: Configuration) -> Configuration {
let with_config_overrides = self.overrides.clone().combine(config);
self.per_flag_overrides.transform(with_config_overrides)
}
}
impl CheckCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> anyhow::Result<(CheckArguments, ConfigArguments)> {
let check_arguments = CheckArguments {
add_noqa: self.add_noqa,
diff: self.diff,
ecosystem_ci: self.ecosystem_ci,
exit_non_zero_on_fix: self.exit_non_zero_on_fix,
exit_zero: self.exit_zero,
files: self.files,
ignore_noqa: self.ignore_noqa,
isolated: self.isolated,
no_cache: self.no_cache,
output_file: self.output_file,
show_files: self.show_files,
show_settings: self.show_settings,
statistics: self.statistics,
stdin_filename: self.stdin_filename,
watch: self.watch,
};
let cli_overrides = ExplicitConfigOverrides {
dummy_variable_rgx: self.dummy_variable_rgx,
exclude: self.exclude,
extend_exclude: self.extend_exclude,
extend_fixable: self.extend_fixable,
extend_ignore: self.extend_ignore,
extend_per_file_ignores: self.extend_per_file_ignores,
extend_select: self.extend_select,
extend_unfixable: self.extend_unfixable,
fixable: self.fixable,
ignore: self.ignore,
line_length: self.line_length,
per_file_ignores: self.per_file_ignores,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
respect_gitignore: resolve_bool_arg(self.respect_gitignore, self.no_respect_gitignore),
select: self.select,
target_version: self.target_version,
unfixable: self.unfixable,
// TODO(charlie): Included in `pyproject.toml`, but not inherited.
cache_dir: self.cache_dir,
fix: resolve_bool_arg(self.fix, self.no_fix),
fix_only: resolve_bool_arg(self.fix_only, self.no_fix_only),
unsafe_fixes: resolve_bool_arg(self.unsafe_fixes, self.no_unsafe_fixes)
.map(UnsafeFixes::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
output_format: resolve_output_format(
self.output_format,
resolve_bool_arg(self.show_source, self.no_show_source),
resolve_bool_arg(self.preview, self.no_preview).unwrap_or_default(),
),
show_fixes: resolve_bool_arg(self.show_fixes, self.no_show_fixes),
extension: self.extension,
};
let config_args =
ConfigArguments::from_cli_arguments(self.config, cli_overrides, self.isolated)?;
Ok((check_arguments, config_args))
pub fn partition(self) -> (CheckArguments, CliOverrides) {
(
CheckArguments {
add_noqa: self.add_noqa,
config: self.config,
diff: self.diff,
ecosystem_ci: self.ecosystem_ci,
exit_non_zero_on_fix: self.exit_non_zero_on_fix,
exit_zero: self.exit_zero,
files: self.files,
ignore_noqa: self.ignore_noqa,
isolated: self.isolated,
no_cache: self.no_cache,
output_file: self.output_file,
show_files: self.show_files,
show_settings: self.show_settings,
statistics: self.statistics,
stdin_filename: self.stdin_filename,
watch: self.watch,
},
CliOverrides {
dummy_variable_rgx: self.dummy_variable_rgx,
exclude: self.exclude,
extend_exclude: self.extend_exclude,
extend_fixable: self.extend_fixable,
extend_ignore: self.extend_ignore,
extend_per_file_ignores: self.extend_per_file_ignores,
extend_select: self.extend_select,
extend_unfixable: self.extend_unfixable,
fixable: self.fixable,
ignore: self.ignore,
line_length: self.line_length,
per_file_ignores: self.per_file_ignores,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
self.no_respect_gitignore,
),
select: self.select,
target_version: self.target_version,
unfixable: self.unfixable,
// TODO(charlie): Included in `pyproject.toml`, but not inherited.
cache_dir: self.cache_dir,
fix: resolve_bool_arg(self.fix, self.no_fix),
fix_only: resolve_bool_arg(self.fix_only, self.no_fix_only),
unsafe_fixes: resolve_bool_arg(self.unsafe_fixes, self.no_unsafe_fixes)
.map(UnsafeFixes::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
output_format: resolve_output_format(
self.output_format,
resolve_bool_arg(self.show_source, self.no_show_source),
resolve_bool_arg(self.preview, self.no_preview).unwrap_or_default(),
),
show_fixes: resolve_bool_arg(self.show_fixes, self.no_show_fixes),
extension: self.extension,
},
)
}
}
impl FormatCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> anyhow::Result<(FormatArguments, ConfigArguments)> {
let format_arguments = FormatArguments {
check: self.check,
diff: self.diff,
files: self.files,
isolated: self.isolated,
no_cache: self.no_cache,
stdin_filename: self.stdin_filename,
range: self.range,
};
pub fn partition(self) -> (FormatArguments, CliOverrides) {
(
FormatArguments {
check: self.check,
diff: self.diff,
config: self.config,
files: self.files,
isolated: self.isolated,
no_cache: self.no_cache,
stdin_filename: self.stdin_filename,
},
CliOverrides {
line_length: self.line_length,
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
self.no_respect_gitignore,
),
exclude: self.exclude,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
target_version: self.target_version,
cache_dir: self.cache_dir,
extension: self.extension,
let cli_overrides = ExplicitConfigOverrides {
line_length: self.line_length,
respect_gitignore: resolve_bool_arg(self.respect_gitignore, self.no_respect_gitignore),
exclude: self.exclude,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
target_version: self.target_version,
cache_dir: self.cache_dir,
extension: self.extension,
// Unsupported on the formatter CLI, but required on `Overrides`.
..ExplicitConfigOverrides::default()
};
let config_args =
ConfigArguments::from_cli_arguments(self.config, cli_overrides, self.isolated)?;
Ok((format_arguments, config_args))
// Unsupported on the formatter CLI, but required on `Overrides`.
..CliOverrides::default()
},
)
}
}
@@ -745,154 +600,6 @@ fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
}
}
#[derive(Debug)]
enum TomlParseFailureKind {
SyntaxError,
UnknownOption,
}
impl std::fmt::Display for TomlParseFailureKind {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let display = match self {
Self::SyntaxError => "The supplied argument is not valid TOML",
Self::UnknownOption => {
"Could not parse the supplied argument as a `ruff.toml` configuration option"
}
};
write!(f, "{display}")
}
}
#[derive(Debug)]
struct TomlParseFailure {
kind: TomlParseFailureKind,
underlying_error: toml::de::Error,
}
impl std::fmt::Display for TomlParseFailure {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let TomlParseFailure {
kind,
underlying_error,
} = self;
let display = format!("{kind}:\n\n{underlying_error}");
write!(f, "{}", display.trim_end())
}
}
/// Enumeration to represent a single `--config` argument
/// passed via the CLI.
///
/// Using the `--config` flag, users may pass 0 or 1 paths
/// to configuration files and an arbitrary number of
/// "inline TOML" overrides for specific settings.
///
/// For example:
///
/// ```sh
/// ruff check --config "path/to/ruff.toml" --config "extend-select=['E501', 'F841']" --config "lint.per-file-ignores = {'some_file.py' = ['F841']}"
/// ```
#[derive(Clone, Debug)]
pub enum SingleConfigArgument {
FilePath(PathBuf),
SettingsOverride(Arc<Options>),
}
#[derive(Clone)]
pub struct ConfigArgumentParser;
impl ValueParserFactory for SingleConfigArgument {
type Parser = ConfigArgumentParser;
fn value_parser() -> Self::Parser {
ConfigArgumentParser
}
}
impl TypedValueParser for ConfigArgumentParser {
type Value = SingleConfigArgument;
fn parse_ref(
&self,
cmd: &clap::Command,
arg: Option<&clap::Arg>,
value: &std::ffi::OsStr,
) -> Result<Self::Value, clap::Error> {
let path_to_config_file = PathBuf::from(value);
if path_to_config_file.exists() {
return Ok(SingleConfigArgument::FilePath(path_to_config_file));
}
let value = value
.to_str()
.ok_or_else(|| clap::Error::new(clap::error::ErrorKind::InvalidUtf8))?;
let toml_parse_error = match toml::Table::from_str(value) {
Ok(table) => match table.try_into() {
Ok(option) => return Ok(SingleConfigArgument::SettingsOverride(Arc::new(option))),
Err(underlying_error) => TomlParseFailure {
kind: TomlParseFailureKind::UnknownOption,
underlying_error,
},
},
Err(underlying_error) => TomlParseFailure {
kind: TomlParseFailureKind::SyntaxError,
underlying_error,
},
};
let mut new_error = clap::Error::new(clap::error::ErrorKind::ValueValidation).with_cmd(cmd);
if let Some(arg) = arg {
new_error.insert(
clap::error::ContextKind::InvalidArg,
clap::error::ContextValue::String(arg.to_string()),
);
}
new_error.insert(
clap::error::ContextKind::InvalidValue,
clap::error::ContextValue::String(value.to_string()),
);
// small hack so that multiline tips
// have the same indent on the left-hand side:
let tip_indent = " ".repeat(" tip: ".len());
let mut tip = format!(
"\
A `--config` flag must either be a path to a `.toml` configuration file
{tip_indent}or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
{tip_indent}option"
);
// Here we do some heuristics to try to figure out whether
// the user was trying to pass in a path to a configuration file
// or some inline TOML.
// We want to display the most helpful error to the user as possible.
if std::path::Path::new(value)
.extension()
.map_or(false, |ext| ext.eq_ignore_ascii_case("toml"))
{
if !value.contains('=') {
tip.push_str(&format!(
"
It looks like you were trying to pass a path to a configuration file.
The path `{value}` does not exist"
));
}
} else if value.contains('=') {
tip.push_str(&format!("\n\n{toml_parse_error}"));
}
new_error.insert(
clap::error::ContextKind::Suggested,
clap::error::ContextValue::StyledStrs(vec![tip.into()]),
);
Err(new_error)
}
}
fn resolve_output_format(
output_format: Option<SerializationFormat>,
show_sources: Option<bool>,
@@ -935,6 +642,7 @@ fn resolve_output_format(
#[allow(clippy::struct_excessive_bools)]
pub struct CheckArguments {
pub add_noqa: bool,
pub config: Option<PathBuf>,
pub diff: bool,
pub ecosystem_ci: bool,
pub exit_non_zero_on_fix: bool,
@@ -958,235 +666,45 @@ pub struct FormatArguments {
pub check: bool,
pub no_cache: bool,
pub diff: bool,
pub config: Option<PathBuf>,
pub files: Vec<PathBuf>,
pub isolated: bool,
pub stdin_filename: Option<PathBuf>,
pub range: Option<FormatRange>,
}
/// A text range specified by line and column numbers.
#[derive(Copy, Clone, Debug)]
pub struct FormatRange {
start: LineColumn,
end: LineColumn,
}
impl FormatRange {
/// Converts the line:column range to a byte offset range specific for `source`.
///
/// Returns an empty range if the start range is past the end of `source`.
pub(super) fn to_text_range(self, source: &str, line_index: &LineIndex) -> TextRange {
let start_byte_offset = line_index.offset(self.start.line, self.start.column, source);
let end_byte_offset = line_index.offset(self.end.line, self.end.column, source);
TextRange::new(start_byte_offset, end_byte_offset)
}
}
impl FromStr for FormatRange {
type Err = FormatRangeParseError;
fn from_str(value: &str) -> Result<Self, Self::Err> {
let (start, end) = value.split_once('-').unwrap_or((value, ""));
let start = if start.is_empty() {
LineColumn::default()
} else {
start.parse().map_err(FormatRangeParseError::InvalidStart)?
};
let end = if end.is_empty() {
LineColumn {
line: OneIndexed::MAX,
column: OneIndexed::MAX,
}
} else {
end.parse().map_err(FormatRangeParseError::InvalidEnd)?
};
if start > end {
return Err(FormatRangeParseError::StartGreaterThanEnd(start, end));
}
Ok(FormatRange { start, end })
}
}
#[derive(Clone, Debug)]
pub enum FormatRangeParseError {
InvalidStart(LineColumnParseError),
InvalidEnd(LineColumnParseError),
StartGreaterThanEnd(LineColumn, LineColumn),
}
impl std::fmt::Display for FormatRangeParseError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let tip = " tip:".bold().green();
match self {
FormatRangeParseError::StartGreaterThanEnd(start, end) => {
write!(
f,
"the start position '{start_invalid}' is greater than the end position '{end_invalid}'.\n {tip} Try switching start and end: '{end}-{start}'",
start_invalid=start.to_string().bold().yellow(),
end_invalid=end.to_string().bold().yellow(),
start=start.to_string().green().bold(),
end=end.to_string().green().bold()
)
}
FormatRangeParseError::InvalidStart(inner) => inner.write(f, true),
FormatRangeParseError::InvalidEnd(inner) => inner.write(f, false),
}
}
}
impl std::error::Error for FormatRangeParseError {}
#[derive(Copy, Clone, Debug)]
pub struct LineColumn {
pub line: OneIndexed,
pub column: OneIndexed,
}
impl std::fmt::Display for LineColumn {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "{line}:{column}", line = self.line, column = self.column)
}
}
impl Default for LineColumn {
fn default() -> Self {
LineColumn {
line: OneIndexed::MIN,
column: OneIndexed::MIN,
}
}
}
impl PartialOrd for LineColumn {
#[inline]
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for LineColumn {
fn cmp(&self, other: &Self) -> Ordering {
self.line
.cmp(&other.line)
.then(self.column.cmp(&other.column))
}
}
impl PartialEq for LineColumn {
fn eq(&self, other: &Self) -> bool {
self.cmp(other) == Ordering::Equal
}
}
impl Eq for LineColumn {}
impl FromStr for LineColumn {
type Err = LineColumnParseError;
fn from_str(value: &str) -> Result<Self, Self::Err> {
let (line, column) = value.split_once(':').unwrap_or((value, "1"));
let line: usize = line.parse().map_err(LineColumnParseError::LineParseError)?;
let column: usize = column
.parse()
.map_err(LineColumnParseError::ColumnParseError)?;
match (OneIndexed::new(line), OneIndexed::new(column)) {
(Some(line), Some(column)) => Ok(LineColumn { line, column }),
(Some(line), None) => Err(LineColumnParseError::ZeroColumnIndex { line }),
(None, Some(column)) => Err(LineColumnParseError::ZeroLineIndex { column }),
(None, None) => Err(LineColumnParseError::ZeroLineAndColumnIndex),
}
}
}
#[derive(Clone, Debug)]
pub enum LineColumnParseError {
ZeroLineIndex { column: OneIndexed },
ZeroColumnIndex { line: OneIndexed },
ZeroLineAndColumnIndex,
LineParseError(std::num::ParseIntError),
ColumnParseError(std::num::ParseIntError),
}
impl LineColumnParseError {
fn write(&self, f: &mut std::fmt::Formatter, start_range: bool) -> std::fmt::Result {
let tip = "tip:".bold().green();
let range = if start_range { "start" } else { "end" };
match self {
LineColumnParseError::ColumnParseError(inner) => {
write!(f, "the {range}s column is not a valid number ({inner})'\n {tip} The format is 'line:column'.")
}
LineColumnParseError::LineParseError(inner) => {
write!(f, "the {range} line is not a valid number ({inner})\n {tip} The format is 'line:column'.")
}
LineColumnParseError::ZeroColumnIndex { line } => {
write!(
f,
"the {range} column is 0, but it should be 1 or greater.\n {tip} The column numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion=format!("{line}:1").green().bold()
)
}
LineColumnParseError::ZeroLineIndex { column } => {
write!(
f,
"the {range} line is 0, but it should be 1 or greater.\n {tip} The line numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion=format!("1:{column}").green().bold()
)
}
LineColumnParseError::ZeroLineAndColumnIndex => {
write!(
f,
"the {range} line and column are both 0, but they should be 1 or greater.\n {tip} The line and column numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion="1:1".to_string().green().bold()
)
}
}
}
}
/// Configuration overrides provided via dedicated CLI flags:
/// `--line-length`, `--respect-gitignore`, etc.
/// CLI settings that function as configuration overrides.
#[derive(Clone, Default)]
#[allow(clippy::struct_excessive_bools)]
struct ExplicitConfigOverrides {
dummy_variable_rgx: Option<Regex>,
exclude: Option<Vec<FilePattern>>,
extend_exclude: Option<Vec<FilePattern>>,
extend_fixable: Option<Vec<RuleSelector>>,
extend_ignore: Option<Vec<RuleSelector>>,
extend_select: Option<Vec<RuleSelector>>,
extend_unfixable: Option<Vec<RuleSelector>>,
fixable: Option<Vec<RuleSelector>>,
ignore: Option<Vec<RuleSelector>>,
line_length: Option<LineLength>,
per_file_ignores: Option<Vec<PatternPrefixPair>>,
extend_per_file_ignores: Option<Vec<PatternPrefixPair>>,
preview: Option<PreviewMode>,
respect_gitignore: Option<bool>,
select: Option<Vec<RuleSelector>>,
target_version: Option<PythonVersion>,
unfixable: Option<Vec<RuleSelector>>,
pub struct CliOverrides {
pub dummy_variable_rgx: Option<Regex>,
pub exclude: Option<Vec<FilePattern>>,
pub extend_exclude: Option<Vec<FilePattern>>,
pub extend_fixable: Option<Vec<RuleSelector>>,
pub extend_ignore: Option<Vec<RuleSelector>>,
pub extend_select: Option<Vec<RuleSelector>>,
pub extend_unfixable: Option<Vec<RuleSelector>>,
pub fixable: Option<Vec<RuleSelector>>,
pub ignore: Option<Vec<RuleSelector>>,
pub line_length: Option<LineLength>,
pub per_file_ignores: Option<Vec<PatternPrefixPair>>,
pub extend_per_file_ignores: Option<Vec<PatternPrefixPair>>,
pub preview: Option<PreviewMode>,
pub respect_gitignore: Option<bool>,
pub select: Option<Vec<RuleSelector>>,
pub target_version: Option<PythonVersion>,
pub unfixable: Option<Vec<RuleSelector>>,
// TODO(charlie): Captured in pyproject.toml as a default, but not part of `Settings`.
cache_dir: Option<PathBuf>,
fix: Option<bool>,
fix_only: Option<bool>,
unsafe_fixes: Option<UnsafeFixes>,
force_exclude: Option<bool>,
output_format: Option<SerializationFormat>,
show_fixes: Option<bool>,
extension: Option<Vec<ExtensionPair>>,
pub cache_dir: Option<PathBuf>,
pub fix: Option<bool>,
pub fix_only: Option<bool>,
pub unsafe_fixes: Option<UnsafeFixes>,
pub force_exclude: Option<bool>,
pub output_format: Option<SerializationFormat>,
pub show_fixes: Option<bool>,
pub extension: Option<Vec<ExtensionPair>>,
}
impl ConfigurationTransformer for ExplicitConfigOverrides {
impl ConfigurationTransformer for CliOverrides {
fn transform(&self, mut config: Configuration) -> Configuration {
if let Some(cache_dir) = &self.cache_dir {
config.cache_dir = Some(cache_dir.clone());
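// Illustration (not part of the diff): the `transform` impl above -- truncated in this
// view -- layers CLI flags over file-based configuration. Every field is an `Option`,
// and only flags that were actually supplied overwrite the loaded settings. A minimal
// sketch of that pattern with simplified, illustrative types:
struct SketchConfig {
    line_length: usize,
    respect_gitignore: bool,
}
struct SketchOverrides {
    line_length: Option<usize>,
    respect_gitignore: Option<bool>,
}
impl SketchOverrides {
    fn transform(&self, mut config: SketchConfig) -> SketchConfig {
        // Only the flags that were passed on the command line replace file-based values.
        if let Some(line_length) = self.line_length {
            config.line_length = line_length;
        }
        if let Some(respect_gitignore) = self.respect_gitignore {
            config.respect_gitignore = respect_gitignore;
        }
        config
    }
}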

View File

@@ -1,7 +1,7 @@
use std::fmt::Debug;
use std::fs::{self, File};
use std::hash::Hasher;
use std::io::{self, BufReader, Write};
use std::io::{self, BufReader, BufWriter, Write};
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;
@@ -15,7 +15,6 @@ use rayon::iter::ParallelIterator;
use rayon::iter::{IntoParallelIterator, ParallelBridge};
use rustc_hash::FxHashMap;
use serde::{Deserialize, Serialize};
use tempfile::NamedTempFile;
use ruff_cache::{CacheKey, CacheKeyHasher};
use ruff_diagnostics::{DiagnosticKind, Fix};
@@ -166,29 +165,15 @@ impl Cache {
return Ok(());
}
// Write the cache to a temporary file first and then rename it for an "atomic" write.
// Protects against data loss if the process is killed during the write and races between different ruff
// processes, resulting in a corrupted cache file. https://github.com/astral-sh/ruff/issues/8147#issuecomment-1943345964
let mut temp_file =
NamedTempFile::new_in(self.path.parent().expect("Write path must have a parent"))
.context("Failed to create temporary file")?;
// Serialize to in-memory buffer because hyperfine benchmark showed that it's faster than
// using a `BufWriter` and our cache files are small enough that streaming isn't necessary.
let serialized =
bincode::serialize(&self.package).context("Failed to serialize cache data")?;
temp_file
.write_all(&serialized)
.context("Failed to write serialized cache to temporary file.")?;
temp_file.persist(&self.path).with_context(|| {
let file = File::create(&self.path)
.with_context(|| format!("Failed to create cache file '{}'", self.path.display()))?;
let writer = BufWriter::new(file);
bincode::serialize_into(writer, &self.package).with_context(|| {
format!(
"Failed to rename temporary cache file to {}",
"Failed to serialise cache to file '{}'",
self.path.display()
)
})?;
Ok(())
})
}
/// Applies the pending changes without storing the cache to disk.
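// Illustration (not part of the diff): the cache-write hunk above describes a
// temp-file-then-rename scheme so a killed process or a racing ruff process never
// leaves a corrupted cache behind. A minimal sketch of that pattern, assuming the
// `tempfile`, `bincode`, `serde`, and `anyhow` crates; the function name is illustrative.
use std::io::Write;
use std::path::Path;
use anyhow::Context;
fn write_atomically<T: serde::Serialize>(data: &T, path: &Path) -> anyhow::Result<()> {
    // Create the temporary file next to the target so the rename stays on one filesystem.
    let mut temp = tempfile::NamedTempFile::new_in(
        path.parent().context("write path must have a parent")?,
    )?;
    // Serialize into memory first; cache payloads are small enough that streaming isn't needed.
    let bytes = bincode::serialize(data).context("failed to serialize payload")?;
    temp.write_all(&bytes)?;
    // `persist` renames the temporary file over the target path in one step.
    temp.persist(path).context("failed to persist temporary file")?;
    Ok(())
}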
@@ -1065,7 +1050,6 @@ mod tests {
&self.settings.formatter,
PySourceType::Python,
FormatMode::Write,
None,
Some(cache),
)
}

View File

@@ -12,17 +12,17 @@ use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Add `noqa` directives to a collection of files.
pub(crate) fn add_noqa(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
) -> Result<usize> {
// Collect all the files to check.
let start = Instant::now();
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
let duration = start.elapsed();
debug!("Identified files to lint in: {:?}", duration);

View File

@@ -24,7 +24,7 @@ use ruff_workspace::resolver::{
match_exclusion, python_files_in_path, PyprojectConfig, ResolvedFile,
};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
use crate::cache::{Cache, PackageCacheMap, PackageCaches};
use crate::diagnostics::Diagnostics;
use crate::panic::catch_unwind;
@@ -34,7 +34,7 @@ use crate::panic::catch_unwind;
pub(crate) fn check(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
cache: flags::Cache,
noqa: flags::Noqa,
fix_mode: flags::FixMode,
@@ -42,7 +42,7 @@ pub(crate) fn check(
) -> Result<Diagnostics> {
// Collect all the Python files to check.
let start = Instant::now();
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
debug!("Identified files to lint in: {:?}", start.elapsed());
if paths.is_empty() {
@@ -233,7 +233,7 @@ mod test {
use ruff_workspace::resolver::{PyprojectConfig, PyprojectDiscoveryStrategy};
use ruff_workspace::Settings;
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
use super::check;
@@ -272,7 +272,7 @@ mod test {
// Notebooks are not included by default
&[tempdir.path().to_path_buf(), notebook],
&pyproject_config,
&ConfigArguments::default(),
&CliOverrides::default(),
flags::Cache::Disabled,
flags::Noqa::Disabled,
flags::FixMode::Generate,

View File

@@ -6,7 +6,7 @@ use ruff_linter::packaging;
use ruff_linter::settings::flags;
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, PyprojectConfig, Resolver};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
use crate::diagnostics::{lint_stdin, Diagnostics};
use crate::stdin::{parrot_stdin, read_from_stdin};
@@ -14,7 +14,7 @@ use crate::stdin::{parrot_stdin, read_from_stdin};
pub(crate) fn check_stdin(
filename: Option<&Path>,
pyproject_config: &PyprojectConfig,
overrides: &ConfigArguments,
overrides: &CliOverrides,
noqa: flags::Noqa,
fix_mode: flags::FixMode,
) -> Result<Diagnostics> {

View File

@@ -23,13 +23,12 @@ use ruff_linter::rules::flake8_quotes::settings::Quote;
use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_python_formatter::{format_module_source, format_range, FormatModuleError, QuoteStyle};
use ruff_source_file::LineIndex;
use ruff_python_formatter::{format_module_source, FormatModuleError, QuoteStyle};
use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_workspace::resolver::{match_exclusion, python_files_in_path, ResolvedFile, Resolver};
use ruff_workspace::FormatterSettings;
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::args::{CliOverrides, FormatArguments};
use crate::cache::{Cache, FileCacheKey, PackageCacheMap, PackageCaches};
use crate::panic::{catch_unwind, PanicError};
use crate::resolve::resolve;
@@ -60,30 +59,24 @@ impl FormatMode {
/// Format a set of files, and return the exit status.
pub(crate) fn format(
cli: FormatArguments,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
log_level: LogLevel,
) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
config_arguments,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
let mode = FormatMode::from_cli(&cli);
let files = resolve_default_files(cli.files, false);
let (paths, resolver) = python_files_in_path(&files, &pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(&files, &pyproject_config, overrides)?;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");
return Ok(ExitStatus::Success);
}
if cli.range.is_some() && paths.len() > 1 {
return Err(anyhow::anyhow!(
"The `--range` option is only supported when formatting a single file but the specified paths resolve to {} files.",
paths.len()
));
}
warn_incompatible_formatter_settings(&resolver);
// Discover the package root for each Python file.
@@ -146,14 +139,7 @@ pub(crate) fn format(
Some(
match catch_unwind(|| {
format_path(
path,
&settings.formatter,
source_type,
mode,
cli.range,
cache,
)
format_path(path, &settings.formatter, source_type, mode, cache)
}) {
Ok(inner) => inner.map(|result| FormatPathResult {
path: resolved_file.path().to_path_buf(),
@@ -240,7 +226,6 @@ pub(crate) fn format_path(
settings: &FormatterSettings,
source_type: PySourceType,
mode: FormatMode,
range: Option<FormatRange>,
cache: Option<&Cache>,
) -> Result<FormatResult, FormatCommandError> {
if let Some(cache) = cache {
@@ -265,12 +250,8 @@ pub(crate) fn format_path(
}
};
// Don't write back to the cache if formatting a range.
let cache = cache.filter(|_| range.is_none());
// Format the source.
let format_result = match format_source(&unformatted, source_type, Some(path), settings, range)?
{
let format_result = match format_source(&unformatted, source_type, Some(path), settings)? {
FormattedSource::Formatted(formatted) => match mode {
FormatMode::Write => {
let mut writer = File::create(path).map_err(|err| {
@@ -338,31 +319,12 @@ pub(crate) fn format_source(
source_type: PySourceType,
path: Option<&Path>,
settings: &FormatterSettings,
range: Option<FormatRange>,
) -> Result<FormattedSource, FormatCommandError> {
match &source_kind {
SourceKind::Python(unformatted) => {
let options = settings.to_format_options(source_type, unformatted);
let formatted = if let Some(range) = range {
let line_index = LineIndex::from_source_text(unformatted);
let byte_range = range.to_text_range(unformatted, &line_index);
format_range(unformatted, byte_range, options).map(|formatted_range| {
let mut formatted = unformatted.to_string();
formatted.replace_range(
std::ops::Range::<usize>::from(formatted_range.source_range()),
formatted_range.as_code(),
);
formatted
})
} else {
// Using `Printed::into_code` requires adding `ruff_formatter` as a direct dependency, and I suspect that Rust can optimize the closure away regardless.
#[allow(clippy::redundant_closure_for_method_calls)]
format_module_source(unformatted, options).map(|formatted| formatted.into_code())
};
let formatted = formatted.map_err(|err| {
let formatted = format_module_source(unformatted, options).map_err(|err| {
if let FormatModuleError::ParseError(err) = err {
DisplayParseError::from_source_kind(
err,
@@ -375,6 +337,7 @@ pub(crate) fn format_source(
}
})?;
let formatted = formatted.into_code();
if formatted.len() == unformatted.len() && formatted == *unformatted {
Ok(FormattedSource::Unchanged)
} else {
@@ -386,12 +349,6 @@ pub(crate) fn format_source(
return Ok(FormattedSource::Unchanged);
}
if range.is_some() {
return Err(FormatCommandError::RangeFormatNotebook(
path.map(Path::to_path_buf),
));
}
let options = settings.to_format_options(source_type, notebook.source_code());
let mut output: Option<String> = None;
@@ -632,7 +589,6 @@ pub(crate) enum FormatCommandError {
Format(Option<PathBuf>, FormatModuleError),
Write(Option<PathBuf>, SourceError),
Diff(Option<PathBuf>, io::Error),
RangeFormatNotebook(Option<PathBuf>),
}
impl FormatCommandError {
@@ -650,8 +606,7 @@ impl FormatCommandError {
| Self::Read(path, _)
| Self::Format(path, _)
| Self::Write(path, _)
| Self::Diff(path, _)
| Self::RangeFormatNotebook(path) => path.as_deref(),
| Self::Diff(path, _) => path.as_deref(),
}
}
}
@@ -673,10 +628,9 @@ impl Display for FormatCommandError {
} else {
write!(
f,
"{header} {error}",
header = "Encountered error:".bold(),
error = err
.io_error()
"{} {}",
"Encountered error:".bold(),
err.io_error()
.map_or_else(|| err.to_string(), std::string::ToString::to_string)
)
}
@@ -694,7 +648,7 @@ impl Display for FormatCommandError {
":".bold()
)
} else {
write!(f, "{header} {err}", header = "Failed to read:".bold())
write!(f, "{}{} {err}", "Failed to read".bold(), ":".bold())
}
}
Self::Write(path, err) => {
@@ -707,7 +661,7 @@ impl Display for FormatCommandError {
":".bold()
)
} else {
write!(f, "{header} {err}", header = "Failed to write:".bold())
write!(f, "{}{} {err}", "Failed to write".bold(), ":".bold())
}
}
Self::Format(path, err) => {
@@ -720,7 +674,7 @@ impl Display for FormatCommandError {
":".bold()
)
} else {
write!(f, "{header} {err}", header = "Failed to format:".bold())
write!(f, "{}{} {err}", "Failed to format".bold(), ":".bold())
}
}
Self::Diff(path, err) => {
@@ -735,25 +689,9 @@ impl Display for FormatCommandError {
} else {
write!(
f,
"{header} {err}",
header = "Failed to generate diff:".bold(),
)
}
}
Self::RangeFormatNotebook(path) => {
if let Some(path) = path {
write!(
f,
"{header}{path}{colon} Range formatting isn't supported for notebooks.",
header = "Failed to format ".bold(),
path = fs::relativize_path(path).bold(),
colon = ":".bold()
)
} else {
write!(
f,
"{header} Range formatting isn't supported for notebooks",
header = "Failed to format:".bold()
"{}{} {err}",
"Failed to generate diff".bold(),
":".bold()
)
}
}

View File

@@ -9,7 +9,7 @@ use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, Resolver};
use ruff_workspace::FormatterSettings;
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::args::{CliOverrides, FormatArguments};
use crate::commands::format::{
format_source, warn_incompatible_formatter_settings, FormatCommandError, FormatMode,
FormatResult, FormattedSource,
@@ -19,13 +19,11 @@ use crate::stdin::{parrot_stdin, read_from_stdin};
use crate::ExitStatus;
/// Run the formatter over a single file, read from `stdin`.
pub(crate) fn format_stdin(
cli: &FormatArguments,
config_arguments: &ConfigArguments,
) -> Result<ExitStatus> {
pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
config_arguments,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
@@ -36,7 +34,7 @@ pub(crate) fn format_stdin(
if resolver.force_exclude() {
if let Some(filename) = cli.stdin_filename.as_deref() {
if !python_file_at_path(filename, &mut resolver, config_arguments)? {
if !python_file_at_path(filename, &mut resolver, overrides)? {
if mode.is_write() {
parrot_stdin()?;
}
@@ -71,7 +69,7 @@ pub(crate) fn format_stdin(
};
// Format the file.
match format_source_code(path, cli.range, settings, source_type, mode) {
match format_source_code(path, settings, source_type, mode) {
Ok(result) => match mode {
FormatMode::Write => Ok(ExitStatus::Success),
FormatMode::Check | FormatMode::Diff => {
@@ -92,7 +90,6 @@ pub(crate) fn format_stdin(
/// Format source code read from `stdin`.
fn format_source_code(
path: Option<&Path>,
range: Option<FormatRange>,
settings: &FormatterSettings,
source_type: PySourceType,
mode: FormatMode,
@@ -110,7 +107,7 @@ fn format_source_code(
};
// Format the source.
let formatted = format_source(&source_kind, source_type, path, settings, range)?;
let formatted = format_source(&source_kind, source_type, path, settings)?;
match &formatted {
FormattedSource::Formatted(formatted) => match mode {

View File

@@ -7,17 +7,17 @@ use itertools::Itertools;
use ruff_linter::warn_user_once;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Show the list of files to be checked based on current settings.
pub(crate) fn show_files(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
writer: &mut impl Write,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, _resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, _resolver) = python_files_in_path(files, pyproject_config, overrides)?;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");

View File

@@ -6,17 +6,17 @@ use itertools::Itertools;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Print the user-facing configuration settings.
pub(crate) fn show_settings(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
writer: &mut impl Write,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
// Print the list of files.
let Some(path) = paths
@@ -31,9 +31,9 @@ pub(crate) fn show_settings(
let settings = resolver.resolve(&path);
writeln!(writer, "Resolved settings for: \"{}\"", path.display())?;
writeln!(writer, "Resolved settings for: {path:?}")?;
if let Some(settings_path) = pyproject_config.path.as_ref() {
writeln!(writer, "Settings path: \"{}\"", settings_path.display())?;
writeln!(writer, "Settings path: {settings_path:?}")?;
}
write!(writer, "{settings}")?;

View File

@@ -204,23 +204,24 @@ pub fn run(
}
fn format(args: FormatCommand, log_level: LogLevel) -> Result<ExitStatus> {
let (cli, config_arguments) = args.partition()?;
let (cli, overrides) = args.partition();
if is_stdin(&cli.files, cli.stdin_filename.as_deref()) {
commands::format_stdin::format_stdin(&cli, &config_arguments)
commands::format_stdin::format_stdin(&cli, &overrides)
} else {
commands::format::format(cli, &config_arguments, log_level)
commands::format::format(cli, &overrides, log_level)
}
}
pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let (cli, config_arguments) = args.partition()?;
let (cli, overrides) = args.partition();
// Construct the "default" settings. These are used when no `pyproject.toml`
// files are present, or files are injected from outside of the hierarchy.
let pyproject_config = resolve::resolve(
cli.isolated,
&config_arguments,
cli.config.as_deref(),
&overrides,
cli.stdin_filename.as_deref(),
)?;
@@ -238,21 +239,11 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let files = resolve_default_files(cli.files, is_stdin);
if cli.show_settings {
commands::show_settings::show_settings(
&files,
&pyproject_config,
&config_arguments,
&mut writer,
)?;
commands::show_settings::show_settings(&files, &pyproject_config, &overrides, &mut writer)?;
return Ok(ExitStatus::Success);
}
if cli.show_files {
commands::show_files::show_files(
&files,
&pyproject_config,
&config_arguments,
&mut writer,
)?;
commands::show_files::show_files(&files, &pyproject_config, &overrides, &mut writer)?;
return Ok(ExitStatus::Success);
}
@@ -311,8 +302,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
if !fix_mode.is_generate() {
warn_user!("--fix is incompatible with --add-noqa.");
}
let modifications =
commands::add_noqa::add_noqa(&files, &pyproject_config, &config_arguments)?;
let modifications = commands::add_noqa::add_noqa(&files, &pyproject_config, &overrides)?;
if modifications > 0 && log_level >= LogLevel::Default {
let s = if modifications == 1 { "" } else { "s" };
#[allow(clippy::print_stderr)]
@@ -331,11 +321,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
printer_flags,
);
// the settings should already be combined with the CLI overrides at this point
// TODO(jane): let's make this `PreviewMode`
// TODO: this should reference the global preview mode once https://github.com/astral-sh/ruff/issues/8232
// is resolved.
let preview = pyproject_config.settings.linter.preview.is_enabled();
let preview = overrides.preview.unwrap_or_default().is_enabled();
if cli.watch {
if output_format != SerializationFormat::default(preview) {
@@ -362,7 +348,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::check::check(
&files,
&pyproject_config,
&config_arguments,
&overrides,
cache.into(),
noqa.into(),
fix_mode,
@@ -384,7 +370,8 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
if matches!(change_kind, ChangeKind::Configuration) {
pyproject_config = resolve::resolve(
cli.isolated,
&config_arguments,
cli.config.as_deref(),
&overrides,
cli.stdin_filename.as_deref(),
)?;
}
@@ -394,7 +381,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::check::check(
&files,
&pyproject_config,
&config_arguments,
&overrides,
cache.into(),
noqa.into(),
fix_mode,
@@ -411,7 +398,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
commands::check_stdin::check_stdin(
cli.stdin_filename.map(fs::normalize_path).as_deref(),
&pyproject_config,
&config_arguments,
&overrides,
noqa.into(),
fix_mode,
)?
@@ -419,7 +406,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
commands::check::check(
&files,
&pyproject_config,
&config_arguments,
&overrides,
cache.into(),
noqa.into(),
fix_mode,

View File

@@ -11,18 +11,19 @@ use ruff_workspace::resolver::{
Relativity,
};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Resolve the relevant settings strategy and defaults for the current
/// invocation.
pub fn resolve(
isolated: bool,
config_arguments: &ConfigArguments,
config: Option<&Path>,
overrides: &CliOverrides,
stdin_filename: Option<&Path>,
) -> Result<PyprojectConfig> {
// First priority: if we're running in isolated mode, use the default settings.
if isolated {
let config = config_arguments.transform(Configuration::default());
let config = overrides.transform(Configuration::default());
let settings = config.into_settings(&path_dedot::CWD)?;
debug!("Isolated mode, not reading any pyproject.toml");
return Ok(PyprojectConfig::new(
@@ -35,13 +36,12 @@ pub fn resolve(
// Second priority: the user specified a `pyproject.toml` file. Use that
// `pyproject.toml` for _all_ configuration, and resolve paths relative to the
// current working directory. (This matches ESLint's behavior.)
if let Some(pyproject) = config_arguments
.config_file()
if let Some(pyproject) = config
.map(|config| config.display().to_string())
.map(|config| shellexpand::full(&config).map(|config| PathBuf::from(config.as_ref())))
.transpose()?
{
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, config_arguments)?;
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
debug!(
"Using user-specified configuration file at: {}",
pyproject.display()
@@ -67,7 +67,7 @@ pub fn resolve(
"Using configuration file (via parent) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Parent, config_arguments)?;
let settings = resolve_root_settings(&pyproject, Relativity::Parent, overrides)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
@@ -84,7 +84,7 @@ pub fn resolve(
"Using configuration file (via cwd) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, config_arguments)?;
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
@@ -97,7 +97,7 @@ pub fn resolve(
// "closest" `pyproject.toml` file for every Python file later on, so these act
// as the "default" settings.)
debug!("Using Ruff default settings");
let config = config_arguments.transform(Configuration::default());
let config = overrides.transform(Configuration::default());
let settings = config.into_settings(&path_dedot::CWD)?;
Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,

View File

@@ -90,179 +90,6 @@ fn format_warn_stdin_filename_with_files() {
"###);
}
#[test]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo.toml", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo.toml' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
It looks like you were trying to pass a path to a configuration file.
The path `foo.toml` does not exist
For more information, try '--help'.
"###);
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo = bar", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo = bar' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 7
|
1 | foo = bar
| ^
invalid string
expected `"`, `'`
For more information, try '--help'.
"###);
}
#[test]
fn too_many_config_files() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
let ruff2_dot_toml = tempdir.path().join("ruff2.toml");
fs::File::create(&ruff_dot_toml)?;
fs::File::create(&ruff2_dot_toml)?;
let expected_stderr = format!(
"\
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config={}` or `--config={}`.
For more information, try `--help`.
",
ruff_dot_toml.display(),
ruff2_dot_toml.display(),
);
let cmd = Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg(".")
.output()?;
let stderr = std::str::from_utf8(&cmd.stderr)?;
assert_eq!(stderr, expected_stderr);
Ok(())
}
#[test]
fn config_file_and_isolated() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
fs::File::create(&ruff_dot_toml)?;
let expected_stderr = format!(
"\
ruff failed
Cause: The argument `--config={}` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
",
ruff_dot_toml.display(),
);
let cmd = Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg(".")
.output()?;
let stderr = std::str::from_utf8(&cmd.stderr)?;
assert_eq!(stderr, expected_stderr);
Ok(())
}
#[test]
fn config_override_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "line-length = 100")?;
let fixture = r#"
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_toml)
// This overrides the long line length set in the config file
.args(["--config", "line-length=80"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
def foo():
print(
"looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string"
)
----- stderr -----
"###);
Ok(())
}
#[test]
fn config_doubly_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "line-length = 70")?;
let fixture = r#"
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_toml)
// This overrides the long line length set in the config file...
.args(["--config", "line-length=80"])
// ...but this overrides them both:
.args(["--line-length", "100"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
----- stderr -----
"###);
Ok(())
}
#[test]
fn format_options() -> Result<()> {
let tempdir = TempDir::new()?;
@@ -681,9 +508,11 @@ if __name__ == '__main__':
say_hy("dear Ruff contributor")
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
- 'ignore' -> 'lint.ignore'
"###);
Ok(())
}
@@ -722,9 +551,11 @@ if __name__ == '__main__':
say_hy("dear Ruff contributor")
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
- 'ignore' -> 'lint.ignore'
"###);
Ok(())
}
@@ -1717,322 +1548,3 @@ include = ["*.ipy"]
"###);
Ok(())
}
#[test]
fn range_formatting() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=2:8-2:14"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(
arg1,
arg2,
):
print("Shouldn't format this" )
----- stderr -----
"###);
}
#[test]
fn range_formatting_unicode() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=2:21-3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1="👋🏽" ): print("Format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(arg1="👋🏽" ):
print("Format this")
----- stderr -----
"###);
}
#[test]
fn range_formatting_multiple_files() -> std::io::Result<()> {
let tempdir = TempDir::new()?;
let file1 = tempdir.path().join("file1.py");
fs::write(
&file1,
r#"
def file1(arg1, arg2,):
print("Shouldn't format this" )
"#,
)?;
let file2 = tempdir.path().join("file2.py");
fs::write(
&file2,
r#"
def file2(arg1, arg2,):
print("Shouldn't format this" )
"#,
)?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--range=1:8-1:15"])
.arg(file1)
.arg(file2), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: The `--range` option is only supported when formatting a single file but the specified paths resolve to 2 files.
"###);
Ok(())
}
#[test]
fn range_formatting_out_of_bounds() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=100:40-200:1"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(arg1, arg2,):
print("Shouldn't format this" )
----- stderr -----
"###);
}
#[test]
fn range_start_larger_than_end() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=90-50"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '90-50' for '--range <RANGE>': the start position '90:1' is greater than the end position '50:1'.
tip: Try switching start and end: '50:1-90:1'
For more information, try '--help'.
"###);
}
#[test]
fn range_line_numbers_only() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=2-3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(
arg1,
arg2,
):
print("Shouldn't format this" )
----- stderr -----
"###);
}
#[test]
fn range_start_only() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(arg1, arg2,):
print("Should format this")
----- stderr -----
"###);
}
#[test]
fn range_end_only() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=-3"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: true
exit_code: 0
----- stdout -----
def foo(
arg1,
arg2,
):
print("Should format this" )
----- stderr -----
"###);
}
#[test]
fn range_missing_line() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=1-:20"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '1-:20' for '--range <RANGE>': the end line is not a valid number (cannot parse integer from empty string)
tip: The format is 'line:column'.
For more information, try '--help'.
"###);
}
#[test]
fn zero_line_number() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=0:2"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '0:2' for '--range <RANGE>': the start line is 0, but it should be 1 or greater.
tip: The line numbers start at 1.
tip: Try 1:2 instead.
For more information, try '--help'.
"###);
}
#[test]
fn column_and_line_zero() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.py", "--range=0:0"])
.arg("-")
.pass_stdin(r#"
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value '0:0' for '--range <RANGE>': the start line and column are both 0, but they should be 1 or greater.
tip: The line and column numbers start at 1.
tip: Try 1:1 instead.
For more information, try '--help'.
"###);
}
#[test]
fn range_formatting_notebook() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--no-cache", "--stdin-filename", "main.ipynb", "--range=1-2"])
.arg("-")
.pass_stdin(r#"
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "ad6f36d9-4b7d-4562-8d00-f15a0f1fbb6d",
"metadata": {},
"outputs": [],
"source": [
"x=1"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"#), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: Failed to format main.ipynb: Range formatting isn't supported for notebooks.
"###);
}

File diff suppressed because it is too large

View File

@@ -2,7 +2,6 @@
#![cfg(not(target_family = "wasm"))]
use regex::escape;
use std::fs;
use std::process::Command;
use std::str;
@@ -14,10 +13,6 @@ use tempfile::TempDir;
const BIN_NAME: &str = "ruff";
const STDIN_BASE_OPTIONS: &[&str] = &["--no-cache", "--output-format", "concise"];
fn tempdir_filter(tempdir: &TempDir) -> String {
format!(r"{}\\?/?", escape(tempdir.path().to_str().unwrap()))
}
#[test]
fn top_level_options() -> Result<()> {
let tempdir = TempDir::new()?;
@@ -32,32 +27,29 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
success: false
exit_code: 1
----- stdout -----
test.py:1:5: Q000 [*] Double quotes found but single quotes preferred
test.py:1:5: B005 Using `.strip()` with multi-character strings is misleading
test.py:1:19: Q000 [*] Double quotes found but single quotes preferred
Found 3 errors.
[*] 2 fixable with the `--fix` option.
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
success: false
exit_code: 1
----- stdout -----
test.py:1:5: Q000 [*] Double quotes found but single quotes preferred
test.py:1:5: B005 Using `.strip()` with multi-character strings is misleading
test.py:1:19: Q000 [*] Double quotes found but single quotes preferred
Found 3 errors.
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
- 'flake8-quotes' -> 'lint.flake8-quotes'
"###);
});
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
- 'flake8-quotes' -> 'lint.flake8-quotes'
"###);
Ok(())
}
@@ -76,9 +68,6 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -96,8 +85,6 @@ inline-quotes = "single"
----- stderr -----
"###);
});
Ok(())
}
@@ -116,9 +103,6 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -135,11 +119,11 @@ inline-quotes = "single"
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
"###);
});
"###);
Ok(())
}
@@ -162,9 +146,6 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -181,11 +162,11 @@ inline-quotes = "single"
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'flake8-quotes' -> 'lint.flake8-quotes'
"###);
});
"###);
Ok(())
}
@@ -241,9 +222,6 @@ OTHER = "OTHER"
fs::write(out_dir.join("a.py"), r#"a = "a""#)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -263,11 +241,11 @@ OTHER = "OTHER"
[*] 3 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
"###);
});
"###);
Ok(())
}
@@ -288,9 +266,6 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -313,11 +288,11 @@ if __name__ == "__main__":
[*] 2 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
"###);
});
"###);
Ok(())
}
@@ -336,9 +311,6 @@ max-line-length = 100
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
@@ -358,12 +330,12 @@ _ = "---------------------------------------------------------------------------
Found 1 error.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'select' -> 'lint.select'
- 'pycodestyle' -> 'lint.pycodestyle'
"###);
});
"###);
Ok(())
}
@@ -381,9 +353,6 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -408,11 +377,11 @@ if __name__ == "__main__":
[*] 1 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
"###);
});
"###);
Ok(())
}
@@ -430,9 +399,6 @@ inline-quotes = "single"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -457,11 +423,11 @@ if __name__ == "__main__":
[*] 1 fixable with the `--fix` option.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'extend-select' -> 'lint.extend-select'
"###);
});
"###);
Ok(())
}
@@ -490,9 +456,6 @@ ignore = ["D203", "D212"]
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(sub_dir)
.arg("check")
@@ -505,346 +468,9 @@ ignore = ["D203", "D212"]
----- stderr -----
warning: No Python files found under the given path(s)
"###);
});
Ok(())
}
#[test]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo.toml", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo.toml' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
It looks like you were trying to pass a path to a configuration file.
The path `foo.toml` does not exist
For more information, try '--help'.
"###);
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo = bar", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo = bar' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 7
|
1 | foo = bar
| ^
invalid string
expected `"`, `'`
For more information, try '--help'.
"###);
}
#[test]
fn too_many_config_files() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
let ruff2_dot_toml = tempdir.path().join("ruff2.toml");
fs::File::create(&ruff_dot_toml)?;
fs::File::create(&ruff2_dot_toml)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config=[TMP]/ruff.toml` or `--config=[TMP]/ruff2.toml`.
For more information, try `--help`.
"###);
});
Ok(())
}
#[test]
fn config_file_and_isolated() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
fs::File::create(&ruff_dot_toml)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: The argument `--config=[TMP]/ruff.toml` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
"###);
});
Ok(())
}
#[test]
fn config_override_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
line-length = 100
[lint]
select = ["I"]
[lint.isort]
combine-as-imports = true
"#,
)?;
let fixture = r#"
from foo import (
aaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbb as bbbbbbbbbbbbbbbb,
cccccccccccccccc,
ddddddddddd as ddddddddddddd,
eeeeeeeeeeeeeee,
ffffffffffff as ffffffffffffff,
ggggggggggggg,
hhhhhhh as hhhhhhhhhhh,
iiiiiiiiiiiiii,
jjjjjjjjjjjjj as jjjjjj,
)
x = "longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss"
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "line-length=90"])
.args(["--config", "lint.extend-select=['E501', 'F841']"])
.args(["--config", "lint.isort.combine-as-imports = false"])
.arg("-")
.pass_stdin(fixture), @r###"
success: false
exit_code: 1
----- stdout -----
-:2:1: I001 [*] Import block is un-sorted or un-formatted
-:15:91: E501 Line too long (97 > 90)
Found 2 errors.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
Ok(())
}
#[test]
fn valid_toml_but_nonexistent_option_provided_via_config_argument() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args([".", "--config", "extend-select=['F481']"]), // No such code as F481!
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F481']' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
Could not parse the supplied argument as a `ruff.toml` configuration option:
Unknown rule selector: `F481`
For more information, try '--help'.
"###);
}
#[test]
fn each_toml_option_requires_a_new_flag_1() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// commas can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'], line-length=90"]),
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F841'], line-length=90' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 23
|
1 | extend-select=['F841'], line-length=90
| ^
expected newline, `#`
For more information, try '--help'.
"###);
}
#[test]
fn each_toml_option_requires_a_new_flag_2() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// spaces *also* can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'] line-length=90"]),
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F841'] line-length=90' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 24
|
1 | extend-select=['F841'] line-length=90
| ^
expected newline, `#`
For more information, try '--help'.
"###);
}
#[test]
fn config_doubly_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
line-length = 100
[lint]
select=["E501"]
"#,
)?;
let fixture = "x = 'longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss'";
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// The --line-length flag takes priority over both the config file
// and the `--config="line-length=110"` flag,
// despite them both being specified after this flag on the command line:
.args(["--line-length", "90"])
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "line-length=110"])
.arg("-")
.pass_stdin(fixture), @r###"
success: false
exit_code: 1
----- stdout -----
-:1:91: E501 Line too long (97 > 90)
Found 1 error.
----- stderr -----
"###);
Ok(())
}
#[test]
fn complex_config_setting_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "lint.select = ['N801']")?;
let fixture = "class violates_n801: pass";
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "lint.per-file-ignores = {'generated.py' = ['N801']}"])
.args(["--stdin-filename", "generated.py"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
"###);
Ok(())
}
#[test]
fn deprecated_config_option_overridden_via_cli() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "select=['N801']", "-"])
.pass_stdin("class lowercase: ..."),
@r###"
success: false
exit_code: 1
----- stdout -----
-:1:7: N801 Class name `lowercase` should use CapWords convention
Found 1 error.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your `--config` CLI arguments:
- 'select' -> 'lint.select'
"###);
}
#[test]
fn extension() -> Result<()> {
let tempdir = TempDir::new()?;
@@ -898,9 +524,6 @@ include = ["*.ipy"]
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.arg("check")
@@ -917,7 +540,5 @@ include = ["*.ipy"]
----- stderr -----
"###);
});
Ok(())
}

View File

@@ -39,8 +39,10 @@ fn check_project_include_defaults() {
[BASEPATH]/include-test/subdirectory/c.py
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `nested-project/pyproject.toml`:
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your configuration:
- 'select' -> 'lint.select'
"###);
});
}

View File

@@ -4,29 +4,25 @@ use std::process::Command;
const BIN_NAME: &str = "ruff";
#[cfg(not(target_os = "windows"))]
const TEST_FILTERS: &[(&str, &str)] = &[
("\"[^\\*\"]*/pyproject.toml", "\"[BASEPATH]/pyproject.toml"),
("\".*/crates", "\"[BASEPATH]/crates"),
("\".*/\\.ruff_cache", "\"[BASEPATH]/.ruff_cache"),
("\".*/ruff\"", "\"[BASEPATH]\""),
];
#[cfg(target_os = "windows")]
const TEST_FILTERS: &[(&str, &str)] = &[
(r#""[^\*"]*\\pyproject.toml"#, "\"[BASEPATH]/pyproject.toml"),
(r#"".*\\crates"#, "\"[BASEPATH]/crates"),
(r#"".*\\\.ruff_cache"#, "\"[BASEPATH]/.ruff_cache"),
(r#"".*\\ruff""#, "\"[BASEPATH]\""),
(r#"\\+(\w\w|\s|")"#, "/$1"),
];
#[test]
fn display_default_settings() {
// Navigate from the crate directory to the workspace root.
let base_path = Path::new(env!("CARGO_MANIFEST_DIR"))
.parent()
.unwrap()
.parent()
.unwrap();
let base_path = base_path.to_string_lossy();
// Escape the backslashes for the regex.
let base_path = regex::escape(&base_path);
#[cfg(not(target_os = "windows"))]
let test_filters = &[(base_path.as_ref(), "[BASEPATH]")];
#[cfg(target_os = "windows")]
let test_filters = &[
(base_path.as_ref(), "[BASEPATH]"),
(r#"\\+(\w\w|\s|\.|")"#, "/$1"),
];
insta::with_settings!({ filters => test_filters.to_vec() }, {
insta::with_settings!({ filters => TEST_FILTERS.to_vec() }, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-settings", "unformatted.py"]).current_dir(Path::new("./resources/test/fixtures")));
});

View File

@@ -25,15 +25,6 @@ import cycles. They also increase the cognitive load of reading the code.
If an import statement is used to check for the availability or existence
of a module, consider using `importlib.util.find_spec` instead.
If an import statement is used to re-export a symbol as part of a module's
public interface, consider using a "redundant" import alias, which
instructs Ruff (and other tools) to respect the re-export, and avoid
marking it as unused, as in:
```python
from module import member as member
```
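For the availability-check case mentioned above, a minimal sketch of the `importlib.util.find_spec` pattern might look like the following (the module name `somepackage` is purely illustrative):
```python
import importlib.util

# Probe for the module without importing it (and without a try/except ImportError).
HAS_SOMEPACKAGE = importlib.util.find_spec("somepackage") is not None

if HAS_SOMEPACKAGE:
    ...
```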
## Example
```python
import numpy as np # unused import
@@ -60,12 +51,11 @@ else:
```
## Options
- `lint.ignore-init-module-imports`
- `lint.pyflakes.extend-generics`
## References
- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
- [Typing documentation: interface conventions](https://typing.readthedocs.io/en/latest/source/libraries.html#library-interface-public-and-private-symbols)
----- stderr -----

View File

@@ -205,9 +205,7 @@ linter.external = []
linter.ignore_init_module_imports = false
linter.logger_objects = []
linter.namespace_packages = []
linter.src = [
"[BASEPATH]",
]
linter.src = ["[BASEPATH]"]
linter.tab_size = 4
linter.line_length = 88
linter.task_tags = [

View File

@@ -13,7 +13,6 @@ license = { workspace = true }
[lib]
bench = false
doctest = false
[[bench]]
name = "linter"

View File

@@ -27,7 +27,7 @@ use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::EnvFilter;
use ruff::args::{ConfigArguments, FormatArguments, FormatCommand, LogLevelArgs};
use ruff::args::{CliOverrides, FormatArguments, FormatCommand, LogLevelArgs};
use ruff::resolve::resolve;
use ruff_formatter::{FormatError, LineWidth, PrintError};
use ruff_linter::logging::LogLevel;
@@ -38,23 +38,24 @@ use ruff_python_formatter::{
use ruff_python_parser::ParseError;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile, Resolver};
fn parse_cli(dirs: &[PathBuf]) -> anyhow::Result<(FormatArguments, ConfigArguments)> {
fn parse_cli(dirs: &[PathBuf]) -> anyhow::Result<(FormatArguments, CliOverrides)> {
let args_matches = FormatCommand::command()
.no_binary_name(true)
.get_matches_from(dirs);
let arguments: FormatCommand = FormatCommand::from_arg_matches(&args_matches)?;
let (cli, config_arguments) = arguments.partition()?;
Ok((cli, config_arguments))
let (cli, overrides) = arguments.partition();
Ok((cli, overrides))
}
/// Find the [`PyprojectConfig`] to use for formatting.
fn find_pyproject_config(
cli: &FormatArguments,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
) -> anyhow::Result<PyprojectConfig> {
let mut pyproject_config = resolve(
cli.isolated,
config_arguments,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
// We don't want to format pyproject.toml
@@ -71,9 +72,9 @@ fn find_pyproject_config(
fn ruff_check_paths<'a>(
pyproject_config: &'a PyprojectConfig,
cli: &FormatArguments,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
) -> anyhow::Result<(Vec<Result<ResolvedFile, ignore::Error>>, Resolver<'a>)> {
let (paths, resolver) = python_files_in_path(&cli.files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(&cli.files, pyproject_config, overrides)?;
Ok((paths, resolver))
}

View File

@@ -11,7 +11,6 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_text_size = { path = "../ruff_text_size" }

View File

@@ -308,8 +308,11 @@ impl std::fmt::Debug for Token {
/// assert_eq!(printed.as_code(), r#""Hello 'Ruff'""#);
/// assert_eq!(printed.sourcemap(), [
/// SourceMarker { source: TextSize::new(0), dest: TextSize::new(0) },
/// SourceMarker { source: TextSize::new(0), dest: TextSize::new(7) },
/// SourceMarker { source: TextSize::new(8), dest: TextSize::new(7) },
/// SourceMarker { source: TextSize::new(8), dest: TextSize::new(13) },
/// SourceMarker { source: TextSize::new(14), dest: TextSize::new(13) },
/// SourceMarker { source: TextSize::new(14), dest: TextSize::new(14) },
/// SourceMarker { source: TextSize::new(20), dest: TextSize::new(14) },
/// ]);
///
@@ -325,30 +328,24 @@ pub struct SourcePosition(TextSize);
impl<Context> Format<Context> for SourcePosition {
fn fmt(&self, f: &mut Formatter<Context>) -> FormatResult<()> {
if let Some(FormatElement::SourcePosition(last_position)) = f.buffer.elements().last() {
if *last_position == self.0 {
return Ok(());
}
}
f.write_element(FormatElement::SourcePosition(self.0));
Ok(())
}
}
/// Creates a text from a dynamic string.
///
/// Creates a text from a dynamic string with its optional start-position in the source document.
/// This is done by allocating a new string internally.
pub fn text(text: &str) -> Text {
pub fn text(text: &str, position: Option<TextSize>) -> Text {
debug_assert_no_newlines(text);
Text { text }
Text { text, position }
}
#[derive(Eq, PartialEq)]
pub struct Text<'a> {
text: &'a str,
position: Option<TextSize>,
}
impl<Context> Format<Context> for Text<'_>
@@ -356,6 +353,10 @@ where
Context: FormatContext,
{
fn fmt(&self, f: &mut Formatter<Context>) -> FormatResult<()> {
if let Some(source_position) = self.position {
f.write_element(FormatElement::SourcePosition(source_position));
}
f.write_element(FormatElement::Text {
text: self.text.to_string().into_boxed_str(),
text_width: TextWidth::from_text(self.text, f.options().indent_width()),
@@ -2285,7 +2286,7 @@ impl<Context, T> std::fmt::Debug for FormatWith<Context, T> {
/// let mut join = f.join_with(&separator);
///
/// for item in &self.items {
/// join.entry(&format_with(|f| write!(f, [text(item)])));
/// join.entry(&format_with(|f| write!(f, [text(item, None)])));
/// }
/// join.finish()
/// })),
@@ -2370,7 +2371,7 @@ where
/// let mut count = 0;
///
/// let value = format_once(|f| {
/// write!(f, [text(&std::format!("Formatted {count}."))])
/// write!(f, [text(&std::format!("Formatted {count}."), None)])
/// });
///
/// format!(SimpleFormatContext::default(), [value]).expect("Formatting once works fine");

View File

@@ -346,7 +346,10 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
}
FormatElement::SourcePosition(position) => {
write!(f, [text(&std::format!("source_position({position:?})"))])?;
write!(
f,
[text(&std::format!("source_position({position:?})"), None)]
)?;
}
FormatElement::LineSuffixBoundary => {
@@ -357,7 +360,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
write!(f, [token("best_fitting(")])?;
if *mode != BestFittingMode::FirstLine {
write!(f, [text(&std::format!("mode: {mode:?}, "))])?;
write!(f, [text(&std::format!("mode: {mode:?}, "), None)])?;
}
write!(f, [token("[")])?;
@@ -389,14 +392,17 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
write!(
f,
[
text(&std::format!("<interned {index}>")),
text(&std::format!("<interned {index}>"), None),
space(),
&&**interned,
]
)?;
}
Some(reference) => {
write!(f, [text(&std::format!("<ref interned *{reference}>"))])?;
write!(
f,
[text(&std::format!("<ref interned *{reference}>"), None)]
)?;
}
}
}
@@ -415,7 +421,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("<END_TAG_WITHOUT_START<"),
text(&std::format!("{:?}", tag.kind())),
text(&std::format!("{:?}", tag.kind()), None),
token(">>"),
]
)?;
@@ -430,9 +436,9 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
token(")"),
soft_line_break_or_space(),
token("ERROR<START_END_TAG_MISMATCH<start: "),
text(&std::format!("{start_kind:?}")),
text(&std::format!("{start_kind:?}"), None),
token(", end: "),
text(&std::format!("{:?}", tag.kind())),
text(&std::format!("{:?}", tag.kind()), None),
token(">>")
]
)?;
@@ -464,7 +470,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("align("),
text(&count.to_string()),
text(&count.to_string(), None),
token(","),
space(),
]
@@ -476,7 +482,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("line_suffix("),
text(&std::format!("{reserved_width:?}")),
text(&std::format!("{reserved_width:?}"), None),
token(","),
space(),
]
@@ -493,7 +499,11 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
if let Some(group_id) = group.id() {
write!(
f,
[text(&std::format!("\"{group_id:?}\"")), token(","), space(),]
[
text(&std::format!("\"{group_id:?}\""), None),
token(","),
space(),
]
)?;
}
@@ -514,7 +524,11 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
if let Some(group_id) = id {
write!(
f,
[text(&std::format!("\"{group_id:?}\"")), token(","), space(),]
[
text(&std::format!("\"{group_id:?}\""), None),
token(","),
space(),
]
)?;
}
}
@@ -547,7 +561,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("indent_if_group_breaks("),
text(&std::format!("\"{id:?}\"")),
text(&std::format!("\"{id:?}\""), None),
token(","),
space(),
]
@@ -567,7 +581,11 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
if let Some(group_id) = condition.group_id {
write!(
f,
[text(&std::format!("\"{group_id:?}\"")), token(","), space()]
[
text(&std::format!("\"{group_id:?}\""), None),
token(","),
space(),
]
)?;
}
}
@@ -577,7 +595,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
f,
[
token("label("),
text(&std::format!("\"{label_id:?}\"")),
text(&std::format!("\"{label_id:?}\""), None),
token(","),
space(),
]
@@ -646,7 +664,7 @@ impl Format<IrFormatContext<'_>> for &[FormatElement] {
ContentArrayEnd,
token(")"),
soft_line_break_or_space(),
text(&std::format!("<START_WITHOUT_END<{top:?}>>")),
text(&std::format!("<START_WITHOUT_END<{top:?}>>"), None),
]
)?;
}
@@ -789,7 +807,7 @@ impl Format<IrFormatContext<'_>> for Condition {
f,
[
token("if_group_fits_on_line("),
text(&std::format!("\"{id:?}\"")),
text(&std::format!("\"{id:?}\""), None),
token(")")
]
),
@@ -798,7 +816,7 @@ impl Format<IrFormatContext<'_>> for Condition {
f,
[
token("if_group_breaks("),
text(&std::format!("\"{id:?}\"")),
text(&std::format!("\"{id:?}\""), None),
token(")")
]
),

View File

@@ -32,7 +32,7 @@ pub trait MemoizeFormat<Context> {
/// let value = self.value.get();
/// self.value.set(value + 1);
///
/// write!(f, [text(&std::format!("Formatted {value} times."))])
/// write!(f, [text(&std::format!("Formatted {value} times."), None)])
/// }
/// }
///
@@ -110,7 +110,7 @@ where
/// write!(f, [
/// token("Count:"),
/// space(),
/// text(&std::format!("{current}")),
/// text(&std::format!("{current}"), None),
/// hard_line_break()
/// ])?;
///

View File

@@ -41,7 +41,7 @@ use std::marker::PhantomData;
use std::num::{NonZeroU16, NonZeroU8, TryFromIntError};
use crate::format_element::document::Document;
use crate::printer::{Printer, PrinterOptions};
use crate::printer::{Printer, PrinterOptions, SourceMapGeneration};
pub use arguments::{Argument, Arguments};
pub use buffer::{
Buffer, BufferExtensions, BufferSnapshot, Inspect, RemoveSoftLinesBuffer, VecBuffer,
@@ -53,7 +53,7 @@ pub use crate::diagnostics::{ActualStart, FormatError, InvalidDocumentError, Pri
pub use format_element::{normalize_newlines, FormatElement, LINE_TERMINATORS};
pub use group_id::GroupId;
use ruff_macros::CacheKey;
use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_text_size::{TextRange, TextSize};
#[derive(Debug, Eq, PartialEq, Clone, Copy, Hash, CacheKey)]
#[cfg_attr(
@@ -269,6 +269,7 @@ impl FormatOptions for SimpleFormatOptions {
line_width: self.line_width,
indent_style: self.indent_style,
indent_width: self.indent_width,
source_map_generation: SourceMapGeneration::Enabled,
..PrinterOptions::default()
}
}
@@ -431,129 +432,6 @@ impl Printed {
pub fn take_verbatim_ranges(&mut self) -> Vec<TextRange> {
std::mem::take(&mut self.verbatim_ranges)
}
/// Slices the formatted code to the sub-slice that covers the passed `source_range` in `source`.
///
/// The implementation uses the source map generated during formatting to find the closest range
/// in the formatted document that covers `source_range` or more. The returned slice
/// matches the `source_range` exactly (except indent, see below) if the formatter emits [`FormatElement::SourcePosition`] for
/// the range's offsets.
///
/// ## Indentation
/// The indentation before `source_range.start` is replaced with the indentation returned by the formatter
/// to fix up incorrectly indented code.
///
/// Returns the entire document if the source map is empty.
///
/// # Panics
/// If `source_range` points to offsets that are not in the bounds of `source`.
#[must_use]
pub fn slice_range(self, source_range: TextRange, source: &str) -> PrintedRange {
let mut start_marker: Option<SourceMarker> = None;
let mut end_marker: Option<SourceMarker> = None;
// Note: The printer can generate multiple source map entries for the same source position.
// For example if you have:
// * token("a + b")
// * `source_position(276)`
// * `token(")")`
// * `source_position(276)`
// * `hard_line_break`
// The printer uses the source position 276 for both the tokens `)` and the `\n` because
// there were multiple `source_position` entries in the IR with the same offset.
// This can happen if multiple nodes start or end at the same position. A common example
// of this is expressions and expression statements, which always end at the same offset.
//
// Warning: Source markers are often emitted in order of their source position, but this is not
// guaranteed and depends on the emitted `IR`.
// They are only guaranteed to be sorted in increasing order by their destination position.
for marker in self.sourcemap {
// Take the closest start marker, but skip over start_markers that have the same start.
if marker.source <= source_range.start()
&& !start_marker.is_some_and(|existing| existing.source >= marker.source)
{
start_marker = Some(marker);
}
if marker.source >= source_range.end()
&& !end_marker.is_some_and(|existing| existing.source <= marker.source)
{
end_marker = Some(marker);
}
}
let (source_start, formatted_start) = start_marker
.map(|marker| (marker.source, marker.dest))
.unwrap_or_default();
let (source_end, formatted_end) = end_marker
.map_or((source.text_len(), self.code.text_len()), |marker| {
(marker.source, marker.dest)
});
let source_range = TextRange::new(source_start, source_end);
let formatted_range = TextRange::new(formatted_start, formatted_end);
// Extend both ranges to include the indentation
let source_range = extend_range_to_include_indent(source_range, source);
let formatted_range = extend_range_to_include_indent(formatted_range, &self.code);
PrintedRange {
code: self.code[formatted_range].to_string(),
source_range,
}
}
}
/// Extends `range` backwards (by reducing `range.start`) to include any directly preceding whitespace (`\t` or ` `).
///
/// # Panics
/// If `range.start` is out of `source`'s bounds.
fn extend_range_to_include_indent(range: TextRange, source: &str) -> TextRange {
let whitespace_len: TextSize = source[..usize::from(range.start())]
.chars()
.rev()
.take_while(|c| matches!(c, ' ' | '\t'))
.map(TextLen::text_len)
.sum();
TextRange::new(range.start() - whitespace_len, range.end())
}
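As a rough, standalone illustration of the marker-selection rule described in the `slice_range` documentation above (plain Python, not the `ruff_formatter` API; the marker values are made up):
```python
# Each marker pairs an offset in the source with an offset in the formatted output.
markers = [(0, 0), (8, 7), (14, 13), (20, 14)]  # (source, dest)

def pick_markers(markers, range_start, range_end):
    """Pick the closest start marker at or before `range_start` and the
    closest end marker at or after `range_end`."""
    start = end = None
    for source, dest in markers:
        if source <= range_start and (start is None or source > start[0]):
            start = (source, dest)
        if source >= range_end and (end is None or source < end[0]):
            end = (source, dest)
    return start, end

print(pick_markers(markers, 9, 15))  # ((8, 7), (20, 14))
```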
#[derive(Debug, Clone, Eq, PartialEq)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub struct PrintedRange {
code: String,
source_range: TextRange,
}
impl PrintedRange {
pub fn new(code: String, source_range: TextRange) -> Self {
Self { code, source_range }
}
pub fn empty() -> Self {
Self {
code: String::new(),
source_range: TextRange::default(),
}
}
/// The formatted code.
pub fn as_code(&self) -> &str {
&self.code
}
/// The range the formatted code corresponds to in the source document.
pub fn source_range(&self) -> TextRange {
self.source_range
}
#[must_use]
pub fn with_code(self, code: String) -> Self {
Self { code, ..self }
}
}
/// Public return type of the formatter
@@ -575,7 +453,7 @@ pub type FormatResult<F> = Result<F, FormatError>;
/// impl Format<SimpleFormatContext> for Paragraph {
/// fn fmt(&self, f: &mut Formatter<SimpleFormatContext>) -> FormatResult<()> {
/// write!(f, [
/// text(&self.0),
/// text(&self.0, None),
/// hard_line_break(),
/// ])
/// }

View File

@@ -21,7 +21,8 @@ impl<'a> LineSuffixes<'a> {
/// Takes all the pending line suffixes.
pub(super) fn take_pending<'l>(
&'l mut self,
) -> impl DoubleEndedIterator<Item = LineSuffixEntry<'a>> + 'l + ExactSizeIterator {
) -> impl Iterator<Item = LineSuffixEntry<'a>> + DoubleEndedIterator + 'l + ExactSizeIterator
{
self.suffixes.drain(..)
}

View File

@@ -4,7 +4,7 @@ use drop_bomb::DebugDropBomb;
use unicode_width::UnicodeWidthChar;
pub use printer_options::*;
use ruff_text_size::{TextLen, TextSize};
use ruff_text_size::{Ranged, TextLen, TextSize};
use crate::format_element::document::Document;
use crate::format_element::tag::{Condition, GroupMode};
@@ -60,10 +60,7 @@ impl<'a> Printer<'a> {
document: &'a Document,
indent: u16,
) -> PrintResult<Printed> {
let indentation = Indention::Level(indent);
self.state.pending_indent = indentation;
let mut stack = PrintCallStack::new(PrintElementArgs::new(indentation));
let mut stack = PrintCallStack::new(PrintElementArgs::new(Indention::Level(indent)));
let mut queue: PrintQueue<'a> = PrintQueue::new(document.as_ref());
loop {
@@ -76,9 +73,6 @@ impl<'a> Printer<'a> {
}
}
// Push any pending marker
self.push_marker();
Ok(Printed::new(
self.state.buffer,
None,
@@ -100,38 +94,42 @@ impl<'a> Printer<'a> {
let args = stack.top();
match element {
FormatElement::Space => self.print_text(Text::Token(" ")),
FormatElement::Token { text } => self.print_text(Text::Token(text)),
FormatElement::Text { text, text_width } => self.print_text(Text::Text {
text,
text_width: *text_width,
}),
FormatElement::SourceCodeSlice { slice, text_width } => {
let text = slice.text(self.source_code);
self.print_text(Text::Text {
FormatElement::Space => self.print_text(Text::Token(" "), None),
FormatElement::Token { text } => self.print_text(Text::Token(text), None),
FormatElement::Text { text, text_width } => self.print_text(
Text::Text {
text,
text_width: *text_width,
});
},
None,
),
FormatElement::SourceCodeSlice { slice, text_width } => {
let text = slice.text(self.source_code);
self.print_text(
Text::Text {
text,
text_width: *text_width,
},
Some(slice.range()),
);
}
FormatElement::Line(line_mode) => {
if args.mode().is_flat()
&& matches!(line_mode, LineMode::Soft | LineMode::SoftOrSpace)
{
if line_mode == &LineMode::SoftOrSpace {
self.print_text(Text::Token(" "));
self.print_text(Text::Token(" "), None);
}
} else if self.state.line_suffixes.has_pending() {
self.flush_line_suffixes(queue, stack, Some(element));
} else {
// Only print a newline if the current line isn't already empty
if self.state.line_width > 0 {
self.push_marker();
self.print_char('\n');
}
// Print a second line break if this is an empty line
if line_mode == &LineMode::Empty {
self.push_marker();
self.print_char('\n');
}
@@ -144,11 +142,8 @@ impl<'a> Printer<'a> {
}
FormatElement::SourcePosition(position) => {
// The printer defers printing indents until the next text
// is printed. Pushing the marker now would mean that the
// mapped range includes the indent range, which we don't want.
// Queue the source map position and emit it when printing the next character
self.state.pending_source_position = Some(*position);
self.state.source_position = *position;
self.push_marker();
}
FormatElement::LineSuffixBoundary => {
@@ -440,7 +435,7 @@ impl<'a> Printer<'a> {
Ok(print_mode)
}
fn print_text(&mut self, text: Text) {
fn print_text(&mut self, text: Text, source_range: Option<TextRange>) {
if !self.state.pending_indent.is_empty() {
let (indent_char, repeat_count) = match self.options.indent_style() {
IndentStyle::Tab => ('\t', 1),
@@ -463,6 +458,19 @@ impl<'a> Printer<'a> {
}
}
// Insert source map markers before and after the token
//
// If the token has source position information, the start marker
// will use the start position of the original token, and the end
// marker will use that position plus the text length of the token.
//
// If the token has no source position (it was created by the formatter),
// both the start and end markers will use the last known position
// in the input source (from state.source_position)
if let Some(range) = source_range {
self.state.source_position = range.start();
}
self.push_marker();
match text {
@@ -485,24 +493,29 @@ impl<'a> Printer<'a> {
}
}
}
if let Some(range) = source_range {
self.state.source_position = range.end();
}
self.push_marker();
}
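A toy model of the marker emission described in the comments above, written as plain Python rather than the actual printer: a marker pairs the tracked source position with the current length of the output buffer, and consecutive duplicate markers are dropped.
```python
def push_marker(state):
    # A marker pairs the tracked source offset with the current output length.
    marker = (state["source_position"], len(state["buffer"]))
    if not state["markers"] or state["markers"][-1] != marker:
        state["markers"].append(marker)

def print_text(state, text, source_range=None):
    if source_range is not None:
        state["source_position"] = source_range[0]
    push_marker(state)  # start marker
    state["buffer"] += text
    if source_range is not None:
        state["source_position"] = source_range[1]
    push_marker(state)  # end marker

state = {"buffer": "", "markers": [], "source_position": 0}
print_text(state, "foo", (0, 3))  # token with a source range
print_text(state, ", ")           # formatter-generated token, reuses the last position
print(state["markers"])           # [(0, 0), (3, 3), (3, 5)]
```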
fn push_marker(&mut self) {
let Some(source_position) = self.state.pending_source_position.take() else {
if self.options.source_map_generation.is_disabled() {
return;
};
}
let marker = SourceMarker {
source: source_position,
source: self.state.source_position,
dest: self.state.buffer.text_len(),
};
if self
.state
.source_markers
.last()
.map_or(true, |last| last != &marker)
{
if let Some(last) = self.state.source_markers.last() {
if last != &marker {
self.state.source_markers.push(marker);
}
} else {
self.state.source_markers.push(marker);
}
}
@@ -874,7 +887,7 @@ enum FillPairLayout {
struct PrinterState<'a> {
buffer: String,
source_markers: Vec<SourceMarker>,
pending_source_position: Option<TextSize>,
source_position: TextSize,
pending_indent: Indention,
measured_group_fits: bool,
line_width: u32,
@@ -1729,7 +1742,7 @@ a",
let result = format_with_options(
&format_args![
token("function main() {"),
block_indent(&text("let x = `This is a multiline\nstring`;")),
block_indent(&text("let x = `This is a multiline\nstring`;", None)),
token("}"),
hard_line_break()
],
@@ -1746,7 +1759,7 @@ a",
fn it_breaks_a_group_if_a_string_contains_a_newline() {
let result = format(&FormatArrayElements {
items: vec![
&text("`This is a string spanning\ntwo lines`"),
&text("`This is a string spanning\ntwo lines`", None),
&token("\"b\""),
],
});

View File

@@ -14,6 +14,10 @@ pub struct PrinterOptions {
/// The type of line ending to apply to the printed input
pub line_ending: LineEnding,
/// Whether the printer should build a source map that allows mapping positions in the source document
/// to positions in the formatted document.
pub source_map_generation: SourceMapGeneration,
}
impl<'a, O> From<&'a O> for PrinterOptions

View File

@@ -11,7 +11,6 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_macros = { path = "../ruff_macros" }

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.2.1"
version = "0.1.15"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -85,8 +85,6 @@ tempfile = { workspace = true }
[features]
default = []
schemars = ["dep:schemars"]
# Enables rules for internal integration tests
test-rules = []
[lints]
workspace = true

View File

@@ -1,22 +0,0 @@
from django.utils.safestring import mark_safe
def some_func():
return mark_safe('<script>alert("evil!")</script>')
@mark_safe
def some_func():
return '<script>alert("evil!")</script>'
from django.utils.html import mark_safe
def some_func():
return mark_safe('<script>alert("evil!")</script>')
@mark_safe
def some_func():
return '<script>alert("evil!")</script>'

View File

@@ -13,11 +13,3 @@ s = f"{set([f(x) for x in 'ab'])}"
s = f"{ set([x for x in 'ab']) | set([x for x in 'ab']) }"
s = f"{set([x for x in 'ab']) | set([x for x in 'ab'])}"
s = set( # comment
[x for x in range(3)]
)
s = set([ # comment
x for x in range(3)
])

View File

@@ -20,10 +20,3 @@ f"{dict(x='y') | dict(y='z')}"
f"{ dict(x='y') | dict(y='z') }"
f"a {dict(x='y') | dict(y='z')} b"
f"a { dict(x='y') | dict(y='z') } b"
dict(
# comment
)
tuple( # comment
)

View File

@@ -8,11 +8,3 @@ t4 = tuple([
t5 = tuple(
(1, 2)
)
tuple( # comment
[1, 2]
)
tuple([ # comment
1, 2
])

View File

@@ -2,12 +2,3 @@ l1 = list([1, 2])
l2 = list((1, 2))
l3 = list([])
l4 = list(())
list( # comment
[1, 2]
)
list([ # comment
1, 2
])

View File

@@ -207,23 +207,3 @@ class Repro:
def stub(self) -> str:
"""Docstring"""
...
class Repro(Protocol[int]):
def func(self) -> str:
"""Docstring"""
...
def impl(self) -> str:
"""Docstring"""
return self.func()
class Repro[int](Protocol):
def func(self) -> str:
"""Docstring"""
...
def impl(self) -> str:
"""Docstring"""
return self.func()

View File

@@ -11,25 +11,13 @@ class _UnusedTypedDict2(typing.TypedDict):
class _UsedTypedDict(TypedDict):
foo: bytes
foo: bytes
class _CustomClass(_UsedTypedDict):
bar: list[int]
_UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
_UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
def uses_UsedTypedDict3(arg: _UsedTypedDict3) -> None: ...
# In `.py` files, we don't flag unused definitions in class scopes (unlike in `.pyi`
# files).
class _CustomClass3:
class _UnusedTypeDict4(TypedDict):
pass
def method(self) -> None:
_CustomClass3._UnusedTypeDict4()

View File

@@ -35,13 +35,3 @@ _UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
_UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
def uses_UsedTypedDict3(arg: _UsedTypedDict3) -> None: ...
# In `.pyi` files, we flag unused definitions in class scopes as well as in the global
# scope (unlike in `.py` files).
class _CustomClass3:
class _UnusedTypeDict4(TypedDict):
pass
def method(self) -> None:
_CustomClass3._UnusedTypeDict4()

View File

@@ -192,49 +192,3 @@ elif x == 2:
y = "b"
else:
y = "c"
# Regression test for: https://github.com/astral-sh/ruff/issues/9732
def sb(self):
if self._sb is not None: return self._sb
else: self._sb = '\033[01;%dm'; self._sa = '\033[0;0m';
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
c = 3
return z
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
# comment
c = 3
return z
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
# comment
c = 3
return z
def indent(x, y, w, z):
if x: # [no-else-return]
a = 1
return y
else:
# comment
c = 3
return z

View File

@@ -1,27 +1,18 @@
import trio
async def func():
async def foo():
with trio.fail_after():
...
async def func():
async def foo():
with trio.fail_at():
await ...
async def func():
async def foo():
with trio.move_on_after():
...
async def func():
async def foo():
with trio.move_at():
await ...
async def func():
with trio.move_at():
async with trio.open_nursery() as nursery:
...

View File

@@ -0,0 +1,18 @@
from __future__ import annotations
from typing import TypeVar
x: "int" | str # TCH006
x: ("int" | str) | "bool" # TCH006
def func():
x: "int" | str # OK
z: list[str, str | "int"] = [] # TCH006
type A = Value["int" | str] # OK
OldS = TypeVar('OldS', int | 'str', str) # TCH006

View File

@@ -0,0 +1,16 @@
from typing import TypeVar
x: "int" | str # TCH006
x: ("int" | str) | "bool" # TCH006
def func():
x: "int" | str # OK
z: list[str, str | "int"] = [] # TCH006
type A = Value["int" | str] # OK
OldS = TypeVar('OldS', int | 'str', str) # TCH006

View File

@@ -1,18 +0,0 @@
from __future__ import annotations
from typing import TypeVar
x: "int" | str # TCH010
x: ("int" | str) | "bool" # TCH010
def func():
x: "int" | str # OK
z: list[str, str | "int"] = [] # TCH010
type A = Value["int" | str] # OK
OldS = TypeVar('OldS', int | 'str', str) # TCH010

View File

@@ -1,16 +0,0 @@
from typing import TypeVar
x: "int" | str # TCH010
x: ("int" | str) | "bool" # TCH010
def func():
x: "int" | str # OK
z: list[str, str | "int"] = [] # TCH010
type A = Value["int" | str] # OK
OldS = TypeVar('OldS', int | 'str', str) # TCH010

View File

@@ -19,11 +19,8 @@ numpy.random.seed()
numpy.random.get_state()
numpy.random.set_state()
numpy.random.rand()
numpy.random.ranf()
numpy.random.sample()
numpy.random.randn()
numpy.random.randint()
numpy.random.random()
numpy.random.random_integers()
numpy.random.random_sample()
numpy.random.choice()
@@ -38,6 +35,7 @@ numpy.random.exponential()
numpy.random.f()
numpy.random.gamma()
numpy.random.geometric()
numpy.random.get_state()
numpy.random.gumbel()
numpy.random.hypergeometric()
numpy.random.laplace()

View File

@@ -36,47 +36,35 @@ for i in list( # Comment
): # PERF101
pass
for i in list(foo_dict): # OK
for i in list(foo_dict): # Ok
pass
for i in list(1): # OK
for i in list(1): # Ok
pass
for i in list(foo_int): # OK
for i in list(foo_int): # Ok
pass
import itertools
for i in itertools.product(foo_int): # OK
for i in itertools.product(foo_int): # Ok
pass
for i in list(foo_list): # OK
for i in list(foo_list): # Ok
foo_list.append(i + 1)
for i in list(foo_list): # PERF101
# Make sure we match the correct list
other_list.append(i + 1)
for i in list(foo_tuple): # OK
for i in list(foo_tuple): # Ok
foo_tuple.append(i + 1)
for i in list(foo_set): # OK
for i in list(foo_set): # Ok
foo_set.append(i + 1)
x, y, nested_tuple = (1, 2, (3, 4, 5))
for i in list(nested_tuple): # PERF101
pass
for i in list(foo_list): # OK
if True:
foo_list.append(i + 1)
for i in list(foo_list): # OK
if True:
foo_list[i] = i + 1
for i in list(foo_list): # OK
if True:
del foo_list[i + 1]

View File

@@ -1,848 +0,0 @@
"""Fixtures for the errors E301, E302, E303, E304, E305 and E306.
Since these errors are about new lines, each test starts with either "No error" or "# E30X".
Each test's end is signaled by a "# end" line.
There should be no E30X error outside of a test's bound.
"""
# No error
class Class:
pass
# end
# No error
class Class:
"""Docstring"""
def __init__(self) -> None:
pass
# end
# No error
def func():
pass
# end
# No error
# comment
class Class:
pass
# end
# No error
# comment
def func():
pass
# end
# no error
def foo():
pass
def bar():
pass
class Foo(object):
pass
class Bar(object):
pass
# end
# No error
class Class(object):
def func1():
pass
def func2():
pass
# end
# No error
class Class(object):
def func1():
pass
# comment
def func2():
pass
# end
# No error
class Class:
def func1():
pass
# comment
def func2():
pass
# This is a
# ... multi-line comment
def func3():
pass
# This is a
# ... multi-line comment
@decorator
class Class:
def func1():
pass
# comment
def func2():
pass
@property
def func3():
pass
# end
# No error
try:
from nonexistent import Bar
except ImportError:
class Bar(object):
"""This is a Bar replacement"""
# end
# No error
def with_feature(f):
"""Some decorator"""
wrapper = f
if has_this_feature(f):
def wrapper(*args):
call_feature(args[0])
return f(*args)
return wrapper
# end
# No error
try:
next
except NameError:
def next(iterator, default):
for item in iterator:
return item
return default
# end
# No error
def fn():
pass
class Foo():
"""Class Foo"""
def fn():
pass
# end
# No error
# comment
def c():
pass
# comment
def d():
pass
# This is a
# ... multi-line comment
# And this one is
# ... a second paragraph
# ... which spans on 3 lines
# Function `e` is below
# NOTE: Hey this is a testcase
def e():
pass
def fn():
print()
# comment
print()
print()
# Comment 1
# Comment 2
# Comment 3
def fn2():
pass
# end
# no error
if __name__ == '__main__':
foo()
# end
# no error
defaults = {}
defaults.update({})
# end
# no error
def foo(x):
classification = x
definitely = not classification
# end
# no error
def bar(): pass
def baz(): pass
# end
# no error
def foo():
def bar(): pass
def baz(): pass
# end
# no error
from typing import overload
from typing import Union
# end
# no error
@overload
def f(x: int) -> int: ...
@overload
def f(x: str) -> str: ...
# end
# no error
def f(x: Union[int, str]) -> Union[int, str]:
return x
# end
# no error
from typing import Protocol
class C(Protocol):
@property
def f(self) -> int: ...
@property
def g(self) -> str: ...
# end
# no error
def f(
a,
):
pass
# end
# no error
if True:
class Class:
"""Docstring"""
def function(self):
...
# end
# no error
if True:
def function(self):
...
# end
# no error
@decorator
# comment
@decorator
def function():
pass
# end
# no error
class Class:
def method(self):
if True:
def function():
pass
# end
# no error
@decorator
async def function(data: None) -> None:
...
# end
# no error
class Class:
def method():
"""docstring"""
# comment
def function():
pass
# end
# no error
try:
if True:
# comment
class Class:
pass
except:
pass
# end
# no error
def f():
def f():
pass
# end
# no error
class MyClass:
# comment
def method(self) -> None:
pass
# end
# no error
def function1():
# Comment
def function2():
pass
# end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
if (
cond1
and cond2
):
pass
#end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
async def function1():
await function2()
async with function3():
pass
# end
# no error
class Test:
async
def a(self): pass
# end
# no error
class Test:
def a():
pass
# wrongly indented comment
def b():
pass
# end
# no error
def test():
pass
# Wrongly indented comment
pass
# end
# E301
class Class(object):
def func1():
pass
def func2():
pass
# end
# E301
class Class:
def fn1():
pass
# comment
def fn2():
pass
# end
# E302
"""Main module."""
def fn():
pass
# end
# E302
import sys
def get_sys_path():
return sys.path
# end
# E302
def a():
pass
def b():
pass
# end
# E302
def a():
pass
# comment
def b():
pass
# end
# E302
def a():
pass
async def b():
pass
# end
# E302
async def x():
pass
async def x(y: int = 1):
pass
# end
# E302
def bar():
pass
def baz(): pass
# end
# E302
def bar(): pass
def baz():
pass
# end
# E302
def f():
pass
# comment
@decorator
def g():
pass
# end
# E302
class Test:
pass
def method1():
return 1
def method2():
return 22
# end
# E303
def fn():
_ = None
# arbitrary comment
def inner(): # E306 not expected (pycodestyle detects E306)
pass
# end
# E303
def fn():
_ = None
# arbitrary comment
def inner(): # E306 not expected (pycodestyle detects E306)
pass
# end
# E303
print()
print()
# end
# E303:5:1
print()
# comment
print()
# end
# E303:5:5 E303:8:5
def a():
print()
# comment
# another comment
print()
# end
# E303
#!python
"""This class docstring comes on line 5.
It gives error E303: too many blank lines (3)
"""
# end
# E303
class Class:
def a(self):
pass
def b(self):
pass
# end
# E303
if True:
a = 1
a = 2
# end
# E303
class Test:
# comment
# another comment
def test(self): pass
# end
# E303
class Test:
def a(self):
pass
# wrongly indented comment
def b(self):
pass
# end
# E303
def fn():
pass
pass
# end
# E304
@decorator
def function():
pass
# end
# E304
@decorator
# comment E304 not expected
def function():
pass
# end
# E304
@decorator
# comment E304 not expected
# second comment E304 not expected
def function():
pass
# end
# E305:7:1
def fn():
print()
# comment
# another comment
fn()
# end
# E305
class Class():
pass
# comment
# another comment
a = 1
# end
# E305:8:1
def fn():
print()
# comment
# another comment
try:
fn()
except Exception:
pass
# end
# E305:5:1
def a():
print()
# Two spaces before comments, too.
if a():
a()
# end
#: E305:8:1
# Example from https://github.com/PyCQA/pycodestyle/issues/400
import stuff
def main():
blah, blah
if __name__ == '__main__':
main()
# end
# E306:3:5
def a():
x = 1
def b():
pass
# end
#: E306:3:5
async def a():
x = 1
def b():
pass
# end
#: E306:3:5 E306:5:9
def a():
x = 2
def b():
x = 1
def c():
pass
# end
# E306:3:5 E306:6:5
def a():
x = 1
class C:
pass
x = 2
def b():
pass
# end
# E306
def foo():
def bar():
pass
def baz(): pass
# end
# E306:3:5
def foo():
def bar(): pass
def baz():
pass
# end
# E306
def a():
x = 2
@decorator
def b():
pass
# end
# E306
def a():
x = 2
@decorator
async def b():
pass
# end
# E306
def a():
x = 2
async def b():
pass
# end

View File

@@ -1,15 +0,0 @@
'''trailing whitespace
inside a multiline string'''
f'''trailing whitespace
inside a multiline f-string'''
# Trailing whitespace after `{`
f'abc {
1 + 2
}'
# Trailing whitespace after `2`
f'abc {
1 + 2
}'

View File

@@ -562,46 +562,3 @@ def titlecase_sub_section_header():
Returns:
"""
def test_method_should_be_correctly_capitalized(parameters: list[str], other_parameters: dict[str, str]): # noqa: D213
"""Test parameters and attributes sections are capitalized correctly.
Parameters
----------
parameters:
A list of string parameters
other_parameters:
A dictionary of string attributes
Other Parameters
----------
other_parameters:
A dictionary of string attributes
parameters:
A list of string parameters
"""
def test_lowercase_sub_section_header_should_be_valid(parameters: list[str], value: int): # noqa: D213
"""Test that lower case subsection header is valid even if it has the same name as section kind.
Parameters:
----------
parameters:
A list of string parameters
value:
Some value
"""
def test_lowercase_sub_section_header_different_kind(returns: int):
"""Test that lower case subsection header is valid even if it is of a different kind.
Parameters
------------------
returns:
some value
"""

View File

@@ -0,0 +1,9 @@
from ast import literal_eval
eval("3 + 4")
literal_eval({1: 2})
def fn() -> None:
eval("3 + 4")

View File

@@ -0,0 +1,11 @@
def eval(content: str) -> None:
pass
eval("3 + 4")
literal_eval({1: 2})
def fn() -> None:
eval("3 + 4")

View File

@@ -0,0 +1,10 @@
import logging
import warnings
from warnings import warn
warnings.warn("this is ok")
warn("by itself is also ok")
logging.warning("this is fine")
logger = logging.getLogger(__name__)
logger.warning("this is fine")

View File

@@ -0,0 +1,8 @@
import logging
from logging import warn
logging.warn("this is not ok")
warn("not ok")
logger = logging.getLogger(__name__)
logger.warn("this is not ok")

View File

@@ -6,9 +6,8 @@ dictVarBad = os.getenv("AAA", {"a", 7}) # [invalid-envvar-default]
print(os.getenv("TEST", False)) # [invalid-envvar-default]
os.getenv("AA", "GOOD")
os.getenv("AA", f"GOOD")
os.getenv("AA", "GOOD" + "BAR")
os.getenv("AA", "GOOD" + "BAD")
os.getenv("AA", "GOOD" + 1)
os.getenv("AA", "GOOD %s" % "BAR")
os.getenv("AA", "GOOD %s" % "BAD")
os.getenv("B", Z)
os.getenv("AA", "GOOD" if Z else "BAR")
os.getenv("AA", 1 if Z else "BAR") # [invalid-envvar-default]

View File

@@ -4,12 +4,7 @@
1 in (
1, 2, 3
)
fruits = ["cherry", "grapes"]
"cherry" in fruits
_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in ("a", "b")}
# OK
fruits in [[1, 2, 3], [4, 5, 6]]
fruits in [1, 2, 3]
1 in [[1, 2, 3], [4, 5, 6]]
_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in (["a", "b"], ["c", "d"])}
fruits = ["cherry", "grapes"]
"cherry" in fruits

View File

@@ -35,15 +35,6 @@ if argc != 0: # correct
if argc != 1: # correct
pass
if argc != -1.0: # correct
pass
if argc != 0.0: # correct
pass
if argc != 1.0: # correct
pass
if argc != 2: # [magic-value-comparison]
pass
@@ -53,12 +44,6 @@ if argc != -2: # [magic-value-comparison]
if argc != +2: # [magic-value-comparison]
pass
if argc != -2.0: # [magic-value-comparison]
pass
if argc != +2.0: # [magic-value-comparison]
pass
if __name__ == "__main__": # correct
pass

View File

@@ -30,11 +30,6 @@ class Thing:
def do_thing(self, item):
return object.__getattribute__(self, item) # PLC2801
def use_descriptor(self, item):
item.__get__(self, type(self)) # OK
item.__set__(self, 1) # OK
item.__delete__(self) # OK
blah = lambda: {"a": 1}.__delitem__("a") # OK

View File

@@ -215,13 +215,3 @@ if sys.version_info[:2] > (3,13):
if sys.version_info[:3] > (3,13):
print("py3")
if sys.version_info > (3,0):
f"this is\
allowed too"
f"""the indentation on
this line is significant"""
"this is\
allowed too"

View File

@@ -46,8 +46,3 @@ x: typing.TypeAlias = list[T]
# OK
x: TypeAlias
x: int = 1
# Ensure that "T" appears only once in the type parameters for the modernized
# type alias.
T = typing.TypeVar["T"]
Decorator: TypeAlias = typing.Callable[[T], T]

View File

@@ -1,75 +0,0 @@
import codecs
import io
from pathlib import Path
# Errors
with open("FURB129.py") as f:
for _line in f.readlines():
pass
a = [line.lower() for line in f.readlines()]
b = {line.upper() for line in f.readlines()}
c = {line.lower(): line.upper() for line in f.readlines()}
with Path("FURB129.py").open() as f:
for _line in f.readlines():
pass
for _line in open("FURB129.py").readlines():
pass
for _line in Path("FURB129.py").open().readlines():
pass
def func():
f = Path("FURB129.py").open()
for _line in f.readlines():
pass
f.close()
def func(f: io.BytesIO):
for _line in f.readlines():
pass
def func():
with (open("FURB129.py") as f, foo as bar):
for _line in f.readlines():
pass
for _line in bar.readlines():
pass
# False positives
def func(f):
for _line in f.readlines():
pass
def func(f: codecs.StreamReader):
for _line in f.readlines():
pass
def func():
class A:
def readlines(self) -> list[str]:
return ["a", "b", "c"]
return A()
for _line in func().readlines():
pass
# OK
for _line in ["a", "b", "c"]:
pass
with open("FURB129.py") as f:
for _line in f:
pass
for _line in f.readlines(10):
pass
for _not_line in f.readline():
pass

View File

@@ -1,58 +0,0 @@
import abc
from abc import abstractmethod, ABCMeta
# Errors
class A0(metaclass=abc.ABCMeta):
@abstractmethod
def foo(self): pass
class A1(metaclass=ABCMeta):
@abstractmethod
def foo(self): pass
class B0:
def __init_subclass__(cls, **kwargs):
super().__init_subclass__()
class B1:
pass
class A2(B0, B1, metaclass=ABCMeta):
@abstractmethod
def foo(self): pass
class A3(B0, before_metaclass=1, metaclass=abc.ABCMeta):
pass
# OK
class Meta(type):
def __new__(cls, *args, **kwargs):
return super().__new__(cls, *args)
class A4(metaclass=Meta, no_metaclass=ABCMeta):
@abstractmethod
def foo(self): pass
class A5(metaclass=Meta):
pass
class A6(abc.ABC):
@abstractmethod
def foo(self): pass
class A7(B0, abc.ABC, B1):
@abstractmethod
def foo(self): pass

View File

@@ -162,26 +162,3 @@ async def f(x: bool):
T = asyncio.create_task(asyncio.sleep(1))
else:
T = None
# Error
def f():
loop = asyncio.new_event_loop()
loop.create_task(main()) # Error
# Error
def f():
loop = asyncio.get_event_loop()
loop.create_task(main()) # Error
# OK
def f():
global task
loop = asyncio.new_event_loop()
task = loop.create_task(main()) # Error
# OK
def f():
global task
loop = asyncio.get_event_loop()
task = loop.create_task(main()) # Error

View File

@@ -250,23 +250,6 @@ __all__ = (
,
)
__all__ = ( # comment about the opening paren
# multiline strange comment 0a
# multiline strange comment 0b
"foo" # inline comment about foo
# multiline strange comment 1a
# multiline strange comment 1b
, # comment about the comma??
# comment about bar part a
# comment about bar part b
"bar" # inline comment about bar
# strange multiline comment comment 2a
# strange multiline comment 2b
,
# strange multiline comment 3a
# strange multiline comment 3b
) # comment about the closing paren
###################################
# These should all not get flagged:
###################################

View File

@@ -4,14 +4,14 @@
class Klass:
__slots__ = ["d", "c", "b", "a"] # a comment that is untouched
__slots__ = ("d", "c", "b", "a")
__match_args__ = ("d", "c", "b", "a")
# Quoting style is retained,
# but unnecessary parens are not
__slots__: set = {'b', "c", ((('a')))}
# Trailing commas are also not retained for single-line definitions
# (but they are in multiline definitions)
__slots__: tuple = ("b", "c", "a",)
__match_args__: tuple = ("b", "c", "a",)
class Klass2:
if bool():
@@ -19,7 +19,7 @@ class Klass2:
else:
__slots__ = "foo3", "foo2", "foo1" # NB: an implicit tuple (without parens)
__slots__: list[str] = ["the", "three", "little", "pigs"]
__match_args__: list[str] = ["the", "three", "little", "pigs"]
__slots__ = ("parenthesized_item"), "in", ("an_unparenthesized_tuple")
# we use natural sort,
# not alphabetical sort or "isort-style" sort
@@ -37,7 +37,7 @@ class Klass3:
# a comment regarding 'a0':
"a0"
)
__slots__ = [
__match_args__ = [
"d",
"c", # a comment regarding 'c'
"b",
@@ -61,7 +61,7 @@ class Klass4:
) # comment6
# comment7
__slots__ = [ # comment0
__match_args__ = [ # comment0
# comment1
# comment2
"dx", "cx", "bx", "ax" # comment3
@@ -139,7 +139,7 @@ class SlotUser:
'distance': 'measured in kilometers'}
class Klass5:
__slots__ = (
__match_args__ = (
"look",
(
"a_veeeeeeeeeeeeeeeeeeery_long_parenthesized_item"
@@ -188,24 +188,20 @@ class BezierBuilder4:
,
)
__slots__ = {"foo", "bar",
"baz", "bingo"
}
###################################
# These should all not get flagged:
###################################
class Klass6:
__slots__ = ()
__slots__ = []
__match_args__ = []
__slots__ = ("single_item",)
__slots__ = (
__match_args__ = (
"single_item_multiline",
)
__slots__ = {"single_item",}
__slots__ = {"single_item_no_trailing_comma": "docs for that"}
__slots__ = [
__match_args__ = [
"single_item_multiline_no_trailing_comma"
]
__slots__ = ("not_a_tuple_just_a_string")
@@ -222,11 +218,11 @@ class Klass6:
__slots__ = ("b", "a", "e", "d")
__slots__ = ["b", "a", "e", "d"]
__slots__ = ["foo", "bar", "antipasti"]
__match_args__ = ["foo", "bar", "antipasti"]
class Klass6:
__slots__ = (9, 8, 7)
__slots__ = ( # This is just an empty tuple,
__match_args__ = ( # This is just an empty tuple,
# but,
# it's very well
) # documented
@@ -249,10 +245,10 @@ class Klass6:
__slots__ = [
()
]
__slots__ = (
__match_args__ = (
()
)
__slots__ = (
__match_args__ = (
[]
)
__slots__ = (
@@ -261,9 +257,12 @@ class Klass6:
__slots__ = (
[],
)
__slots__ = (
__match_args__ = (
"foo", [], "bar"
)
__slots__ = [
__match_args__ = [
"foo", (), "bar"
]
__match_args__ = {"a", "set", "for", "__match_args__", "is invalid"}
__match_args__ = {"this": "is", "also": "invalid"}

View File

@@ -1,70 +0,0 @@
val = 2
"always ignore this: {val}"
print("but don't ignore this: {val}") # RUF027
def simple_cases():
a = 4
b = "{a}" # RUF027
c = "{a} {b} f'{val}' " # RUF027
def escaped_string():
a = 4
b = "escaped string: {{ brackets surround me }}" # RUF027
def raw_string():
a = 4
b = r"raw string with formatting: {a}" # RUF027
c = r"raw string with \backslashes\ and \"escaped quotes\": {a}" # RUF027
def print_name(name: str):
a = 4
print("Hello, {name}!") # RUF027
print("The test value we're using today is {a}") # RUF027
def nested_funcs():
a = 4
print(do_nothing(do_nothing("{a}"))) # RUF027
def tripled_quoted():
a = 4
c = a
single_line = """ {a} """ # RUF027
# RUF027
multi_line = a = """b { # comment
c} d
"""
def single_quoted_multi_line():
a = 4
# RUF027
b = " {\
a} \
"
def implicit_concat():
a = 4
b = "{a}" "+" "{b}" r" \\ " # RUF027 for the first part only
print(f"{a}" "{a}" f"{b}") # RUF027
def escaped_chars():
a = 4
b = "\"not escaped:\" '{a}' \"escaped:\": '{{c}}'" # RUF027
def method_calls():
value = {}
value.method = print_name
first = "Wendy"
last = "Appleseed"
value.method("{first} {last}") # RUF027

View File

@@ -1,36 +0,0 @@
def do_nothing(a):
return a
def alternative_formatter(src, **kwargs):
src.format(**kwargs)
def format2(src, *args):
pass
# These should not cause an RUF027 message
def negative_cases():
a = 4
positive = False
"""{a}"""
"don't format: {a}"
c = """ {b} """
d = "bad variable: {invalid}"
e = "incorrect syntax: {}"
f = "uses a builtin: {max}"
json = "{ positive: false }"
json2 = "{ 'positive': false }"
json3 = "{ 'positive': 'false' }"
alternative_formatter("{a}", a=5)
formatted = "{a}".fmt(a=7)
print(do_nothing("{a}".format(a=3)))
print(do_nothing(alternative_formatter("{a}", a=5)))
print(format(do_nothing("{a}"), a=5))
print("{a}".to_upper())
print(do_nothing("{a}").format(a="Test"))
print(do_nothing("{a}").format2(a))
print(("{a}" "{c}").format(a=1, c=2))
print("{a}".attribute.chaining.call(a=2))
print("{a} {c}".format(a))

View File

@@ -0,0 +1,40 @@
"""
Violation:
Reraise without using 'from'
"""
class MyException(Exception):
pass
def func():
try:
a = 1
except Exception:
raise MyException()
def func():
try:
a = 1
except Exception:
if True:
raise MyException()
def good():
try:
a = 1
except MyException as e:
raise e # This is verbose violation, shouldn't trigger no cause
except Exception:
raise # Just re-raising don't need 'from'
def good():
try:
from mod import f
except ImportError:
def f():
raise MyException() # Raising within a new scope is fine

View File

@@ -2,14 +2,11 @@ use ruff_python_ast::Comprehension;
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::rules::{flake8_simplify, refurb};
use crate::rules::flake8_simplify;
/// Run lint rules over a [`Comprehension`] syntax nodes.
pub(crate) fn comprehension(comprehension: &Comprehension, checker: &mut Checker) {
if checker.enabled(Rule::InDictKeys) {
flake8_simplify::rules::key_in_dict_comprehension(checker, comprehension);
}
if checker.enabled(Rule::ReadlinesInFor) {
refurb::rules::readlines_in_comprehension(checker, comprehension);
}
}

View File

@@ -281,21 +281,17 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
}
if checker.source_type.is_stub()
|| matches!(scope.kind, ScopeKind::Module | ScopeKind::Function(_))
{
if checker.enabled(Rule::UnusedPrivateTypeVar) {
flake8_pyi::rules::unused_private_type_var(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateProtocol) {
flake8_pyi::rules::unused_private_protocol(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeAlias) {
flake8_pyi::rules::unused_private_type_alias(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypedDict) {
flake8_pyi::rules::unused_private_typed_dict(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeVar) {
flake8_pyi::rules::unused_private_type_var(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateProtocol) {
flake8_pyi::rules::unused_private_protocol(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeAlias) {
flake8_pyi::rules::unused_private_type_alias(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypedDict) {
flake8_pyi::rules::unused_private_typed_dict(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::AsyncioDanglingTask) {

View File

@@ -1,15 +1,13 @@
use ruff_python_ast::str::raw_contents_range;
use ruff_text_size::{Ranged, TextRange};
use ruff_python_semantic::{
BindingKind, ContextualizedDefinition, Definition, Export, Member, MemberKind,
};
use ruff_python_semantic::{BindingKind, ContextualizedDefinition, Export};
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::docstrings::Docstring;
use crate::fs::relativize_path;
use crate::rules::{flake8_annotations, flake8_pyi, pydocstyle, pylint};
use crate::rules::{flake8_annotations, flake8_pyi, pydocstyle};
use crate::{docstrings, warn_user};
/// Run lint rules over all [`Definition`] nodes in the [`SemanticModel`].
@@ -33,7 +31,6 @@ pub(crate) fn definitions(checker: &mut Checker) {
]);
let enforce_stubs = checker.source_type.is_stub() && checker.enabled(Rule::DocstringInStub);
let enforce_stubs_and_runtime = checker.enabled(Rule::IterMethodReturnIterable);
let enforce_dunder_method = checker.enabled(Rule::BadDunderMethodName);
let enforce_docstrings = checker.any_enabled(&[
Rule::BlankLineAfterLastSection,
Rule::BlankLineAfterSummary,
@@ -83,12 +80,7 @@ pub(crate) fn definitions(checker: &mut Checker) {
Rule::UndocumentedPublicPackage,
]);
if !enforce_annotations
&& !enforce_docstrings
&& !enforce_stubs
&& !enforce_stubs_and_runtime
&& !enforce_dunder_method
{
if !enforce_annotations && !enforce_docstrings && !enforce_stubs && !enforce_stubs_and_runtime {
return;
}
@@ -155,19 +147,6 @@ pub(crate) fn definitions(checker: &mut Checker) {
}
}
// pylint
if enforce_dunder_method {
if checker.enabled(Rule::BadDunderMethodName) {
if let Definition::Member(Member {
kind: MemberKind::Method(method),
..
}) = definition
{
pylint::rules::bad_dunder_method_name(checker, method);
}
}
}
// pydocstyle
if enforce_docstrings {
if pydocstyle::helpers::should_ignore_definition(

View File

@@ -5,6 +5,7 @@ use crate::checkers::ast::Checker;
use crate::registry::Rule;
use crate::rules::{
flake8_bandit, flake8_blind_except, flake8_bugbear, flake8_builtins, pycodestyle, pylint,
tryceratops,
};
/// Run lint rules over an [`ExceptHandler`] syntax node.
@@ -65,6 +66,9 @@ pub(crate) fn except_handler(except_handler: &ExceptHandler, checker: &mut Check
if checker.enabled(Rule::ExceptWithNonExceptionClasses) {
flake8_bugbear::rules::except_with_non_exception_classes(checker, except_handler);
}
if checker.enabled(Rule::ReraiseNoCause) {
tryceratops::rules::reraise_no_cause(checker, body);
}
if checker.enabled(Rule::BinaryOpException) {
pylint::rules::binary_op_exception(checker, except_handler);
}

View File

@@ -16,7 +16,8 @@ use crate::rules::{
flake8_future_annotations, flake8_gettext, flake8_implicit_str_concat, flake8_logging,
flake8_logging_format, flake8_pie, flake8_print, flake8_pyi, flake8_pytest_style, flake8_self,
flake8_simplify, flake8_tidy_imports, flake8_trio, flake8_type_checking, flake8_use_pathlib,
flynt, numpy, pandas_vet, pep8_naming, pycodestyle, pyflakes, pylint, pyupgrade, refurb, ruff,
flynt, numpy, pandas_vet, pep8_naming, pycodestyle, pyflakes, pygrep_hooks, pylint, pyupgrade,
refurb, ruff,
};
use crate::settings::types::PythonVersion;
@@ -319,7 +320,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
numpy::rules::numpy_2_0_deprecation(checker, expr);
}
if checker.enabled(Rule::DeprecatedMockImport) {
pyupgrade::rules::deprecated_mock_attribute(checker, attribute);
pyupgrade::rules::deprecated_mock_attribute(checker, expr);
}
if checker.enabled(Rule::SixPY3) {
flake8_2020::rules::name_or_attribute(checker, expr);
@@ -336,9 +337,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UndocumentedWarn) {
flake8_logging::rules::undocumented_warn(checker, expr);
}
if checker.enabled(Rule::PandasUseOfDotValues) {
pandas_vet::rules::attr(checker, attribute);
}
pandas_vet::rules::attr(checker, attribute);
}
Expr::Call(
call @ ast::ExprCall {
@@ -638,10 +637,14 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
flake8_bandit::rules::tarfile_unsafe_members(checker, call);
}
if checker.enabled(Rule::UnnecessaryGeneratorList) {
flake8_comprehensions::rules::unnecessary_generator_list(checker, call);
flake8_comprehensions::rules::unnecessary_generator_list(
checker, expr, func, args, keywords,
);
}
if checker.enabled(Rule::UnnecessaryGeneratorSet) {
flake8_comprehensions::rules::unnecessary_generator_set(checker, call);
flake8_comprehensions::rules::unnecessary_generator_set(
checker, expr, func, args, keywords,
);
}
if checker.enabled(Rule::UnnecessaryGeneratorDict) {
flake8_comprehensions::rules::unnecessary_generator_dict(
@@ -649,7 +652,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
);
}
if checker.enabled(Rule::UnnecessaryListComprehensionSet) {
flake8_comprehensions::rules::unnecessary_list_comprehension_set(checker, call);
flake8_comprehensions::rules::unnecessary_list_comprehension_set(
checker, expr, func, args, keywords,
);
}
if checker.enabled(Rule::UnnecessaryListComprehensionDict) {
flake8_comprehensions::rules::unnecessary_list_comprehension_dict(
@@ -657,7 +662,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
);
}
if checker.enabled(Rule::UnnecessaryLiteralSet) {
flake8_comprehensions::rules::unnecessary_literal_set(checker, call);
flake8_comprehensions::rules::unnecessary_literal_set(
checker, expr, func, args, keywords,
);
}
if checker.enabled(Rule::UnnecessaryLiteralDict) {
flake8_comprehensions::rules::unnecessary_literal_dict(
@@ -667,18 +674,27 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryCollectionCall) {
flake8_comprehensions::rules::unnecessary_collection_call(
checker,
call,
expr,
func,
args,
keywords,
&checker.settings.flake8_comprehensions,
);
}
if checker.enabled(Rule::UnnecessaryLiteralWithinTupleCall) {
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(checker, call);
flake8_comprehensions::rules::unnecessary_literal_within_tuple_call(
checker, expr, func, args, keywords,
);
}
if checker.enabled(Rule::UnnecessaryLiteralWithinListCall) {
flake8_comprehensions::rules::unnecessary_literal_within_list_call(checker, call);
flake8_comprehensions::rules::unnecessary_literal_within_list_call(
checker, expr, func, args, keywords,
);
}
if checker.enabled(Rule::UnnecessaryLiteralWithinDictCall) {
flake8_comprehensions::rules::unnecessary_literal_within_dict_call(checker, call);
flake8_comprehensions::rules::unnecessary_literal_within_dict_call(
checker, expr, func, args, keywords,
);
}
if checker.enabled(Rule::UnnecessaryListCall) {
flake8_comprehensions::rules::unnecessary_list_call(checker, expr, func, args);
@@ -757,6 +773,12 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::CallDateFromtimestamp) {
flake8_datetimez::rules::call_date_fromtimestamp(checker, func, expr.range());
}
if checker.enabled(Rule::Eval) {
pygrep_hooks::rules::no_eval(checker, func);
}
if checker.enabled(Rule::DeprecatedLogWarn) {
pygrep_hooks::rules::deprecated_log_warn(checker, call);
}
if checker.enabled(Rule::UnnecessaryDirectLambdaCall) {
pylint::rules::unnecessary_direct_lambda_call(checker, expr, func);
}
@@ -961,6 +983,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UnsortedDunderAll) {
ruff::rules::sort_dunder_all_extend_call(checker, call);
}
if checker.enabled(Rule::DefaultFactoryKwarg) {
ruff::rules::default_factory_kwarg(checker, call);
}
@@ -1025,16 +1048,6 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
pyupgrade::rules::unicode_kind_prefix(checker, string_literal);
}
}
if checker.enabled(Rule::MissingFStringSyntax) {
for string_literal in value.literals() {
ruff::rules::missing_fstring_syntax(
&mut checker.diagnostics,
string_literal,
checker.locator,
&checker.semantic,
);
}
}
}
Expr::BinOp(ast::ExprBinOp {
left,
@@ -1306,22 +1319,12 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
refurb::rules::math_constant(checker, number_literal);
}
}
Expr::StringLiteral(ast::ExprStringLiteral { value, range: _ }) => {
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => {
if checker.enabled(Rule::UnicodeKindPrefix) {
for string_part in value {
pyupgrade::rules::unicode_kind_prefix(checker, string_part);
}
}
if checker.enabled(Rule::MissingFStringSyntax) {
for string_literal in value.as_slice() {
ruff::rules::missing_fstring_syntax(
&mut checker.diagnostics,
string_literal,
checker.locator,
&checker.semantic,
);
}
}
}
Expr::IfExp(
if_exp @ ast::ExprIfExp {


@@ -247,11 +247,6 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::HardcodedPasswordDefault) {
flake8_bandit::rules::hardcoded_password_default(checker, parameters);
}
if checker.enabled(Rule::SuspiciousMarkSafeUsage) {
for decorator in decorator_list {
flake8_bandit::rules::suspicious_function_decorator(checker, decorator);
}
}
if checker.enabled(Rule::PropertyWithParameters) {
pylint::rules::property_with_parameters(checker, stmt, decorator_list, parameters);
}
@@ -518,8 +513,8 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::SingleStringSlots) {
pylint::rules::single_string_slots(checker, class_def);
}
if checker.enabled(Rule::MetaClassABCMeta) {
refurb::rules::metaclass_abcmeta(checker, class_def);
if checker.enabled(Rule::BadDunderMethodName) {
pylint::rules::bad_dunder_method_name(checker, body);
}
}
Stmt::Import(ast::StmtImport { names, range: _ }) => {
@@ -1317,9 +1312,6 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryDictIndexLookup) {
pylint::rules::unnecessary_dict_index_lookup(checker, for_stmt);
}
if checker.enabled(Rule::ReadlinesInFor) {
refurb::rules::readlines_in_for(checker, for_stmt);
}
if !is_async {
if checker.enabled(Rule::ReimplementedBuiltin) {
flake8_simplify::rules::convert_for_loop_to_any_all(checker, stmt);


@@ -31,8 +31,8 @@ use std::path::Path;
use itertools::Itertools;
use log::debug;
use ruff_python_ast::{
self as ast, Comprehension, ElifElseClause, ExceptHandler, Expr, ExprContext, Keyword,
MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt, Suite, UnaryOp,
self as ast, Arguments, Comprehension, ElifElseClause, ExceptHandler, Expr, ExprContext,
Keyword, MatchCase, Parameter, ParameterWithDefault, Parameters, Pattern, Stmt, Suite, UnaryOp,
};
use ruff_text_size::{Ranged, TextRange, TextSize};
@@ -40,7 +40,7 @@ use ruff_diagnostics::{Diagnostic, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::all::{extract_all_names, DunderAllFlags};
use ruff_python_ast::helpers::{
collect_import_from_member, extract_handled_exceptions, is_docstring_stmt, to_module_path,
collect_import_from_member, extract_handled_exceptions, to_module_path,
};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::str::trailing_quote;
@@ -71,38 +71,6 @@ mod analyze;
mod annotation;
mod deferred;
/// State representing whether a docstring is expected or not for the next statement.
#[derive(Default, Debug, Copy, Clone, PartialEq)]
enum DocstringState {
/// The next statement is expected to be a docstring, but not necessarily so.
///
/// For example, in the following code:
///
/// ```python
/// class Foo:
/// pass
///
///
/// def bar(x, y):
/// """Docstring."""
/// return x + y
/// ```
///
/// For `Foo`, the state is expected when the checker is visiting the class
/// body but isn't going to be present. While, for `bar` function, the docstring
/// is expected and present.
#[default]
Expected,
Other,
}
impl DocstringState {
/// Returns `true` if the next statement is expected to be a docstring.
const fn is_expected(self) -> bool {
matches!(self, DocstringState::Expected)
}
}
pub(crate) struct Checker<'a> {
/// The [`Path`] to the file under analysis.
path: &'a Path,
@@ -146,8 +114,6 @@ pub(crate) struct Checker<'a> {
pub(crate) flake8_bugbear_seen: Vec<TextRange>,
/// The end offset of the last visited statement.
last_stmt_end: TextSize,
/// A state describing if a docstring is expected or not.
docstring_state: DocstringState,
}
impl<'a> Checker<'a> {
@@ -187,7 +153,6 @@ impl<'a> Checker<'a> {
cell_offsets,
notebook_index,
last_stmt_end: TextSize::default(),
docstring_state: DocstringState::default(),
}
}
}
@@ -232,7 +197,7 @@ impl<'a> Checker<'a> {
let trailing_quote = trailing_quote(self.locator.slice(string_range))?;
// Invert the quote character, if it's a single quote.
match trailing_quote {
match *trailing_quote {
"'" => Some(Quote::Double),
"\"" => Some(Quote::Single),
_ => None,
@@ -340,16 +305,19 @@ where
self.semantic.flags -= SemanticModelFlags::IMPORT_BOUNDARY;
}
// Track whether we've seen module docstrings, non-imports, etc.
// Track whether we've seen docstrings, non-imports, etc.
match stmt {
Stmt::Expr(ast::StmtExpr { value, .. })
if !self.semantic.seen_module_docstring_boundary()
if !self
.semantic
.flags
.intersects(SemanticModelFlags::MODULE_DOCSTRING)
&& value.is_string_literal_expr() =>
{
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
}
Stmt::ImportFrom(ast::StmtImportFrom { module, names, .. }) => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
// Allow __future__ imports until we see a non-__future__ import.
if let Some("__future__") = module.as_deref() {
@@ -364,11 +332,11 @@ where
}
}
Stmt::Import(_) => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::FUTURES_BOUNDARY;
}
_ => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::FUTURES_BOUNDARY;
if !(self.semantic.seen_import_boundary()
|| helpers::is_assignment_to_a_dunder(stmt)
@@ -381,20 +349,17 @@ where
}
}
// Track each top-level import, to guide import insertions.
if matches!(stmt, Stmt::Import(_) | Stmt::ImportFrom(_)) {
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
}
// Store the flags prior to any further descent, so that we can restore them after visiting
// the node.
let flags_snapshot = self.semantic.flags;
// Update the semantic model if it is in a docstring. This should be done after the
// flags snapshot to ensure that it gets reset once the statement is analyzed.
if self.docstring_state.is_expected() {
if is_docstring_stmt(stmt) {
self.semantic.flags |= SemanticModelFlags::DOCSTRING;
}
// Reset the state irrespective of whether the statement is a docstring or not.
self.docstring_state = DocstringState::Other;
}
// Step 1: Binding
match stmt {
Stmt::AugAssign(ast::StmtAugAssign {
@@ -406,22 +371,14 @@ where
self.handle_node_load(target);
}
Stmt::Import(ast::StmtImport { names, range: _ }) => {
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
for alias in names {
// Given `import foo.bar`, `module` would be "foo", and `call_path` would be
// `["foo", "bar"]`.
let module = alias.name.split('.').next().unwrap();
// Mark the top-level module as "seen" by the semantic model.
self.semantic.add_module(module);
if alias.asname.is_none() && alias.name.contains('.') {
if alias.name.contains('.') && alias.asname.is_none() {
// Given `import foo.bar`, `name` would be "foo", and `qualified_name` would be
// "foo.bar".
let name = alias.name.split('.').next().unwrap();
let call_path: Box<[&str]> = alias.name.split('.').collect();
self.add_binding(
module,
name,
alias.identifier(),
BindingKind::SubmoduleImport(SubmoduleImport { call_path }),
BindingFlags::EXTERNAL,
@@ -456,20 +413,8 @@ where
level,
range: _,
}) => {
if self.semantic.at_top_level() {
self.importer.visit_import(stmt);
}
let module = module.as_deref();
let level = *level;
// Mark the top-level module as "seen" by the semantic model.
if level.map_or(true, |level| level == 0) {
if let Some(module) = module.and_then(|module| module.split('.').next()) {
self.semantic.add_module(module);
}
}
for alias in names {
if let Some("__future__") = module {
let name = alias.asname.as_ref().unwrap_or(&alias.name);
@@ -696,8 +641,6 @@ where
self.semantic.set_globals(globals);
}
// Set the docstring state before visiting the class body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
}
Stmt::TypeAlias(ast::StmtTypeAlias {
@@ -1033,7 +976,12 @@ where
}
Expr::Call(ast::ExprCall {
func,
arguments,
arguments:
Arguments {
args,
keywords,
range: _,
},
range: _,
}) => {
self.visit_expr(func);
@@ -1076,7 +1024,7 @@ where
});
match callable {
Some(typing::Callable::Bool) => {
let mut args = arguments.args.iter();
let mut args = args.iter();
if let Some(arg) = args.next() {
self.visit_boolean_test(arg);
}
@@ -1085,7 +1033,7 @@ where
}
}
Some(typing::Callable::Cast) => {
let mut args = arguments.args.iter();
let mut args = args.iter();
if let Some(arg) = args.next() {
self.visit_type_definition(arg);
}
@@ -1094,7 +1042,7 @@ where
}
}
Some(typing::Callable::NewType) => {
let mut args = arguments.args.iter();
let mut args = args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
@@ -1103,21 +1051,21 @@ where
}
}
Some(typing::Callable::TypeVar) => {
let mut args = arguments.args.iter();
let mut args = args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
for arg in args {
self.visit_type_definition(arg);
}
for keyword in arguments.keywords.iter() {
for keyword in keywords {
let Keyword {
arg,
value,
range: _,
} = keyword;
if let Some(id) = arg {
if id.as_str() == "bound" {
if id == "bound" {
self.visit_type_definition(value);
} else {
self.visit_non_type_definition(value);
@@ -1127,7 +1075,7 @@ where
}
Some(typing::Callable::NamedTuple) => {
// Ex) NamedTuple("a", [("a", int)])
let mut args = arguments.args.iter();
let mut args = args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
@@ -1156,7 +1104,7 @@ where
}
}
for keyword in arguments.keywords.iter() {
for keyword in keywords {
let Keyword { arg, value, .. } = keyword;
match (arg.as_ref(), value) {
// Ex) NamedTuple("a", **{"a": int})
@@ -1183,7 +1131,7 @@ where
}
Some(typing::Callable::TypedDict) => {
// Ex) TypedDict("a", {"a": int})
let mut args = arguments.args.iter();
let mut args = args.iter();
if let Some(arg) = args.next() {
self.visit_non_type_definition(arg);
}
@@ -1206,13 +1154,13 @@ where
}
// Ex) TypedDict("a", a=int)
for keyword in arguments.keywords.iter() {
for keyword in keywords {
let Keyword { value, .. } = keyword;
self.visit_type_definition(value);
}
}
Some(typing::Callable::MypyExtension) => {
let mut args = arguments.args.iter();
let mut args = args.iter();
if let Some(arg) = args.next() {
// Ex) DefaultNamedArg(bool | None, name="some_prop_name")
self.visit_type_definition(arg);
@@ -1220,13 +1168,13 @@ where
for arg in args {
self.visit_non_type_definition(arg);
}
for keyword in arguments.keywords.iter() {
for keyword in keywords {
let Keyword { value, .. } = keyword;
self.visit_non_type_definition(value);
}
} else {
// Ex) DefaultNamedArg(type="bool", name="some_prop_name")
for keyword in arguments.keywords.iter() {
for keyword in keywords {
let Keyword {
value,
arg,
@@ -1244,10 +1192,10 @@ where
// If we're in a type definition, we need to treat the arguments to any
// other callables as non-type definitions (i.e., we don't want to treat
// any strings as deferred type definitions).
for arg in arguments.args.iter() {
for arg in args {
self.visit_non_type_definition(arg);
}
for keyword in arguments.keywords.iter() {
for keyword in keywords {
let Keyword { value, .. } = keyword;
self.visit_non_type_definition(value);
}
@@ -1332,16 +1280,6 @@ where
self.semantic.flags |= SemanticModelFlags::F_STRING;
visitor::walk_expr(self, expr);
}
Expr::NamedExpr(ast::ExprNamedExpr {
target,
value,
range: _,
}) => {
self.visit_expr(value);
self.semantic.flags |= SemanticModelFlags::NAMED_EXPRESSION_ASSIGNMENT;
self.visit_expr(target);
}
_ => visitor::walk_expr(self, expr),
}
@@ -1558,8 +1496,6 @@ impl<'a> Checker<'a> {
unreachable!("Generator expression must contain at least one generator");
};
let flags = self.semantic.flags;
// Generators are compiled as nested functions. (This may change with PEP 709.)
// As such, the `iter` of the first generator is evaluated in the outer scope, while all
// subsequent nodes are evaluated in the inner scope.
@@ -1589,22 +1525,14 @@ impl<'a> Checker<'a> {
// `x` is local to `foo`, and the `T` in `y=T` skips the class scope when resolving.
self.visit_expr(&generator.iter);
self.semantic.push_scope(ScopeKind::Generator);
self.semantic.flags = flags | SemanticModelFlags::COMPREHENSION_ASSIGNMENT;
self.visit_expr(&generator.target);
self.semantic.flags = flags;
for expr in &generator.ifs {
self.visit_boolean_test(expr);
}
for generator in iterator {
self.visit_expr(&generator.iter);
self.semantic.flags = flags | SemanticModelFlags::COMPREHENSION_ASSIGNMENT;
self.visit_expr(&generator.target);
self.semantic.flags = flags;
for expr in &generator.ifs {
self.visit_boolean_test(expr);
}
@@ -1803,21 +1731,11 @@ impl<'a> Checker<'a> {
return;
}
// A binding within a `for` must be a loop variable, as in:
// ```python
// for x in range(10):
// ...
// ```
if parent.is_for_stmt() {
self.add_binding(id, expr.range(), BindingKind::LoopVar, flags);
return;
}
// A binding within a `with` must be an item, as in:
// ```python
// with open("file.txt") as fp:
// ...
// ```
if parent.is_with_stmt() {
self.add_binding(id, expr.range(), BindingKind::WithItemVar, flags);
return;
@@ -1873,26 +1791,17 @@ impl<'a> Checker<'a> {
}
// If the expression is the left-hand side of a walrus operator, then it's a named
// expression assignment, as in:
// ```python
// if (x := 10) > 5:
// ...
// ```
if self.semantic.in_named_expression_assignment() {
// expression assignment.
if self
.semantic
.current_expressions()
.filter_map(Expr::as_named_expr_expr)
.any(|parent| parent.target.as_ref() == expr)
{
self.add_binding(id, expr.range(), BindingKind::NamedExprAssignment, flags);
return;
}
// If the expression is part of a comprehension target, then it's a comprehension variable
// assignment, as in:
// ```python
// [x for x in range(10)]
// ```
if self.semantic.in_comprehension_assignment() {
self.add_binding(id, expr.range(), BindingKind::ComprehensionVar, flags);
return;
}
self.add_binding(id, expr.range(), BindingKind::Assignment, flags);
}
@@ -2008,8 +1917,6 @@ impl<'a> Checker<'a> {
};
self.visit_parameters(parameters);
// Set the docstring state before visiting the function body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
}
}


@@ -1,4 +1,3 @@
use crate::line_width::IndentWidth;
use ruff_diagnostics::Diagnostic;
use ruff_python_codegen::Stylist;
use ruff_python_parser::lexer::LexResult;
@@ -16,11 +15,11 @@ use crate::rules::pycodestyle::rules::logical_lines::{
use crate::settings::LinterSettings;
/// Return the amount of indentation, expanding tabs to the next multiple of the settings' tab size.
pub(crate) fn expand_indent(line: &str, indent_width: IndentWidth) -> usize {
fn expand_indent(line: &str, settings: &LinterSettings) -> usize {
let line = line.trim_end_matches(['\n', '\r']);
let mut indent = 0;
let tab_size = indent_width.as_usize();
let tab_size = settings.tab_size.as_usize();
for c in line.bytes() {
match c {
b'\t' => indent = (indent / tab_size) * tab_size + tab_size,
@@ -86,7 +85,7 @@ pub(crate) fn check_logical_lines(
TextRange::new(locator.line_start(first_token.start()), first_token.start())
};
let indent_level = expand_indent(locator.slice(range), settings.tab_size);
let indent_level = expand_indent(locator.slice(range), settings);
let indent_size = 4;


@@ -4,7 +4,6 @@ use std::path::Path;
use ruff_notebook::CellOffsets;
use ruff_python_ast::PySourceType;
use ruff_python_codegen::Stylist;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
@@ -15,7 +14,6 @@ use ruff_source_file::Locator;
use crate::directives::TodoComment;
use crate::lex::docstring_detection::StateMachine;
use crate::registry::{AsRule, Rule};
use crate::rules::pycodestyle::rules::BlankLinesChecker;
use crate::rules::ruff::rules::Context;
use crate::rules::{
eradicate, flake8_commas, flake8_executable, flake8_fixme, flake8_implicit_str_concat,
@@ -23,37 +21,17 @@ use crate::rules::{
};
use crate::settings::LinterSettings;
#[allow(clippy::too_many_arguments)]
pub(crate) fn check_tokens(
tokens: &[LexResult],
path: &Path,
locator: &Locator,
indexer: &Indexer,
stylist: &Stylist,
settings: &LinterSettings,
source_type: PySourceType,
cell_offsets: Option<&CellOffsets>,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
if settings.rules.any_enabled(&[
Rule::BlankLineBetweenMethods,
Rule::BlankLinesTopLevel,
Rule::TooManyBlankLines,
Rule::BlankLineAfterDecorator,
Rule::BlankLinesAfterFunctionOrClass,
Rule::BlankLinesBeforeNestedDefinition,
]) {
let mut blank_lines_checker = BlankLinesChecker::default();
blank_lines_checker.check_lines(
tokens,
locator,
stylist,
settings.tab_size,
&mut diagnostics,
);
}
if settings.rules.enabled(Rule::BlanketNOQA) {
pygrep_hooks::rules::blanket_noqa(&mut diagnostics, indexer, locator);
}
@@ -117,7 +95,7 @@ pub(crate) fn check_tokens(
}
if settings.rules.enabled(Rule::TabIndentation) {
pycodestyle::rules::tab_indentation(&mut diagnostics, locator, indexer);
pycodestyle::rules::tab_indentation(&mut diagnostics, tokens, locator, indexer);
}
if settings.rules.any_enabled(&[


@@ -137,12 +137,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pycodestyle, "E274") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabBeforeKeyword),
#[allow(deprecated)]
(Pycodestyle, "E275") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAfterKeyword),
(Pycodestyle, "E301") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLineBetweenMethods),
(Pycodestyle, "E302") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLinesTopLevel),
(Pycodestyle, "E303") => (RuleGroup::Preview, rules::pycodestyle::rules::TooManyBlankLines),
(Pycodestyle, "E304") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLineAfterDecorator),
(Pycodestyle, "E305") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLinesAfterFunctionOrClass),
(Pycodestyle, "E306") => (RuleGroup::Preview, rules::pycodestyle::rules::BlankLinesBeforeNestedDefinition),
(Pycodestyle, "E401") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleImportsOnOneLine),
(Pycodestyle, "E402") => (RuleGroup::Stable, rules::pycodestyle::rules::ModuleImportNotAtTopOfFile),
(Pycodestyle, "E501") => (RuleGroup::Stable, rules::pycodestyle::rules::LineTooLong),
@@ -317,11 +311,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Async, "102") => (RuleGroup::Stable, rules::flake8_async::rules::BlockingOsCallInAsyncFunction),
// flake8-trio
(Flake8Trio, "100") => (RuleGroup::Stable, rules::flake8_trio::rules::TrioTimeoutWithoutAwait),
(Flake8Trio, "105") => (RuleGroup::Stable, rules::flake8_trio::rules::TrioSyncCall),
(Flake8Trio, "109") => (RuleGroup::Stable, rules::flake8_trio::rules::TrioAsyncFunctionWithTimeout),
(Flake8Trio, "110") => (RuleGroup::Stable, rules::flake8_trio::rules::TrioUnneededSleep),
(Flake8Trio, "115") => (RuleGroup::Stable, rules::flake8_trio::rules::TrioZeroSleepCall),
(Flake8Trio, "100") => (RuleGroup::Preview, rules::flake8_trio::rules::TrioTimeoutWithoutAwait),
(Flake8Trio, "105") => (RuleGroup::Preview, rules::flake8_trio::rules::TrioSyncCall),
(Flake8Trio, "109") => (RuleGroup::Preview, rules::flake8_trio::rules::TrioAsyncFunctionWithTimeout),
(Flake8Trio, "110") => (RuleGroup::Preview, rules::flake8_trio::rules::TrioUnneededSleep),
(Flake8Trio, "115") => (RuleGroup::Preview, rules::flake8_trio::rules::TrioZeroSleepCall),
// flake8-builtins
(Flake8Builtins, "001") => (RuleGroup::Stable, rules::flake8_builtins::rules::BuiltinVariableShadowing),
@@ -429,7 +423,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Quotes, "001") => (RuleGroup::Stable, rules::flake8_quotes::rules::BadQuotesMultilineString),
(Flake8Quotes, "002") => (RuleGroup::Stable, rules::flake8_quotes::rules::BadQuotesDocstring),
(Flake8Quotes, "003") => (RuleGroup::Stable, rules::flake8_quotes::rules::AvoidableEscapedQuote),
(Flake8Quotes, "004") => (RuleGroup::Stable, rules::flake8_quotes::rules::UnnecessaryEscapedQuote),
(Flake8Quotes, "004") => (RuleGroup::Preview, rules::flake8_quotes::rules::UnnecessaryEscapedQuote),
// flake8-annotations
(Flake8Annotations, "001") => (RuleGroup::Stable, rules::flake8_annotations::rules::MissingTypeFunctionArgument),
@@ -470,7 +464,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Simplify, "109") => (RuleGroup::Stable, rules::flake8_simplify::rules::CompareWithTuple),
(Flake8Simplify, "110") => (RuleGroup::Stable, rules::flake8_simplify::rules::ReimplementedBuiltin),
(Flake8Simplify, "112") => (RuleGroup::Stable, rules::flake8_simplify::rules::UncapitalizedEnvironmentVariables),
(Flake8Simplify, "113") => (RuleGroup::Stable, rules::flake8_simplify::rules::EnumerateForLoop),
(Flake8Simplify, "113") => (RuleGroup::Preview, rules::flake8_simplify::rules::EnumerateForLoop),
(Flake8Simplify, "114") => (RuleGroup::Stable, rules::flake8_simplify::rules::IfWithSameArms),
(Flake8Simplify, "115") => (RuleGroup::Stable, rules::flake8_simplify::rules::OpenFileWithContextHandler),
(Flake8Simplify, "116") => (RuleGroup::Stable, rules::flake8_simplify::rules::IfElseBlockInsteadOfDictLookup),
@@ -489,7 +483,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Simplify, "300") => (RuleGroup::Stable, rules::flake8_simplify::rules::YodaConditions),
(Flake8Simplify, "401") => (RuleGroup::Stable, rules::flake8_simplify::rules::IfElseBlockInsteadOfDictGet),
(Flake8Simplify, "910") => (RuleGroup::Stable, rules::flake8_simplify::rules::DictGetWithNoneDefault),
(Flake8Simplify, "911") => (RuleGroup::Stable, rules::flake8_simplify::rules::ZipDictKeysAndValues),
(Flake8Simplify, "911") => (RuleGroup::Preview, rules::flake8_simplify::rules::ZipDictKeysAndValues),
// flake8-copyright
#[allow(deprecated)]
@@ -534,7 +528,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pyupgrade, "038") => (RuleGroup::Stable, rules::pyupgrade::rules::NonPEP604Isinstance),
(Pyupgrade, "039") => (RuleGroup::Stable, rules::pyupgrade::rules::UnnecessaryClassParentheses),
(Pyupgrade, "040") => (RuleGroup::Stable, rules::pyupgrade::rules::NonPEP695TypeAlias),
(Pyupgrade, "041") => (RuleGroup::Stable, rules::pyupgrade::rules::TimeoutErrorAlias),
(Pyupgrade, "041") => (RuleGroup::Preview, rules::pyupgrade::rules::TimeoutErrorAlias),
// pydocstyle
(Pydocstyle, "100") => (RuleGroup::Stable, rules::pydocstyle::rules::UndocumentedPublicModule),
@@ -621,8 +615,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Bandit, "110") => (RuleGroup::Stable, rules::flake8_bandit::rules::TryExceptPass),
(Flake8Bandit, "112") => (RuleGroup::Stable, rules::flake8_bandit::rules::TryExceptContinue),
(Flake8Bandit, "113") => (RuleGroup::Stable, rules::flake8_bandit::rules::RequestWithoutTimeout),
(Flake8Bandit, "201") => (RuleGroup::Stable, rules::flake8_bandit::rules::FlaskDebugTrue),
(Flake8Bandit, "202") => (RuleGroup::Stable, rules::flake8_bandit::rules::TarfileUnsafeMembers),
(Flake8Bandit, "201") => (RuleGroup::Preview, rules::flake8_bandit::rules::FlaskDebugTrue),
(Flake8Bandit, "202") => (RuleGroup::Preview, rules::flake8_bandit::rules::TarfileUnsafeMembers),
(Flake8Bandit, "301") => (RuleGroup::Stable, rules::flake8_bandit::rules::SuspiciousPickleUsage),
(Flake8Bandit, "302") => (RuleGroup::Stable, rules::flake8_bandit::rules::SuspiciousMarshalUsage),
(Flake8Bandit, "303") => (RuleGroup::Stable, rules::flake8_bandit::rules::SuspiciousInsecureHashUsage),
@@ -660,12 +654,12 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Bandit, "413") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousPycryptoImport),
(Flake8Bandit, "415") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousPyghmiImport),
(Flake8Bandit, "501") => (RuleGroup::Stable, rules::flake8_bandit::rules::RequestWithNoCertValidation),
(Flake8Bandit, "502") => (RuleGroup::Stable, rules::flake8_bandit::rules::SslInsecureVersion),
(Flake8Bandit, "503") => (RuleGroup::Stable, rules::flake8_bandit::rules::SslWithBadDefaults),
(Flake8Bandit, "504") => (RuleGroup::Stable, rules::flake8_bandit::rules::SslWithNoVersion),
(Flake8Bandit, "505") => (RuleGroup::Stable, rules::flake8_bandit::rules::WeakCryptographicKey),
(Flake8Bandit, "502") => (RuleGroup::Preview, rules::flake8_bandit::rules::SslInsecureVersion),
(Flake8Bandit, "503") => (RuleGroup::Preview, rules::flake8_bandit::rules::SslWithBadDefaults),
(Flake8Bandit, "504") => (RuleGroup::Preview, rules::flake8_bandit::rules::SslWithNoVersion),
(Flake8Bandit, "505") => (RuleGroup::Preview, rules::flake8_bandit::rules::WeakCryptographicKey),
(Flake8Bandit, "506") => (RuleGroup::Stable, rules::flake8_bandit::rules::UnsafeYAMLLoad),
(Flake8Bandit, "507") => (RuleGroup::Stable, rules::flake8_bandit::rules::SSHNoHostKeyVerification),
(Flake8Bandit, "507") => (RuleGroup::Preview, rules::flake8_bandit::rules::SSHNoHostKeyVerification),
(Flake8Bandit, "508") => (RuleGroup::Stable, rules::flake8_bandit::rules::SnmpInsecureVersion),
(Flake8Bandit, "509") => (RuleGroup::Stable, rules::flake8_bandit::rules::SnmpWeakCryptography),
(Flake8Bandit, "601") => (RuleGroup::Stable, rules::flake8_bandit::rules::ParamikoCall),
@@ -677,10 +671,10 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Bandit, "607") => (RuleGroup::Stable, rules::flake8_bandit::rules::StartProcessWithPartialPath),
(Flake8Bandit, "608") => (RuleGroup::Stable, rules::flake8_bandit::rules::HardcodedSQLExpression),
(Flake8Bandit, "609") => (RuleGroup::Stable, rules::flake8_bandit::rules::UnixCommandWildcardInjection),
(Flake8Bandit, "611") => (RuleGroup::Stable, rules::flake8_bandit::rules::DjangoRawSql),
(Flake8Bandit, "611") => (RuleGroup::Preview, rules::flake8_bandit::rules::DjangoRawSql),
(Flake8Bandit, "612") => (RuleGroup::Stable, rules::flake8_bandit::rules::LoggingConfigInsecureListen),
(Flake8Bandit, "701") => (RuleGroup::Stable, rules::flake8_bandit::rules::Jinja2AutoescapeFalse),
(Flake8Bandit, "702") => (RuleGroup::Stable, rules::flake8_bandit::rules::MakoTemplates),
(Flake8Bandit, "702") => (RuleGroup::Preview, rules::flake8_bandit::rules::MakoTemplates),
// flake8-boolean-trap
(Flake8BooleanTrap, "001") => (RuleGroup::Stable, rules::flake8_boolean_trap::rules::BooleanTypeHintPositionalArgument),
@@ -711,11 +705,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Datetimez, "012") => (RuleGroup::Stable, rules::flake8_datetimez::rules::CallDateFromtimestamp),
// pygrep-hooks
(PygrepHooks, "001") => (RuleGroup::Removed, rules::pygrep_hooks::rules::Eval),
(PygrepHooks, "002") => (RuleGroup::Removed, rules::pygrep_hooks::rules::DeprecatedLogWarn),
(PygrepHooks, "003") => (RuleGroup::Stable, rules::pygrep_hooks::rules::BlanketTypeIgnore),
(PygrepHooks, "004") => (RuleGroup::Stable, rules::pygrep_hooks::rules::BlanketNOQA),
(PygrepHooks, "005") => (RuleGroup::Stable, rules::pygrep_hooks::rules::InvalidMockAccess),
(PygrepHooks, "001") => (RuleGroup::Deprecated, rules::pygrep_hooks::rules::Eval),
(PygrepHooks, "002") => (RuleGroup::Deprecated, rules::pygrep_hooks::rules::DeprecatedLogWarn),
(PygrepHooks, "003") => (RuleGroup::Deprecated, rules::pygrep_hooks::rules::BlanketTypeIgnore),
(PygrepHooks, "004") => (RuleGroup::Deprecated, rules::pygrep_hooks::rules::BlanketNOQA),
(PygrepHooks, "005") => (RuleGroup::Deprecated, rules::pygrep_hooks::rules::InvalidMockAccess),
// pandas-vet
(PandasVet, "002") => (RuleGroup::Stable, rules::pandas_vet::rules::PandasUseOfInplaceArgument),
@@ -785,7 +779,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Pyi, "053") => (RuleGroup::Stable, rules::flake8_pyi::rules::StringOrBytesTooLong),
(Flake8Pyi, "055") => (RuleGroup::Stable, rules::flake8_pyi::rules::UnnecessaryTypeUnion),
(Flake8Pyi, "056") => (RuleGroup::Stable, rules::flake8_pyi::rules::UnsupportedMethodCallOnAll),
(Flake8Pyi, "058") => (RuleGroup::Stable, rules::flake8_pyi::rules::GeneratorReturnFromIterMethod),
(Flake8Pyi, "058") => (RuleGroup::Preview, rules::flake8_pyi::rules::GeneratorReturnFromIterMethod),
// flake8-pytest-style
(Flake8PytestStyle, "001") => (RuleGroup::Stable, rules::flake8_pytest_style::rules::PytestFixtureIncorrectParenthesesStyle),
@@ -847,13 +841,13 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8TypeChecking, "003") => (RuleGroup::Stable, rules::flake8_type_checking::rules::TypingOnlyStandardLibraryImport),
(Flake8TypeChecking, "004") => (RuleGroup::Stable, rules::flake8_type_checking::rules::RuntimeImportInTypeCheckingBlock),
(Flake8TypeChecking, "005") => (RuleGroup::Stable, rules::flake8_type_checking::rules::EmptyTypeCheckingBlock),
(Flake8TypeChecking, "010") => (RuleGroup::Stable, rules::flake8_type_checking::rules::RuntimeStringUnion),
(Flake8TypeChecking, "006") => (RuleGroup::Preview, rules::flake8_type_checking::rules::RuntimeStringUnion),
// tryceratops
(Tryceratops, "002") => (RuleGroup::Stable, rules::tryceratops::rules::RaiseVanillaClass),
(Tryceratops, "003") => (RuleGroup::Stable, rules::tryceratops::rules::RaiseVanillaArgs),
(Tryceratops, "004") => (RuleGroup::Stable, rules::tryceratops::rules::TypeCheckWithoutTypeError),
(Tryceratops, "200") => (RuleGroup::Removed, rules::tryceratops::rules::ReraiseNoCause),
(Tryceratops, "200") => (RuleGroup::Deprecated, rules::tryceratops::rules::ReraiseNoCause),
(Tryceratops, "201") => (RuleGroup::Stable, rules::tryceratops::rules::VerboseRaise),
(Tryceratops, "300") => (RuleGroup::Stable, rules::tryceratops::rules::TryConsiderElse),
(Tryceratops, "301") => (RuleGroup::Stable, rules::tryceratops::rules::RaiseWithinTry),
@@ -916,7 +910,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Numpy, "001") => (RuleGroup::Stable, rules::numpy::rules::NumpyDeprecatedTypeAlias),
(Numpy, "002") => (RuleGroup::Stable, rules::numpy::rules::NumpyLegacyRandom),
(Numpy, "003") => (RuleGroup::Stable, rules::numpy::rules::NumpyDeprecatedFunction),
(Numpy, "201") => (RuleGroup::Stable, rules::numpy::rules::Numpy2Deprecation),
(Numpy, "201") => (RuleGroup::Preview, rules::numpy::rules::Numpy2Deprecation),
// ruff
(Ruff, "001") => (RuleGroup::Stable, rules::ruff::rules::AmbiguousUnicodeCharacterString),
@@ -928,52 +922,23 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "008") => (RuleGroup::Stable, rules::ruff::rules::MutableDataclassDefault),
(Ruff, "009") => (RuleGroup::Stable, rules::ruff::rules::FunctionCallInDataclassDefaultArgument),
(Ruff, "010") => (RuleGroup::Stable, rules::ruff::rules::ExplicitFStringTypeConversion),
(Ruff, "011") => (RuleGroup::Removed, rules::ruff::rules::RuffStaticKeyDictComprehension),
(Ruff, "012") => (RuleGroup::Stable, rules::ruff::rules::MutableClassDefault),
(Ruff, "013") => (RuleGroup::Stable, rules::ruff::rules::ImplicitOptional),
(Ruff, "015") => (RuleGroup::Stable, rules::ruff::rules::UnnecessaryIterableAllocationForFirstElement),
(Ruff, "016") => (RuleGroup::Stable, rules::ruff::rules::InvalidIndexType),
(Ruff, "017") => (RuleGroup::Stable, rules::ruff::rules::QuadraticListSummation),
(Ruff, "018") => (RuleGroup::Stable, rules::ruff::rules::AssignmentInAssert),
(Ruff, "019") => (RuleGroup::Stable, rules::ruff::rules::UnnecessaryKeyCheck),
(Ruff, "020") => (RuleGroup::Stable, rules::ruff::rules::NeverUnion),
#[allow(deprecated)]
(Ruff, "017") => (RuleGroup::Nursery, rules::ruff::rules::QuadraticListSummation),
(Ruff, "018") => (RuleGroup::Preview, rules::ruff::rules::AssignmentInAssert),
(Ruff, "019") => (RuleGroup::Preview, rules::ruff::rules::UnnecessaryKeyCheck),
(Ruff, "020") => (RuleGroup::Preview, rules::ruff::rules::NeverUnion),
(Ruff, "021") => (RuleGroup::Preview, rules::ruff::rules::ParenthesizeChainedOperators),
(Ruff, "022") => (RuleGroup::Preview, rules::ruff::rules::UnsortedDunderAll),
(Ruff, "023") => (RuleGroup::Preview, rules::ruff::rules::UnsortedDunderSlots),
(Ruff, "024") => (RuleGroup::Preview, rules::ruff::rules::MutableFromkeysValue),
(Ruff, "025") => (RuleGroup::Preview, rules::ruff::rules::UnnecessaryDictComprehensionForIterable),
(Ruff, "026") => (RuleGroup::Preview, rules::ruff::rules::DefaultFactoryKwarg),
(Ruff, "027") => (RuleGroup::Preview, rules::ruff::rules::MissingFStringSyntax),
(Ruff, "100") => (RuleGroup::Stable, rules::ruff::rules::UnusedNOQA),
(Ruff, "200") => (RuleGroup::Stable, rules::ruff::rules::InvalidPyprojectToml),
#[cfg(feature = "test-rules")]
(Ruff, "900") => (RuleGroup::Stable, rules::ruff::rules::StableTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "901") => (RuleGroup::Stable, rules::ruff::rules::StableTestRuleSafeFix),
#[cfg(feature = "test-rules")]
(Ruff, "902") => (RuleGroup::Stable, rules::ruff::rules::StableTestRuleUnsafeFix),
#[cfg(feature = "test-rules")]
(Ruff, "903") => (RuleGroup::Stable, rules::ruff::rules::StableTestRuleDisplayOnlyFix),
#[cfg(feature = "test-rules")]
(Ruff, "911") => (RuleGroup::Preview, rules::ruff::rules::PreviewTestRule),
#[cfg(feature = "test-rules")]
#[allow(deprecated)]
(Ruff, "912") => (RuleGroup::Nursery, rules::ruff::rules::NurseryTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "920") => (RuleGroup::Deprecated, rules::ruff::rules::DeprecatedTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "921") => (RuleGroup::Deprecated, rules::ruff::rules::AnotherDeprecatedTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "930") => (RuleGroup::Removed, rules::ruff::rules::RemovedTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "931") => (RuleGroup::Removed, rules::ruff::rules::AnotherRemovedTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "940") => (RuleGroup::Removed, rules::ruff::rules::RedirectedFromTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "950") => (RuleGroup::Stable, rules::ruff::rules::RedirectedToTestRule),
#[cfg(feature = "test-rules")]
(Ruff, "960") => (RuleGroup::Removed, rules::ruff::rules::RedirectedFromPrefixTestRule),
// flake8-django
(Flake8Django, "001") => (RuleGroup::Stable, rules::flake8_django::rules::DjangoNullableModelStringField),
@@ -1025,7 +990,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
#[allow(deprecated)]
(Refurb, "113") => (RuleGroup::Nursery, rules::refurb::rules::RepeatedAppend),
(Refurb, "118") => (RuleGroup::Preview, rules::refurb::rules::ReimplementedOperator),
(Refurb, "129") => (RuleGroup::Preview, rules::refurb::rules::ReadlinesInFor),
#[allow(deprecated)]
(Refurb, "131") => (RuleGroup::Nursery, rules::refurb::rules::DeleteFullSlice),
#[allow(deprecated)]
@@ -1042,14 +1006,13 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Refurb, "169") => (RuleGroup::Preview, rules::refurb::rules::TypeNoneComparison),
(Refurb, "171") => (RuleGroup::Preview, rules::refurb::rules::SingleItemMembershipTest),
(Refurb, "177") => (RuleGroup::Preview, rules::refurb::rules::ImplicitCwd),
(Refurb, "180") => (RuleGroup::Preview, rules::refurb::rules::MetaClassABCMeta),
(Refurb, "181") => (RuleGroup::Preview, rules::refurb::rules::HashlibDigestHex),
// flake8-logging
(Flake8Logging, "001") => (RuleGroup::Stable, rules::flake8_logging::rules::DirectLoggerInstantiation),
(Flake8Logging, "002") => (RuleGroup::Stable, rules::flake8_logging::rules::InvalidGetLoggerArgument),
(Flake8Logging, "007") => (RuleGroup::Stable, rules::flake8_logging::rules::ExceptionWithoutExcInfo),
(Flake8Logging, "009") => (RuleGroup::Stable, rules::flake8_logging::rules::UndocumentedWarn),
(Flake8Logging, "001") => (RuleGroup::Preview, rules::flake8_logging::rules::DirectLoggerInstantiation),
(Flake8Logging, "002") => (RuleGroup::Preview, rules::flake8_logging::rules::InvalidGetLoggerArgument),
(Flake8Logging, "007") => (RuleGroup::Preview, rules::flake8_logging::rules::ExceptionWithoutExcInfo),
(Flake8Logging, "009") => (RuleGroup::Preview, rules::flake8_logging::rules::UndocumentedWarn),
_ => return None,
})


@@ -5,7 +5,7 @@ use ruff_python_ast::docstrings::{leading_space, leading_words};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use strum_macros::EnumIter;
use ruff_source_file::{Line, NewlineWithTrailingNewline, UniversalNewlines};
use ruff_source_file::{Line, UniversalNewlineIterator, UniversalNewlines};
use crate::docstrings::styles::SectionStyle;
use crate::docstrings::{Docstring, DocstringBody};
@@ -130,34 +130,6 @@ impl SectionKind {
Self::Yields => "Yields",
}
}
/// Returns `true` if a section can contain subsections, as in:
/// ```python
/// Yields
/// ------
/// int
/// Description of the anonymous integer return value.
/// ```
///
/// For NumPy, see: <https://numpydoc.readthedocs.io/en/latest/format.html>
///
/// For Google, see: <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>
pub(crate) fn has_subsections(self) -> bool {
matches!(
self,
Self::Args
| Self::Arguments
| Self::OtherArgs
| Self::OtherParameters
| Self::OtherParams
| Self::Parameters
| Self::Raises
| Self::Returns
| Self::SeeAlso
| Self::Warns
| Self::Yields
)
}
}
pub(crate) struct SectionContexts<'a> {
@@ -384,16 +356,13 @@ impl<'a> SectionContext<'a> {
pub(crate) fn previous_line(&self) -> Option<&'a str> {
let previous =
&self.docstring_body.as_str()[TextRange::up_to(self.range_relative().start())];
previous
.universal_newlines()
.last()
.map(|line| line.as_str())
previous.universal_newlines().last().map(|l| l.as_str())
}
/// Returns the lines belonging to this section after the summary line.
pub(crate) fn following_lines(&self) -> NewlineWithTrailingNewline<'a> {
pub(crate) fn following_lines(&self) -> UniversalNewlineIterator<'a> {
let lines = self.following_lines_str();
NewlineWithTrailingNewline::with_offset(lines, self.offset() + self.data.summary_full_end)
UniversalNewlineIterator::with_offset(lines, self.offset() + self.data.summary_full_end)
}
fn following_lines_str(&self) -> &'a str {
@@ -490,54 +459,13 @@ fn is_docstring_section(
// args: The arguments to the function.
// """
// ```
// Or `parameters` in:
// ```python
// def func(parameters: tuple[int]):
// """Toggle the gizmo.
//
// Parameters:
// -----
// parameters:
// The arguments to the function.
// """
// ```
// However, if the header is an _exact_ match (like `Returns:`, as opposed to `returns:`), then
// continue to treat it as a section header.
if section_kind.has_subsections() {
if let Some(previous_section) = previous_section {
if let Some(previous_section) = previous_section {
if previous_section.indent_size < indent_size {
let verbatim = &line[TextRange::at(indent_size, section_name_size)];
// If the section is more deeply indented, assume it's a subsection, as in:
// ```python
// def func(args: tuple[int]):
// """Toggle the gizmo.
//
// Args:
// args: The arguments to the function.
// """
// ```
if previous_section.indent_size < indent_size {
if section_kind.as_str() != verbatim {
return false;
}
}
// If the section isn't underlined, and isn't title-cased, assume it's a subsection,
// as in:
// ```python
// def func(parameters: tuple[int]):
// """Toggle the gizmo.
//
// Parameters:
// -----
// parameters:
// The arguments to the function.
// """
// ```
if !next_line_is_underline && verbatim.chars().next().is_some_and(char::is_lowercase) {
if section_kind.as_str() != verbatim {
return false;
}
if section_kind.as_str() != verbatim {
return false;
}
}
}


@@ -8,7 +8,6 @@ use ruff_python_ast::{self as ast, Arguments, ExceptHandler, Stmt};
use ruff_python_ast::{AnyNodeRef, ArgOrKeyword};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_trivia::textwrap::dedent_to;
use ruff_python_trivia::{
has_leading_content, is_python_whitespace, CommentRanges, PythonWhitespace, SimpleTokenKind,
SimpleTokenizer,
@@ -16,9 +15,7 @@ use ruff_python_trivia::{
use ruff_source_file::{Locator, NewlineWithTrailingNewline, UniversalNewlines};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use crate::cst::matchers::{match_function_def, match_indented_block, match_statement};
use crate::fix::codemods;
use crate::fix::codemods::CodegenStylist;
use crate::line_width::{IndentWidth, LineLength, LineWidthBuilder};
/// Return the `Fix` to use when deleting a `Stmt`.
@@ -169,50 +166,6 @@ pub(crate) fn add_argument(
}
}
/// Safely adjust the indentation of the indented block at [`TextRange`].
///
/// The [`TextRange`] is assumed to represent an entire indented block, including the leading
/// indentation of that block. For example, to dedent the body here:
/// ```python
/// if True:
/// print("Hello, world!")
/// ```
///
/// The range would be the entirety of ` print("Hello, world!")`.
pub(crate) fn adjust_indentation(
range: TextRange,
indentation: &str,
locator: &Locator,
indexer: &Indexer,
stylist: &Stylist,
) -> Result<String> {
// If the range includes a multi-line string, use LibCST to ensure that we don't adjust the
// whitespace _within_ the string.
if indexer.multiline_ranges().intersects(range) || indexer.fstring_ranges().intersects(range) {
let contents = locator.slice(range);
let module_text = format!("def f():{}{contents}", stylist.line_ending().as_str());
let mut tree = match_statement(&module_text)?;
let embedding = match_function_def(&mut tree)?;
let indented_block = match_indented_block(&mut embedding.body)?;
indented_block.indent = Some(indentation);
let module_text = indented_block.codegen_stylist(stylist);
let module_text = module_text
.strip_prefix(stylist.line_ending().as_str())
.unwrap()
.to_string();
Ok(module_text)
} else {
// Otherwise, we can do a simple adjustment ourselves.
let contents = locator.slice(range);
Ok(dedent_to(contents, indentation))
}
}
/// Determine if a vector contains only one, specific element.
fn is_only<T: PartialEq>(vec: &[T], value: &T) -> bool {
vec.len() == 1 && vec[0] == *value


@@ -34,7 +34,7 @@ pub mod packaging;
pub mod pyproject_toml;
pub mod registry;
mod renamer;
mod rule_redirects;
pub mod rule_redirects;
pub mod rule_selector;
pub mod rules;
pub mod settings;


@@ -33,8 +33,6 @@ use crate::message::Message;
use crate::noqa::add_noqa;
use crate::registry::{AsRule, Rule, RuleSet};
use crate::rules::pycodestyle;
#[cfg(feature = "test-rules")]
use crate::rules::ruff::rules::test_rules::{self, TestRule, TEST_RULES};
use crate::settings::types::UnsafeFixes;
use crate::settings::{flags, LinterSettings};
use crate::source_kind::SourceKind;
@@ -109,7 +107,6 @@ pub fn check_path(
path,
locator,
indexer,
stylist,
settings,
source_type,
source_kind.as_ipy_notebook().map(Notebook::cell_offsets),
@@ -217,53 +214,6 @@ pub fn check_path(
));
}
// Raise violations for internal test rules
#[cfg(feature = "test-rules")]
{
for test_rule in TEST_RULES {
if !settings.rules.enabled(*test_rule) {
continue;
}
let diagnostic = match test_rule {
Rule::StableTestRule => test_rules::StableTestRule::diagnostic(locator, indexer),
Rule::StableTestRuleSafeFix => {
test_rules::StableTestRuleSafeFix::diagnostic(locator, indexer)
}
Rule::StableTestRuleUnsafeFix => {
test_rules::StableTestRuleUnsafeFix::diagnostic(locator, indexer)
}
Rule::StableTestRuleDisplayOnlyFix => {
test_rules::StableTestRuleDisplayOnlyFix::diagnostic(locator, indexer)
}
Rule::NurseryTestRule => test_rules::NurseryTestRule::diagnostic(locator, indexer),
Rule::PreviewTestRule => test_rules::PreviewTestRule::diagnostic(locator, indexer),
Rule::DeprecatedTestRule => {
test_rules::DeprecatedTestRule::diagnostic(locator, indexer)
}
Rule::AnotherDeprecatedTestRule => {
test_rules::AnotherDeprecatedTestRule::diagnostic(locator, indexer)
}
Rule::RemovedTestRule => test_rules::RemovedTestRule::diagnostic(locator, indexer),
Rule::AnotherRemovedTestRule => {
test_rules::AnotherRemovedTestRule::diagnostic(locator, indexer)
}
Rule::RedirectedToTestRule => {
test_rules::RedirectedToTestRule::diagnostic(locator, indexer)
}
Rule::RedirectedFromTestRule => {
test_rules::RedirectedFromTestRule::diagnostic(locator, indexer)
}
Rule::RedirectedFromPrefixTestRule => {
test_rules::RedirectedFromPrefixTestRule::diagnostic(locator, indexer)
}
_ => unreachable!("All test rules must have an implementation"),
};
if let Some(diagnostic) = diagnostic {
diagnostics.push(diagnostic);
}
}
}
// Ignore diagnostics based on per-file-ignores.
let per_file_ignores = if !diagnostics.is_empty() && !settings.per_file_ignores.is_empty() {
fs::ignores_from_path(path, &settings.per_file_ignores)
@@ -589,7 +539,7 @@ pub fn lint_fix<'a>(
// Increment the iteration count.
iterations += 1;
// Re-run the linter pass (by avoiding the return).
// Re-run the linter pass (by avoiding the break).
continue;
}


@@ -8,7 +8,6 @@ use fern;
use log::Level;
use once_cell::sync::Lazy;
use ruff_python_parser::{ParseError, ParseErrorType};
use rustc_hash::FxHashSet;
use ruff_source_file::{LineIndex, OneIndexed, SourceCode, SourceLocation};
@@ -16,7 +15,7 @@ use crate::fs;
use crate::source_kind::SourceKind;
use ruff_notebook::Notebook;
pub static IDENTIFIERS: Lazy<Mutex<Vec<&'static str>>> = Lazy::new(Mutex::default);
pub static WARNINGS: Lazy<Mutex<Vec<&'static str>>> = Lazy::new(Mutex::default);
/// Warn a user once, with uniqueness determined by the given ID.
#[macro_export]
@@ -25,7 +24,7 @@ macro_rules! warn_user_once_by_id {
use colored::Colorize;
use log::warn;
if let Ok(mut states) = $crate::logging::IDENTIFIERS.lock() {
if let Ok(mut states) = $crate::logging::WARNINGS.lock() {
if !states.contains(&$id) {
let message = format!("{}", format_args!($($arg)*));
warn!("{}", message.bold());
@@ -35,26 +34,6 @@ macro_rules! warn_user_once_by_id {
};
}
pub static MESSAGES: Lazy<Mutex<FxHashSet<String>>> = Lazy::new(Mutex::default);
/// Warn a user once, if warnings are enabled, with uniqueness determined by the content of the
/// message.
#[macro_export]
macro_rules! warn_user_once_by_message {
($($arg:tt)*) => {
use colored::Colorize;
use log::warn;
if let Ok(mut states) = $crate::logging::MESSAGES.lock() {
let message = format!("{}", format_args!($($arg)*));
if !states.contains(&message) {
warn!("{}", message.bold());
states.insert(message);
}
}
};
}
/// Warn a user once, with uniqueness determined by the calling location itself.
#[macro_export]
macro_rules! warn_user_once {


@@ -31,7 +31,7 @@ pub enum FromCodeError {
Unknown,
}
#[derive(EnumIter, Debug, PartialEq, Eq, Clone, Hash, RuleNamespace)]
#[derive(EnumIter, Debug, PartialEq, Eq, Clone, Hash, Copy, RuleNamespace)]
pub enum Linter {
/// [Pyflakes](https://pypi.org/project/pyflakes/)
#[prefix = "F"]
@@ -264,11 +264,6 @@ impl Rule {
| Rule::BadQuotesMultilineString
| Rule::BlanketNOQA
| Rule::BlanketTypeIgnore
| Rule::BlankLineAfterDecorator
| Rule::BlankLineBetweenMethods
| Rule::BlankLinesAfterFunctionOrClass
| Rule::BlankLinesBeforeNestedDefinition
| Rule::BlankLinesTopLevel
| Rule::CommentedOutCode
| Rule::EmptyComment
| Rule::ExtraneousParentheses
@@ -301,7 +296,6 @@ impl Rule {
| Rule::ShebangNotFirstLine
| Rule::SingleLineImplicitStringConcatenation
| Rule::TabIndentation
| Rule::TooManyBlankLines
| Rule::TrailingCommaOnBareTuple
| Rule::TypeCommentInStub
| Rule::UselessSemicolon


@@ -248,7 +248,6 @@ impl Renamer {
| BindingKind::Assignment
| BindingKind::BoundException
| BindingKind::LoopVar
| BindingKind::ComprehensionVar
| BindingKind::WithItemVar
| BindingKind::Global
| BindingKind::Nonlocal(_)


@@ -3,7 +3,7 @@ use std::collections::HashMap;
use once_cell::sync::Lazy;
/// Returns the redirect target for the given code.
pub(crate) fn get_redirect_target(code: &str) -> Option<&'static str> {
pub fn get_redirect_target(code: &str) -> Option<&'static str> {
REDIRECTS.get(code).copied()
}
@@ -99,15 +99,5 @@ static REDIRECTS: Lazy<HashMap<&'static str, &'static str>> = Lazy::new(|| {
("T003", "FIX003"),
("T004", "FIX004"),
("RUF011", "B035"),
("TCH006", "TCH010"),
("TRY200", "B904"),
("PGH001", "S307"),
("PGH002", "G010"),
// Test redirect by exact code
#[cfg(feature = "test-rules")]
("RUF940", "RUF950"),
// Test redirect by prefix
#[cfg(feature = "test-rules")]
("RUF96", "RUF95"),
])
});


@@ -8,7 +8,7 @@ use strum_macros::EnumIter;
use crate::codes::RuleCodePrefix;
use crate::codes::RuleIter;
use crate::registry::{Linter, Rule, RuleNamespace};
use crate::rule_redirects::get_redirect;
use crate::rule_redirects::{get_redirect, get_redirect_target};
use crate::settings::types::PreviewMode;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
@@ -44,25 +44,10 @@ impl From<Linter> for RuleSelector {
}
}
impl Ord for RuleSelector {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
// TODO(zanieb): We ought to put "ALL" and "Linter" selectors
// above those that are rule specific but it's not critical for now
self.prefix_and_code().cmp(&other.prefix_and_code())
}
}
impl PartialOrd for RuleSelector {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl FromStr for RuleSelector {
type Err = ParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
// **Changes should be reflected in `parse_no_redirect` as well**
match s {
"ALL" => Ok(Self::All),
#[allow(deprecated)]
@@ -82,6 +67,7 @@ impl FromStr for RuleSelector {
return Ok(Self::Linter(linter));
}
// Does the selector select a single rule?
let prefix = RuleCodePrefix::parse(&linter, code)
.map_err(|_| ParseError::Unknown(s.to_string()))?;
@@ -140,6 +126,41 @@ impl RuleSelector {
RuleSelector::Linter(l) => (l.common_prefix(), ""),
}
}
/// Parse [`RuleSelector`] from a string; but do not follow redirects.
pub fn from_str_no_redirect(s: &str) -> Result<Self, ParseError> {
match s {
"ALL" => Ok(Self::All),
#[allow(deprecated)]
"NURSERY" => Ok(Self::Nursery),
"C" => Ok(Self::C),
"T" => Ok(Self::T),
_ => {
let (linter, code) =
Linter::parse_code(s).ok_or_else(|| ParseError::Unknown(s.to_string()))?;
if code.is_empty() {
return Ok(Self::Linter(linter));
}
// Does the selector select a single rule?
let prefix = RuleCodePrefix::parse(&linter, code)
.map_err(|_| ParseError::Unknown(s.to_string()))?;
if is_single_rule_selector(&prefix) {
Ok(Self::Rule {
prefix,
redirected_from: None,
})
} else {
Ok(Self::Prefix {
prefix,
redirected_from: None,
})
}
}
}
}
}
impl Serialize for RuleSelector {
@@ -232,6 +253,28 @@ impl RuleSelector {
})
}
/// Returns the selector string that this selector is a redirect of (if any)
///
/// # Panics
///
/// If a redirect is not a valid rule selector
pub fn redirected_from(&self) -> Option<RuleSelector> {
match self {
Self::Rule {
prefix: _,
redirected_from,
}
| Self::Prefix {
prefix: _,
redirected_from,
} => redirected_from.map(|literal| {
RuleSelector::from_str_no_redirect(literal)
.expect("Redirects should always be valid selectors")
}),
_ => None,
}
}
/// Returns true if this selector is exact i.e. selects a single rule by code
pub fn is_exact(&self) -> bool {
matches!(self, Self::Rule { .. })
@@ -268,6 +311,8 @@ pub struct PreviewOptions {
#[cfg(feature = "schemars")]
mod schema {
use std::str::FromStr;
use itertools::Itertools;
use schemars::JsonSchema;
use schemars::_serde_json::Value;
@@ -314,23 +359,13 @@ mod schema {
.filter(|p| {
// Exclude any prefixes where all of the rules are removed
if let Ok(Self::Rule { prefix, .. } | Self::Prefix { prefix, .. }) =
RuleSelector::parse_no_redirect(p)
RuleSelector::from_str(p)
{
!prefix.rules().all(|rule| rule.is_removed())
} else {
true
}
})
.filter(|_rule| {
// Filter out all test-only rules
#[cfg(feature = "test-rules")]
#[allow(clippy::used_underscore_binding)]
if _rule.starts_with("RUF9") {
return false;
}
true
})
.sorted()
.map(Value::String)
.collect(),
@@ -363,41 +398,6 @@ impl RuleSelector {
}
}
}
/// Parse [`RuleSelector`] from a string; but do not follow redirects.
pub fn parse_no_redirect(s: &str) -> Result<Self, ParseError> {
// **Changes should be reflected in `from_str` as well**
match s {
"ALL" => Ok(Self::All),
#[allow(deprecated)]
"NURSERY" => Ok(Self::Nursery),
"C" => Ok(Self::C),
"T" => Ok(Self::T),
_ => {
let (linter, code) =
Linter::parse_code(s).ok_or_else(|| ParseError::Unknown(s.to_string()))?;
if code.is_empty() {
return Ok(Self::Linter(linter));
}
let prefix = RuleCodePrefix::parse(&linter, code)
.map_err(|_| ParseError::Unknown(s.to_string()))?;
if is_single_rule_selector(&prefix) {
Ok(Self::Rule {
prefix,
redirected_from: None,
})
} else {
Ok(Self::Prefix {
prefix,
redirected_from: None,
})
}
}
}
}
}
#[derive(EnumIter, PartialEq, Eq, PartialOrd, Ord, Copy, Clone, Debug)]


@@ -1,16 +1,14 @@
/// See: [eradicate.py](https://github.com/myint/eradicate/blob/98f199940979c94447a461d50d27862b118b282d/eradicate.py)
use aho_corasick::AhoCorasick;
use itertools::Itertools;
use once_cell::sync::Lazy;
use regex::{Regex, RegexSet};
use ruff_python_parser::parse_suite;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::TextSize;
static CODE_INDICATORS: Lazy<AhoCorasick> = Lazy::new(|| {
AhoCorasick::new([
"(", ")", "[", "]", "{", "}", ":", "=", "%", "return", "break", "continue", "import",
"(", ")", "[", "]", "{", "}", ":", "=", "%", "print", "return", "break", "continue",
"import",
])
.unwrap()
});
@@ -46,14 +44,6 @@ pub(crate) fn comment_contains_code(line: &str, task_tags: &[String]) -> bool {
return false;
}
// Fast path: if the comment contains consecutive identifiers, we know it won't parse.
let tokenizer = SimpleTokenizer::starts_at(TextSize::default(), line).skip_trivia();
if tokenizer.tuple_windows().any(|(first, second)| {
first.kind == SimpleTokenKind::Name && second.kind == SimpleTokenKind::Name
}) {
return false;
}
// Ignore task tag comments (e.g., "# TODO(tom): Refactor").
if line
.split(&[' ', ':', '('])
@@ -133,10 +123,9 @@ mod tests {
#[test]
fn comment_contains_code_with_print() {
assert!(comment_contains_code("#print", &[]));
assert!(comment_contains_code("#print(1)", &[]));
assert!(!comment_contains_code("#print", &[]));
assert!(!comment_contains_code("#print 1", &[]));
assert!(!comment_contains_code("#to print", &[]));
}


@@ -1,7 +1,7 @@
use ruff_python_ast::Expr;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Expr;
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -47,10 +47,6 @@ impl Violation for SixPY3 {
/// YTT202
pub(crate) fn name_or_attribute(checker: &mut Checker, expr: &Expr) {
if !checker.semantic().seen_module(Modules::SIX) {
return;
}
if checker
.semantic()
.resolve_call_path(expr)


@@ -3,7 +3,6 @@ use ruff_python_ast::ExprCall;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::call_path::CallPath;
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -43,19 +42,17 @@ impl Violation for BlockingOsCallInAsyncFunction {
/// ASYNC102
pub(crate) fn blocking_os_call(checker: &mut Checker, call: &ExprCall) {
if checker.semantic().seen_module(Modules::OS) {
if checker.semantic().in_async_context() {
if checker
.semantic()
.resolve_call_path(call.func.as_ref())
.as_ref()
.is_some_and(is_unsafe_os_method)
{
checker.diagnostics.push(Diagnostic::new(
BlockingOsCallInAsyncFunction,
call.func.range(),
));
}
if checker.semantic().in_async_context() {
if checker
.semantic()
.resolve_call_path(call.func.as_ref())
.as_ref()
.is_some_and(is_unsafe_os_method)
{
checker.diagnostics.push(Diagnostic::new(
BlockingOsCallInAsyncFunction,
call.func.range(),
));
}
}
}

Some files were not shown because too many files have changed in this diff.