Compare commits

...

99 Commits

Author SHA1 Message Date
Charlie Marsh
7617519b4f Skip python -m ruff --help on linux-cross 2023-05-12 15:46:42 -04:00
Charlie Marsh
bc7ddd8f3a Temporarily create release on-tag 2023-05-12 15:31:48 -04:00
Charlie Marsh
e6bb5cddcf Add Astral badge to the repo (#4401) 2023-05-12 19:27:38 +00:00
Charlie Marsh
dcedd5cd9d Bump version to 0.0.267 (#4400) 2023-05-12 19:04:56 +00:00
konstin
606b6ac3df Workaround for maturin bug (#4399) 2023-05-12 18:55:55 +00:00
Zanie Adkins
ebda9b31d9 Update CI to test python -m ruff on release (#4397) 2023-05-12 18:47:30 +00:00
Lotem
52f6663089 Implement RUF010 to detect explicit type conversions within f-strings (#4387) 2023-05-12 18:12:58 +00:00
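As an illustration of the pattern this rule targets (example mine, not taken from the changeset), RUF010 flags explicit `str()`/`repr()`/`ascii()` calls inside f-string replacement fields in favor of the equivalent conversion flags:

```python
x = 42

# Flagged by RUF010: explicit conversion call inside an f-string.
before = f"value is {str(x)}"

# Preferred: the equivalent conversion flag.
after = f"value is {x!s}"

assert before == after == "value is 42"
```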
Charlie Marsh
a6176d2c70 Add PyTorch to user list (#4393) 2023-05-12 18:02:13 +00:00
OMEGA_RAZER
1d165f7e9d Add linting badge that can be used to display usage (#3938) 2023-05-12 17:58:29 +00:00
Charlie Marsh
e96092291d Update Ruff badge (#4392) 2023-05-12 13:42:33 -04:00
Charlie Marsh
67076b2dcb Bump version to 0.0.266 (#4391) 2023-05-12 13:11:03 -04:00
Charlie Marsh
7e3ba7f32a Use bitflags for tracking Context flags (#4381) 2023-05-12 16:07:26 +00:00
konstin
09dbd2029c Update maturin to maturin 0.15 (#3999)
* Update maturin to maturin>=0.14.17

This allows removing the deprecated `[package.metadata.maturin]`

* Update to maturin 0.15
2023-05-12 15:43:06 +02:00
Jonathan Plasse
1380bd94da Expose more fields in rule explanation (#4367) 2023-05-11 19:22:23 -04:00
Jonathan Plasse
c10a4535b9 Disallow unreachable_pub (#4314) 2023-05-11 18:00:00 -04:00
Charlie Marsh
97802e7466 Ignore some methods on list in flake8-boolean-trap (#4385) 2023-05-11 21:54:59 +00:00
Jonathan Plasse
4fd4a65718 Isolate show statistic integration test (#4383) 2023-05-11 21:42:34 +00:00
Charlie Marsh
d78c614764 Remove special-casing for flake8-builtins rules (#4380) 2023-05-11 16:39:28 -04:00
Charlie Marsh
3f3dd7af99 Move some recursion out of the pre-visit statement phase (#4379) 2023-05-11 15:46:25 -04:00
Charlie Marsh
871b92a385 Avoid re-using imports beyond current edit site (#4378) 2023-05-11 14:47:18 -04:00
Charlie Marsh
9158f13ee6 Respect __all__ imports when determining definition visibility (#4357) 2023-05-11 17:43:51 +00:00
Charlie Marsh
72e0ffc1ac Delay computation of Definition visibility (#4339) 2023-05-11 17:14:29 +00:00
Charlie Marsh
ffcf0618c7 Avoid underflow in expected-special-method-signature (#4377) 2023-05-11 12:47:47 -04:00
Micha Reiser
1ccef5150d Remove lifetime from FormatContext (#4376) 2023-05-11 15:43:42 +00:00
konstin
6a52577630 Ecosystem CI: Allow storing checkouts locally (#4192)
* Ecosystem CI: Allow storing checkouts locally

This adds a `--checkouts` option to (re)use a local directory instead of checking out into a tempdir

* Fix missing path conversion
2023-05-11 17:36:44 +02:00
konstin
3c2f41b615 Also show rule codes in autofix errors in production code (#4327)
I needed those changes for #4326
2023-05-11 17:36:03 +02:00
Calum Young
b76b4b6016 List rule changes in ecosystem (#4371)
* Count changes for each rule

* Handle case where rule matches were found in a line

* List and sort by changes

* Remove detail from rule changes

* Add comment about leading :

* Only print rule changes if rule changes are present

* Use re.search and match group

* Remove dict().items()

* Use match group to extract rule code
2023-05-11 16:33:15 +02:00
Jeong, YunWon
bbadbb5de5 Refactor code to use the new RustPython is method (#4369) 2023-05-11 16:16:36 +02:00
Calum Young
ba6370e5d0 Move black excludes from pre-commit config to pyproject.toml (#4370) 2023-05-11 09:00:05 -04:00
Jeong, YunWon
be6e00ef6e Re-integrate RustPython parser repository (#4359)
Co-authored-by: Micha Reiser <micha@reiser.io>
2023-05-11 07:47:17 +00:00
Charlie Marsh
865205d992 Implement pygrep-hook's Mock-mistake diagnostic (#4366) 2023-05-11 03:26:29 +00:00
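A minimal sketch (mine, not from the PR) of the mistake this pygrep-hooks-derived diagnostic catches: accessing a `Mock` assertion method without calling it, which silently checks nothing.

```python
from unittest import mock

m = mock.Mock()
m("hello")

# The mistake: this is a bare attribute access. The method object is always
# truthy, so the line asserts nothing -- exactly what the diagnostic flags.
m.assert_called  # would be flagged

# What was intended: actually invoke the assertion.
m.assert_called_with("hello")
```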
Charlie Marsh
572adf7994 Use target name in hardcoded-password diagnostics (#4365) 2023-05-11 02:54:27 +00:00
Charlie Marsh
3b26bf84f5 Avoid debug panic with empty indent replacement (#4364) 2023-05-11 02:42:18 +00:00
Charlie Marsh
f4f88308ae Remove Copy and destructure Snapshot (#4358) 2023-05-10 19:46:18 +00:00
Charlie Marsh
ea3d3a655d Add a Snapshot abstraction for deferring and restoring visitor context (#4353) 2023-05-10 16:50:47 +00:00
Charlie Marsh
fd34797d0f Add a specialized StatementVisitor (#4349) 2023-05-10 12:42:20 -04:00
dependabot[bot]
6532455672 Bump json5 from 1.0.1 to 1.0.2 in /playground (#4354) 2023-05-10 16:34:37 +00:00
Charlie Marsh
257c571c43 Remove pub from some Checker fields (#4352) 2023-05-10 12:33:47 -04:00
Charlie Marsh
ccdee55e6e Tweak capitalization of B021 message (#4350) 2023-05-10 15:59:00 +00:00
Charlie Marsh
6d6d7abf70 Use short-import for HashMap (#4351) 2023-05-10 15:46:55 +00:00
konstin
0096938789 Optionally show fixes when using --features ecosystem_ci with cargo and --show-fixes at runtime (#4191)
* Generate fixes when using --show-fixes

Example command: `cargo run --bin ruff -- --no-cache --select F401
--show-source --show-fixes
crates/ruff/resources/test/fixtures/pyflakes/F401_9.py`

Before, `--show-fixes` was ignored:

```
crates/ruff/resources/test/fixtures/pyflakes/F401_9.py:4:22: F401 [*] `foo.baz` imported but unused
  |
4 | __all__ = ("bar",)
5 | from foo import bar, baz
  |                      ^^^ F401
  |
  = help: Remove unused import: `foo.baz`

Found 1 error.
[*] 1 potentially fixable with the --fix option.
```

After:

```
crates/ruff/resources/test/fixtures/pyflakes/F401_9.py:4:22: F401 [*] `foo.baz` imported but unused
  |
4 | __all__ = ("bar",)
5 | from foo import bar, baz
  |                      ^^^ F401
  |
  = help: Remove unused import: `foo.baz`

ℹ Suggested fix
1 1 | """Test: late-binding of `__all__`."""
2 2 |
3 3 | __all__ = ("bar",)
4   |-from foo import bar, baz
  4 |+from foo import bar

Found 1 error.
[*] 1 potentially fixable with the --fix option.
```

* Add `--format ecosystem-ci`

* cargo dev generate-all

* Put behind cargo feature

* Regenerate docs

* Don't test ecosystem_ci feature on CI

* Use top level flag instead

* Fix

* Simplify code based on #4191

* Remove old TODO comment
2023-05-10 17:45:57 +02:00
Micha Reiser
853d8354cb JSON Emitter: Use one indexed column numbers for edits (#4007)
I noticed in the byte-offsets refactor that the `JsonEmitter` uses one-indexed column numbers for the diagnostic start and end locations but not for `edits`.

This PR changes the `JsonEmitter` to emit one-indexed column numbers for edits, as we already do for `Message::location` and `Message::end_location`.

## Open questions

~We'll need to change the LSP to subtract 1 from the columns in `_parse_fix`~

6e44fadf8a/ruff_lsp/server.py (L129-L150)

~@charliermarsh is there a way to get the ruff version in that method? If not, then I recommend adding a `version` that we increment whenever we make incompatible changes to the serialized message. We can then use it in the LSP to correctly compute the column offset.~

I'll use the presence of the `Fix::applicability` field to detect if the Ruff version uses one or zero-based column indices.

See https://github.com/charliermarsh/ruff-lsp/pull/103
2023-05-10 17:21:02 +02:00
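To illustrate the convention change described above (the field names here are hypothetical simplifications, not the actual JSON schema), an edit's zero-indexed column shifts to the one-indexed convention already used for diagnostic locations:

```python
# Hypothetical, simplified edit record; only the column convention is the point.
edit = {"row": 4, "column": 21}  # zero-indexed column, as emitted previously

def to_one_indexed(location: dict) -> dict:
    # Rows were already one-indexed; only the column needs the +1 shift.
    return {**location, "column": location["column"] + 1}

assert to_one_indexed(edit) == {"row": 4, "column": 22}
```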
Charlie Marsh
5f64d2346f Enforce max-doc-length for multi-line docstrings (#4347) 2023-05-10 11:06:07 -04:00
Micha Reiser
ddbe5a1243 Add Fix::applicability to JSON output (#4341) 2023-05-10 14:34:53 +00:00
Evan Rittenhouse
04097d194c Fix false positives in PD002 (#4337) 2023-05-10 16:04:28 +02:00
Micha Reiser
a2b8487ae3 Remove functor from autofix title (#4245) 2023-05-10 07:21:15 +00:00
Micha Reiser
8969ad5879 Always generate fixes (#4239) 2023-05-10 07:06:14 +00:00
Micha Reiser
bfa1c28c00 Use non-empty ranges for logical-lines diagnostics (#4133) 2023-05-10 06:44:33 +00:00
Zanie Adkins
cf7aa26aa4 Add Applicability to Fix (#4303)
Co-authored-by: Micha Reiser <micha@reiser.io>
2023-05-10 08:42:46 +02:00
Micha Reiser
d66ce76691 Truncate SyntaxErrors before newline character (#4124) 2023-05-10 08:37:57 +02:00
Tom Kuson
b8bb9e8b92 Add docs for flake8-simplify rules (#4334) 2023-05-10 03:03:24 +00:00
Charlie Marsh
5e46dcbf21 Handle .encode calls on parenthesized expressions (#4338) 2023-05-09 22:57:10 -04:00
trag1c
045449ab12 Improved E713 & E714 code examples (#4336) 2023-05-09 22:27:44 -04:00
Tom Kuson
d5ff8d7c43 Add flake8-pie documentation (#4332) 2023-05-09 22:11:30 +00:00
Charlie Marsh
d92fb11e80 Include positional- and keyword-only arguments in too-many-arguments (#4329) 2023-05-09 18:05:53 -04:00
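For illustration (example mine, assuming the default limit of five arguments), a signature like the following now trips the rule because positional-only and keyword-only parameters count toward the total:

```python
# Six parameters in total: two positional-only, two regular, two keyword-only.
# Under the change above, all six count toward the too-many-arguments limit.
def configure(host, port, /, user, password, *, timeout, retries):
    return (host, port, user, password, timeout, retries)

print(configure("db", 5432, "admin", "secret", timeout=30, retries=3))
```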
Charlie Marsh
3d947196f8 Make violation struct fields private (#4331) 2023-05-09 18:00:20 -04:00
Charlie Marsh
e846f2688b Avoid SIM105 autofixes that would remove comments (#4330) 2023-05-09 21:30:56 +00:00
Charlie Marsh
7b91a162c6 Remove current_ prefix from some Context methods (#4325) 2023-05-09 19:40:12 +00:00
Charlie Marsh
8c2cfade90 Move show_source onto CLI settings group (#4317) 2023-05-09 17:26:25 +00:00
Charlie Marsh
a435c0df4b Remove deprecated update-check setting (#4313) 2023-05-09 13:10:02 -04:00
Aaron Cunningham
48e1852893 Revert the B027 autofix logic (#4310) 2023-05-09 13:08:20 -04:00
Calum Young
03f141f53d Check that all rules have descriptions (#4315) 2023-05-09 16:53:23 +00:00
Calum Young
8dea47afc1 Update mkdocs unformatted example error message (#4312) 2023-05-09 12:36:13 -04:00
Charlie Marsh
d3b71f1e04 Run autofix on initial watcher pass (#4311) 2023-05-09 12:35:32 -04:00
Mikko Leppänen
04e8e74499 Feat: detect changes also in configuration files (#4169) 2023-05-09 16:22:52 +00:00
konstin
318653c427 Write diagnostic name when failing to create fix (#4309) 2023-05-09 17:46:40 +02:00
Marti Raudsepp
f08fd5cbf0 Tweak package metadata URLs, add changelog and docs (#4304) 2023-05-09 11:32:47 -04:00
Micha Reiser
99a755f936 Add schemars feature (#4305) 2023-05-09 16:15:18 +02:00
Aurelio Jargas
e7dfb35778 UP011: Fix typo in rule description (#4306) 2023-05-09 08:49:15 -04:00
Dhruv Manilawala
085fd37209 Preserve whitespace around ListComp brackets in C419 (#4099) 2023-05-09 08:43:05 +02:00
Charlie Marsh
83536cf87b Ignore TRY301 exceptions without except handlers (#4301) 2023-05-09 03:38:02 +00:00
Charlie Marsh
9366eb919d Specify exact command in incorrect parentheses suggestion (#4300) 2023-05-09 02:21:54 +00:00
Charlie Marsh
8be51942dd Use ruff_python_semantic abstract utility in flake8-pytest-style (#4299) 2023-05-08 22:12:28 -04:00
Charlie Marsh
d365dab904 Include static and class methods in abstract decorator list (#4298) 2023-05-08 21:54:02 -04:00
Charlie Marsh
f23851130a Add flynt to documentation (#4295) 2023-05-09 00:52:41 +00:00
Aarni Koskela
efdf383f5e Implement Flynt static string join transform as FLY002 (#4196) 2023-05-08 20:46:38 -04:00
Charlie Marsh
61f21a6513 Rewrite not not a as bool(a) in boolean contexts (#4294) 2023-05-08 23:38:24 +00:00
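As a quick illustration (mine, not from the changeset), the double-negation idiom and its suggested rewrite are equivalent:

```python
items = []

# Double negation coerces to bool -- the pattern this rule flags.
verbose = not not items

# The suggested rewrite is equivalent and clearer.
verbose_rewritten = bool(items)

assert verbose is verbose_rewritten is False
```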
Charlie Marsh
43d6aa9173 Clarify some docstring-related docs (#4292) 2023-05-08 22:24:53 +00:00
Charlie Marsh
c54e48dce5 Avoid panics for f-string rewrites at start-of-file (#4291) 2023-05-08 19:44:57 +00:00
Charlie Marsh
b913e99bde Explicitly support ASCII-only for capitalization checks (#4290) 2023-05-08 15:41:11 -04:00
Dhruv Manilawala
4ac506526b Avoid D403 if first char cannot be uppercased (#4283) 2023-05-08 15:33:24 -04:00
Calum Young
cd41de2588 Check docs formatting check (#4270) 2023-05-08 19:03:22 +00:00
Dhruv Manilawala
3344d367f5 Avoid fixing PD002 in a lambda expression (#4286) 2023-05-08 18:24:27 +00:00
Aarni Koskela
d7a369e7dc Update confusable character mapping (#4274) 2023-05-08 14:20:44 -04:00
Jonathan Plasse
1b1788c8ad Fix replace_whitespace() tabulation to space (#4226)
Co-authored-by: Micha Reiser <micha@reiser.io>
2023-05-08 12:03:04 +00:00
Micha Reiser
4d5a339d9e Remove Fix::from(Edit) and add deprecated replacement methods to Diagnostics (#4275) 2023-05-08 10:25:50 +00:00
Zanie Adkins
0801f14046 Refactor Fix and Edit API (#4198) 2023-05-08 11:57:03 +02:00
Micha Reiser
edaf891042 Fix jemalloc page size on aarch64 (#4247)
Co-authored-by: konstin <konstin@mailbox.org>
2023-05-08 08:10:03 +02:00
Trevor McCulloch
3beff29026 [pylint] Implement nested-min-max (W3301) (#4200) 2023-05-07 03:14:14 +00:00
Jerome Leclanche
5ac2c7d293 Add .git-rewrite folder to default ignored folder paths (#4261) 2023-05-06 22:40:38 -04:00
Charlie Marsh
e66fdb83d0 Respect insertion location when importing symbols (#4258) 2023-05-07 02:32:40 +00:00
Charlie Marsh
a95bafefb0 Fix RET504 example in docs (#4260) 2023-05-06 16:56:52 -04:00
Charlie Marsh
539af34f58 Add a utility method to detect top-level state (#4259) 2023-05-06 20:24:27 +00:00
Charlie Marsh
983bb31577 Remove RefEquality usages from Context (#4257) 2023-05-06 15:55:14 -04:00
Charlie Marsh
b98b604071 Remove some deferred &Stmt references (#4256) 2023-05-06 18:42:35 +00:00
Charlie Marsh
cd27b39aff Re-order some code in scope.rs (#4255) 2023-05-06 16:36:20 +00:00
Charlie Marsh
a9fc648faf Use NodeId for Binding source (#4234) 2023-05-06 16:20:08 +00:00
Charlie Marsh
c1f0661225 Replace parents statement stack with a Nodes abstraction (#4233) 2023-05-06 16:12:41 +00:00
Dhruv Manilawala
2c91412321 Consider Flask app logger as logger candidate (#4253) 2023-05-06 11:31:10 -04:00
745 changed files with 15107 additions and 10883 deletions


@@ -33,4 +33,5 @@ rustflags = [
     "-Wclippy::rc_buffer",
     "-Wclippy::rc_mutex",
     "-Wclippy::rest_pat_in_fully_bound_structs",
+    "-Wunreachable_pub"
 ]


@@ -237,5 +237,7 @@ jobs:
         run: python scripts/transform_readme.py --target mkdocs
       - name: "Generate docs"
         run: python scripts/generate_mkdocs.py
+      - name: "Check docs formatting"
+        run: python scripts/check_docs_formatted.py
       - name: "Build docs"
         run: mkdocs build --strict


@@ -1,9 +1,9 @@
name: "[ruff] Release"
on:
workflow_dispatch:
release:
types: [published]
push:
tags:
- v*
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -33,9 +33,11 @@ jobs:
with:
target: x86_64
args: --release --out dist --sdist
- name: "Install built wheel - x86_64"
- name: "Test wheel - x86_64"
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -43,9 +45,9 @@ jobs:
path: dist
- name: "Archive binary"
run: |
ARCHIVE_FILE=ruff-x86_64-apple-darwin.tar.gz
tar czvf $ARCHIVE_FILE -C target/x86_64-apple-darwin/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
ARCHIVE_FILE=ruff-x86_64-apple-darwin.tar.gz
tar czvf $ARCHIVE_FILE -C target/x86_64-apple-darwin/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@v3
with:
@@ -68,9 +70,11 @@ jobs:
uses: PyO3/maturin-action@v1
with:
args: --release --universal2 --out dist
- name: "Install built wheel - universal2"
- name: "Test wheel - universal2"
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*universal2.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -78,9 +82,9 @@ jobs:
path: dist
- name: "Archive binary"
run: |
ARCHIVE_FILE=ruff-aarch64-apple-darwin.tar.gz
tar czvf $ARCHIVE_FILE -C target/aarch64-apple-darwin/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
ARCHIVE_FILE=ruff-aarch64-apple-darwin.tar.gz
tar czvf $ARCHIVE_FILE -C target/aarch64-apple-darwin/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@v3
with:
@@ -113,11 +117,13 @@ jobs:
with:
target: ${{ matrix.platform.target }}
args: --release --out dist
- name: "Install built wheel"
- name: "Test wheel"
if: ${{ !startsWith(matrix.platform.target, 'aarch64') }}
shell: bash
run: |
python -m pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -126,9 +132,9 @@ jobs:
- name: "Archive binary"
shell: bash
run: |
ARCHIVE_FILE=ruff-${{ matrix.platform.target }}.zip
7z a $ARCHIVE_FILE ./target/${{ matrix.platform.target }}/release/ruff.exe
sha256sum $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
ARCHIVE_FILE=ruff-${{ matrix.platform.target }}.zip
7z a $ARCHIVE_FILE ./target/${{ matrix.platform.target }}/release/ruff.exe
sha256sum $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@v3
with:
@@ -158,10 +164,12 @@ jobs:
target: ${{ matrix.target }}
manylinux: auto
args: --release --out dist
- name: "Install built wheel"
- name: "Test wheel"
if: ${{ startsWith(matrix.target, 'x86_64') }}
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -169,9 +177,9 @@ jobs:
path: dist
- name: "Archive binary"
run: |
ARCHIVE_FILE=ruff-${{ matrix.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
ARCHIVE_FILE=ruff-${{ matrix.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@v3
with:
@@ -187,6 +195,9 @@ jobs:
platform:
- target: aarch64-unknown-linux-gnu
arch: aarch64
# see https://github.com/charliermarsh/ruff/issues/3791
# and https://github.com/gnzlbg/jemallocator/issues/170#issuecomment-1503228963
maturin_docker_options: -e JEMALLOC_SYS_WITH_LG_PAGE=16
- target: armv7-unknown-linux-gnueabihf
arch: armv7
- target: s390x-unknown-linux-gnu
@@ -195,6 +206,7 @@ jobs:
arch: ppc64le
- target: powerpc64-unknown-linux-gnu
arch: ppc64
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
@@ -207,10 +219,11 @@ jobs:
with:
target: ${{ matrix.platform.target }}
manylinux: auto
docker-options: ${{ matrix.platform.maturin_docker_options }}
args: --release --out dist
- uses: uraimo/run-on-arch-action@v2
if: matrix.platform.arch != 'ppc64'
name: Install built wheel
name: Test wheel
with:
arch: ${{ matrix.platform.arch }}
distro: ubuntu20.04
@@ -221,6 +234,7 @@ jobs:
pip3 install -U pip
run: |
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -228,9 +242,9 @@ jobs:
path: dist
- name: "Archive binary"
run: |
ARCHIVE_FILE=ruff-${{ matrix.platform.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.platform.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
ARCHIVE_FILE=ruff-${{ matrix.platform.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.platform.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@v3
with:
@@ -260,7 +274,7 @@ jobs:
target: ${{ matrix.target }}
manylinux: musllinux_1_2
args: --release --out dist
- name: "Install built wheel"
- name: "Test wheel"
if: matrix.target == 'x86_64-unknown-linux-musl'
uses: addnab/docker-run-action@v3
with:
@@ -269,6 +283,8 @@ jobs:
run: |
apk add py3-pip
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links /io/dist/ --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -276,9 +292,9 @@ jobs:
path: dist
- name: "Archive binary"
run: |
ARCHIVE_FILE=ruff-${{ matrix.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
ARCHIVE_FILE=ruff-${{ matrix.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@v3
with:
@@ -294,8 +310,10 @@ jobs:
platform:
- target: aarch64-unknown-linux-musl
arch: aarch64
maturin_docker_options: -e JEMALLOC_SYS_WITH_LG_PAGE=16
- target: armv7-unknown-linux-musleabihf
arch: armv7
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
@@ -309,8 +327,9 @@ jobs:
target: ${{ matrix.platform.target }}
manylinux: musllinux_1_2
args: --release --out dist
docker-options: ${{ matrix.platform.maturin_docker_options }}
- uses: uraimo/run-on-arch-action@v2
name: Install built wheel
name: Test wheel
with:
arch: ${{ matrix.platform.arch }}
distro: alpine_latest
@@ -319,6 +338,7 @@ jobs:
apk add py3-pip
run: |
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
ruff check --help
- name: "Upload wheels"
uses: actions/upload-artifact@v3
with:
@@ -326,9 +346,9 @@ jobs:
path: dist
- name: "Archive binary"
run: |
ARCHIVE_FILE=ruff-${{ matrix.platform.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.platform.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
ARCHIVE_FILE=ruff-${{ matrix.platform.target }}.tar.gz
tar czvf $ARCHIVE_FILE -C target/${{ matrix.platform.target }}/release ruff
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@v3
with:

.gitignore

@@ -3,7 +3,8 @@
 crates/ruff/resources/test/cpython
 mkdocs.yml
 .overrides
-github_search.jsonl
+ruff-old
+github_search*.jsonl
 ###
 # Rust.gitignore


@@ -63,11 +63,6 @@ repos:
     rev: 23.1.0
     hooks:
       - id: black
-        exclude: |
-          (?x)^(
-            crates/ruff/resources/.*|
-            crates/ruff_python_formatter/resources/.*
-          )$
 ci:
   skip: [cargo-fmt, clippy, dev-generate-all]


@@ -1,5 +1,16 @@
 # Breaking Changes
+
+## 0.0.267
+
+### `update-check` is no longer a valid configuration option ([#4313](https://github.com/charliermarsh/ruff/pull/4313))
+
+The `update-check` functionality was deprecated in [#2530](https://github.com/charliermarsh/ruff/pull/2530),
+in that the behavior itself was removed, and Ruff was changed to warn when that option was enabled.
+Now, Ruff will throw an error when `update-check` is provided via a configuration file (e.g.,
+`update-check = false`) or through the command-line, since it has no effect. Users should remove
+this option from their configuration.
+
 ## 0.0.265
 
 ### `--fix-only` now exits with a zero exit code, unless `--exit-non-zero-on-fix` is specified ([#4146](https://github.com/charliermarsh/ruff/pull/4146))

Cargo.lock

@@ -144,15 +144,6 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d92bec98840b8f03a5ff5413de5293bfcd8bf96467cf5452609f939ec6f5de16"
[[package]]
name = "ascii-canvas"
version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8824ecca2e851cec16968d54a01dd372ef8f95b244fb84b84e70128be347c3c6"
dependencies = [
"term",
]
[[package]]
name = "assert_cmd"
version = "2.0.11"
@@ -200,21 +191,6 @@ dependencies = [
"serde",
]
[[package]]
name = "bit-set"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0700ddab506f33b20a03b13996eccd309a48e5ff77d0d95926aa0210fb4e95f1"
dependencies = [
"bit-vec",
]
[[package]]
name = "bit-vec"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "349f9b6a179ed607305526ca489b34ad0a41aed5f7980fa90eb03160b69598fb"
[[package]]
name = "bitflags"
version = "1.3.2"
@@ -223,9 +199,9 @@ checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
 [[package]]
 name = "bitflags"
-version = "2.1.0"
+version = "2.2.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c70beb79cbb5ce9c4f8e20849978f34225931f665bb49efa6982875a4d5facb3"
+checksum = "24a6904aef64d73cf10ab17ebace7befb918b82164785cb89907993be7f83813"
[[package]]
name = "bstr"
@@ -700,16 +676,6 @@ dependencies = [
"dirs-sys 0.4.0",
]
[[package]]
name = "dirs-next"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b98cf8ebf19c3d1b223e151f99a4f9f0690dca41414773390fc824184ac833e1"
dependencies = [
"cfg-if",
"dirs-sys-next",
]
[[package]]
name = "dirs-sys"
version = "0.3.7"
@@ -732,17 +698,6 @@ dependencies = [
"windows-sys 0.45.0",
]
[[package]]
name = "dirs-sys-next"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ebda144c4fe02d1f7ea1a7d9641b6fc6b580adcfa024ae48797ecdeb6825b4d"
dependencies = [
"libc",
"redox_users",
"winapi",
]
[[package]]
name = "doc-comment"
version = "0.3.3"
@@ -767,15 +722,6 @@ version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fcaabb2fef8c910e7f4c7ce9f67a1283a1715879a7c230ca9d6d1ae31f16d91"
[[package]]
name = "ena"
version = "0.14.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c533630cf40e9caa44bd91aadc88a75d75a4c3a12b4cfde353cbed41daa1e1f1"
dependencies = [
"log",
]
[[package]]
name = "encode_unicode"
version = "0.3.6"
@@ -833,15 +779,9 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "fixedbitset"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
-version = "0.0.265"
+version = "0.0.267"
dependencies = [
"anyhow",
"clap 4.2.4",
@@ -1168,37 +1108,11 @@ dependencies = [
"libc",
]
[[package]]
name = "lalrpop"
version = "0.19.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f34313ec00c2eb5c3c87ca6732ea02dcf3af99c3ff7a8fb622ffb99c9d860a87"
dependencies = [
"ascii-canvas",
"bit-set",
"diff",
"ena",
"is-terminal",
"itertools",
"lalrpop-util",
"petgraph",
"pico-args",
"regex",
"regex-syntax 0.6.29",
"string_cache",
"term",
"tiny-keccak",
"unicode-xid",
]
[[package]]
name = "lalrpop-util"
-version = "0.19.9"
+version = "0.20.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e5c1f7869c94d214466c5fd432dfed12c379fd87786768d36455892d46b18edd"
-dependencies = [
- "regex",
-]
+checksum = "3f35c735096c0293d313e8f2a641627472b83d01b937177fe76e5e2708d31e0d"
[[package]]
name = "lazy_static"
@@ -1388,12 +1302,6 @@ version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "308d96db8debc727c3fd9744aac51751243420e46edf401010908da7f8d5e57c"
[[package]]
name = "new_debug_unreachable"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e4a24736216ec316047a1fc4252e27dabb04218aa4a3f37c6e7ddbf1f9782b54"
[[package]]
name = "nextest-workspace-hack"
version = "0.1.0"
@@ -1525,29 +1433,6 @@ dependencies = [
"winapi",
]
[[package]]
name = "parking_lot"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3742b2c103b9f06bc9fff0a37ff4912935851bee6d36f3c02bcc755bcfec228f"
dependencies = [
"lock_api",
"parking_lot_core",
]
[[package]]
name = "parking_lot_core"
version = "0.9.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9069cbb9f99e3a5083476ccb29ceb1de18b9118cafa53e90c9551235de2b9521"
dependencies = [
"cfg-if",
"libc",
"redox_syscall 0.2.16",
"smallvec",
"windows-sys 0.45.0",
]
[[package]]
name = "paste"
version = "1.0.12"
@@ -1624,23 +1509,13 @@ version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "478c572c3d73181ff3c2539045f6eb99e5491218eae919370993b890cdbdd98e"
[[package]]
name = "petgraph"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4dd7d28ee937e54fe3080c91faa1c3a46c06de6252988a7f4592ba2310ef22a4"
dependencies = [
"fixedbitset",
"indexmap",
]
[[package]]
name = "phf"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "928c6535de93548188ef63bb7c4036bd415cd8f36ad25af44b9789b2ee72a48c"
dependencies = [
-"phf_shared 0.11.1",
+"phf_shared",
]
[[package]]
@@ -1650,7 +1525,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a56ac890c5e3ca598bbdeaa99964edb5b0258a583a9eb6ef4e89fc85d9224770"
dependencies = [
"phf_generator",
-"phf_shared 0.11.1",
+"phf_shared",
]
[[package]]
@@ -1659,19 +1534,10 @@ version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1181c94580fa345f50f19d738aaa39c0ed30a600d95cb2d3e23f94266f14fbf"
dependencies = [
-"phf_shared 0.11.1",
+"phf_shared",
"rand",
]
[[package]]
name = "phf_shared"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6796ad771acdc0123d2a88dc428b5e38ef24456743ddb1744ed628f9815c096"
dependencies = [
"siphasher",
]
[[package]]
name = "phf_shared"
version = "0.11.1"
@@ -1681,12 +1547,6 @@ dependencies = [
"siphasher",
]
[[package]]
name = "pico-args"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db8bcd96cb740d03149cbad5518db9fd87126a10ab519c011893b1754134c468"
[[package]]
name = "pin-project-lite"
version = "0.2.9"
@@ -1738,12 +1598,6 @@ version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de"
[[package]]
name = "precomputed-hash"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "925383efa346730478fb4838dbe9137d2a47675ad789c546d150a6e1dd4ab31c"
[[package]]
name = "predicates"
version = "3.0.3"
@@ -1944,7 +1798,7 @@ checksum = "af83e617f331cc6ae2da5443c602dfa5af81e517212d9d611a5b3ba1777b5370"
dependencies = [
"aho-corasick 1.0.1",
"memchr",
-"regex-syntax 0.7.1",
+"regex-syntax",
]
[[package]]
@@ -1953,12 +1807,6 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132"
[[package]]
name = "regex-syntax"
version = "0.6.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1"
[[package]]
name = "regex-syntax"
version = "0.7.1"
@@ -2004,11 +1852,11 @@ dependencies = [
[[package]]
name = "ruff"
-version = "0.0.265"
+version = "0.0.267"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
-"bitflags 2.1.0",
+"bitflags 2.2.1",
"chrono",
"clap 4.2.4",
"colored",
@@ -2093,7 +1941,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
-version = "0.0.265"
+version = "0.0.267"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2101,7 +1949,7 @@ dependencies = [
"assert_cmd",
"atty",
"bincode",
-"bitflags 2.1.0",
+"bitflags 2.2.1",
"cachedir",
"chrono",
"clap 4.2.4",
@@ -2200,7 +2048,7 @@ name = "ruff_python_ast"
version = "0.0.0"
dependencies = [
"anyhow",
-"bitflags 2.1.0",
+"bitflags 2.2.1",
"is-macro",
"itertools",
"log",
@@ -2209,10 +2057,9 @@ dependencies = [
"num-traits",
"once_cell",
"regex",
"ruff_rustpython",
"ruff_text_size",
"rustc-hash",
"rustpython-common",
"rustpython-literal 0.2.0 (git+https://github.com/RustPython/Parser.git?rev=947fb53d0b41fec465db3d8e725bdb2eec1299ec)",
"rustpython-parser",
"serde",
"smallvec",
@@ -2234,7 +2081,6 @@ dependencies = [
"ruff_testing_macros",
"ruff_text_size",
"rustc-hash",
"rustpython-common",
"rustpython-parser",
"similar",
"test-case",
@@ -2244,7 +2090,7 @@ dependencies = [
name = "ruff_python_semantic"
version = "0.0.0"
dependencies = [
-"bitflags 2.1.0",
+"bitflags 2.2.1",
"bitflags 2.2.1",
"is-macro",
"nohash-hasher",
"ruff_python_ast",
@@ -2268,8 +2114,6 @@ name = "ruff_rustpython"
version = "0.0.0"
dependencies = [
"anyhow",
"once_cell",
"rustpython-common",
"rustpython-parser",
]
@@ -2286,7 +2130,7 @@ dependencies = [
[[package]]
name = "ruff_text_size"
version = "0.0.0"
-source = "git+https://github.com/charliermarsh/RustPython.git?rev=c3147d2c1524ebd0e90cf1c2938d770314fd5a5a#c3147d2c1524ebd0e90cf1c2938d770314fd5a5a"
+source = "git+https://github.com/RustPython/Parser.git?rev=947fb53d0b41fec465db3d8e725bdb2eec1299ec#947fb53d0b41fec465db3d8e725bdb2eec1299ec"
dependencies = [
"schemars",
"serde",
@@ -2357,75 +2201,93 @@ dependencies = [
[[package]]
name = "rustpython-ast"
version = "0.2.0"
-source = "git+https://github.com/charliermarsh/RustPython.git?rev=c3147d2c1524ebd0e90cf1c2938d770314fd5a5a#c3147d2c1524ebd0e90cf1c2938d770314fd5a5a"
+source = "git+https://github.com/RustPython/Parser.git?rev=947fb53d0b41fec465db3d8e725bdb2eec1299ec#947fb53d0b41fec465db3d8e725bdb2eec1299ec"
dependencies = [
"is-macro",
"num-bigint",
"ruff_text_size",
"rustpython-parser-core",
]
[[package]]
name = "rustpython-common"
version = "0.2.0"
source = "git+https://github.com/charliermarsh/RustPython.git?rev=c3147d2c1524ebd0e90cf1c2938d770314fd5a5a#c3147d2c1524ebd0e90cf1c2938d770314fd5a5a"
source = "git+https://github.com/RustPython/RustPython.git?rev=f3e4d3409253660bd4fa7f3d24d3db747e7dca61#f3e4d3409253660bd4fa7f3d24d3db747e7dca61"
dependencies = [
"ascii",
"bitflags 1.3.2",
"bitflags 2.2.1",
"bstr 0.2.17",
"cfg-if",
"getrandom",
"hexf-parse",
"itertools",
"lexical-parse-float",
"libc",
"lock_api",
"num-bigint",
"num-complex",
"num-traits",
"once_cell",
"radium",
"rand",
"rustpython-literal 0.2.0 (git+https://github.com/youknowone/RustPython-parser.git?rev=5b2af304a2baa53598e594097824165d4ac7a119)",
"siphasher",
"unic-ucd-category",
"volatile",
"widestring",
]
[[package]]
name = "rustpython-compiler-core"
name = "rustpython-literal"
version = "0.2.0"
source = "git+https://github.com/charliermarsh/RustPython.git?rev=c3147d2c1524ebd0e90cf1c2938d770314fd5a5a#c3147d2c1524ebd0e90cf1c2938d770314fd5a5a"
source = "git+https://github.com/youknowone/RustPython-parser.git?rev=5b2af304a2baa53598e594097824165d4ac7a119#5b2af304a2baa53598e594097824165d4ac7a119"
dependencies = [
"bitflags 1.3.2",
"itertools",
"lz4_flex",
"num-bigint",
"num-complex",
"ruff_text_size",
"hexf-parse",
"lexical-parse-float",
"num-traits",
"unic-ucd-category",
]
[[package]]
name = "rustpython-literal"
version = "0.2.0"
source = "git+https://github.com/RustPython/Parser.git?rev=947fb53d0b41fec465db3d8e725bdb2eec1299ec#947fb53d0b41fec465db3d8e725bdb2eec1299ec"
dependencies = [
"hexf-parse",
"lexical-parse-float",
"num-traits",
"unic-ucd-category",
]
[[package]]
name = "rustpython-parser"
version = "0.2.0"
source = "git+https://github.com/charliermarsh/RustPython.git?rev=c3147d2c1524ebd0e90cf1c2938d770314fd5a5a#c3147d2c1524ebd0e90cf1c2938d770314fd5a5a"
source = "git+https://github.com/RustPython/Parser.git?rev=947fb53d0b41fec465db3d8e725bdb2eec1299ec#947fb53d0b41fec465db3d8e725bdb2eec1299ec"
dependencies = [
"anyhow",
"itertools",
"lalrpop",
"lalrpop-util",
"log",
"num-bigint",
"num-traits",
"phf",
"phf_codegen",
"ruff_text_size",
"rustc-hash",
"rustpython-ast",
"rustpython-compiler-core",
"rustpython-parser-core",
"tiny-keccak",
"unic-emoji-char",
"unic-ucd-ident",
"unicode_names2",
]
[[package]]
name = "rustpython-parser-core"
version = "0.2.0"
source = "git+https://github.com/RustPython/Parser.git?rev=947fb53d0b41fec465db3d8e725bdb2eec1299ec#947fb53d0b41fec465db3d8e725bdb2eec1299ec"
dependencies = [
"itertools",
"lz4_flex",
"num-bigint",
"num-complex",
"ruff_text_size",
]
[[package]]
name = "rustversion"
version = "1.0.12"
@@ -2613,19 +2475,6 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "string_cache"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f91138e76242f575eb1d3b38b4f1362f10d3a43f47d182a5b359af488a02293b"
dependencies = [
"new_debug_unreachable",
"once_cell",
"parking_lot",
"phf_shared 0.10.0",
"precomputed-hash",
]
[[package]]
name = "strsim"
version = "0.10.0"
@@ -2698,17 +2547,6 @@ dependencies = [
"windows-sys 0.45.0",
]
[[package]]
name = "term"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c59df8ac95d96ff9bede18eb7300b0fda5e5d8d90960e76f8e14ae765eedbf1f"
dependencies = [
"dirs-next",
"rustversion",
"winapi",
]
[[package]]
name = "termcolor"
version = "1.2.0"
@@ -3061,12 +2899,6 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0edd1e5b14653f783770bce4a4dabb4a5108a5370a5f5d8cfe8710c361f6c8b"
[[package]]
name = "unicode-xid"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f962df74c8c05a667b5ee8bcf162993134c104e96440b663c8daa176dc772d8c"
[[package]]
name = "unicode_names2"
version = "0.6.0"


@@ -11,7 +11,7 @@ authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
[workspace.dependencies]
anyhow = { version = "1.0.69" }
bitflags = { version = "2.1.0" }
bitflags = { version = "2.2.1" }
chrono = { version = "0.4.23", default-features = false, features = ["clock"] }
clap = { version = "4.1.8", features = ["derive"] }
colored = { version = "2.0.0" }
@@ -30,10 +30,11 @@ path-absolutize = { version = "3.0.14" }
proc-macro2 = { version = "1.0.51" }
quote = { version = "1.0.23" }
regex = { version = "1.7.1" }
ruff_text_size = { git = "https://github.com/charliermarsh/RustPython.git", rev = "c3147d2c1524ebd0e90cf1c2938d770314fd5a5a" }
ruff_text_size = { git = "https://github.com/RustPython/Parser.git", rev = "947fb53d0b41fec465db3d8e725bdb2eec1299ec" }
rustc-hash = { version = "1.1.0" }
rustpython-common = { git = "https://github.com/charliermarsh/RustPython.git", rev = "c3147d2c1524ebd0e90cf1c2938d770314fd5a5a" }
rustpython-parser = { git = "https://github.com/charliermarsh/RustPython.git", rev = "c3147d2c1524ebd0e90cf1c2938d770314fd5a5a" }
rustpython-common = { git = "https://github.com/RustPython/RustPython.git", rev = "f3e4d3409253660bd4fa7f3d24d3db747e7dca61" }
rustpython-literal = { git = "https://github.com/RustPython/Parser.git", rev = "947fb53d0b41fec465db3d8e725bdb2eec1299ec" }
rustpython-parser = { git = "https://github.com/RustPython/Parser.git", rev = "947fb53d0b41fec465db3d8e725bdb2eec1299ec" , default-features = false}
schemars = { version = "0.8.12" }
serde = { version = "1.0.152", features = ["derive"] }
serde_json = { version = "1.0.93", features = ["preserve_order"] }

LICENSE

@@ -550,6 +550,30 @@ are:
THE SOFTWARE.
"""
- flynt, licensed as follows:
"""
MIT License
Copyright (c) 2019-2022 Ilya Kamenshchikov
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
- isort, licensed as follows:
"""


@@ -2,7 +2,7 @@
# Ruff
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v1.json)](https://github.com/charliermarsh/ruff)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/charliermarsh/ruff)
[![image](https://img.shields.io/pypi/v/ruff.svg)](https://pypi.python.org/pypi/ruff)
[![image](https://img.shields.io/pypi/l/ruff.svg)](https://pypi.python.org/pypi/ruff)
[![image](https://img.shields.io/pypi/pyversions/ruff.svg)](https://pypi.python.org/pypi/ruff)
@@ -137,7 +137,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version.
rev: 'v0.0.265'
rev: 'v0.0.267'
hooks:
- id: ruff
```
@@ -183,6 +183,7 @@ exclude = [
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".mypy_cache",
".nox",
@@ -280,12 +281,13 @@ quality tools, including:
- [flake8-tidy-imports](https://pypi.org/project/flake8-tidy-imports/)
- [flake8-type-checking](https://pypi.org/project/flake8-type-checking/)
- [flake8-use-pathlib](https://pypi.org/project/flake8-use-pathlib/)
- [flynt](https://pypi.org/project/flynt/) ([#2102](https://github.com/charliermarsh/ruff/issues/2102))
- [isort](https://pypi.org/project/isort/)
- [mccabe](https://pypi.org/project/mccabe/)
- [pandas-vet](https://pypi.org/project/pandas-vet/)
- [pep8-naming](https://pypi.org/project/pep8-naming/)
- [pydocstyle](https://pypi.org/project/pydocstyle/)
- [pygrep-hooks](https://github.com/pre-commit/pygrep-hooks) ([#980](https://github.com/charliermarsh/ruff/issues/980))
- [pygrep-hooks](https://github.com/pre-commit/pygrep-hooks)
- [pyupgrade](https://pypi.org/project/pyupgrade/)
- [tryceratops](https://pypi.org/project/tryceratops/)
- [yesqa](https://pypi.org/project/yesqa/)
@@ -341,9 +343,9 @@ Ruff is used by a number of major open-source projects and companies, including:
- [Babel](https://github.com/python-babel/babel)
- [Bokeh](https://github.com/bokeh/bokeh)
- [Cryptography (PyCA)](https://github.com/pyca/cryptography)
- [DVC](https://github.com/iterative/dvc)
- [Dagger](https://github.com/dagger/dagger)
- [Dagster](https://github.com/dagster-io/dagster)
- [DVC](https://github.com/iterative/dvc)
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Gradio](https://github.com/gradio-app/gradio)
- [Great Expectations](https://github.com/great-expectations/great_expectations)
@@ -371,8 +373,9 @@ Ruff is used by a number of major open-source projects and companies, including:
- [Polars](https://github.com/pola-rs/polars)
- [PostHog](https://github.com/PostHog/posthog)
- Prefect ([Python SDK](https://github.com/PrefectHQ/prefect), [Marvin](https://github.com/PrefectHQ/marvin))
- [Pydantic](https://github.com/pydantic/pydantic)
- [PyInstaller](https://github.com/pyinstaller/pyinstaller)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Pydantic](https://github.com/pydantic/pydantic)
- [Pylint](https://github.com/PyCQA/pylint)
- [Pynecone](https://github.com/pynecone-io/pynecone)
- [Robyn](https://github.com/sansyrox/robyn)
@@ -395,6 +398,34 @@ Ruff is used by a number of major open-source projects and companies, including:
- [meson-python](https://github.com/mesonbuild/meson-python)
- [nox](https://github.com/wntrblm/nox)
### Show Your Support
If you're using Ruff, consider adding the Ruff badge to your project's `README.md`:
```md
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/charliermarsh/ruff)
```
...or `README.rst`:
```rst
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json
:target: https://github.com/charliermarsh/ruff
:alt: Ruff
```
...or, as HTML:
```html
<a href="https://github.com/charliermarsh/ruff"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
```
## License
MIT
<div align="center">
<a target="_blank" href="https://astral.sh" style="background:none">
<img src="https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/svg/Astral.svg">
</a>
</div>

assets/badge/v2.json Normal file

@@ -0,0 +1,8 @@
{
"label": "",
"message": "Ruff",
"logoSvg": "<svg width=\"510\" height=\"622\" viewBox=\"0 0 510 622\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\"><path fill-rule=\"evenodd\" clip-rule=\"evenodd\" d=\"M206.701 0C200.964 0 196.314 4.64131 196.314 10.3667V41.4667C196.314 47.192 191.663 51.8333 185.927 51.8333H156.843C151.107 51.8333 146.456 56.4746 146.456 62.2V145.133C146.456 150.859 141.806 155.5 136.069 155.5H106.986C101.249 155.5 96.5988 160.141 96.5988 165.867V222.883C96.5988 228.609 91.9484 233.25 86.2118 233.25H57.1283C51.3917 233.25 46.7413 237.891 46.7413 243.617V300.633C46.7413 306.359 42.0909 311 36.3544 311H10.387C4.6504 311 0 315.641 0 321.367V352.467C0 358.192 4.6504 362.833 10.387 362.833H145.418C151.154 362.833 155.804 367.475 155.804 373.2V430.217C155.804 435.942 151.154 440.583 145.418 440.583H116.334C110.597 440.583 105.947 445.225 105.947 450.95V507.967C105.947 513.692 101.297 518.333 95.5601 518.333H66.4766C60.74 518.333 56.0896 522.975 56.0896 528.7V611.633C56.0896 617.359 60.74 622 66.4766 622H149.572C155.309 622 159.959 617.359 159.959 611.633V570.167H201.507C207.244 570.167 211.894 565.525 211.894 559.8V528.7C211.894 522.975 216.544 518.333 222.281 518.333H251.365C257.101 518.333 261.752 513.692 261.752 507.967V476.867C261.752 471.141 266.402 466.5 272.138 466.5H301.222C306.959 466.5 311.609 461.859 311.609 456.133V425.033C311.609 419.308 316.259 414.667 321.996 414.667H351.079C356.816 414.667 361.466 410.025 361.466 404.3V373.2C361.466 367.475 366.117 362.833 371.853 362.833H400.937C406.673 362.833 411.324 358.192 411.324 352.467V321.367C411.324 315.641 415.974 311 421.711 311H450.794C456.531 311 461.181 306.359 461.181 300.633V217.7C461.181 211.975 456.531 207.333 450.794 207.333H420.672C414.936 207.333 410.285 202.692 410.285 196.967V165.867C410.285 160.141 414.936 155.5 420.672 155.5H449.756C455.492 155.5 460.143 150.859 460.143 145.133V114.033C460.143 108.308 464.793 103.667 470.53 103.667H499.613C505.35 103.667 510 99.0253 510 93.3V10.3667C510 4.64132 505.35 0 
499.613 0H206.701ZM168.269 440.583C162.532 440.583 157.882 445.225 157.882 450.95V507.967C157.882 513.692 153.231 518.333 147.495 518.333H118.411C112.675 518.333 108.024 522.975 108.024 528.7V559.8C108.024 565.525 112.675 570.167 118.411 570.167H159.959V528.7C159.959 522.975 164.61 518.333 170.346 518.333H199.43C205.166 518.333 209.817 513.692 209.817 507.967V476.867C209.817 471.141 214.467 466.5 220.204 466.5H249.287C255.024 466.5 259.674 461.859 259.674 456.133V425.033C259.674 419.308 264.325 414.667 270.061 414.667H299.145C304.881 414.667 309.532 410.025 309.532 404.3V373.2C309.532 367.475 314.182 362.833 319.919 362.833H349.002C354.739 362.833 359.389 358.192 359.389 352.467V321.367C359.389 315.641 364.039 311 369.776 311H398.859C404.596 311 409.246 306.359 409.246 300.633V269.533C409.246 263.808 404.596 259.167 398.859 259.167H318.88C313.143 259.167 308.493 254.525 308.493 248.8V217.7C308.493 211.975 313.143 207.333 318.88 207.333H347.963C353.7 207.333 358.35 202.692 358.35 196.967V165.867C358.35 160.141 363.001 155.5 368.737 155.5H397.821C403.557 155.5 408.208 150.859 408.208 145.133V114.033C408.208 108.308 412.858 103.667 418.595 103.667H447.678C453.415 103.667 458.065 99.0253 458.065 93.3V62.2C458.065 56.4746 453.415 51.8333 447.678 51.8333H208.778C203.041 51.8333 198.391 56.4746 198.391 62.2V145.133C198.391 150.859 193.741 155.5 188.004 155.5H158.921C153.184 155.5 148.534 160.141 148.534 165.867V222.883C148.534 228.609 143.883 233.25 138.147 233.25H109.063C103.327 233.25 98.6762 237.891 98.6762 243.617V300.633C98.6762 306.359 103.327 311 109.063 311H197.352C203.089 311 207.739 315.641 207.739 321.367V430.217C207.739 435.942 203.089 440.583 197.352 440.583H168.269Z\" fill=\"#D7FF64\"/></svg>",
"logoWidth": 10,
"labelColor": "grey",
"color": "#261230"
}

assets/svg/Astral.svg Normal file

@@ -0,0 +1,24 @@
<svg width="139" height="24" viewBox="0 0 139 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="138.764" height="24" rx="2.18182" fill="#261230"/>
<path
d="M8.72798 15.2726H9.91316V11.8697L9.6887 10.4062L9.8952 10.3343L12.1309 15.1649L14.3486 10.3343L14.5461 10.4062L14.3486 11.8607V15.2726H15.5248V8.72714H13.9535L12.2117 12.7137H12.0142L10.2723 8.72714H8.72798V15.2726Z"
fill="#D7FF64"/>
<path
d="M22.3432 15.2726H23.6631L21.3017 8.72714H19.7574L17.4589 15.2726H18.7069L19.1558 13.9797H21.9033L22.3432 15.2726ZM19.497 13.0279L19.901 11.8607L20.4308 10.0021H20.6463L21.176 11.8607L21.5711 13.0279H19.497Z"
fill="#D7FF64"/>
<path
d="M25.4209 15.2726H28.1234C30.1077 15.2726 30.9876 14.1413 30.9876 12.0044C30.9876 9.92131 30.1706 8.72714 28.1234 8.72714H25.4209V15.2726ZM26.624 14.2131V9.77765H28.0965C29.147 9.77765 29.7306 10.1907 29.7306 11.4477V12.5521C29.7306 13.6923 29.2817 14.2131 28.0965 14.2131H26.624Z"
fill="#D7FF64"/>
<path
d="M33.079 15.2726H37.6491V14.2131H34.2822V12.3815H37.2002V11.3938H34.2822V9.77765H37.6491V8.72714H33.079V15.2726Z"
fill="#D7FF64"/>
<path
d="M42.923 15.2726H46.2451C47.4572 15.2726 48.2025 14.5812 48.2025 13.5487C48.2025 12.7675 47.8343 12.175 47.0532 11.9954V11.7799C47.6637 11.5734 48.0319 11.0436 48.0319 10.3433C48.0319 9.38259 47.4572 8.72714 46.281 8.72714H42.923V15.2726ZM44.0992 11.4746V9.65195H45.9578C46.4875 9.65195 46.7928 9.92131 46.7928 10.3523V10.7653C46.7928 11.1873 46.4965 11.4746 45.9758 11.4746H44.0992ZM44.0992 14.3388V12.3904H46.0296C46.5863 12.3904 46.9365 12.6418 46.9365 13.1806V13.5666C46.9365 14.0425 46.5684 14.3388 45.9309 14.3388H44.0992Z"
fill="#D7FF64"/>
<path
d="M49.6959 8.72714L52.174 12.579V14.1952H50.1898V15.2726H53.3772V12.579L55.8553 8.72714H54.4456L53.5119 10.2535L52.8744 11.3759H52.6679L52.0483 10.2715L51.1056 8.72714H49.6959Z"
fill="#D7FF64"/>
<path fill-rule="evenodd" clip-rule="evenodd"
d="M74.1824 7.63626C74.1824 7.03377 74.6708 6.54535 75.2733 6.54535H84.0006C84.6031 6.54535 85.0915 7.03377 85.0915 7.63626V9.81808H80.0733V8.94535H79.2006V10.6908H84.0006C84.6031 10.6908 85.0915 11.1792 85.0915 11.7817V16.3635C85.0915 16.966 84.6031 17.4544 84.0006 17.4544H75.2733C74.6708 17.4544 74.1824 16.966 74.1824 16.3635V14.1817L79.2006 14.1817V15.0544H80.0733V13.309L75.2733 13.309C74.6708 13.309 74.1824 12.8206 74.1824 12.2181V7.63626ZM63.4912 6.54545C62.8887 6.54545 62.4003 7.03387 62.4003 7.63636V17.4545H67.4185V14.1818H68.2912V17.4545H73.3094V7.63636C73.3094 7.03387 72.821 6.54545 72.2185 6.54545H63.4912ZM69.164 10.6909V11.5636H66.5458V10.6909H69.164ZM110.619 6.54545C110.016 6.54545 109.528 7.03387 109.528 7.63636V17.4545H114.546V14.1818H115.419V17.4545H120.437V7.63636C120.437 7.03387 119.948 6.54545 119.346 6.54545H110.619ZM116.291 10.6909V11.5636H113.673V10.6909H116.291ZM91.8549 8.29091H96.8731V11.3455C96.8731 11.9479 96.3847 12.4364 95.7822 12.4364H91.8549V13.3091H96.8731V17.4545H87.9276C87.3251 17.4545 86.8367 16.9661 86.8367 16.3636V12.4364H85.964V8.29091H86.8367V6.54545H91.8549V8.29091ZM108.655 7.63636C108.655 7.03387 108.166 6.54545 107.564 6.54545H97.7458V17.4545H102.764V14.1818H103.637V17.4545H108.655V13.3091H106.473V12.4364H107.564C108.166 12.4364 108.655 11.9479 108.655 11.3455V7.63636ZM104.509 10.6909V11.5636H101.891V10.6909H104.509ZM132.218 13.3091L126.327 13.3091V6.54547L121.309 6.54547V17.4546H132.218V13.3091Z"
fill="#D7FF64"/>
</svg>


@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.265"
version = "0.0.267"
edition = { workspace = true }
rust-version = { workspace = true }


@@ -26,7 +26,7 @@ requires-python = ">=3.7"
repository = "https://github.com/charliermarsh/ruff#subdirectory=crates/flake8_to_ruff"
[build-system]
requires = ["maturin>=0.14,<0.15"]
requires = ["maturin>=0.15.1,<0.16"]
build-backend = "maturin"
[tool.maturin]


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.0.265"
version = "0.0.267"
authors.workspace = true
edition.workspace = true
rust-version.workspace = true
@@ -56,7 +56,7 @@ result-like = { version = "0.4.6" }
rustc-hash = { workspace = true }
rustpython-common = { workspace = true }
rustpython-parser = { workspace = true }
schemars = { workspace = true }
schemars = { workspace = true, optional = true }
semver = { version = "1.0.16" }
serde = { workspace = true }
serde_json = { workspace = true }
@@ -80,5 +80,7 @@ colored = { workspace = true, features = ["no-color"] }
[features]
default = []
schemars = ["dep:schemars"]
logical_lines = []
jupyter_notebook = []
ecosystem_ci = []


@@ -4,7 +4,12 @@ B027 - on lines 13, 16, 19, 23
"""
import abc
from abc import ABC
from abc import abstractmethod, abstractproperty
from abc import (
abstractmethod,
abstractproperty,
abstractclassmethod,
abstractstaticmethod,
)
from abc import abstractmethod as notabstract
from abc import abstractproperty as notabstract_property
@@ -55,6 +60,22 @@ class AbstractClass(ABC):
def abstract_6(self):
...
@abstractclassmethod
def abstract_7(self):
pass
@abc.abstractclassmethod
def abstract_8(self):
...
@abstractstaticmethod
def abstract_9(self):
pass
@abc.abstractstaticmethod
def abstract_10(self):
...
def body_1(self):
print("foo")
...


@@ -1,39 +0,0 @@
"""
Should emit:
B027 - on lines 13, 16, 19, 23
"""
from abc import ABC
class AbstractClass(ABC):
def empty_1(self): # error
...
def empty_2(self): # error
pass
def body_1(self):
print("foo")
...
def body_2(self):
self.body_1()
def foo():
class InnerAbstractClass(ABC):
def empty_1(self): # error
...
def empty_2(self): # error
pass
def body_1(self):
print("foo")
...
def body_2(self):
self.body_1()
return InnerAbstractClass


@@ -17,3 +17,23 @@ all((x.id for x in bar))
async def f() -> bool:
return all([await use_greeting(greeting) for greeting in await greetings()])
# Special comment handling
any(
[ # lbracket comment
# second line comment
i.bit_count()
# random middle comment
for i in range(5) # rbracket comment
] # rpar comment
# trailing comment
)
# Weird case where the function call, opening bracket, and comment are all
# on the same line.
any([ # lbracket comment
# second line comment
i.bit_count() for i in range(5) # rbracket comment
] # rpar comment
)


@@ -7,3 +7,12 @@ foo.info("Hello {}".format("World!"))
logging.log(logging.INFO, msg="Hello {}".format("World!"))
logging.log(level=logging.INFO, msg="Hello {}".format("World!"))
logging.log(msg="Hello {}".format("World!"), level=logging.INFO)
# Flask support
import flask
from flask import current_app
from flask import current_app as app
flask.current_app.logger.info("Hello {}".format("World!"))
current_app.logger.info("Hello {}".format("World!"))
app.logger.log(logging.INFO, "Hello {}".format("World!"))


@@ -1,73 +1,96 @@
def foo():
pass
try:
foo()
except ValueError: # SIM105
pass
try:
foo()
except (ValueError, OSError): # SIM105
pass
try:
foo()
except: # SIM105
pass
try:
foo()
except (a.Error, b.Error): # SIM105
pass
# SIM105
try:
foo()
except ValueError:
print('foo')
pass
# SIM105
try:
foo()
except (ValueError, OSError):
pass
# SIM105
try:
foo()
except:
pass
# SIM105
try:
foo()
except (a.Error, b.Error):
pass
# OK
try:
foo()
except ValueError:
print("foo")
except OSError:
pass
# OK
try:
foo()
except ValueError:
pass
else:
print('bar')
print("bar")
# OK
try:
foo()
except ValueError:
pass
finally:
print('bar')
print("bar")
# OK
try:
foo()
foo()
except ValueError:
pass
# OK
try:
for i in range(3):
foo()
except ValueError:
pass
def bar():
# OK
try:
return foo()
except ValueError:
pass
def with_ellipsis():
# OK
try:
foo()
except ValueError:
...
def with_ellipsis_and_return():
# OK
try:
return foo()
except ValueError:
...
def with_comment():
try:
foo()
except (ValueError, OSError):
pass # Trailing comment.
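The pattern these SIM105 fixtures flag has a standard-library replacement; a minimal sketch of the suggested fix, using `contextlib.suppress` (the rewrite the neighboring fixtures import `contextlib` for):

```python
import contextlib


def foo():
    pass


# Equivalent of the flagged pattern:
#     try:
#         foo()
#     except ValueError:
#         pass
with contextlib.suppress(ValueError):
    foo()
```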


@@ -1,7 +1,8 @@
"""Case: There's a random import, so it should add `contextlib` after it."""
import math
# SIM105
try:
math.sqrt(-1)
except ValueError: # SIM105
except ValueError:
pass


@@ -6,6 +6,7 @@ def foo():
pass
# SIM105
try:
foo()
except ValueError:


@@ -0,0 +1,16 @@
"""Case: `contextlib` is imported after the call site."""
def foo():
pass
def bar():
# SIM105
try:
foo()
except ValueError:
pass
import contextlib


@@ -12,3 +12,10 @@ if not a == b: # OK
if not a != b: # OK
pass
a = not not b # SIM208
f(not not a) # SIM208
if 1 + (not (not a)): # SIM208
pass
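The SIM208 double-negation cases above reduce as follows; note that `not not b` coerces to a bool, so the truth-preserving rewrite is `bool(b)`, not `b` itself (a sketch, not the rule's exact fix machinery):

```python
b = 5

# `not not b` always yields a bool, so it is equivalent to bool(b)
# even when b itself is not a bool.
assert (not not b) == bool(b)
assert (not not b) is True
```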


@@ -6,6 +6,7 @@ a = True if b + c else False # SIM210
a = False if b else True # OK
def f():
# OK
def bool():


@@ -0,0 +1,18 @@
import secrets
from random import random, choice
a = "Hello"
ok1 = " ".join([a, " World"]) # OK
ok2 = "".join(["Finally, ", a, " World"]) # OK
ok3 = "x".join(("1", "2", "3")) # OK
ok4 = "y".join([1, 2, 3]) # Technically OK, though would've been an error originally
ok5 = "a".join([random(), random()]) # OK (simple calls)
ok6 = "a".join([secrets.token_urlsafe(), secrets.token_hex()]) # OK (attr calls)
nok1 = "x".join({"4", "5", "yee"}) # Not OK (set)
nok2 = a.join(["1", "2", "3"]) # Not OK (not a static joiner)
nok3 = "a".join(a) # Not OK (not a static joinee)
nok4 = "a".join([a, a, *a]) # Not OK (not a static length)
nok5 = "a".join([choice("flarp")]) # Not OK (not a simple call)
nok6 = "a".join(x for x in "feefoofum") # Not OK (generator)
nok7 = "a".join([f"foo{8}", "bar"]) # Not OK (contains an f-string)


@@ -22,3 +22,7 @@ if True:
x.drop(["a"], axis=1, **kwargs, inplace=True)
x.drop(["a"], axis=1, inplace=True, **kwargs)
f(x.drop(["a"], axis=1, inplace=True))
x.apply(lambda x: x.sort_values('a', inplace=True))
import torch
torch.m.ReLU(inplace=True) # safe because this isn't a pandas call


@@ -2,7 +2,7 @@
"""Here's a top-level docstring that's over the limit."""
def f():
def f1():
"""Here's a docstring that's also over the limit."""
x = 1 # Here's a comment that's over the limit, but it's not standalone.
@@ -16,3 +16,16 @@ def f():
"This is also considered a docstring, and is over the limit."
def f2():
"""Here's a multi-line docstring.
It's over the limit on this line, which isn't the first line in the docstring.
"""
def f3():
"""Here's a multi-line docstring.
It's over the limit on this line, which isn't the first line in the docstring."""


@@ -13,3 +13,15 @@ def another_function():
def utf8_function():
"""éste docstring is capitalized."""
def uppercase_char_not_possible():
"""'args' is not capitalized."""
def non_alphabetic():
"""th!is is not capitalized."""
def non_ascii():
"""th•s is not capitalized."""
def all_caps():
"""th•s is not capitalized."""


@@ -0,0 +1,18 @@
def public_func():
pass
def private_func():
pass
class PublicClass:
class PublicNestedClass:
pass
class PrivateClass:
pass
__all__ = ("public_func", "PublicClass")


@@ -0,0 +1,19 @@
# Errors
assert my_mock.not_called()
assert my_mock.called_once_with()
assert my_mock.not_called
assert my_mock.called_once_with
my_mock.assert_not_called
my_mock.assert_called
my_mock.assert_called_once_with
my_mock.assert_called_once_with
MyMock.assert_called_once_with
# OK
assert my_mock.call_count == 1
assert my_mock.called
my_mock.assert_not_called()
my_mock.assert_called()
my_mock.assert_called_once_with()
"""like :meth:`Mock.assert_called_once_with`"""
"""like :meth:`MagicMock.assert_called_once_with`"""


@@ -0,0 +1,21 @@
min(1, 2, 3)
min(1, min(2, 3))
min(1, min(2, min(3, 4)))
min(1, foo("a", "b"), min(3, 4))
min(1, max(2, 3))
max(1, 2, 3)
max(1, max(2, 3))
max(1, max(2, max(3, 4)))
max(1, foo("a", "b"), max(3, 4))
# These should not trigger; we do not flag cases with keyword args.
min(1, min(2, 3), key=test)
min(1, min(2, 3, key=test))
# This will still trigger, to merge the calls without keyword args.
min(1, min(2, 3, key=test), min(4, 5))
# Don't provide a fix if there are comments within the call.
min(
1, # This is a comment.
min(2, 3),
)
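A sketch of the flattening these nested `min()`/`max()` cases invite; the merged form is only equivalent when no keyword arguments are involved, which is why the fixture excludes the `key=` cases:

```python
# Nested calls without keyword arguments can be merged into one call.
nested = min(1, min(2, min(3, 4)))
flat = min(1, 2, 3, 4)
assert nested == flat  # both are 1

# The same holds for max().
assert max(1, max(2, 3)) == max(1, 2, 3)
```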


@@ -0,0 +1,5 @@
def main():
exit(0)
import functools


@@ -0,0 +1,5 @@
from sys import argv
def main():
exit(0)


@@ -0,0 +1,5 @@
def main():
exit(0)
from sys import argv


@@ -22,13 +22,13 @@ def f(x=1, y=1, z=1): # OK
pass
def f(x, y, z, /, u, v, w): # OK
def f(x, y, z, /, u, v, w): # Too many arguments (6/5)
pass
def f(x, y, z, *, u, v, w): # OK
def f(x, y, z, *, u, v, w): # Too many arguments (6/5)
pass
def f(x, y, z, a, b, c, *, u, v, w): # Too many arguments (6/5)
def f(x, y, z, a, b, c, *, u, v, w): # Too many arguments (9/5)
pass


@@ -1,19 +1,16 @@
class TestClass:
def __bool__(self):
...
def __bool__(self, x): # too many mandatory args
...
def __bool__(self, x=1): # additional optional args OK
...
def __bool__(self, *args): # varargs OK
...
def __bool__(): # ignored; should be caught by E0211/N805
...
@staticmethod
def __bool__():
...
@@ -21,31 +18,58 @@ class TestClass:
@staticmethod
def __bool__(x): # too many mandatory args
...
@staticmethod
def __bool__(x=1): # additional optional args OK
...
def __eq__(self, other): # multiple args
...
def __eq__(self, other=1): # expected arg is optional
...
def __eq__(self): # too few mandatory args
...
def __eq__(self, other, other_other): # too many mandatory args
...
def __round__(self): # allow zero additional args.
def __round__(self): # allow zero additional args
...
def __round__(self, x): # allow one additional arg.
def __round__(self, x): # allow one additional arg
...
def __round__(self, x, y): # disallow 2 args
...
def __round__(self, x, y, z=2): # disallow 3 args even when one is optional
...
...
def __eq__(self, *args): # ignore *args
...
def __eq__(self, x, *args): # extra *args is ok
...
def __eq__(self, x, y, *args): # too many args with *args
...
def __round__(self, *args): # allow zero additional args
...
def __round__(self, x, *args): # allow one additional arg
...
def __round__(self, x, y, *args): # disallow 2 args
...
def __eq__(self, **kwargs): # ignore **kwargs
...
def __eq__(self, /, other=42): # ignore positional-only args
...
def __eq__(self, *, other=42): # ignore keyword-only args
...


@@ -59,3 +59,14 @@ u"foo".encode("utf-8") # b"foo"
R"foo\o".encode("utf-8") # br"foo\o"
U"foo".encode("utf-8") # b"foo"
print("foo".encode()) # print(b"foo")
# `encode` on parenthesized strings.
(
"abc"
"def"
).encode()
((
"abc"
"def"
)).encode()
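The `encode` cases above rest on a literal equivalence (a pyupgrade-style rewrite for ASCII-only literals); a quick check:

```python
# A UTF-8 encode of a plain string literal equals the bytes literal.
assert "foo".encode("utf-8") == b"foo"
assert u"foo".encode("utf-8") == b"foo"

# Implicitly concatenated, parenthesized literals behave the same.
assert ("abc" "def").encode() == b"abcdef"
```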


@@ -0,0 +1 @@
"{} {}".format(a, b) # Intentionally at start-of-file, to ensure graceful handling.


@@ -0,0 +1,23 @@
bla = b"bla"
def foo(one_arg):
pass
f"{str(bla)}, {repr(bla)}, {ascii(bla)}" # RUF010
f"{foo(bla)}" # OK
f"{str(bla, 'ascii')}, {str(bla, encoding='cp1255')}" # OK
f"{bla!s} {[]!r} {'bar'!a}" # OK
"Not an f-string {str(bla)}, {repr(bla)}, {ascii(bla)}" # OK
def ascii(arg):
pass
f"{ascii(bla)}" # OK


@@ -45,3 +45,10 @@ def good():
logger.exception("a failed")
except Exception:
logger.exception("something failed")
def fine():
try:
a = process() # This throws the exception now
finally:
print("finally")


@@ -5,7 +5,7 @@ use libcst_native::{
Codegen, CodegenState, ImportNames, ParenthesizableWhitespace, SmallStatement, Statement,
};
use ruff_text_size::{TextLen, TextRange, TextSize};
use rustpython_parser::ast::{ExcepthandlerKind, Expr, Keyword, Stmt, StmtKind};
use rustpython_parser::ast::{self, ExcepthandlerKind, Expr, Keyword, Stmt, StmtKind};
use rustpython_parser::{lexer, Mode, Tok};
use ruff_diagnostics::Edit;
@@ -28,21 +28,21 @@ fn has_single_child(body: &[Stmt], deleted: &[&Stmt]) -> bool {
/// Determine if a child is the only statement in its body.
fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool> {
match &parent.node {
StmtKind::FunctionDef { body, .. }
| StmtKind::AsyncFunctionDef { body, .. }
| StmtKind::ClassDef { body, .. }
| StmtKind::With { body, .. }
| StmtKind::AsyncWith { body, .. } => {
StmtKind::FunctionDef(ast::StmtFunctionDef { body, .. })
| StmtKind::AsyncFunctionDef(ast::StmtAsyncFunctionDef { body, .. })
| StmtKind::ClassDef(ast::StmtClassDef { body, .. })
| StmtKind::With(ast::StmtWith { body, .. })
| StmtKind::AsyncWith(ast::StmtAsyncWith { body, .. }) => {
if body.iter().contains(child) {
Ok(has_single_child(body, deleted))
} else {
bail!("Unable to find child in parent body")
}
}
StmtKind::For { body, orelse, .. }
| StmtKind::AsyncFor { body, orelse, .. }
| StmtKind::While { body, orelse, .. }
| StmtKind::If { body, orelse, .. } => {
StmtKind::For(ast::StmtFor { body, orelse, .. })
| StmtKind::AsyncFor(ast::StmtAsyncFor { body, orelse, .. })
| StmtKind::While(ast::StmtWhile { body, orelse, .. })
| StmtKind::If(ast::StmtIf { body, orelse, .. }) => {
if body.iter().contains(child) {
Ok(has_single_child(body, deleted))
} else if orelse.iter().contains(child) {
@@ -51,18 +51,18 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
bail!("Unable to find child in parent body")
}
}
StmtKind::Try {
StmtKind::Try(ast::StmtTry {
body,
handlers,
orelse,
finalbody,
}
| StmtKind::TryStar {
})
| StmtKind::TryStar(ast::StmtTryStar {
body,
handlers,
orelse,
finalbody,
} => {
}) => {
if body.iter().contains(child) {
Ok(has_single_child(body, deleted))
} else if orelse.iter().contains(child) {
@@ -70,7 +70,9 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
} else if finalbody.iter().contains(child) {
Ok(has_single_child(finalbody, deleted))
} else if let Some(body) = handlers.iter().find_map(|handler| match &handler.node {
ExcepthandlerKind::ExceptHandler { body, .. } => {
ExcepthandlerKind::ExceptHandler(ast::ExcepthandlerExceptHandler {
body, ..
}) => {
if body.iter().contains(child) {
Some(body)
} else {
@@ -83,7 +85,7 @@ fn is_lone_child(child: &Stmt, parent: &Stmt, deleted: &[&Stmt]) -> Result<bool>
bail!("Unable to find child in parent body")
}
}
StmtKind::Match { cases, .. } => {
StmtKind::Match(ast::StmtMatch { cases, .. }) => {
if let Some(body) = cases.iter().find_map(|case| {
if case.body.iter().contains(child) {
Some(&case.body)
@@ -166,7 +168,7 @@ fn is_end_of_file(stmt: &Stmt, locator: &Locator) -> bool {
/// remove the entire start and end lines.
/// - If the `Stmt` is the last statement in its parent body, replace it with a
/// `pass` instead.
pub fn delete_stmt(
pub(crate) fn delete_stmt(
stmt: &Stmt,
parent: Option<&Stmt>,
deleted: &[&Stmt],
@@ -203,7 +205,7 @@ pub fn delete_stmt(
}
/// Generate a `Fix` to remove any unused imports from an `import` statement.
pub fn remove_unused_imports<'a>(
pub(crate) fn remove_unused_imports<'a>(
unused_imports: impl Iterator<Item = &'a str>,
stmt: &Stmt,
parent: Option<&Stmt>,
@@ -328,7 +330,7 @@ pub fn remove_unused_imports<'a>(
///
/// Supports the removal of parentheses when this is the only (kw)arg left.
/// For this behavior, set `remove_parentheses` to `true`.
pub fn remove_argument(
pub(crate) fn remove_argument(
locator: &Locator,
call_at: TextSize,
expr_range: TextRange,
@@ -350,7 +352,7 @@ pub fn remove_argument(
if n_arguments == 1 {
// Case 1: there is only one argument.
let mut count: usize = 0;
for (tok, range) in lexer::lex_located(contents, Mode::Module, call_at).flatten() {
for (tok, range) in lexer::lex_starts_at(contents, Mode::Module, call_at).flatten() {
if matches!(tok, Tok::Lpar) {
if count == 0 {
fix_start = Some(if remove_parentheses {
@@ -382,7 +384,7 @@ pub fn remove_argument(
{
// Case 2: argument or keyword is _not_ the last node.
let mut seen_comma = false;
for (tok, range) in lexer::lex_located(contents, Mode::Module, call_at).flatten() {
for (tok, range) in lexer::lex_starts_at(contents, Mode::Module, call_at).flatten() {
if seen_comma {
if matches!(tok, Tok::NonLogicalNewline) {
// Also delete any non-logical newlines after the comma.
@@ -405,7 +407,7 @@ pub fn remove_argument(
} else {
// Case 3: argument or keyword is the last node, so we have to find the last
// comma in the stmt.
for (tok, range) in lexer::lex_located(contents, Mode::Module, call_at).flatten() {
for (tok, range) in lexer::lex_starts_at(contents, Mode::Module, call_at).flatten() {
if range.start() == expr_range.start() {
fix_end = Some(expr_range.end());
break;
@@ -432,17 +434,29 @@ pub fn remove_argument(
/// name on which the `lru_cache` symbol would be made available (`"functools.lru_cache"`).
///
/// Attempts to reuse existing imports when possible.
pub fn get_or_import_symbol(
pub(crate) fn get_or_import_symbol(
module: &str,
member: &str,
at: TextSize,
context: &Context,
importer: &Importer,
locator: &Locator,
) -> Result<(Edit, String)> {
if let Some((source, binding)) = context.resolve_qualified_import_name(module, member) {
// If the symbol is already available in the current scope, use it.
//
// We also add a no-nop edit to force conflicts with any other fixes that might try to
// The exception: the symbol source (i.e., the import statement) comes after the current
// location. For example, we could be generating an edit within a function, and the import
// could be defined in the module scope, but after the function definition. In this case,
// it's unclear whether we can use the symbol (the function could be called between the
// import and the current location, and thus the symbol would not be available). It's also
// unclear whether we should add an import statement at the top of the file, since it could
// be shadowed between the import and the current location.
if source.start() > at {
bail!("Unable to use existing symbol `{binding}` due to late-import");
}
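The late-import hazard described in the comment above can be reproduced in a few lines of Python (names are illustrative):

```python
# `use_reduce` is defined before the module-level import below, so whether
# the name `functools` is bound depends on when the function is called:
# exactly the ambiguity that makes reusing a late import unsafe.
def use_reduce():
    return functools.reduce(lambda a, b: a + b, [1, 2, 3])

try:
    use_reduce()  # called before the import: NameError
except NameError:
    early_call_failed = True

import functools  # the "late" import

late_result = use_reduce()  # called after the import: works
```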
// We also add a no-op edit to force conflicts with any other fixes that might try to
// remove the import. Consider:
//
// ```py
@@ -462,7 +476,7 @@ pub fn get_or_import_symbol(
Edit::range_replacement(locator.slice(source.range()).to_string(), source.range());
Ok((import_edit, binding))
} else {
if let Some(stmt) = importer.get_import_from(module) {
if let Some(stmt) = importer.find_import_from(module, at) {
// Case 1: `from functools import lru_cache` is in scope, and we're trying to reference
// `functools.cache`; thus, we add `cache` to the import, and return `"cache"` as the
// bound name.
@@ -473,10 +487,7 @@ pub fn get_or_import_symbol(
let import_edit = importer.add_member(stmt, member)?;
Ok((import_edit, member.to_string()))
} else {
bail!(
"Unable to insert `{}` into scope due to name conflict",
member
)
bail!("Unable to insert `{member}` into scope due to name conflict")
}
} else {
// Case 2: No `functools` import is in scope; thus, we add `import functools`, and
@@ -485,13 +496,11 @@ pub fn get_or_import_symbol(
.find_binding(module)
.map_or(true, |binding| binding.kind.is_builtin())
{
let import_edit = importer.add_import(&AnyImport::Import(Import::module(module)));
let import_edit =
importer.add_import(&AnyImport::Import(Import::module(module)), at);
Ok((import_edit, format!("{module}.{member}")))
} else {
bail!(
"Unable to insert `{}` into scope due to name conflict",
module
)
bail!("Unable to insert `{module}` into scope due to name conflict")
}
}
}
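The fallback cases of `get_or_import_symbol` above can be summarized as before/after source text (hypothetical edits, following the `functools.cache` example from the doc comment):

```python
# Case 1: a matching `from`-import exists, so the member is added to it and
# the short name is returned as the binding.
case1_before = "from functools import lru_cache"
case1_after = "from functools import cache, lru_cache"
case1_binding = "cache"

# Case 2: no import is in scope, so a module import is inserted and the
# qualified name is returned as the binding.
case2_after = "import functools"
case2_binding = "functools.cache"

assert case1_after.startswith("from functools import ")
assert case2_binding.startswith(case2_after.split()[1])
```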

View File

@@ -10,13 +10,16 @@ use ruff_python_ast::source_code::Locator;
use crate::linter::FixTable;
use crate::registry::{AsRule, Rule};
pub mod actions;
pub(crate) mod actions;
/// Auto-fix errors in a file, and write the fixed source code to disk.
pub fn fix_file(diagnostics: &[Diagnostic], locator: &Locator) -> Option<(String, FixTable)> {
pub(crate) fn fix_file(
diagnostics: &[Diagnostic],
locator: &Locator,
) -> Option<(String, FixTable)> {
let mut with_fixes = diagnostics
.iter()
.filter(|diag| !diag.fix.is_empty())
.filter(|diag| diag.fix.is_some())
.peekable();
if with_fixes.peek().is_none() {
@@ -38,11 +41,10 @@ fn apply_fixes<'a>(
for (rule, fix) in diagnostics
.filter_map(|diagnostic| {
if diagnostic.fix.is_empty() {
None
} else {
Some((diagnostic.kind.rule(), &diagnostic.fix))
}
diagnostic
.fix
.as_ref()
.map(|fix| (diagnostic.kind.rule(), fix))
})
.sorted_by(|(rule1, fix1), (rule2, fix2)| cmp_fix(*rule1, *rule2, fix1, fix2))
{
@@ -103,18 +105,20 @@ mod tests {
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Edit;
use ruff_diagnostics::Fix;
use ruff_python_ast::source_code::Locator;
use crate::autofix::apply_fixes;
use crate::rules::pycodestyle::rules::MissingNewlineAtEndOfFile;
#[allow(deprecated)]
fn create_diagnostics(edit: impl IntoIterator<Item = Edit>) -> Vec<Diagnostic> {
edit.into_iter()
.map(|edit| Diagnostic {
// The choice of rule here is arbitrary.
kind: MissingNewlineAtEndOfFile.into(),
range: edit.range(),
fix: edit.into(),
fix: Some(Fix::unspecified(edit)),
parent: None,
})
.collect()

View File

@@ -1,25 +1,17 @@
use ruff_text_size::TextRange;
use rustpython_parser::ast::{Expr, Stmt};
use rustpython_parser::ast::Expr;
use ruff_python_ast::types::RefEquality;
use ruff_python_semantic::analyze::visibility::{Visibility, VisibleScope};
use ruff_python_semantic::scope::ScopeId;
use crate::checkers::ast::AnnotationContext;
use crate::docstrings::definition::Definition;
type Context<'a> = (ScopeId, Vec<RefEquality<'a, Stmt>>);
use ruff_python_semantic::context::Snapshot;
/// A collection of AST nodes that are deferred for later analysis.
/// Used to, e.g., store functions, whose bodies shouldn't be analyzed until all
/// module-level definitions have been analyzed.
#[derive(Default)]
pub struct Deferred<'a> {
pub definitions: Vec<(Definition<'a>, Visibility, Context<'a>)>,
pub string_type_definitions: Vec<(TextRange, &'a str, AnnotationContext, Context<'a>)>,
pub type_definitions: Vec<(&'a Expr, AnnotationContext, Context<'a>)>,
pub functions: Vec<(&'a Stmt, Context<'a>, VisibleScope)>,
pub lambdas: Vec<(&'a Expr, Context<'a>)>,
pub for_loops: Vec<(&'a Stmt, Context<'a>)>,
pub assignments: Vec<Context<'a>>,
#[derive(Debug, Default)]
pub(crate) struct Deferred<'a> {
pub(crate) string_type_definitions: Vec<(TextRange, &'a str, Snapshot)>,
pub(crate) future_type_definitions: Vec<(&'a Expr, Snapshot)>,
pub(crate) functions: Vec<Snapshot>,
pub(crate) lambdas: Vec<(&'a Expr, Snapshot)>,
pub(crate) for_loops: Vec<Snapshot>,
pub(crate) assignments: Vec<Snapshot>,
}
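The reason for deferring function bodies, per the doc comment above, can be seen in ordinary Python: a body may reference a module-level name that is only defined later in the file.

```python
# Valid module: `helper` is referenced inside `f` before it is defined,
# because the body only executes after the whole module has been loaded.
def f():
    return helper()

def helper():
    return 42

result = f()
```

Analyzing `f`'s body eagerly would report a false "undefined name" for `helper`.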

File diff suppressed because it is too large

View File

@@ -7,7 +7,7 @@ use crate::rules::flake8_no_pep420::rules::implicit_namespace_package;
use crate::rules::pep8_naming::rules::invalid_module_name;
use crate::settings::Settings;
pub fn check_file_path(
pub(crate) fn check_file_path(
path: &Path,
package: Option<&Path>,
settings: &Settings,

View File

@@ -2,20 +2,20 @@
use std::borrow::Cow;
use std::path::Path;
use rustpython_parser::ast::{StmtKind, Suite};
use rustpython_parser::ast::{self, StmtKind, Suite};
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::helpers::to_module_path;
use ruff_python_ast::imports::{ImportMap, ModuleImport};
use ruff_python_ast::source_code::{Indexer, Locator, Stylist};
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_stdlib::path::is_python_stub_file;
use crate::directives::IsortDirectives;
use crate::registry::Rule;
use crate::rules::isort;
use crate::rules::isort::track::{Block, ImportTracker};
use crate::settings::{flags, Settings};
use crate::settings::Settings;
fn extract_import_map(path: &Path, package: Option<&Path>, blocks: &[&Block]) -> Option<ImportMap> {
let Some(package) = package else {
@@ -29,20 +29,21 @@ fn extract_import_map(path: &Path, package: Option<&Path>, blocks: &[&Block]) ->
let mut module_imports = Vec::with_capacity(num_imports);
for stmt in blocks.iter().flat_map(|block| &block.imports) {
match &stmt.node {
StmtKind::Import { names } => {
StmtKind::Import(ast::StmtImport { names }) => {
module_imports.extend(
names
.iter()
.map(|name| ModuleImport::new(name.node.name.clone(), stmt.range())),
.map(|name| ModuleImport::new(name.node.name.to_string(), stmt.range())),
);
}
StmtKind::ImportFrom {
StmtKind::ImportFrom(ast::StmtImportFrom {
module,
names,
level,
} => {
let level = level.unwrap_or(0);
}) => {
let level = level.map_or(0, |level| level.to_usize());
let module = if let Some(module) = module {
let module: &String = module.as_ref();
if level == 0 {
Cow::Borrowed(module)
} else {
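The `level` field destructured above counts the leading dots of a relative import; CPython's `ast` module exposes the same shape (a quick sketch):

```python
import ast

node = ast.parse("from ..pkg import helper").body[0]
# Two leading dots -> level == 2; `module` is the dotted path after them.
assert isinstance(node, ast.ImportFrom)
assert node.level == 2
assert node.module == "pkg"

# A plain `import` carries only `names`, mirroring the `StmtImport` arm.
plain = ast.parse("import os").body[0]
assert [alias.name for alias in plain.names] == ["os"]
```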
@@ -72,14 +73,13 @@ fn extract_import_map(path: &Path, package: Option<&Path>, blocks: &[&Block]) ->
}
#[allow(clippy::too_many_arguments)]
pub fn check_imports(
pub(crate) fn check_imports(
python_ast: &Suite,
locator: &Locator,
indexer: &Indexer,
directives: &IsortDirectives,
settings: &Settings,
stylist: &Stylist,
autofix: flags::Autofix,
path: &Path,
package: Option<&Path>,
) -> (Vec<Diagnostic>, Option<ImportMap>) {
@@ -99,7 +99,7 @@ pub fn check_imports(
for block in &blocks {
if !block.imports.is_empty() {
if let Some(diagnostic) = isort::rules::organize_imports(
block, locator, stylist, indexer, settings, autofix, package,
block, locator, stylist, indexer, settings, package,
) {
diagnostics.push(diagnostic);
}
@@ -108,7 +108,7 @@ pub fn check_imports(
}
if settings.rules.enabled(Rule::MissingRequiredImport) {
diagnostics.extend(isort::rules::add_required_imports(
&blocks, python_ast, locator, stylist, settings, autofix, is_stub,
&blocks, python_ast, locator, stylist, settings, is_stub,
));
}

View File

@@ -1,7 +1,7 @@
use ruff_text_size::TextRange;
use rustpython_parser::lexer::LexResult;
use ruff_diagnostics::{Diagnostic, DiagnosticKind, Fix};
use ruff_diagnostics::{Diagnostic, DiagnosticKind};
use ruff_python_ast::source_code::{Locator, Stylist};
use ruff_python_ast::token_kind::TokenKind;
@@ -12,7 +12,7 @@ use crate::rules::pycodestyle::rules::logical_lines::{
whitespace_around_named_parameter_equals, whitespace_before_comment,
whitespace_before_parameters, LogicalLines, TokenFlags,
};
use crate::settings::{flags, Settings};
use crate::settings::Settings;
/// Return the amount of indentation, expanding tabs to the next multiple of 8.
fn expand_indent(line: &str) -> usize {
@@ -30,25 +30,23 @@ fn expand_indent(line: &str) -> usize {
indent
}
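A direct Python transcription of `expand_indent` as documented above (tabs advance to the next multiple of 8, matching pycodestyle's convention); a sketch under that assumption:

```python
def expand_indent(line: str) -> int:
    # Tabs jump to the next multiple of 8; spaces count as 1.
    indent = 0
    for ch in line:
        if ch == "\t":
            indent = indent // 8 * 8 + 8
        elif ch == " ":
            indent += 1
        else:
            break
    return indent

assert expand_indent("    x") == 4
assert expand_indent("\tx") == 8
assert expand_indent("  \tx") == 8  # 2 spaces, then tab to column 8
```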
pub fn check_logical_lines(
pub(crate) fn check_logical_lines(
tokens: &[LexResult],
locator: &Locator,
stylist: &Stylist,
settings: &Settings,
autofix: flags::Autofix,
) -> Vec<Diagnostic> {
let mut context = LogicalLinesContext::new(settings);
#[cfg(feature = "logical_lines")]
let should_fix_missing_whitespace =
autofix.into() && settings.rules.should_fix(Rule::MissingWhitespace);
let should_fix_missing_whitespace = settings.rules.should_fix(Rule::MissingWhitespace);
#[cfg(not(feature = "logical_lines"))]
let should_fix_missing_whitespace = false;
#[cfg(feature = "logical_lines")]
let should_fix_whitespace_before_parameters =
autofix.into() && settings.rules.should_fix(Rule::WhitespaceBeforeParameters);
settings.rules.should_fix(Rule::WhitespaceBeforeParameters);
#[cfg(not(feature = "logical_lines"))]
let should_fix_whitespace_before_parameters = false;
@@ -138,19 +136,19 @@ impl<'a> LogicalLinesContext<'a> {
}
}
pub fn push<K: Into<DiagnosticKind>>(&mut self, kind: K, range: TextRange) {
pub(crate) fn push<K: Into<DiagnosticKind>>(&mut self, kind: K, range: TextRange) {
let kind = kind.into();
if self.settings.rules.enabled(kind.rule()) {
self.diagnostics.push(Diagnostic {
kind,
range,
fix: Fix::empty(),
fix: None,
parent: None,
});
}
}
pub fn push_diagnostic(&mut self, diagnostic: Diagnostic) {
pub(crate) fn push_diagnostic(&mut self, diagnostic: Diagnostic) {
if self.settings.rules.enabled(diagnostic.kind.rule()) {
self.diagnostics.push(diagnostic);
}

View File

@@ -1,8 +1,8 @@
pub mod ast;
pub mod filesystem;
pub mod imports;
pub(crate) mod ast;
pub(crate) mod filesystem;
pub(crate) mod imports;
#[cfg(feature = "logical_lines")]
pub(crate) mod logical_lines;
pub mod noqa;
pub mod physical_lines;
pub mod tokens;
pub(crate) mod noqa;
pub(crate) mod physical_lines;
pub(crate) mod tokens;

View File

@@ -3,7 +3,7 @@
use itertools::Itertools;
use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_diagnostics::{Diagnostic, Edit};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_python_ast::source_code::Locator;
use crate::noqa;
@@ -11,15 +11,14 @@ use crate::noqa::{Directive, FileExemption, NoqaDirectives, NoqaMapping};
use crate::registry::{AsRule, Rule};
use crate::rule_redirects::get_redirect_target;
use crate::rules::ruff::rules::{UnusedCodes, UnusedNOQA};
use crate::settings::{flags, Settings};
use crate::settings::Settings;
pub fn check_noqa(
pub(crate) fn check_noqa(
diagnostics: &mut Vec<Diagnostic>,
locator: &Locator,
comment_ranges: &[TextRange],
noqa_line_for: &NoqaMapping,
settings: &Settings,
autofix: flags::Autofix,
) -> Vec<usize> {
let enforce_noqa = settings.rules.enabled(Rule::UnusedNOQA);
@@ -101,8 +100,9 @@ pub fn check_noqa(
if line.matches.is_empty() {
let mut diagnostic =
Diagnostic::new(UnusedNOQA { codes: None }, *noqa_range);
if autofix.into() && settings.rules.should_fix(diagnostic.kind.rule()) {
diagnostic.set_fix(delete_noqa(
if settings.rules.should_fix(diagnostic.kind.rule()) {
#[allow(deprecated)]
diagnostic.set_fix_from_edit(delete_noqa(
*leading_spaces,
*noqa_range,
*trailing_spaces,
@@ -169,19 +169,21 @@ pub fn check_noqa(
},
*range,
);
if autofix.into() && settings.rules.should_fix(diagnostic.kind.rule()) {
if settings.rules.should_fix(diagnostic.kind.rule()) {
if valid_codes.is_empty() {
diagnostic.set_fix(delete_noqa(
#[allow(deprecated)]
diagnostic.set_fix_from_edit(delete_noqa(
*leading_spaces,
*range,
*trailing_spaces,
locator,
));
} else {
diagnostic.set_fix(Edit::range_replacement(
#[allow(deprecated)]
diagnostic.set_fix(Fix::unspecified(Edit::range_replacement(
format!("# noqa: {}", valid_codes.join(", ")),
*range,
));
)));
}
}
diagnostics.push(diagnostic);

View File

@@ -19,16 +19,15 @@ use crate::rules::pycodestyle::rules::{
use crate::rules::pygrep_hooks::rules::{blanket_noqa, blanket_type_ignore};
use crate::rules::pylint;
use crate::rules::pyupgrade::rules::unnecessary_coding_comment;
use crate::settings::{flags, Settings};
use crate::settings::Settings;
pub fn check_physical_lines(
pub(crate) fn check_physical_lines(
path: &Path,
locator: &Locator,
stylist: &Stylist,
indexer: &Indexer,
doc_lines: &[TextSize],
settings: &Settings,
autofix: flags::Autofix,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
let mut has_any_shebang = false;
@@ -51,10 +50,8 @@ pub fn check_physical_lines(
settings.rules.enabled(Rule::BlankLineWithWhitespace);
let enforce_tab_indentation = settings.rules.enabled(Rule::TabIndentation);
let fix_unnecessary_coding_comment =
autofix.into() && settings.rules.should_fix(Rule::UTF8EncodingDeclaration);
let fix_shebang_whitespace =
autofix.into() && settings.rules.should_fix(Rule::ShebangLeadingWhitespace);
let fix_unnecessary_coding_comment = settings.rules.should_fix(Rule::UTF8EncodingDeclaration);
let fix_shebang_whitespace = settings.rules.should_fix(Rule::ShebangLeadingWhitespace);
let mut commented_lines_iter = indexer.comment_ranges().iter().peekable();
let mut doc_lines_iter = doc_lines.iter().peekable();
@@ -121,7 +118,7 @@ pub fn check_physical_lines(
}
while doc_lines_iter
.next_if(|doc_line_start| line.range().contains(**doc_line_start))
.next_if(|doc_line_start| line.range().contains_inclusive(**doc_line_start))
.is_some()
{
if enforce_doc_line_too_long {
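The switch from `contains` to `contains_inclusive` above changes only the end-offset boundary case; a minimal sketch with hypothetical helpers mirroring `TextRange`:

```python
def contains(start: int, end: int, offset: int) -> bool:
    # Half-open interval: [start, end)
    return start <= offset < end

def contains_inclusive(start: int, end: int, offset: int) -> bool:
    # Closed interval: [start, end]
    return start <= offset <= end

# A doc line that starts exactly at the line's end offset is matched only
# by the inclusive variant.
assert not contains(0, 10, 10)
assert contains_inclusive(0, 10, 10)
```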
@@ -148,7 +145,7 @@ pub fn check_physical_lines(
}
if enforce_trailing_whitespace || enforce_blank_line_contains_whitespace {
if let Some(diagnostic) = trailing_whitespace(&line, settings, autofix) {
if let Some(diagnostic) = trailing_whitespace(&line, settings) {
diagnostics.push(diagnostic);
}
}
@@ -164,7 +161,7 @@ pub fn check_physical_lines(
if let Some(diagnostic) = no_newline_at_end_of_file(
locator,
stylist,
autofix.into() && settings.rules.should_fix(Rule::MissingNewlineAtEndOfFile),
settings.rules.should_fix(Rule::MissingNewlineAtEndOfFile),
) {
diagnostics.push(diagnostic);
}
@@ -188,7 +185,7 @@ mod tests {
use ruff_python_ast::source_code::{Indexer, Locator, Stylist};
use crate::registry::Rule;
use crate::settings::{flags, Settings};
use crate::settings::Settings;
use super::check_physical_lines;
@@ -211,7 +208,6 @@ mod tests {
line_length,
..Settings::for_rule(Rule::LineTooLong)
},
flags::Autofix::Enabled,
)
};
assert_eq!(check_with_max_line_length(8), vec![]);

View File

@@ -10,15 +10,14 @@ use crate::rules::{
eradicate, flake8_commas, flake8_implicit_str_concat, flake8_pyi, flake8_quotes, pycodestyle,
pylint, pyupgrade, ruff,
};
use crate::settings::{flags, Settings};
use crate::settings::Settings;
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::source_code::Locator;
pub fn check_tokens(
pub(crate) fn check_tokens(
locator: &Locator,
tokens: &[LexResult],
settings: &Settings,
autofix: flags::Autofix,
is_stub: bool,
) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = vec![];
@@ -85,7 +84,6 @@ pub fn check_tokens(
Context::Comment
},
settings,
autofix,
));
}
}
@@ -96,7 +94,7 @@ pub fn check_tokens(
for (tok, range) in tokens.iter().flatten() {
if matches!(tok, Tok::Comment(_)) {
if let Some(diagnostic) =
eradicate::rules::commented_out_code(locator, *range, settings, autofix)
eradicate::rules::commented_out_code(locator, *range, settings)
{
diagnostics.push(diagnostic);
}
@@ -111,7 +109,7 @@ pub fn check_tokens(
diagnostics.extend(pycodestyle::rules::invalid_escape_sequence(
locator,
*range,
autofix.into() && settings.rules.should_fix(Rule::InvalidEscapeSequence),
settings.rules.should_fix(Rule::InvalidEscapeSequence),
));
}
}
@@ -121,7 +119,7 @@ pub fn check_tokens(
for (tok, range) in tokens.iter().flatten() {
if matches!(tok, Tok::String { .. }) {
diagnostics.extend(
pylint::rules::invalid_string_characters(locator, *range, autofix.into())
pylint::rules::invalid_string_characters(locator, *range)
.into_iter()
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.rule())),
);
@@ -132,7 +130,7 @@ pub fn check_tokens(
// E701, E702, E703
if enforce_compound_statements {
diagnostics.extend(
pycodestyle::rules::compound_statements(tokens, settings, autofix)
pycodestyle::rules::compound_statements(tokens, settings)
.into_iter()
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.rule())),
);
@@ -141,7 +139,7 @@ pub fn check_tokens(
// Q001, Q002, Q003
if enforce_quotes {
diagnostics.extend(
flake8_quotes::rules::from_tokens(tokens, locator, settings, autofix)
flake8_quotes::rules::from_tokens(tokens, locator, settings)
.into_iter()
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.rule())),
);
@@ -163,7 +161,7 @@ pub fn check_tokens(
// COM812, COM818, COM819
if enforce_trailing_comma {
diagnostics.extend(
flake8_commas::rules::trailing_commas(tokens, locator, settings, autofix)
flake8_commas::rules::trailing_commas(tokens, locator, settings)
.into_iter()
.filter(|diagnostic| settings.rules.enabled(diagnostic.kind.rule())),
);
@@ -172,8 +170,7 @@ pub fn check_tokens(
// UP034
if enforce_extraneous_parenthesis {
diagnostics.extend(
pyupgrade::rules::extraneous_parentheses(tokens, locator, settings, autofix)
.into_iter(),
pyupgrade::rules::extraneous_parentheses(tokens, locator, settings).into_iter(),
);
}

View File

@@ -1,6 +1,7 @@
use crate::registry::{Linter, Rule};
use std::fmt::Formatter;
use crate::registry::{Linter, Rule};
#[derive(PartialEq, Eq, PartialOrd, Ord)]
pub struct NoqaCode(&'static str, &'static str);
@@ -211,6 +212,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Pylint, "W1508") => Rule::InvalidEnvvarDefault,
(Pylint, "W2901") => Rule::RedefinedLoopName,
(Pylint, "E0302") => Rule::UnexpectedSpecialMethodSignature,
(Pylint, "W3301") => Rule::NestedMinMax,
// flake8-builtins
(Flake8Builtins, "001") => Rule::BuiltinVariableShadowing,
@@ -561,6 +563,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(PygrepHooks, "002") => Rule::DeprecatedLogWarn,
(PygrepHooks, "003") => Rule::BlanketTypeIgnore,
(PygrepHooks, "004") => Rule::BlanketNOQA,
(PygrepHooks, "005") => Rule::InvalidMockAccess,
// pandas-vet
(PandasVet, "002") => Rule::PandasUseOfInplaceArgument,
@@ -724,6 +727,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Ruff, "007") => Rule::PairwiseOverZipped,
(Ruff, "008") => Rule::MutableDataclassDefault,
(Ruff, "009") => Rule::FunctionCallInDataclassDefaultArgument,
(Ruff, "010") => Rule::ExplicitFStringTypeConversion,
(Ruff, "100") => Rule::UnusedNOQA,
// flake8-django
@@ -735,6 +739,10 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<Rule> {
(Flake8Django, "012") => Rule::DjangoUnorderedBodyContentInModel,
(Flake8Django, "013") => Rule::DjangoNonLeadingReceiverDecorator,
// flynt
// Reserved: (Flynt, "001") => Rule::StringConcatenationToFString,
(Flynt, "002") => Rule::StaticJoinToFString,
_ => return None,
})
}

View File

@@ -16,7 +16,7 @@ fn compose_call_path_inner<'a>(expr: &'a Expression, parts: &mut Vec<&'a str>) {
}
}
pub fn compose_call_path(expr: &Expression) -> Option<String> {
pub(crate) fn compose_call_path(expr: &Expression) -> Option<String> {
let mut segments = vec![];
compose_call_path_inner(expr, &mut segments);
if segments.is_empty() {
@@ -26,7 +26,7 @@ pub fn compose_call_path(expr: &Expression) -> Option<String> {
}
}
pub fn compose_module_path(module: &NameOrAttribute) -> String {
pub(crate) fn compose_module_path(module: &NameOrAttribute) -> String {
match module {
NameOrAttribute::N(name) => name.value.to_string(),
NameOrAttribute::A(attr) => {

View File

@@ -4,21 +4,21 @@ use libcst_native::{
ImportNames, Module, SimpleString, SmallStatement, Statement,
};
pub fn match_module(module_text: &str) -> Result<Module> {
pub(crate) fn match_module(module_text: &str) -> Result<Module> {
match libcst_native::parse_module(module_text, None) {
Ok(module) => Ok(module),
Err(_) => bail!("Failed to extract CST from source"),
}
}
pub fn match_expression(expression_text: &str) -> Result<Expression> {
pub(crate) fn match_expression(expression_text: &str) -> Result<Expression> {
match libcst_native::parse_expression(expression_text) {
Ok(expression) => Ok(expression),
Err(_) => bail!("Failed to extract CST from source"),
}
}
pub fn match_expr<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Expr<'b>> {
pub(crate) fn match_expr<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Expr<'b>> {
if let Some(Statement::Simple(expr)) = module.body.first_mut() {
if let Some(SmallStatement::Expr(expr)) = expr.body.first_mut() {
Ok(expr)
@@ -30,7 +30,7 @@ pub fn match_expr<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Expr<'b>
}
}
pub fn match_import<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Import<'b>> {
pub(crate) fn match_import<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Import<'b>> {
if let Some(Statement::Simple(expr)) = module.body.first_mut() {
if let Some(SmallStatement::Import(expr)) = expr.body.first_mut() {
Ok(expr)
@@ -42,7 +42,9 @@ pub fn match_import<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut Import
}
}
pub fn match_import_from<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut ImportFrom<'b>> {
pub(crate) fn match_import_from<'a, 'b>(
module: &'a mut Module<'b>,
) -> Result<&'a mut ImportFrom<'b>> {
if let Some(Statement::Simple(expr)) = module.body.first_mut() {
if let Some(SmallStatement::ImportFrom(expr)) = expr.body.first_mut() {
Ok(expr)
@@ -54,7 +56,7 @@ pub fn match_import_from<'a, 'b>(module: &'a mut Module<'b>) -> Result<&'a mut I
}
}
pub fn match_aliases<'a, 'b>(
pub(crate) fn match_aliases<'a, 'b>(
import_from: &'a mut ImportFrom<'b>,
) -> Result<&'a mut Vec<ImportAlias<'b>>> {
if let ImportNames::Aliases(aliases) = &mut import_from.names {
@@ -64,7 +66,7 @@ pub fn match_aliases<'a, 'b>(
}
}
pub fn match_call<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut Call<'b>> {
pub(crate) fn match_call<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut Call<'b>> {
if let Expression::Call(call) = expression {
Ok(call)
} else {
@@ -72,7 +74,7 @@ pub fn match_call<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut
}
}
pub fn match_comparison<'a, 'b>(
pub(crate) fn match_comparison<'a, 'b>(
expression: &'a mut Expression<'b>,
) -> Result<&'a mut Comparison<'b>> {
if let Expression::Comparison(comparison) = expression {
@@ -82,7 +84,7 @@ pub fn match_comparison<'a, 'b>(
}
}
pub fn match_dict<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut Dict<'b>> {
pub(crate) fn match_dict<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut Dict<'b>> {
if let Expression::Dict(dict) = expression {
Ok(dict)
} else {
@@ -90,7 +92,7 @@ pub fn match_dict<'a, 'b>(expression: &'a mut Expression<'b>) -> Result<&'a mut
}
}
pub fn match_attribute<'a, 'b>(
pub(crate) fn match_attribute<'a, 'b>(
expression: &'a mut Expression<'b>,
) -> Result<&'a mut Attribute<'b>> {
if let Expression::Attribute(attribute) = expression {
@@ -100,7 +102,7 @@ pub fn match_attribute<'a, 'b>(
}
}
pub fn match_simple_string<'a, 'b>(
pub(crate) fn match_simple_string<'a, 'b>(
expression: &'a mut Expression<'b>,
) -> Result<&'a mut SimpleString<'b>> {
if let Expression::SimpleString(simple_string) = expression {

View File

@@ -1,2 +1,2 @@
pub mod helpers;
pub mod matchers;
pub(crate) mod helpers;
pub(crate) mod matchers;

View File

@@ -1,23 +1,26 @@
//! Doc line extraction. In this context, a doc line is a line consisting of a
//! standalone comment or a constant string statement.
use ruff_text_size::{TextRange, TextSize};
use std::iter::FusedIterator;
use ruff_python_ast::source_code::Locator;
use rustpython_parser::ast::{Constant, ExprKind, Stmt, StmtKind, Suite};
use ruff_text_size::{TextRange, TextSize};
use rustpython_parser::ast::{self, Constant, ExprKind, Stmt, StmtKind, Suite};
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use ruff_python_ast::visitor;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::newlines::UniversalNewlineIterator;
use ruff_python_ast::source_code::Locator;
use ruff_python_ast::statement_visitor::{walk_stmt, StatementVisitor};
/// Extract doc lines (standalone comments) from a token sequence.
pub fn doc_lines_from_tokens<'a>(lxr: &'a [LexResult], locator: &'a Locator<'a>) -> DocLines<'a> {
pub(crate) fn doc_lines_from_tokens<'a>(
lxr: &'a [LexResult],
locator: &'a Locator<'a>,
) -> DocLines<'a> {
DocLines::new(lxr, locator)
}
pub struct DocLines<'a> {
pub(crate) struct DocLines<'a> {
inner: std::iter::Flatten<core::slice::Iter<'a, LexResult>>,
locator: &'a Locator<'a>,
prev: TextSize,
@@ -69,29 +72,43 @@ impl Iterator for DocLines<'_> {
impl FusedIterator for DocLines<'_> {}
#[derive(Default)]
struct StringLinesVisitor {
struct StringLinesVisitor<'a> {
string_lines: Vec<TextSize>,
locator: &'a Locator<'a>,
}
impl Visitor<'_> for StringLinesVisitor {
impl StatementVisitor<'_> for StringLinesVisitor<'_> {
fn visit_stmt(&mut self, stmt: &Stmt) {
if let StmtKind::Expr { value } = &stmt.node {
if let ExprKind::Constant {
if let StmtKind::Expr(ast::StmtExpr { value }) = &stmt.node {
if let ExprKind::Constant(ast::ExprConstant {
value: Constant::Str(..),
..
} = &value.node
}) = &value.node
{
self.string_lines.push(value.start());
for line in UniversalNewlineIterator::with_offset(
self.locator.slice(value.range()),
value.start(),
) {
self.string_lines.push(line.start());
}
}
}
visitor::walk_stmt(self, stmt);
walk_stmt(self, stmt);
}
}
impl<'a> StringLinesVisitor<'a> {
fn new(locator: &'a Locator<'a>) -> Self {
Self {
string_lines: Vec::new(),
locator,
}
}
}
/// Extract doc lines (standalone strings) start positions from an AST.
pub fn doc_lines_from_ast(python_ast: &Suite) -> Vec<TextSize> {
let mut visitor = StringLinesVisitor::default();
pub(crate) fn doc_lines_from_ast(python_ast: &Suite, locator: &Locator) -> Vec<TextSize> {
let mut visitor = StringLinesVisitor::new(locator);
visitor.visit_body(python_ast);
visitor.string_lines
}
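The revised visitor records one offset per line of a multi-line string constant rather than just the constant's start; a rough Python analogue of that iteration:

```python
constant = '"""first\nsecond\nthird"""'

# Collect the start offset of every line in the constant, as the
# `UniversalNewlineIterator::with_offset` loop above does.
offsets = []
pos = 0
for line in constant.splitlines(keepends=True):
    offsets.append(pos)
    pos += len(line)

assert offsets == [0, 9, 16]
```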

View File

@@ -1,140 +0,0 @@
use ruff_text_size::{TextRange, TextSize};
use rustpython_parser::ast::{Expr, Stmt};
use std::fmt::{Debug, Formatter};
use std::ops::Deref;
use ruff_python_semantic::analyze::visibility::{
class_visibility, function_visibility, method_visibility, Modifier, Visibility, VisibleScope,
};
#[derive(Debug, Clone)]
pub enum DefinitionKind<'a> {
Module,
Package,
Class(&'a Stmt),
NestedClass(&'a Stmt),
Function(&'a Stmt),
NestedFunction(&'a Stmt),
Method(&'a Stmt),
}
#[derive(Debug)]
pub struct Definition<'a> {
pub kind: DefinitionKind<'a>,
pub docstring: Option<&'a Expr>,
}
#[derive(Debug)]
pub struct Docstring<'a> {
pub kind: DefinitionKind<'a>,
pub expr: &'a Expr,
/// The content of the docstring, including the leading and trailing quotes.
pub contents: &'a str,
/// The range of the docstring body (without the quotes). The range is relative to [`Self::contents`].
pub body_range: TextRange,
pub indentation: &'a str,
}
impl<'a> Docstring<'a> {
pub fn body(&self) -> DocstringBody {
DocstringBody { docstring: self }
}
pub const fn start(&self) -> TextSize {
self.expr.start()
}
pub const fn end(&self) -> TextSize {
self.expr.end()
}
pub const fn range(&self) -> TextRange {
self.expr.range()
}
pub fn leading_quote(&self) -> &'a str {
&self.contents[TextRange::up_to(self.body_range.start())]
}
}
#[derive(Copy, Clone)]
pub struct DocstringBody<'a> {
docstring: &'a Docstring<'a>,
}
impl<'a> DocstringBody<'a> {
#[inline]
pub fn start(self) -> TextSize {
self.range().start()
}
#[inline]
pub fn end(self) -> TextSize {
self.range().end()
}
pub fn range(self) -> TextRange {
self.docstring.body_range + self.docstring.start()
}
pub fn as_str(self) -> &'a str {
&self.docstring.contents[self.docstring.body_range]
}
}
impl Deref for DocstringBody<'_> {
type Target = str;
fn deref(&self) -> &Self::Target {
self.as_str()
}
}
impl Debug for DocstringBody<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("DocstringBody")
.field("text", &self.as_str())
.field("range", &self.range())
.finish()
}
}
#[derive(Copy, Clone)]
pub enum Documentable {
Class,
Function,
}
pub fn transition_scope(scope: VisibleScope, stmt: &Stmt, kind: Documentable) -> VisibleScope {
match kind {
Documentable::Function => VisibleScope {
modifier: Modifier::Function,
visibility: match scope {
VisibleScope {
modifier: Modifier::Module,
visibility: Visibility::Public,
} => function_visibility(stmt),
VisibleScope {
modifier: Modifier::Class,
visibility: Visibility::Public,
} => method_visibility(stmt),
_ => Visibility::Private,
},
},
Documentable::Class => VisibleScope {
modifier: Modifier::Class,
visibility: match scope {
VisibleScope {
modifier: Modifier::Module,
visibility: Visibility::Public,
} => class_visibility(stmt),
VisibleScope {
modifier: Modifier::Class,
visibility: Visibility::Public,
} => class_visibility(stmt),
_ => Visibility::Private,
},
},
}
}


@@ -1,84 +1,91 @@
//! Extract docstrings from an AST.
use rustpython_parser::ast::{Constant, Expr, ExprKind, Stmt, StmtKind};
use rustpython_parser::ast::{self, Constant, Expr, ExprKind, Stmt, StmtKind};
use ruff_python_semantic::analyze::visibility;
use crate::docstrings::definition::{Definition, DefinitionKind, Documentable};
use ruff_python_semantic::definition::{Definition, DefinitionId, Definitions, Member, MemberKind};
/// Extract a docstring from a function or class body.
pub fn docstring_from(suite: &[Stmt]) -> Option<&Expr> {
pub(crate) fn docstring_from(suite: &[Stmt]) -> Option<&Expr> {
let stmt = suite.first()?;
// Require the docstring to be a standalone expression.
let StmtKind::Expr { value } = &stmt.node else {
let StmtKind::Expr(ast::StmtExpr { value }) = &stmt.node else {
return None;
};
// Only match strings.
if !matches!(
&value.node,
ExprKind::Constant {
ExprKind::Constant(ast::ExprConstant {
value: Constant::Str(_),
..
}
})
) {
return None;
}
Some(value)
}
/// Extract a docstring from a `Definition`.
pub(crate) fn extract_docstring<'a>(definition: &'a Definition<'a>) -> Option<&'a Expr> {
match definition {
Definition::Module(module) => docstring_from(module.python_ast),
Definition::Member(member) => {
if let StmtKind::ClassDef(ast::StmtClassDef { body, .. })
| StmtKind::FunctionDef(ast::StmtFunctionDef { body, .. })
| StmtKind::AsyncFunctionDef(ast::StmtAsyncFunctionDef { body, .. }) =
&member.stmt.node
{
docstring_from(body)
} else {
None
}
}
}
}
#[derive(Copy, Clone)]
pub(crate) enum ExtractionTarget {
Class,
Function,
}
/// Extract a `Definition` from the AST node defined by a `Stmt`.
pub fn extract<'a>(
scope: visibility::VisibleScope,
pub(crate) fn extract_definition<'a>(
target: ExtractionTarget,
stmt: &'a Stmt,
body: &'a [Stmt],
kind: Documentable,
) -> Definition<'a> {
let expr = docstring_from(body);
match kind {
Documentable::Function => match scope {
visibility::VisibleScope {
modifier: visibility::Modifier::Module,
..
} => Definition {
kind: DefinitionKind::Function(stmt),
docstring: expr,
parent: DefinitionId,
definitions: &Definitions<'a>,
) -> Member<'a> {
match target {
ExtractionTarget::Function => match &definitions[parent] {
Definition::Module(..) => Member {
parent,
kind: MemberKind::Function,
stmt,
},
visibility::VisibleScope {
modifier: visibility::Modifier::Class,
Definition::Member(Member {
kind: MemberKind::Class | MemberKind::NestedClass,
..
} => Definition {
kind: DefinitionKind::Method(stmt),
docstring: expr,
}) => Member {
parent,
kind: MemberKind::Method,
stmt,
},
visibility::VisibleScope {
modifier: visibility::Modifier::Function,
..
} => Definition {
kind: DefinitionKind::NestedFunction(stmt),
docstring: expr,
Definition::Member(..) => Member {
parent,
kind: MemberKind::NestedFunction,
stmt,
},
},
Documentable::Class => match scope {
visibility::VisibleScope {
modifier: visibility::Modifier::Module,
..
} => Definition {
kind: DefinitionKind::Class(stmt),
docstring: expr,
ExtractionTarget::Class => match &definitions[parent] {
Definition::Module(..) => Member {
parent,
kind: MemberKind::Class,
stmt,
},
visibility::VisibleScope {
modifier: visibility::Modifier::Class,
..
} => Definition {
kind: DefinitionKind::NestedClass(stmt),
docstring: expr,
},
visibility::VisibleScope {
modifier: visibility::Modifier::Function,
..
} => Definition {
kind: DefinitionKind::NestedClass(stmt),
docstring: expr,
Definition::Member(..) => Member {
parent,
kind: MemberKind::NestedClass,
stmt,
},
},
}


@@ -1,6 +1,89 @@
pub mod definition;
pub mod extraction;
pub mod google;
pub mod numpy;
pub mod sections;
pub mod styles;
use std::fmt::{Debug, Formatter};
use std::ops::Deref;
use ruff_text_size::{TextRange, TextSize};
use rustpython_parser::ast::Expr;
use ruff_python_semantic::definition::Definition;
pub(crate) mod extraction;
pub(crate) mod google;
pub(crate) mod numpy;
pub(crate) mod sections;
pub(crate) mod styles;
#[derive(Debug)]
pub(crate) struct Docstring<'a> {
pub(crate) definition: &'a Definition<'a>,
pub(crate) expr: &'a Expr,
/// The content of the docstring, including the leading and trailing quotes.
pub(crate) contents: &'a str,
/// The range of the docstring body (without the quotes). The range is relative to [`Self::contents`].
pub(crate) body_range: TextRange,
pub(crate) indentation: &'a str,
}
impl<'a> Docstring<'a> {
pub(crate) fn body(&self) -> DocstringBody {
DocstringBody { docstring: self }
}
pub(crate) const fn start(&self) -> TextSize {
self.expr.start()
}
pub(crate) const fn end(&self) -> TextSize {
self.expr.end()
}
pub(crate) const fn range(&self) -> TextRange {
self.expr.range()
}
pub(crate) fn leading_quote(&self) -> &'a str {
&self.contents[TextRange::up_to(self.body_range.start())]
}
}
#[derive(Copy, Clone)]
pub(crate) struct DocstringBody<'a> {
docstring: &'a Docstring<'a>,
}
impl<'a> DocstringBody<'a> {
#[inline]
pub(crate) fn start(self) -> TextSize {
self.range().start()
}
#[inline]
pub(crate) fn end(self) -> TextSize {
self.range().end()
}
pub(crate) fn range(self) -> TextRange {
self.docstring.body_range + self.docstring.start()
}
pub(crate) fn as_str(self) -> &'a str {
&self.docstring.contents[self.docstring.body_range]
}
}
impl Deref for DocstringBody<'_> {
type Target = str;
fn deref(&self) -> &Self::Target {
self.as_str()
}
}
impl Debug for DocstringBody<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("DocstringBody")
.field("text", &self.as_str())
.field("range", &self.range())
.finish()
}
}


@@ -4,8 +4,8 @@ use std::fmt::{Debug, Formatter};
use std::iter::FusedIterator;
use strum_macros::EnumIter;
use crate::docstrings::definition::{Docstring, DocstringBody};
use crate::docstrings::styles::SectionStyle;
use crate::docstrings::{Docstring, DocstringBody};
use ruff_python_ast::whitespace;
#[derive(EnumIter, PartialEq, Eq, Debug, Clone, Copy)]
@@ -137,7 +137,7 @@ pub(crate) struct SectionContexts<'a> {
impl<'a> SectionContexts<'a> {
/// Extract all `SectionContext` values from a docstring.
pub fn from_docstring(docstring: &'a Docstring<'a>, style: SectionStyle) -> Self {
pub(crate) fn from_docstring(docstring: &'a Docstring<'a>, style: SectionStyle) -> Self {
let contents = docstring.body();
let mut contexts = Vec::new();
@@ -190,11 +190,11 @@ impl<'a> SectionContexts<'a> {
}
}
pub fn len(&self) -> usize {
pub(crate) fn len(&self) -> usize {
self.contexts.len()
}
pub fn iter(&self) -> SectionContextsIter {
pub(crate) fn iter(&self) -> SectionContextsIter {
SectionContextsIter {
docstring_body: self.docstring.body(),
inner: self.contexts.iter(),


@@ -13,7 +13,7 @@ static COMMA_SEPARATED_LIST_RE: Lazy<Regex> = Lazy::new(|| Regex::new(r"[,\s]").
/// Parse a comma-separated list of `RuleSelector` values (e.g.,
/// "F401,E501").
pub fn parse_prefix_codes(value: &str) -> Vec<RuleSelector> {
pub(crate) fn parse_prefix_codes(value: &str) -> Vec<RuleSelector> {
let mut codes: Vec<RuleSelector> = vec![];
for code in COMMA_SEPARATED_LIST_RE.split(value) {
let code = code.trim();
@@ -30,7 +30,7 @@ pub fn parse_prefix_codes(value: &str) -> Vec<RuleSelector> {
}
/// Parse a comma-separated list of strings (e.g., "__init__.py,__main__.py").
pub fn parse_strings(value: &str) -> Vec<String> {
pub(crate) fn parse_strings(value: &str) -> Vec<String> {
COMMA_SEPARATED_LIST_RE
.split(value)
.map(str::trim)
@@ -40,7 +40,7 @@ pub fn parse_strings(value: &str) -> Vec<String> {
}
/// Parse a boolean.
pub fn parse_bool(value: &str) -> Result<bool> {
pub(crate) fn parse_bool(value: &str) -> Result<bool> {
match value.trim() {
"true" => Ok(true),
"false" => Ok(false),
@@ -138,7 +138,7 @@ fn tokenize_files_to_codes_mapping(value: &str) -> Vec<Token> {
/// Parse a 'files-to-codes' mapping, mimicking Flake8's internal logic.
/// See: <https://github.com/PyCQA/flake8/blob/7dfe99616fc2f07c0017df2ba5fa884158f3ea8a/src/flake8/utils.py#L45>
pub fn parse_files_to_codes_mapping(value: &str) -> Result<Vec<PatternPrefixPair>> {
pub(crate) fn parse_files_to_codes_mapping(value: &str) -> Result<Vec<PatternPrefixPair>> {
if value.trim().is_empty() {
return Ok(vec![]);
}
@@ -178,7 +178,7 @@ pub fn parse_files_to_codes_mapping(value: &str) -> Result<Vec<PatternPrefixPair
}
/// Collect a list of `PatternPrefixPair` structs as a `BTreeMap`.
pub fn collect_per_file_ignores(
pub(crate) fn collect_per_file_ignores(
pairs: Vec<PatternPrefixPair>,
) -> FxHashMap<String, Vec<RuleSelector>> {
let mut per_file_ignores: FxHashMap<String, Vec<RuleSelector>> = FxHashMap::default();
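The `COMMA_SEPARATED_LIST_RE` regex these parsers share splits on `[,\s]`. The same behavior can be sketched without the `regex` crate using `str::split` with a character predicate (the `parse_list` helper here is illustrative, not part of the codebase):

```rust
/// Split a comma- or whitespace-separated list, mimicking the
/// `COMMA_SEPARATED_LIST_RE` regex (`[,\s]`) with std alone.
fn parse_list(value: &str) -> Vec<&str> {
    value
        .split(|c: char| c == ',' || c.is_whitespace())
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .collect()
}

fn main() {
    assert_eq!(parse_list("F401,E501"), vec!["F401", "E501"]);
    assert_eq!(
        parse_list("__init__.py, __main__.py"),
        vec!["__init__.py", "__main__.py"]
    );
}
```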


@@ -175,7 +175,7 @@ impl From<&Plugin> for Linter {
///
/// For example, if the user specified a `mypy-init-return` setting, we should
/// infer that `flake8-annotations` is active.
pub fn infer_plugins_from_options(flake8: &HashMap<String, Option<String>>) -> Vec<Plugin> {
pub(crate) fn infer_plugins_from_options(flake8: &HashMap<String, Option<String>>) -> Vec<Plugin> {
let mut plugins = BTreeSet::new();
for key in flake8.keys() {
match key.as_str() {
@@ -292,7 +292,7 @@ pub fn infer_plugins_from_options(flake8: &HashMap<String, Option<String>>) -> V
///
/// For example, if the user ignores `ANN101`, we should infer that
/// `flake8-annotations` is active.
pub fn infer_plugins_from_codes(selectors: &HashSet<RuleSelector>) -> Vec<Plugin> {
pub(crate) fn infer_plugins_from_codes(selectors: &HashSet<RuleSelector>) -> Vec<Plugin> {
// Ignore cases in which we've knowingly changed rule prefixes.
[
Plugin::Flake82020,


@@ -3,8 +3,7 @@
use anyhow::Result;
use libcst_native::{Codegen, CodegenState, ImportAlias, Name, NameOrAttribute};
use ruff_text_size::TextSize;
use rustc_hash::FxHashMap;
use rustpython_parser::ast::{Stmt, StmtKind, Suite};
use rustpython_parser::ast::{self, Stmt, StmtKind, Suite};
use rustpython_parser::{lexer, Mode, Tok};
use ruff_diagnostics::Edit;
@@ -18,10 +17,7 @@ pub struct Importer<'a> {
python_ast: &'a Suite,
locator: &'a Locator<'a>,
stylist: &'a Stylist<'a>,
/// A map from module name to top-level `StmtKind::ImportFrom` statements.
import_from_map: FxHashMap<&'a str, &'a Stmt>,
/// The last top-level import statement.
trailing_import: Option<&'a Stmt>,
ordered_imports: Vec<&'a Stmt>,
}
impl<'a> Importer<'a> {
@@ -30,34 +26,21 @@ impl<'a> Importer<'a> {
python_ast,
locator,
stylist,
import_from_map: FxHashMap::default(),
trailing_import: None,
ordered_imports: Vec::default(),
}
}
/// Visit a top-level import statement.
pub fn visit_import(&mut self, import: &'a Stmt) {
// Store a reference to the import statement in the appropriate map.
match &import.node {
StmtKind::Import { .. } => {
// Nothing to do here, we don't extend top-level `import` statements at all, so
// no need to track them.
}
StmtKind::ImportFrom { module, level, .. } => {
// Store a reverse-map from module name to `import ... from` statement.
if level.map_or(true, |level| level == 0) {
if let Some(module) = module {
self.import_from_map.insert(module.as_str(), import);
}
}
}
_ => {
panic!("Expected StmtKind::Import | StmtKind::ImportFrom");
}
}
self.ordered_imports.push(import);
}
// Store a reference to the last top-level import statement.
self.trailing_import = Some(import);
/// Return the import statement that precedes the given position, if any.
fn preceding_import(&self, at: TextSize) -> Option<&Stmt> {
self.ordered_imports
.partition_point(|stmt| stmt.start() < at)
.checked_sub(1)
.map(|idx| self.ordered_imports[idx])
}
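The new `preceding_import` leans on `slice::partition_point` over the sorted `ordered_imports`: it returns the number of leading elements satisfying the predicate, so `checked_sub(1)` yields the index of the last import starting before the position, or `None` when there is none. A self-contained sketch over plain offsets (the `preceding` helper is illustrative only):

```rust
/// Find the last start offset that precedes `at`, mirroring the
/// `preceding_import` lookup over sorted statement offsets.
fn preceding(starts: &[usize], at: usize) -> Option<usize> {
    starts
        .partition_point(|&start| start < at)
        .checked_sub(1)
        .map(|idx| starts[idx])
}

fn main() {
    let starts = [0, 10, 25, 40]; // sorted import start offsets
    assert_eq!(preceding(&starts, 30), Some(25)); // last import before offset 30
    assert_eq!(preceding(&starts, 0), None); // nothing precedes the file start
}
```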
/// Add an import statement to import the given module.
@@ -65,9 +48,9 @@ impl<'a> Importer<'a> {
/// If there are no existing imports, the new import will be added at the top
/// of the file. Otherwise, it will be added after the most recent top-level
/// import statement.
pub fn add_import(&self, import: &AnyImport) -> Edit {
pub fn add_import(&self, import: &AnyImport, at: TextSize) -> Edit {
let required_import = import.to_string();
if let Some(stmt) = self.trailing_import {
if let Some(stmt) = self.preceding_import(at) {
// Insert after the last top-level import.
let Insertion {
prefix,
@@ -88,10 +71,28 @@ impl<'a> Importer<'a> {
}
}
/// Return the top-level [`Stmt`] that imports the given module using `StmtKind::ImportFrom`.
/// if it exists.
pub fn get_import_from(&self, module: &str) -> Option<&Stmt> {
self.import_from_map.get(module).copied()
/// Return the top-level [`Stmt`] that imports the given module using `StmtKind::ImportFrom`
/// preceding the given position, if any.
pub fn find_import_from(&self, module: &str, at: TextSize) -> Option<&Stmt> {
let mut import_from = None;
for stmt in &self.ordered_imports {
if stmt.start() >= at {
break;
}
if let StmtKind::ImportFrom(ast::StmtImportFrom {
module: name,
level,
..
}) = &stmt.node
{
if level.map_or(true, |level| level.to_u32() == 0)
&& name.as_ref().map_or(false, |name| name == module)
{
import_from = Some(*stmt);
}
}
}
import_from
}
/// Add the given member to an existing `StmtKind::ImportFrom` statement.
@@ -177,7 +178,8 @@ fn match_docstring_end(body: &[Stmt]) -> Option<TextSize> {
/// along with a trailing newline suffix.
fn end_of_statement_insertion(stmt: &Stmt, locator: &Locator, stylist: &Stylist) -> Insertion {
let location = stmt.end();
let mut tokens = lexer::lex_located(locator.after(location), Mode::Module, location).flatten();
let mut tokens =
lexer::lex_starts_at(locator.after(location), Mode::Module, location).flatten();
if let Some((Tok::Semi, range)) = tokens.next() {
// If the first token after the docstring is a semicolon, insert after the semicolon as an
// inline statement;
@@ -210,7 +212,7 @@ fn top_of_file_insertion(body: &[Stmt], locator: &Locator, stylist: &Stylist) ->
let mut location = if let Some(location) = match_docstring_end(body) {
// If the first token after the docstring is a semicolon, insert after the semicolon as an
// inline statement;
let first_token = lexer::lex_located(locator.after(location), Mode::Module, location)
let first_token = lexer::lex_starts_at(locator.after(location), Mode::Module, location)
.flatten()
.next();
if let Some((Tok::Semi, range)) = first_token {
@@ -225,7 +227,7 @@ fn top_of_file_insertion(body: &[Stmt], locator: &Locator, stylist: &Stylist) ->
// Skip over any comments and empty lines.
for (tok, range) in
lexer::lex_located(locator.after(location), Mode::Module, location).flatten()
lexer::lex_starts_at(locator.after(location), Mode::Module, location).flatten()
{
if matches!(tok, Tok::Comment(..) | Tok::Newline) {
location = locator.full_line_end(range.end());
@@ -240,11 +242,11 @@ fn top_of_file_insertion(body: &[Stmt], locator: &Locator, stylist: &Stylist) ->
#[cfg(test)]
mod tests {
use anyhow::Result;
use ruff_python_ast::newlines::LineEnding;
use ruff_text_size::TextSize;
use rustpython_parser as parser;
use rustpython_parser::lexer::LexResult;
use ruff_python_ast::newlines::LineEnding;
use ruff_python_ast::source_code::{Locator, Stylist};
use crate::importer::{top_of_file_insertion, Insertion};


@@ -25,13 +25,13 @@ enum State {
}
#[derive(Default)]
pub struct StateMachine {
pub(crate) struct StateMachine {
state: State,
bracket_count: usize,
}
impl StateMachine {
pub fn consume(&mut self, tok: &Tok) -> bool {
pub(crate) fn consume(&mut self, tok: &Tok) -> bool {
match tok {
Tok::NonLogicalNewline
| Tok::Newline


@@ -1 +1 @@
pub mod docstring_detection;
pub(crate) mod docstring_detection;


@@ -77,7 +77,6 @@ pub fn check_path(
directives: &Directives,
settings: &Settings,
noqa: flags::Noqa,
autofix: flags::Autofix,
) -> LinterResult<(Vec<Diagnostic>, Option<ImportMap>)> {
// Aggregate all diagnostics.
let mut diagnostics = vec![];
@@ -99,7 +98,7 @@ pub fn check_path(
.any(|rule_code| rule_code.lint_source().is_tokens())
{
let is_stub = is_python_stub_file(path);
diagnostics.extend(check_tokens(locator, &tokens, settings, autofix, is_stub));
diagnostics.extend(check_tokens(locator, &tokens, settings, is_stub));
}
// Run the filesystem-based rules.
@@ -119,11 +118,7 @@ pub fn check_path(
{
#[cfg(feature = "logical_lines")]
diagnostics.extend(crate::checkers::logical_lines::check_logical_lines(
&tokens,
locator,
stylist,
settings,
flags::Autofix::Enabled,
&tokens, locator, stylist, settings,
));
}
@@ -148,7 +143,6 @@ pub fn check_path(
indexer,
&directives.noqa_line_for,
settings,
autofix,
noqa,
path,
package,
@@ -162,7 +156,6 @@ pub fn check_path(
&directives.isort,
settings,
stylist,
autofix,
path,
package,
);
@@ -170,7 +163,7 @@ pub fn check_path(
diagnostics.extend(import_diagnostics);
}
if use_doc_lines {
doc_lines.extend(doc_lines_from_ast(&python_ast));
doc_lines.extend(doc_lines_from_ast(&python_ast, locator));
}
}
Err(parse_error) => {
@@ -198,7 +191,7 @@ pub fn check_path(
.any(|rule_code| rule_code.lint_source().is_physical_lines())
{
diagnostics.extend(check_physical_lines(
path, locator, stylist, indexer, &doc_lines, settings, autofix,
path, locator, stylist, indexer, &doc_lines, settings,
));
}
@@ -223,7 +216,6 @@ pub fn check_path(
indexer.comment_ranges(),
&directives.noqa_line_for,
settings,
error.as_ref().map_or(autofix, |_| flags::Autofix::Disabled),
);
if noqa.into() {
for index in ignored.iter().rev() {
@@ -293,7 +285,6 @@ pub fn add_noqa_to_path(path: &Path, package: Option<&Path>, settings: &Settings
&directives,
settings,
flags::Noqa::Disabled,
flags::Autofix::Disabled,
);
// Log any parse errors.
@@ -320,7 +311,6 @@ pub fn lint_only(
package: Option<&Path>,
settings: &Settings,
noqa: flags::Noqa,
autofix: flags::Autofix,
) -> LinterResult<(Vec<Message>, Option<ImportMap>)> {
// Tokenize once.
let tokens: Vec<LexResult> = ruff_rustpython::tokenize(contents);
@@ -353,7 +343,6 @@ pub fn lint_only(
&directives,
settings,
noqa,
autofix,
);
result.map(|(diagnostics, imports)| {
@@ -444,7 +433,6 @@ pub fn lint_fix<'a>(
&directives,
settings,
noqa,
flags::Autofix::Enabled,
);
if iterations == 0 {
@@ -509,8 +497,8 @@ fn collect_rule_codes(rules: impl IntoIterator<Item = Rule>) -> String {
#[allow(clippy::print_stderr)]
fn report_failed_to_converge_error(path: &Path, transformed: &str, diagnostics: &[Diagnostic]) {
let codes = collect_rule_codes(diagnostics.iter().map(|diagnostic| diagnostic.kind.rule()));
if cfg!(debug_assertions) {
let codes = collect_rule_codes(diagnostics.iter().map(|diagnostic| diagnostic.kind.rule()));
eprintln!(
"{}: Failed to converge after {} iterations in `{}` with rule codes {}:---\n{}\n---",
"debug error".red().bold(),
@@ -528,13 +516,14 @@ This indicates a bug in `{}`. If you could open an issue at:
{}/issues/new?title=%5BInfinite%20loop%5D
...quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
...quoting the contents of `{}`, the rule codes {}, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
"#,
"error".red().bold(),
MAX_ITERATIONS,
CARGO_PKG_NAME,
CARGO_PKG_REPOSITORY,
fs::relativize_path(path),
codes
);
}
}
@@ -546,8 +535,8 @@ fn report_autofix_syntax_error(
error: &ParseError,
rules: impl IntoIterator<Item = Rule>,
) {
let codes = collect_rule_codes(rules);
if cfg!(debug_assertions) {
let codes = collect_rule_codes(rules);
eprintln!(
"{}: Autofix introduced a syntax error in `{}` with rule codes {}: {}\n---\n{}\n---",
"error".red().bold(),
@@ -565,12 +554,13 @@ This indicates a bug in `{}`. If you could open an issue at:
{}/issues/new?title=%5BAutofix%20error%5D
...quoting the contents of `{}`, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
...quoting the contents of `{}`, the rule codes {}, along with the `pyproject.toml` settings and executed command, we'd be very appreciative!
"#,
"error".red().bold(),
CARGO_PKG_NAME,
CARGO_PKG_REPOSITORY,
fs::relativize_path(path),
codes,
);
}
}


@@ -1,4 +1,4 @@
use std::fmt::{Display, Formatter};
use std::fmt::{Display, Formatter, Write};
use std::path::Path;
use std::sync::Mutex;
@@ -9,7 +9,7 @@ use fern;
use log::Level;
use once_cell::sync::Lazy;
use ruff_python_ast::source_code::SourceCode;
use rustpython_parser::ParseError;
use rustpython_parser::{ParseError, ParseErrorType};
pub(crate) static WARNINGS: Lazy<Mutex<Vec<&'static str>>> = Lazy::new(Mutex::default);
@@ -145,17 +145,89 @@ impl<'a> DisplayParseError<'a> {
impl Display for DisplayParseError<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let source_location = self.source_code.source_location(self.error.location);
let source_location = self.source_code.source_location(self.error.offset);
write!(
f,
"{header} {path}{colon}{row}{colon}{column}{colon} {inner}",
header = "Failed to parse ".bold(),
header = "Failed to parse".bold(),
path = fs::relativize_path(Path::new(&self.error.source_path)).bold(),
row = source_location.row,
column = source_location.column,
colon = ":".cyan(),
inner = &self.error.error
inner = &DisplayParseErrorType(&self.error.error)
)
}
}
pub(crate) struct DisplayParseErrorType<'a>(&'a ParseErrorType);
impl<'a> DisplayParseErrorType<'a> {
pub(crate) fn new(error: &'a ParseErrorType) -> Self {
Self(error)
}
}
impl Display for DisplayParseErrorType<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self.0 {
ParseErrorType::Eof => write!(f, "Expected token but reached end of file."),
ParseErrorType::ExtraToken(ref tok) => write!(
f,
"Got extraneous token: {tok}",
tok = TruncateAtNewline(&tok)
),
ParseErrorType::InvalidToken => write!(f, "Got invalid token"),
ParseErrorType::UnrecognizedToken(ref tok, ref expected) => {
if let Some(expected) = expected.as_ref() {
write!(
f,
"expected '{expected}', but got {tok}",
tok = TruncateAtNewline(&tok)
)
} else {
write!(f, "unexpected token {tok}", tok = TruncateAtNewline(&tok))
}
}
ParseErrorType::Lexical(ref error) => write!(f, "{error}"),
}
}
}
/// Truncates the display text before the first newline character to avoid line breaks.
struct TruncateAtNewline<'a>(&'a dyn Display);
impl Display for TruncateAtNewline<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
struct TruncateAdapter<'a> {
inner: &'a mut dyn std::fmt::Write,
after_new_line: bool,
}
impl std::fmt::Write for TruncateAdapter<'_> {
fn write_str(&mut self, s: &str) -> std::fmt::Result {
if self.after_new_line {
Ok(())
} else {
if let Some(end) = s.find(['\n', '\r']) {
self.inner.write_str(&s[..end])?;
self.inner.write_str("\u{23ce}...")?;
self.after_new_line = true;
Ok(())
} else {
self.inner.write_str(s)
}
}
}
}
write!(
TruncateAdapter {
inner: f,
after_new_line: false,
},
"{}",
self.0
)
}
}
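The adapter forwards text to the underlying formatter until it sees a line break, then writes a return symbol (`⏎`) plus an ellipsis and swallows the rest. A simplified re-creation that wraps a `&str` rather than an arbitrary `Display` value (so no `Write` adapter is needed):

```rust
use std::fmt::{self, Display, Formatter};

/// Display wrapper that truncates at the first `\n` or `\r`,
/// appending a return symbol and an ellipsis, like `TruncateAtNewline`.
struct TruncateAtNewline<'a>(&'a str);

impl Display for TruncateAtNewline<'_> {
    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        match self.0.find(['\n', '\r']) {
            Some(end) => {
                f.write_str(&self.0[..end])?;
                f.write_str("\u{23ce}...")
            }
            None => f.write_str(self.0),
        }
    }
}

fn main() {
    assert_eq!(TruncateAtNewline("one line").to_string(), "one line");
    assert_eq!(
        TruncateAtNewline("first\nsecond").to_string(),
        "first\u{23ce}..."
    );
}
```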


@@ -1,6 +1,6 @@
use crate::message::Message;
use colored::{Color, ColoredString, Colorize, Styles};
use ruff_diagnostics::Fix;
use ruff_diagnostics::{Applicability, Fix};
use ruff_python_ast::source_code::{OneIndexed, SourceFile};
use ruff_text_size::{TextRange, TextSize};
use similar::{ChangeTag, TextDiff};
@@ -21,15 +21,11 @@ pub(super) struct Diff<'a> {
}
impl<'a> Diff<'a> {
pub fn from_message(message: &'a Message) -> Option<Diff> {
if message.fix.is_empty() {
None
} else {
Some(Diff {
source_code: &message.file,
fix: &message.fix,
})
}
pub(crate) fn from_message(message: &'a Message) -> Option<Diff> {
message.fix.as_ref().map(|fix| Diff {
source_code: &message.file,
fix,
})
}
}
@@ -51,7 +47,13 @@ impl Display for Diff<'_> {
let diff = TextDiff::from_lines(self.source_code.source_text(), &output);
writeln!(f, "{}", " Suggested fix".blue())?;
let message = match self.fix.applicability() {
Applicability::Automatic => "Fix",
Applicability::Suggested => "Suggested fix",
Applicability::Manual => "Possible fix",
Applicability::Unspecified => "Suggested fix", // For backwards compatibility, unspecified fixes are 'suggested'
};
writeln!(f, " {}", message.blue())?;
let (largest_old, largest_new) = diff
.ops()


@@ -125,7 +125,7 @@ impl Display for DisplayGroupedMessage<'_> {
self.column_length.get() - calculate_print_width(start_location.column).get()
),
code_and_body = RuleCodeAndBody {
message_kind: &message.kind,
message,
show_fix_status: self.show_fix_status
},
)?;


@@ -1,10 +1,10 @@
use crate::message::{Emitter, EmitterContext, Message};
use crate::registry::AsRule;
use ruff_diagnostics::Edit;
use ruff_python_ast::source_code::{SourceCode, SourceLocation};
use ruff_python_ast::source_code::SourceCode;
use serde::ser::SerializeSeq;
use serde::{Serialize, Serializer};
use serde_json::{json, Value};
use serde_json::json;
use std::io::Write;
#[derive(Default)]
@@ -37,14 +37,13 @@ impl Serialize for ExpandedMessages<'_> {
for message in self.messages {
let source_code = message.file.to_source_code();
let fix = if message.fix.is_empty() {
None
} else {
Some(json!({
let fix = message.fix.as_ref().map(|fix| {
json!({
"applicability": fix.applicability(),
"message": message.kind.suggestion.as_deref(),
"edits": &ExpandedEdits { edits: message.fix.edits(), source_code: &source_code },
}))
};
"edits": &ExpandedEdits { edits: fix.edits(), source_code: &source_code },
})
});
let start_location = source_code.source_location(message.start());
let end_location = source_code.source_location(message.end());
@@ -80,12 +79,10 @@ impl Serialize for ExpandedEdits<'_> {
let mut s = serializer.serialize_seq(Some(self.edits.len()))?;
for edit in self.edits {
let start_location = self.source_code.source_location(edit.start());
let end_location = self.source_code.source_location(edit.end());
let value = json!({
"content": edit.content().unwrap_or_default(),
"location": to_zero_indexed_column(&start_location),
"end_location": to_zero_indexed_column(&end_location)
"location": self.source_code.source_location(edit.start()),
"end_location": self.source_code.source_location(edit.end())
});
s.serialize_element(&value)?;
@@ -95,13 +92,6 @@ impl Serialize for ExpandedEdits<'_> {
}
}
fn to_zero_indexed_column(location: &SourceLocation) -> Value {
json!({
"row": location.row,
"column": location.column.to_zero_indexed()
})
}
#[cfg(test)]
mod tests {
use crate::message::tests::{capture_emitter_output, create_messages};


@@ -33,7 +33,7 @@ use ruff_python_ast::source_code::{SourceFile, SourceLocation};
pub struct Message {
pub kind: DiagnosticKind,
pub range: TextRange,
pub fix: Fix,
pub fix: Option<Fix>,
pub file: SourceFile,
pub noqa_offset: TextSize,
}
@@ -181,7 +181,11 @@ def fibonacci(n):
multiple: false,
},
TextRange::new(TextSize::from(7), TextSize::from(9)),
);
)
.with_fix(Fix::suggested(Edit::range_deletion(TextRange::new(
TextSize::from(0),
TextSize::from(10),
))));
let fib_source = SourceFileBuilder::new("fib.py", fib).finish();
@@ -191,10 +195,10 @@ def fibonacci(n):
},
TextRange::new(TextSize::from(94), TextSize::from(95)),
)
.with_fix(Fix::new(vec![Edit::deletion(
.with_fix(Fix::suggested(Edit::deletion(
TextSize::from(94),
TextSize::from(99),
)]));
)));
let file_2 = r#"if a == 1: pass"#;


@@ -6,7 +6,23 @@ expression: content
{
"code": "F401",
"message": "`os` imported but unused",
"fix": null,
"fix": {
"applicability": "Suggested",
"message": "Remove unused import: `os`",
"edits": [
{
"content": "",
"location": {
"row": 1,
"column": 1
},
"end_location": {
"row": 2,
"column": 1
}
}
]
},
"location": {
"row": 1,
"column": 8
@@ -22,17 +38,18 @@ expression: content
"code": "F841",
"message": "Local variable `x` is assigned to but never used",
"fix": {
"applicability": "Suggested",
"message": "Remove assignment to unused variable `x`",
"edits": [
{
"content": "",
"location": {
"row": 6,
"column": 4
"column": 5
},
"end_location": {
"row": 6,
"column": 9
"column": 10
}
}
]


@@ -6,7 +6,6 @@ use annotate_snippets::display_list::{DisplayList, FormatOptions};
use annotate_snippets::snippet::{Annotation, AnnotationType, Slice, Snippet, SourceAnnotation};
use bitflags::bitflags;
use colored::Colorize;
use ruff_diagnostics::DiagnosticKind;
use ruff_python_ast::source_code::{OneIndexed, SourceLocation};
use ruff_text_size::{TextRange, TextSize};
use std::borrow::Cow;
@@ -93,7 +92,7 @@ impl Emitter for TextEmitter {
col = diagnostic_location.column,
sep = ":".cyan(),
code_and_body = RuleCodeAndBody {
message_kind: &message.kind,
message,
show_fix_status: self.flags.contains(EmitterFlags::SHOW_FIX_STATUS)
}
)?;
@@ -114,45 +113,35 @@ impl Emitter for TextEmitter {
}
pub(super) struct RuleCodeAndBody<'a> {
pub message_kind: &'a DiagnosticKind,
pub show_fix_status: bool,
pub(crate) message: &'a Message,
pub(crate) show_fix_status: bool,
}
impl Display for RuleCodeAndBody<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
if self.show_fix_status && self.message_kind.fixable {
let kind = &self.message.kind;
if self.show_fix_status && self.message.fix.is_some() {
write!(
f,
"{code} {autofix}{body}",
code = self
.message_kind
.rule()
.noqa_code()
.to_string()
.red()
.bold(),
code = kind.rule().noqa_code().to_string().red().bold(),
autofix = format_args!("[{}] ", "*".cyan()),
body = self.message_kind.body,
body = kind.body,
)
} else {
write!(
f,
"{code} {body}",
code = self
.message_kind
.rule()
.noqa_code()
.to_string()
.red()
.bold(),
body = self.message_kind.body,
code = kind.rule().noqa_code().to_string().red().bold(),
body = kind.body,
)
}
}
}
pub(super) struct MessageCodeFrame<'a> {
pub message: &'a Message,
pub(crate) message: &'a Message,
}
impl Display for MessageCodeFrame<'_> {
@@ -245,36 +234,39 @@ impl Display for MessageCodeFrame<'_> {
}
fn replace_whitespace(source: &str, annotation_range: TextRange) -> SourceCode {
static TAB_SIZE: TextSize = TextSize::new(4);
static TAB_SIZE: u32 = 4; // TODO(jonathan): use `pycodestyle.tab-size`
let mut result = String::new();
let mut last_end = 0;
let mut range = annotation_range;
let mut column = 0;
for (index, m) in source.match_indices(['\t', '\n', '\r']) {
match m {
"\t" => {
let tab_width = TAB_SIZE - TextSize::new(column % 4);
for (index, c) in source.chars().enumerate() {
match c {
'\t' => {
let tab_width = TAB_SIZE - column % TAB_SIZE;
column += tab_width;
if index < usize::from(annotation_range.start()) {
range += tab_width - TextSize::new(1);
range += TextSize::new(tab_width - 1);
} else if index < usize::from(annotation_range.end()) {
range = range.add_end(tab_width - TextSize::new(1));
range = range.add_end(TextSize::new(tab_width - 1));
}
result.push_str(&source[last_end..index]);
for _ in 0..tab_width {
result.push(' ');
}
last_end = index + 1;
}
"\n" | "\r" => {
'\n' | '\r' => {
column = 0;
}
_ => {
column += 1;
}
}
}
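The loop above computes a column-dependent tab width: each tab advances the column to the next multiple of `TAB_SIZE` rather than expanding to a fixed number of spaces. A self-contained sketch of that idea (the range-adjustment bookkeeping from the real function is omitted):

```rust
// Column-aware tab expansion: a tab's width is TAB_SIZE minus the current
// column modulo TAB_SIZE, so tabs align output to TAB_SIZE-wide stops.
const TAB_SIZE: u32 = 4;

fn expand_tabs(source: &str) -> String {
    let mut result = String::new();
    let mut column: u32 = 0;
    for c in source.chars() {
        match c {
            '\t' => {
                let tab_width = TAB_SIZE - column % TAB_SIZE;
                column += tab_width;
                for _ in 0..tab_width {
                    result.push(' ');
                }
            }
            '\n' | '\r' => {
                // Newlines reset the column counter.
                column = 0;
                result.push(c);
            }
            _ => {
                column += 1;
                result.push(c);
            }
        }
    }
    result
}

fn main() {
    // At column 2, the tab only needs 2 spaces to reach the next stop.
    assert_eq!(expand_tabs("ab\tc"), "ab  c");
    // At column 0, the tab expands to a full TAB_SIZE spaces.
    assert_eq!(expand_tabs("\tx"), "    x");
}
```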


@@ -27,7 +27,7 @@ static NOQA_LINE_REGEX: Lazy<Regex> = Lazy::new(|| {
static SPLIT_COMMA_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"[,\s]").unwrap());
#[derive(Debug)]
pub(crate) enum Directive<'a> {
None,
// (leading spaces, noqa_range, trailing_spaces)
All(TextSize, TextRange, TextSize),
@@ -36,7 +36,7 @@ pub enum Directive<'a> {
}
/// Extract the noqa `Directive` from a line of Python source code.
pub(crate) fn extract_noqa_directive<'a>(range: TextRange, locator: &'a Locator) -> Directive<'a> {
let text = &locator.contents()[range];
match NOQA_LINE_REGEX.captures(text) {
Some(caps) => match (
@@ -123,7 +123,7 @@ fn parse_file_exemption(line: &str) -> ParsedExemption {
/// Returns `true` if the string list of `codes` includes `code` (or an alias
/// thereof).
pub(crate) fn includes(needle: Rule, haystack: &[&str]) -> bool {
let needle = needle.noqa_code();
haystack
.iter()
@@ -131,7 +131,7 @@ pub fn includes(needle: Rule, haystack: &[&str]) -> bool {
}
/// Returns `true` if the given [`Rule`] is ignored at the specified `lineno`.
pub(crate) fn rule_is_ignored(
code: Rule,
offset: TextSize,
noqa_line_for: &NoqaMapping,
@@ -146,7 +146,7 @@ pub fn rule_is_ignored(
}
}
pub(crate) enum FileExemption {
None,
All,
Codes(Vec<NoqaCode>),
@@ -154,7 +154,7 @@ pub enum FileExemption {
/// Extract the [`FileExemption`] for a given Python source file, enumerating any rules that are
/// globally ignored within the file.
pub(crate) fn file_exemption(contents: &str, comment_ranges: &[TextRange]) -> FileExemption {
let mut exempt_codes: Vec<NoqaCode> = vec![];
for range in comment_ranges {
@@ -184,7 +184,7 @@ pub fn file_exemption(contents: &str, comment_ranges: &[TextRange]) -> FileExemp
}
/// Adds noqa comments to suppress all diagnostics of a file.
pub(crate) fn add_noqa(
path: &Path,
diagnostics: &[Diagnostic],
locator: &Locator,
@@ -368,9 +368,9 @@ fn push_codes<I: Display>(str: &mut String, codes: impl Iterator<Item = I>) {
#[derive(Debug)]
pub(crate) struct NoqaDirectiveLine<'a> {
// The range of the text line for which the noqa directive applies.
pub(crate) range: TextRange,
pub(crate) directive: Directive<'a>,
pub(crate) matches: Vec<NoqaCode>,
}
#[derive(Debug, Default)]
@@ -379,7 +379,10 @@ pub(crate) struct NoqaDirectives<'a> {
}
impl<'a> NoqaDirectives<'a> {
pub(crate) fn from_commented_ranges(
comment_ranges: &[TextRange],
locator: &'a Locator<'a>,
) -> Self {
let mut directives = Vec::new();
for comment_range in comment_ranges {
@@ -409,11 +412,11 @@ impl<'a> NoqaDirectives<'a> {
Self { inner: directives }
}
pub(crate) fn find_line_with_directive(&self, offset: TextSize) -> Option<&NoqaDirectiveLine> {
self.find_line_index(offset).map(|index| &self.inner[index])
}
pub(crate) fn find_line_with_directive_mut(
&mut self,
offset: TextSize,
) -> Option<&mut NoqaDirectiveLine<'a>> {
@@ -438,7 +441,7 @@ impl<'a> NoqaDirectives<'a> {
.ok()
}
pub(crate) fn lines(&self) -> &[NoqaDirectiveLine] {
&self.inner
}
}
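The `Directive` enum above distinguishes a blanket `# noqa` (suppress everything on the line) from `# noqa: F401, E501` (suppress only the listed codes). A hypothetical, std-only sketch of that shape — the real implementation uses `NOQA_LINE_REGEX` and byte ranges, neither of which is reproduced here:

```rust
// Hypothetical simplification of the noqa directive shapes: None (no
// directive), All (bare `# noqa`), or Codes (an explicit code list,
// split on commas and whitespace like SPLIT_COMMA_REGEX does).
#[derive(Debug, PartialEq)]
enum Directive {
    None,
    All,
    Codes(Vec<String>),
}

fn parse_noqa(line: &str) -> Directive {
    let Some(idx) = line.to_lowercase().find("# noqa") else {
        return Directive::None;
    };
    let rest = &line[idx + "# noqa".len()..];
    if let Some(codes) = rest.trim_start().strip_prefix(':') {
        Directive::Codes(
            codes
                .split(|c: char| c == ',' || c.is_whitespace())
                .filter(|s| !s.is_empty())
                .map(str::to_string)
                .collect(),
        )
    } else {
        Directive::All
    }
}

fn main() {
    assert_eq!(parse_noqa("x = 1  # noqa"), Directive::All);
    assert_eq!(
        parse_noqa("import os  # noqa: F401, E501"),
        Directive::Codes(vec!["F401".to_string(), "E501".to_string()])
    );
    assert_eq!(parse_noqa("x = 1"), Directive::None);
}
```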


@@ -189,6 +189,7 @@ ruff_macros::register_rules!(
rules::pylint::rules::LoggingTooFewArgs,
rules::pylint::rules::LoggingTooManyArgs,
rules::pylint::rules::UnexpectedSpecialMethodSignature,
rules::pylint::rules::NestedMinMax,
// flake8-builtins
rules::flake8_builtins::rules::BuiltinVariableShadowing,
rules::flake8_builtins::rules::BuiltinArgumentShadowing,
@@ -505,10 +506,11 @@ ruff_macros::register_rules!(
rules::flake8_datetimez::rules::CallDateToday,
rules::flake8_datetimez::rules::CallDateFromtimestamp,
// pygrep-hooks
rules::pygrep_hooks::rules::BlanketNOQA,
rules::pygrep_hooks::rules::BlanketTypeIgnore,
rules::pygrep_hooks::rules::DeprecatedLogWarn,
rules::pygrep_hooks::rules::Eval,
rules::pygrep_hooks::rules::InvalidMockAccess,
// pandas-vet
rules::pandas_vet::rules::PandasUseOfInplaceArgument,
rules::pandas_vet::rules::PandasUseOfDotIsNull,
@@ -661,6 +663,7 @@ ruff_macros::register_rules!(
rules::ruff::rules::PairwiseOverZipped,
rules::ruff::rules::MutableDataclassDefault,
rules::ruff::rules::FunctionCallInDataclassDefaultArgument,
rules::ruff::rules::ExplicitFStringTypeConversion,
// flake8-django
rules::flake8_django::rules::DjangoNullableModelStringField,
rules::flake8_django::rules::DjangoLocalsInRenderFunction,
@@ -669,6 +672,8 @@ ruff_macros::register_rules!(
rules::flake8_django::rules::DjangoModelWithoutDunderStr,
rules::flake8_django::rules::DjangoUnorderedBodyContentInModel,
rules::flake8_django::rules::DjangoNonLeadingReceiverDecorator,
// flynt
rules::flynt::rules::StaticJoinToFString,
);
pub trait AsRule {
@@ -824,6 +829,9 @@ pub enum Linter {
/// [tryceratops](https://pypi.org/project/tryceratops/1.1.0/)
#[prefix = "TRY"]
Tryceratops,
/// [flynt](https://pypi.org/project/flynt/)
#[prefix = "FLY"]
Flynt,
/// NumPy-specific rules
#[prefix = "NPY"]
Numpy,


@@ -3,14 +3,16 @@ use ruff_macros::CacheKey;
use std::fmt::{Debug, Formatter};
use std::iter::FusedIterator;
const RULESET_SIZE: usize = 10;
/// A set of [`Rule`]s.
///
/// Uses a bitset where a bit of one signals that the Rule with that [u16] is in this set.
#[derive(Clone, Default, CacheKey, PartialEq, Eq)]
pub struct RuleSet([u64; RULESET_SIZE]);
impl RuleSet {
const EMPTY: [u64; RULESET_SIZE] = [0; RULESET_SIZE];
// 64 fits into a u16 without truncation
#[allow(clippy::cast_possible_truncation)]
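As the doc comment says, `RuleSet` is a bitset: rule number `N` (a `u16`) maps to bit `N % 64` of word `N / 64` in the fixed-size `[u64; RULESET_SIZE]` array. A minimal sketch of that mapping, with `insert`/`contains` as assumed method names:

```rust
// Bitset over small integers, mirroring the RuleSet layout: one bit per
// rule, packed into an array of u64 words.
const RULESET_SIZE: usize = 10;

#[derive(Clone, Copy, Default, PartialEq, Eq)]
struct BitSet([u64; RULESET_SIZE]);

impl BitSet {
    fn insert(&mut self, rule: u16) {
        // Word index is rule / 64; bit index within the word is rule % 64.
        self.0[rule as usize / 64] |= 1u64 << (rule % 64);
    }

    fn contains(&self, rule: u16) -> bool {
        self.0[rule as usize / 64] & (1u64 << (rule % 64)) != 0
    }
}

fn main() {
    let mut set = BitSet::default();
    set.insert(3);
    set.insert(64); // First bit of the second word.
    assert!(set.contains(3));
    assert!(set.contains(64));
    assert!(!set.contains(4));
}
```

Membership tests and set unions over this representation reduce to word-wise bit operations, which is why the registry tracks enabled rules this way.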


@@ -1,9 +1,5 @@
use std::str::FromStr;
use serde::de::{self, Visitor};
use serde::{Deserialize, Serialize};
use strum::IntoEnumIterator;
@@ -198,59 +194,71 @@ pub(crate) const fn prefix_to_selector(prefix: RuleCodePrefix) -> RuleSelector {
}
}
#[cfg(feature = "schemars")]
mod schema {
use crate::registry::RuleNamespace;
use crate::rule_selector::{Linter, Rule, RuleCodePrefix};
use crate::RuleSelector;
use itertools::Itertools;
use schemars::_serde_json::Value;
use schemars::schema::{InstanceType, Schema, SchemaObject};
use schemars::JsonSchema;
use strum::IntoEnumIterator;
impl JsonSchema for RuleSelector {
fn schema_name() -> String {
"RuleSelector".to_string()
}
fn json_schema(_gen: &mut schemars::gen::SchemaGenerator) -> Schema {
Schema::Object(SchemaObject {
instance_type: Some(InstanceType::String.into()),
enum_values: Some(
[
// Include the non-standard "ALL" selector.
"ALL".to_string(),
// Include the legacy "C" and "T" selectors.
"C".to_string(),
"T".to_string(),
// Include some common redirect targets for those legacy selectors.
"C9".to_string(),
"T1".to_string(),
"T2".to_string(),
]
.into_iter()
.chain(
RuleCodePrefix::iter()
.filter(|p| {
// Once logical lines are active by default, please remove this.
// This is here because generate-all output otherwise depends on
// the feature sets which makes the test running with
// `--all-features` fail
!Rule::from_code(&format!(
"{}{}",
p.linter().common_prefix(),
p.short_code()
))
.unwrap()
.lint_source()
.is_logical_lines()
})
.map(|p| {
let prefix = p.linter().common_prefix();
let code = p.short_code();
format!("{prefix}{code}")
})
.chain(Linter::iter().filter_map(|l| {
let prefix = l.common_prefix();
(!prefix.is_empty()).then(|| prefix.to_string())
})),
)
.sorted()
.map(Value::String)
.collect(),
),
..SchemaObject::default()
})
}
}
}
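The `enum_values` list above is assembled by chaining the fixed legacy selectors with the generated rule prefixes and linter prefixes, then sorting. A std-only sketch of that assembly with made-up inputs (the real code draws prefixes from `RuleCodePrefix::iter()` and uses `itertools::sorted`):

```rust
// Chain fixed legacy selectors with generated prefixes, drop empty linter
// prefixes, and return the sorted, deduplicated-by-construction list.
fn selector_values(prefixes: &[&str], linters: &[&str]) -> Vec<String> {
    let mut values: Vec<String> = ["ALL", "C", "T", "C9", "T1", "T2"]
        .into_iter()
        .map(str::to_string)
        .chain(prefixes.iter().map(|p| p.to_string()))
        .chain(
            linters
                .iter()
                .filter(|l| !l.is_empty())
                .map(|l| l.to_string()),
        )
        .collect();
    values.sort();
    values
}

fn main() {
    let values = selector_values(&["E1", "F4"], &["E", ""]);
    assert_eq!(
        values,
        vec!["ALL", "C", "C9", "E", "E1", "F4", "T", "T1", "T2"]
    );
}
```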


@@ -31,7 +31,7 @@ static PARTIAL_DICTIONARY_REGEX: Lazy<Regex> =
static PRINT_RETURN_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"^(print|return)\b\s*").unwrap());
/// Returns `true` if a comment contains Python code.
pub(crate) fn comment_contains_code(line: &str, task_tags: &[String]) -> bool {
let line = if let Some(line) = line.trim().strip_prefix('#') {
line.trim_start_matches([' ', '#'])
} else {


@@ -1,11 +1,11 @@
use ruff_text_size::TextRange;
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::source_code::Locator;
use crate::registry::Rule;
use crate::settings::Settings;
use super::detection::comment_contains_code;
@@ -46,19 +46,22 @@ fn is_standalone_comment(line: &str) -> bool {
}
/// ERA001
pub(crate) fn commented_out_code(
locator: &Locator,
range: TextRange,
settings: &Settings,
) -> Option<Diagnostic> {
let line = locator.full_lines(range);
// Verify that the comment is on its own line, and that it contains code.
if is_standalone_comment(line) && comment_contains_code(line, &settings.task_tags[..]) {
let mut diagnostic = Diagnostic::new(CommentedOutCode, range);
if settings.rules.should_fix(Rule::CommentedOutCode) {
#[allow(deprecated)]
diagnostic.set_fix(Fix::unspecified(Edit::range_deletion(
locator.full_lines_range(range),
)));
}
Some(diagnostic)
} else {


@@ -1,5 +1,5 @@
use num_bigint::BigInt;
use rustpython_parser::ast::{self, Attributed, Cmpop, Constant, Expr, ExprKind};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
@@ -121,19 +121,18 @@ fn is_sys(checker: &Checker, expr: &Expr, target: &str) -> bool {
}
/// YTT101, YTT102, YTT301, YTT303
pub(crate) fn subscript(checker: &mut Checker, value: &Expr, slice: &Expr) {
if is_sys(checker, value, "version") {
match &slice.node {
ExprKind::Slice(ast::ExprSlice {
lower: None,
upper: Some(upper),
step: None,
..
}) => {
if let ExprKind::Constant(ast::ExprConstant {
value: Constant::Int(i),
..
}) = &upper.node
{
if *i == BigInt::from(1)
&& checker.settings.rules.enabled(Rule::SysVersionSlice1)
@@ -151,10 +150,10 @@ pub fn subscript(checker: &mut Checker, value: &Expr, slice: &Expr) {
}
}
ExprKind::Constant(ast::ExprConstant {
value: Constant::Int(i),
..
}) => {
if *i == BigInt::from(2) && checker.settings.rules.enabled(Rule::SysVersion2) {
checker
.diagnostics
@@ -173,23 +172,25 @@ pub fn subscript(checker: &mut Checker, value: &Expr, slice: &Expr) {
}
/// YTT103, YTT201, YTT203, YTT204, YTT302
pub(crate) fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &[Expr]) {
match &left.node {
ExprKind::Subscript(ast::ExprSubscript { value, slice, .. })
if is_sys(checker, value, "version_info") =>
{
if let ExprKind::Constant(ast::ExprConstant {
value: Constant::Int(i),
..
}) = &slice.node
{
if *i == BigInt::from(0) {
if let (
[Cmpop::Eq | Cmpop::NotEq],
[Attributed {
node:
ExprKind::Constant(ast::ExprConstant {
value: Constant::Int(n),
..
}),
..
}],
) = (ops, comparators)
@@ -205,12 +206,12 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
} else if *i == BigInt::from(1) {
if let (
[Cmpop::Lt | Cmpop::LtE | Cmpop::Gt | Cmpop::GtE],
[Attributed {
node:
ExprKind::Constant(ast::ExprConstant {
value: Constant::Int(_),
..
}),
..
}],
) = (ops, comparators)
@@ -225,17 +226,17 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
}
}
ExprKind::Attribute(ast::ExprAttribute { value, attr, .. })
if is_sys(checker, value, "version_info") && attr == "minor" =>
{
if let (
[Cmpop::Lt | Cmpop::LtE | Cmpop::Gt | Cmpop::GtE],
[Attributed {
node:
ExprKind::Constant(ast::ExprConstant {
value: Constant::Int(_),
..
}),
..
}],
) = (ops, comparators)
@@ -258,12 +259,12 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
if is_sys(checker, left, "version") {
if let (
[Cmpop::Lt | Cmpop::LtE | Cmpop::Gt | Cmpop::GtE],
[Attributed {
node:
ExprKind::Constant(ast::ExprConstant {
value: Constant::Str(s),
..
}),
..
}],
) = (ops, comparators)
@@ -284,7 +285,7 @@ pub fn compare(checker: &mut Checker, left: &Expr, ops: &[Cmpop], comparators: &
}
/// YTT202
pub(crate) fn name_or_attribute(checker: &mut Checker, expr: &Expr) {
if checker
.ctx
.resolve_call_path(expr)


@@ -6,14 +6,18 @@ use ruff_diagnostics::Edit;
use ruff_python_ast::source_code::Locator;
/// ANN204
pub(crate) fn add_return_annotation(
locator: &Locator,
stmt: &Stmt,
annotation: &str,
) -> Result<Edit> {
let contents = &locator.contents()[stmt.range()];
// Find the colon (following the `def` keyword).
let mut seen_lpar = false;
let mut seen_rpar = false;
let mut count: usize = 0;
for (tok, range) in lexer::lex_starts_at(contents, Mode::Module, stmt.start()).flatten() {
if seen_lpar && seen_rpar {
if matches!(tok, Tok::Colon) {
return Ok(Edit::insertion(format!(" -> {annotation}"), range.start()));


@@ -1,36 +1,38 @@
use rustpython_parser::ast::{self, Arguments, Expr, Stmt, StmtKind};
use ruff_python_ast::cast;
use ruff_python_semantic::analyze::visibility;
use ruff_python_semantic::definition::{Definition, Member, MemberKind};
use crate::checkers::ast::Checker;
pub(super) fn match_function_def(stmt: &Stmt) -> (&str, &Arguments, Option<&Expr>, &Vec<Stmt>) {
match &stmt.node {
StmtKind::FunctionDef(ast::StmtFunctionDef {
name,
args,
returns,
body,
..
})
| StmtKind::AsyncFunctionDef(ast::StmtAsyncFunctionDef {
name,
args,
returns,
body,
..
}) => (name, args, returns.as_ref().map(|expr| &**expr), body),
_ => panic!("Found non-FunctionDef in match_name"),
}
}
/// Return the name of the function, if it's overloaded.
pub(crate) fn overloaded_name(checker: &Checker, definition: &Definition) -> Option<String> {
if let Definition::Member(Member {
kind: MemberKind::Function | MemberKind::NestedFunction | MemberKind::Method,
stmt,
..
}) = definition
{
if visibility::is_overload(&checker.ctx, cast::decorator_list(stmt)) {
let (name, ..) = match_function_def(stmt);
@@ -45,10 +47,16 @@ pub fn overloaded_name(checker: &Checker, definition: &Definition) -> Option<Str
/// Return `true` if the definition is the implementation for an overloaded
/// function.
pub(crate) fn is_overload_impl(
checker: &Checker,
definition: &Definition,
overloaded_name: &str,
) -> bool {
if let Definition::Member(Member {
kind: MemberKind::Function | MemberKind::NestedFunction | MemberKind::Method,
stmt,
..
}) = definition
{
if visibility::is_overload(&checker.ctx, cast::decorator_list(stmt)) {
false


@@ -1,16 +1,16 @@
use rustpython_parser::ast::{Expr, ExprKind, Stmt};
use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::ReturnStatementVisitor;
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_ast::{cast, helpers};
use ruff_python_semantic::analyze::visibility;
use ruff_python_semantic::analyze::visibility::Visibility;
use ruff_python_semantic::definition::{Definition, Member, MemberKind};
use ruff_python_stdlib::typing::SIMPLE_MAGIC_RETURN_TYPES;
use crate::checkers::ast::Checker;
use crate::registry::{AsRule, Rule};
use super::fixes;
@@ -37,7 +37,7 @@ use super::helpers::match_function_def;
/// ```
#[violation]
pub struct MissingTypeFunctionArgument {
name: String,
}
impl Violation for MissingTypeFunctionArgument {
@@ -69,7 +69,7 @@ impl Violation for MissingTypeFunctionArgument {
/// ```
#[violation]
pub struct MissingTypeArgs {
name: String,
}
impl Violation for MissingTypeArgs {
@@ -101,7 +101,7 @@ impl Violation for MissingTypeArgs {
/// ```
#[violation]
pub struct MissingTypeKwargs {
name: String,
}
impl Violation for MissingTypeKwargs {
@@ -138,7 +138,7 @@ impl Violation for MissingTypeKwargs {
/// ```
#[violation]
pub struct MissingTypeSelf {
name: String,
}
impl Violation for MissingTypeSelf {
@@ -177,7 +177,7 @@ impl Violation for MissingTypeSelf {
/// ```
#[violation]
pub struct MissingTypeCls {
name: String,
}
impl Violation for MissingTypeCls {
@@ -209,7 +209,7 @@ impl Violation for MissingTypeCls {
/// ```
#[violation]
pub struct MissingReturnTypeUndocumentedPublicFunction {
name: String,
}
impl Violation for MissingReturnTypeUndocumentedPublicFunction {
@@ -241,7 +241,7 @@ impl Violation for MissingReturnTypeUndocumentedPublicFunction {
/// ```
#[violation]
pub struct MissingReturnTypePrivateFunction {
name: String,
}
impl Violation for MissingReturnTypePrivateFunction {
@@ -286,7 +286,7 @@ impl Violation for MissingReturnTypePrivateFunction {
/// ```
#[violation]
pub struct MissingReturnTypeSpecialMethod {
name: String,
}
impl AlwaysAutofixableViolation for MissingReturnTypeSpecialMethod {
@@ -326,7 +326,7 @@ impl AlwaysAutofixableViolation for MissingReturnTypeSpecialMethod {
/// ```
#[violation]
pub struct MissingReturnTypeStaticMethod {
name: String,
}
impl Violation for MissingReturnTypeStaticMethod {
@@ -362,7 +362,7 @@ impl Violation for MissingReturnTypeStaticMethod {
/// ```
#[violation]
pub struct MissingReturnTypeClassMethod {
name: String,
}
impl Violation for MissingReturnTypeClassMethod {
@@ -403,7 +403,7 @@ impl Violation for MissingReturnTypeClassMethod {
/// - [Mypy: The Any type](https://mypy.readthedocs.io/en/stable/kinds_of_types.html#the-any-type)
#[violation]
pub struct AnyType {
name: String,
}
impl Violation for AnyType {
@@ -416,16 +416,11 @@ impl Violation for AnyType {
fn is_none_returning(body: &[Stmt]) -> bool {
let mut visitor = ReturnStatementVisitor::default();
visitor.visit_body(body);
for expr in visitor.returns.into_iter().flatten() {
if !matches!(
expr.node,
ExprKind::Constant(ref constant) if constant.value.is_none()
) {
return false;
}
@@ -451,7 +446,7 @@ fn check_dynamically_typed<F>(
}
/// Generate flake8-annotation checks for a given `Definition`.
pub(crate) fn definition(
checker: &Checker,
definition: &Definition,
visibility: Visibility,
@@ -459,285 +454,285 @@ pub fn definition(
// TODO(charlie): Consider using the AST directly here rather than `Definition`.
// We could adhere more closely to `flake8-annotations` by defining public
// vs. secret vs. protected.
if let DefinitionKind::Function(stmt)
| DefinitionKind::NestedFunction(stmt)
| DefinitionKind::Method(stmt) = &definition.kind
let Definition::Member(Member {
kind,
stmt,
..
}) = definition else {
return vec![];
};
let is_method = match kind {
MemberKind::Method => true,
MemberKind::Function | MemberKind::NestedFunction => false,
_ => return vec![],
};
let (name, args, returns, body) = match_function_def(stmt);
// Keep track of whether we've seen any typed arguments or return values.
let mut has_any_typed_arg = false; // Any argument has been typed?
let mut has_typed_return = false; // Return value has been typed?
let mut has_typed_self_or_cls = false; // Has a typed `self` or `cls` argument?
// Temporary storage for diagnostics; we emit them at the end
// unless configured to suppress ANN* for declarations that are fully untyped.
let mut diagnostics = Vec::new();
// ANN001, ANN401
for arg in args
.posonlyargs
.iter()
.chain(args.args.iter())
.chain(args.kwonlyargs.iter())
.skip(
// If this is a non-static method, skip `cls` or `self`.
usize::from(
is_method && !visibility::is_staticmethod(&checker.ctx, cast::decorator_list(stmt)),
),
)
{
let is_method = matches!(definition.kind, DefinitionKind::Method(_));
let (name, args, returns, body) = match_function_def(stmt);
// Keep track of whether we've seen any typed arguments or return values.
let mut has_any_typed_arg = false; // Any argument has been typed?
let mut has_typed_return = false; // Return value has been typed?
let mut has_typed_self_or_cls = false; // Has a typed `self` or `cls` argument?
// Temporary storage for diagnostics; we emit them at the end
// unless configured to suppress ANN* for declarations that are fully untyped.
let mut diagnostics = Vec::new();
// ANN001, ANN401
for arg in args
.posonlyargs
.iter()
.chain(args.args.iter())
.chain(args.kwonlyargs.iter())
.skip(
// If this is a non-static method, skip `cls` or `self`.
usize::from(
is_method
&& !visibility::is_staticmethod(&checker.ctx, cast::decorator_list(stmt)),
),
)
{
// ANN401 for dynamically typed arguments
if let Some(annotation) = &arg.node.annotation {
has_any_typed_arg = true;
if checker.settings.rules.enabled(Rule::AnyType) {
check_dynamically_typed(
checker,
annotation,
|| arg.node.arg.to_string(),
&mut diagnostics,
);
}
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker
.settings
.rules
.enabled(Rule::MissingTypeFunctionArgument)
{
diagnostics.push(Diagnostic::new(
MissingTypeFunctionArgument {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
}
}
}
// ANN002, ANN401
if let Some(arg) = &args.vararg {
if let Some(expr) = &arg.node.annotation {
has_any_typed_arg = true;
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(Rule::AnyType) {
let name = &arg.node.arg;
check_dynamically_typed(
checker,
expr,
|| format!("*{name}"),
&mut diagnostics,
);
}
}
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(Rule::MissingTypeArgs) {
diagnostics.push(Diagnostic::new(
MissingTypeArgs {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
}
}
}
// ANN003, ANN401
if let Some(arg) = &args.kwarg {
if let Some(expr) = &arg.node.annotation {
has_any_typed_arg = true;
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(Rule::AnyType) {
let name = &arg.node.arg;
check_dynamically_typed(
checker,
expr,
|| format!("**{name}"),
&mut diagnostics,
);
}
}
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(Rule::MissingTypeKwargs) {
diagnostics.push(Diagnostic::new(
MissingTypeKwargs {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
}
}
}
// ANN101, ANN102
if is_method && !visibility::is_staticmethod(&checker.ctx, cast::decorator_list(stmt)) {
if let Some(arg) = args.posonlyargs.first().or_else(|| args.args.first()) {
if arg.node.annotation.is_none() {
if visibility::is_classmethod(&checker.ctx, cast::decorator_list(stmt)) {
if checker.settings.rules.enabled(Rule::MissingTypeCls) {
diagnostics.push(Diagnostic::new(
MissingTypeCls {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
} else {
if checker.settings.rules.enabled(Rule::MissingTypeSelf) {
diagnostics.push(Diagnostic::new(
MissingTypeSelf {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
}
} else {
has_typed_self_or_cls = true;
}
}
}
// ANN201, ANN202, ANN401
if let Some(expr) = &returns {
has_typed_return = true;
// ANN401 for dynamically typed arguments
if let Some(annotation) = &arg.node.annotation {
has_any_typed_arg = true;
if checker.settings.rules.enabled(Rule::AnyType) {
check_dynamically_typed(checker, expr, || name.to_string(), &mut diagnostics);
check_dynamically_typed(
checker,
annotation,
|| arg.node.arg.to_string(),
&mut diagnostics,
);
}
} else if !(
// Allow omission of return annotation if the function only returns `None`
// (explicitly or implicitly).
checker.settings.flake8_annotations.suppress_none_returning && is_none_returning(body)
) {
if is_method && visibility::is_classmethod(&checker.ctx, cast::decorator_list(stmt)) {
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeClassMethod)
{
diagnostics.push(Diagnostic::new(
MissingReturnTypeClassMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
} else if is_method
&& visibility::is_staticmethod(&checker.ctx, cast::decorator_list(stmt))
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeStaticMethod)
.enabled(Rule::MissingTypeFunctionArgument)
{
diagnostics.push(Diagnostic::new(
MissingReturnTypeStaticMethod {
name: name.to_string(),
MissingTypeFunctionArgument {
name: arg.node.arg.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
arg.range(),
));
}
} else if is_method && visibility::is_init(name) {
// Allow omission of return annotation in `__init__` functions, as long as at
// least one argument is typed.
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeSpecialMethod)
{
if !(checker.settings.flake8_annotations.mypy_init_return && has_any_typed_arg)
{
let mut diagnostic = Diagnostic::new(
MissingReturnTypeSpecialMethod {
name: name.to_string(),
}
}
}
// ANN002, ANN401
if let Some(arg) = &args.vararg {
if let Some(expr) = &arg.node.annotation {
has_any_typed_arg = true;
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(Rule::AnyType) {
let name = &arg.node.arg;
check_dynamically_typed(checker, expr, || format!("*{name}"), &mut diagnostics);
}
}
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(Rule::MissingTypeArgs) {
diagnostics.push(Diagnostic::new(
MissingTypeArgs {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
}
}
}
// ANN003, ANN401
if let Some(arg) = &args.kwarg {
if let Some(expr) = &arg.node.annotation {
has_any_typed_arg = true;
if !checker.settings.flake8_annotations.allow_star_arg_any {
if checker.settings.rules.enabled(Rule::AnyType) {
let name = &arg.node.arg;
check_dynamically_typed(
checker,
expr,
|| format!("**{name}"),
&mut diagnostics,
);
}
}
} else {
if !(checker.settings.flake8_annotations.suppress_dummy_args
&& checker.settings.dummy_variable_rgx.is_match(&arg.node.arg))
{
if checker.settings.rules.enabled(Rule::MissingTypeKwargs) {
diagnostics.push(Diagnostic::new(
MissingTypeKwargs {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
}
}
}
// ANN101, ANN102
if is_method && !visibility::is_staticmethod(&checker.ctx, cast::decorator_list(stmt)) {
if let Some(arg) = args.posonlyargs.first().or_else(|| args.args.first()) {
if arg.node.annotation.is_none() {
if visibility::is_classmethod(&checker.ctx, cast::decorator_list(stmt)) {
if checker.settings.rules.enabled(Rule::MissingTypeCls) {
diagnostics.push(Diagnostic::new(
MissingTypeCls {
name: arg.node.arg.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
);
if checker.patch(diagnostic.kind.rule()) {
diagnostic.try_set_fix(|| {
fixes::add_return_annotation(checker.locator, stmt, "None")
});
}
diagnostics.push(diagnostic);
arg.range(),
));
}
} else {
if checker.settings.rules.enabled(Rule::MissingTypeSelf) {
diagnostics.push(Diagnostic::new(
MissingTypeSelf {
name: arg.node.arg.to_string(),
},
arg.range(),
));
}
}
} else if is_method && visibility::is_magic(name) {
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeSpecialMethod)
{
} else {
has_typed_self_or_cls = true;
}
}
}
// ANN201, ANN202, ANN401
if let Some(expr) = &returns {
has_typed_return = true;
if checker.settings.rules.enabled(Rule::AnyType) {
check_dynamically_typed(checker, expr, || name.to_string(), &mut diagnostics);
}
} else if !(
// Allow omission of return annotation if the function only returns `None`
// (explicitly or implicitly).
checker.settings.flake8_annotations.suppress_none_returning && is_none_returning(body)
) {
if is_method && visibility::is_classmethod(&checker.ctx, cast::decorator_list(stmt)) {
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeClassMethod)
{
diagnostics.push(Diagnostic::new(
MissingReturnTypeClassMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
} else if is_method && visibility::is_staticmethod(&checker.ctx, cast::decorator_list(stmt))
{
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeStaticMethod)
{
diagnostics.push(Diagnostic::new(
MissingReturnTypeStaticMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
} else if is_method && visibility::is_init(name) {
// Allow omission of return annotation in `__init__` functions, as long as at
// least one argument is typed.
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeSpecialMethod)
{
if !(checker.settings.flake8_annotations.mypy_init_return && has_any_typed_arg) {
let mut diagnostic = Diagnostic::new(
MissingReturnTypeSpecialMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
);
if checker.patch(diagnostic.kind.rule()) {
#[allow(deprecated)]
diagnostic.try_set_fix_from_edit(|| {
fixes::add_return_annotation(checker.locator, stmt, "None")
});
}
diagnostics.push(diagnostic);
}
}
} else if is_method && visibility::is_magic(name) {
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeSpecialMethod)
{
let mut diagnostic = Diagnostic::new(
MissingReturnTypeSpecialMethod {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
);
let return_type = SIMPLE_MAGIC_RETURN_TYPES.get(name);
if let Some(return_type) = return_type {
if checker.patch(diagnostic.kind.rule()) {
#[allow(deprecated)]
diagnostic.try_set_fix_from_edit(|| {
fixes::add_return_annotation(checker.locator, stmt, return_type)
});
}
}
diagnostics.push(diagnostic);
}
} else {
match visibility {
Visibility::Public => {
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypeUndocumentedPublicFunction)
{
diagnostics.push(Diagnostic::new(
MissingReturnTypeUndocumentedPublicFunction {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
}
Visibility::Private => {
if checker
.settings
.rules
.enabled(Rule::MissingReturnTypePrivateFunction)
{
diagnostics.push(Diagnostic::new(
MissingReturnTypePrivateFunction {
name: name.to_string(),
},
helpers::identifier_range(stmt, checker.locator),
));
}
}
}
}
}
// If settings say so, don't report any of the
// diagnostics gathered here if there were no type annotations at all.
if checker.settings.flake8_annotations.ignore_fully_untyped
&& !(has_any_typed_arg || has_typed_self_or_cls || has_typed_return)
{
vec![]
} else {
diagnostics
}
}
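For reference, the `ignore_fully_untyped` gate at the end of this function can be mimicked from the Python side with the `ast` module. A minimal sketch (the helper name `is_fully_untyped` is illustrative, not part of Ruff):

```python
import ast

def is_fully_untyped(source: str) -> bool:
    """True if the first function in `source` has no annotations at all
    (arguments or return) -- the condition under which the diagnostics
    gathered above are suppressed when `ignore-fully-untyped` is set."""
    func = next(
        n for n in ast.walk(ast.parse(source))
        if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
    )
    args = func.args
    all_args = args.posonlyargs + args.args + args.kwonlyargs
    if args.vararg:
        all_args.append(args.vararg)
    if args.kwarg:
        all_args.append(args.kwarg)
    has_typed_arg = any(a.annotation is not None for a in all_args)
    has_typed_return = func.returns is not None
    return not (has_typed_arg or has_typed_return)

print(is_fully_untyped("def f(x, y):\n    return x"))       # fully untyped
print(is_fully_untyped("def g(x: int, y):\n    return x"))  # one typed arg
```

Note that the Rust code also counts a typed `self`/`cls` (via `has_typed_self_or_cls`); this sketch only checks ordinary arguments and the return annotation.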

View File

@@ -1,19 +1,17 @@
//! Settings for the `flake8-annotations` plugin.
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use ruff_macros::CacheKey;
use ruff_macros::ConfigurationOptions;
#[derive(
Debug, PartialEq, Eq, Default, Serialize, Deserialize, ConfigurationOptions, JsonSchema,
)]
#[derive(Debug, PartialEq, Eq, Default, Serialize, Deserialize, ConfigurationOptions)]
#[serde(
deny_unknown_fields,
rename_all = "kebab-case",
rename = "Flake8AnnotationsOptions"
)]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub struct Options {
#[option(
default = "false",

View File

@@ -1,6 +1,6 @@
use once_cell::sync::Lazy;
use regex::Regex;
use rustpython_parser::ast::{Constant, Expr, ExprKind};
use rustpython_parser::ast::{self, Constant, Expr, ExprKind};
use crate::checkers::ast::Checker;
@@ -8,23 +8,23 @@ static PASSWORD_CANDIDATE_REGEX: Lazy<Regex> = Lazy::new(|| {
Regex::new(r"(^|_)(?i)(pas+wo?r?d|pass(phrase)?|pwd|token|secrete?)($|_)").unwrap()
});
pub fn string_literal(expr: &Expr) -> Option<&str> {
pub(crate) fn string_literal(expr: &Expr) -> Option<&str> {
match &expr.node {
ExprKind::Constant {
ExprKind::Constant(ast::ExprConstant {
value: Constant::Str(string),
..
} => Some(string),
}) => Some(string),
_ => None,
}
}
pub fn matches_password_name(string: &str) -> bool {
pub(crate) fn matches_password_name(string: &str) -> bool {
PASSWORD_CANDIDATE_REGEX.is_match(string)
}
pub fn is_untyped_exception(type_: Option<&Expr>, checker: &Checker) -> bool {
pub(crate) fn is_untyped_exception(type_: Option<&Expr>, checker: &Checker) -> bool {
type_.map_or(true, |type_| {
if let ExprKind::Tuple { elts, .. } = &type_.node {
if let ExprKind::Tuple(ast::ExprTuple { elts, .. }) = &type_.node {
elts.iter().any(|type_| {
checker
.ctx

View File
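The `PASSWORD_CANDIDATE_REGEX` above drives the name heuristic shared by S105–S107. A hedged Python equivalent (case-insensitivity moved into `re.IGNORECASE`, since recent Python versions reject a mid-pattern `(?i)` group):

```python
import re

# Same candidate pattern as the Rust side, expressed for Python's `re`.
PASSWORD_CANDIDATE = re.compile(
    r"(^|_)(pas+wo?r?d|pass(phrase)?|pwd|token|secrete?)($|_)",
    re.IGNORECASE,
)

def matches_password_name(name: str) -> bool:
    # Rust's `is_match` scans anywhere in the string, so use `search`.
    return PASSWORD_CANDIDATE.search(name) is not None

for name in ["password", "PASSWD", "auth_token", "secret_key", "user_name"]:
    print(name, matches_password_name(name))
```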

@@ -36,6 +36,6 @@ impl Violation for Assert {
}
/// S101
pub fn assert_used(stmt: &Stmt) -> Diagnostic {
pub(crate) fn assert_used(stmt: &Stmt) -> Diagnostic {
Diagnostic::new(Assert, TextRange::at(stmt.start(), "assert".text_len()))
}

View File

@@ -1,7 +1,7 @@
use num_traits::ToPrimitive;
use once_cell::sync::Lazy;
use rustc_hash::FxHashMap;
use rustpython_parser::ast::{Constant, Expr, ExprKind, Keyword, Operator};
use rustpython_parser::ast::{self, Constant, Expr, ExprKind, Keyword, Operator};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
@@ -12,7 +12,7 @@ use crate::checkers::ast::Checker;
#[violation]
pub struct BadFilePermissions {
pub mask: u16,
mask: u16,
}
impl Violation for BadFilePermissions {
@@ -70,14 +70,14 @@ static PYSTAT_MAPPING: Lazy<FxHashMap<&'static str, u16>> = Lazy::new(|| {
fn get_int_value(expr: &Expr) -> Option<u16> {
match &expr.node {
ExprKind::Constant {
ExprKind::Constant(ast::ExprConstant {
value: Constant::Int(value),
..
} => value.to_u16(),
ExprKind::Attribute { .. } => {
}) => value.to_u16(),
ExprKind::Attribute(_) => {
compose_call_path(expr).and_then(|path| PYSTAT_MAPPING.get(path.as_str()).copied())
}
ExprKind::BinOp { left, op, right } => {
ExprKind::BinOp(ast::ExprBinOp { left, op, right }) => {
if let (Some(left_value), Some(right_value)) =
(get_int_value(left), get_int_value(right))
{
@@ -96,7 +96,7 @@ fn get_int_value(expr: &Expr) -> Option<u16> {
}
/// S103
pub fn bad_file_permissions(
pub(crate) fn bad_file_permissions(
checker: &mut Checker,
func: &Expr,
args: &[Expr],

View File
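The `get_int_value` hunk above resolves a `chmod` mask from integer literals, `stat` attributes, and binary operations over both. A rough Python sketch of the same idea (it resolves `stat.*` attributes directly via `getattr` rather than through Ruff's `PYSTAT_MAPPING` table, and only handles `|` and `&`):

```python
import ast
import stat

def get_int_value(expr):
    """Resolve a permission-mask expression to an int, or None if it
    cannot be evaluated statically. Sketch only, not Ruff's exact logic."""
    if isinstance(expr, ast.Constant) and isinstance(expr.value, int):
        return expr.value
    if isinstance(expr, ast.Attribute) and isinstance(expr.value, ast.Name):
        if expr.value.id == "stat":
            return getattr(stat, expr.attr, None)
        return None
    if isinstance(expr, ast.BinOp):
        left, right = get_int_value(expr.left), get_int_value(expr.right)
        if left is None or right is None:
            return None
        if isinstance(expr.op, ast.BitOr):
            return left | right
        if isinstance(expr.op, ast.BitAnd):
            return left & right
    return None

mask = ast.parse("stat.S_IWGRP | stat.S_IWOTH", mode="eval").body
print(oct(get_int_value(mask)))
```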

@@ -1,4 +1,4 @@
use rustpython_parser::ast::{Expr, ExprKind};
use rustpython_parser::ast::{self, Expr, ExprKind};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
@@ -14,8 +14,8 @@ impl Violation for ExecBuiltin {
}
/// S102
pub fn exec_used(expr: &Expr, func: &Expr) -> Option<Diagnostic> {
let ExprKind::Name { id, .. } = &func.node else {
pub(crate) fn exec_used(expr: &Expr, func: &Expr) -> Option<Diagnostic> {
let ExprKind::Name(ast::ExprName { id, .. }) = &func.node else {
return None;
};
if id != "exec" {

View File

@@ -13,7 +13,7 @@ impl Violation for HardcodedBindAllInterfaces {
}
/// S104
pub fn hardcoded_bind_all_interfaces(value: &str, range: TextRange) -> Option<Diagnostic> {
pub(crate) fn hardcoded_bind_all_interfaces(value: &str, range: TextRange) -> Option<Diagnostic> {
if value == "0.0.0.0" {
Some(Diagnostic::new(HardcodedBindAllInterfaces, range))
} else {

View File

@@ -7,33 +7,36 @@ use super::super::helpers::{matches_password_name, string_literal};
#[violation]
pub struct HardcodedPasswordDefault {
pub string: String,
name: String,
}
impl Violation for HardcodedPasswordDefault {
#[derive_message_formats]
fn message(&self) -> String {
let HardcodedPasswordDefault { string } = self;
format!("Possible hardcoded password: \"{}\"", string.escape_debug())
let HardcodedPasswordDefault { name } = self;
format!(
"Possible hardcoded password assigned to function default: \"{}\"",
name.escape_debug()
)
}
}
fn check_password_kwarg(arg: &Arg, default: &Expr) -> Option<Diagnostic> {
let string = string_literal(default).filter(|string| !string.is_empty())?;
string_literal(default).filter(|string| !string.is_empty())?;
let kwarg_name = &arg.node.arg;
if !matches_password_name(kwarg_name) {
return None;
}
Some(Diagnostic::new(
HardcodedPasswordDefault {
string: string.to_string(),
name: kwarg_name.to_string(),
},
default.range(),
))
}
/// S107
pub fn hardcoded_password_default(arguments: &Arguments) -> Vec<Diagnostic> {
pub(crate) fn hardcoded_password_default(arguments: &Arguments) -> Vec<Diagnostic> {
let mut diagnostics: Vec<Diagnostic> = Vec::new();
let defaults_start =

View File
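This hunk changes S107 to report the argument *name* rather than the literal value. A Python sketch of the underlying detection over keyword defaults (helper and variable names here are illustrative; keyword-only defaults are omitted for brevity):

```python
import ast
import re

PASSWORD_CANDIDATE = re.compile(
    r"(^|_)(pas+wo?r?d|pass(phrase)?|pwd|token|secrete?)($|_)", re.IGNORECASE
)

def hardcoded_password_defaults(source: str):
    """Return argument names whose default is a non-empty string literal
    and whose name looks like a password."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        args = node.args.posonlyargs + node.args.args
        # Positional defaults align with the *trailing* arguments.
        trailing = args[len(args) - len(node.args.defaults):]
        for arg, default in zip(trailing, node.args.defaults):
            if (isinstance(default, ast.Constant)
                    and isinstance(default.value, str)
                    and default.value
                    and PASSWORD_CANDIDATE.search(arg.arg)):
                hits.append(arg.arg)
    return hits

print(hardcoded_password_defaults("def connect(host, password='hunter2'): ..."))
```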

@@ -7,30 +7,33 @@ use super::super::helpers::{matches_password_name, string_literal};
#[violation]
pub struct HardcodedPasswordFuncArg {
pub string: String,
name: String,
}
impl Violation for HardcodedPasswordFuncArg {
#[derive_message_formats]
fn message(&self) -> String {
let HardcodedPasswordFuncArg { string } = self;
format!("Possible hardcoded password: \"{}\"", string.escape_debug())
let HardcodedPasswordFuncArg { name } = self;
format!(
"Possible hardcoded password assigned to argument: \"{}\"",
name.escape_debug()
)
}
}
/// S106
pub fn hardcoded_password_func_arg(keywords: &[Keyword]) -> Vec<Diagnostic> {
pub(crate) fn hardcoded_password_func_arg(keywords: &[Keyword]) -> Vec<Diagnostic> {
keywords
.iter()
.filter_map(|keyword| {
let string = string_literal(&keyword.node.value).filter(|string| !string.is_empty())?;
string_literal(&keyword.node.value).filter(|string| !string.is_empty())?;
let arg = keyword.node.arg.as_ref()?;
if !matches_password_name(arg) {
return None;
}
Some(Diagnostic::new(
HardcodedPasswordFuncArg {
string: string.to_string(),
name: arg.to_string(),
},
keyword.range(),
))

View File

@@ -1,4 +1,4 @@
use rustpython_parser::ast::{Constant, Expr, ExprKind};
use rustpython_parser::ast::{self, Constant, Expr, ExprKind};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
@@ -7,49 +7,59 @@ use super::super::helpers::{matches_password_name, string_literal};
#[violation]
pub struct HardcodedPasswordString {
pub string: String,
name: String,
}
impl Violation for HardcodedPasswordString {
#[derive_message_formats]
fn message(&self) -> String {
let HardcodedPasswordString { string } = self;
format!("Possible hardcoded password: \"{}\"", string.escape_debug())
let HardcodedPasswordString { name } = self;
format!(
"Possible hardcoded password assigned to: \"{}\"",
name.escape_debug()
)
}
}
fn is_password_target(target: &Expr) -> bool {
fn password_target(target: &Expr) -> Option<&str> {
let target_name = match &target.node {
// variable = "s3cr3t"
ExprKind::Name { id, .. } => id,
ExprKind::Name(ast::ExprName { id, .. }) => id.as_str(),
// d["password"] = "s3cr3t"
ExprKind::Subscript { slice, .. } => match &slice.node {
ExprKind::Constant {
ExprKind::Subscript(ast::ExprSubscript { slice, .. }) => match &slice.node {
ExprKind::Constant(ast::ExprConstant {
value: Constant::Str(string),
..
} => string,
_ => return false,
}) => string,
_ => return None,
},
// obj.password = "s3cr3t"
ExprKind::Attribute { attr, .. } => attr,
_ => return false,
ExprKind::Attribute(ast::ExprAttribute { attr, .. }) => attr,
_ => return None,
};
matches_password_name(target_name)
if matches_password_name(target_name) {
Some(target_name)
} else {
None
}
}
/// S105
pub fn compare_to_hardcoded_password_string(left: &Expr, comparators: &[Expr]) -> Vec<Diagnostic> {
pub(crate) fn compare_to_hardcoded_password_string(
left: &Expr,
comparators: &[Expr],
) -> Vec<Diagnostic> {
comparators
.iter()
.filter_map(|comp| {
let string = string_literal(comp).filter(|string| !string.is_empty())?;
if !is_password_target(left) {
string_literal(comp).filter(|string| !string.is_empty())?;
let Some(name) = password_target(left) else {
return None;
}
};
Some(Diagnostic::new(
HardcodedPasswordString {
string: string.to_string(),
name: name.to_string(),
},
comp.range(),
))
@@ -58,13 +68,19 @@ pub fn compare_to_hardcoded_password_string(left: &Expr, comparators: &[Expr]) -
}
/// S105
pub fn assign_hardcoded_password_string(value: &Expr, targets: &[Expr]) -> Option<Diagnostic> {
if let Some(string) = string_literal(value).filter(|string| !string.is_empty()) {
pub(crate) fn assign_hardcoded_password_string(
value: &Expr,
targets: &[Expr],
) -> Option<Diagnostic> {
if string_literal(value)
.filter(|string| !string.is_empty())
.is_some()
{
for target in targets {
if is_password_target(target) {
if let Some(name) = password_target(target) {
return Some(Diagnostic::new(
HardcodedPasswordString {
string: string.to_string(),
name: name.to_string(),
},
value.range(),
));

View File
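The refactor from `is_password_target` to `password_target` returns the matched name so it can appear in the diagnostic message. The three target shapes it recognizes, sketched in Python (assumes Python 3.9+ `ast`, where a subscript's slice is the expression itself):

```python
import ast
import re

PASSWORD_CANDIDATE = re.compile(
    r"(^|_)(pas+wo?r?d|pass(phrase)?|pwd|token|secrete?)($|_)", re.IGNORECASE
)

def password_target(target):
    """Return the password-like name behind an assignment target, or None."""
    if isinstance(target, ast.Name):          # variable = "s3cr3t"
        name = target.id
    elif isinstance(target, ast.Subscript):   # d["password"] = "s3cr3t"
        s = target.slice
        if isinstance(s, ast.Constant) and isinstance(s.value, str):
            name = s.value
        else:
            return None
    elif isinstance(target, ast.Attribute):   # obj.password = "s3cr3t"
        name = target.attr
    else:
        return None
    return name if PASSWORD_CANDIDATE.search(name) else None

assign = ast.parse('obj.password = "s3cr3t"').body[0]
print(password_target(assign.targets[0]))
```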

@@ -1,6 +1,6 @@
use once_cell::sync::Lazy;
use regex::Regex;
use rustpython_parser::ast::{Expr, ExprKind, Operator};
use rustpython_parser::ast::{self, Expr, ExprKind, Operator};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
@@ -56,18 +56,18 @@ fn unparse_string_format_expression(checker: &mut Checker, expr: &Expr) -> Optio
match &expr.node {
// "select * from table where val = " + "str" + ...
// "select * from table where val = %s" % ...
ExprKind::BinOp {
ExprKind::BinOp(ast::ExprBinOp {
op: Operator::Add | Operator::Mod,
..
} => {
let Some(parent) = checker.ctx.current_expr_parent() else {
}) => {
let Some(parent) = checker.ctx.expr_parent() else {
if any_over_expr(expr, &has_string_literal) {
return Some(unparse_expr(expr, checker.stylist));
}
return None;
};
// Only evaluate the full BinOp, not the nested components.
let ExprKind::BinOp { .. } = &parent.node else {
let ExprKind::BinOp(_) = &parent.node else {
if any_over_expr(expr, &has_string_literal) {
return Some(unparse_expr(expr, checker.stylist));
}
@@ -75,8 +75,8 @@ fn unparse_string_format_expression(checker: &mut Checker, expr: &Expr) -> Optio
};
None
}
ExprKind::Call { func, .. } => {
let ExprKind::Attribute{ attr, value, .. } = &func.node else {
ExprKind::Call(ast::ExprCall { func, .. }) => {
let ExprKind::Attribute(ast::ExprAttribute { attr, value, .. }) = &func.node else {
return None;
};
// "select * from table where val = {}".format(...)
@@ -86,13 +86,13 @@ fn unparse_string_format_expression(checker: &mut Checker, expr: &Expr) -> Optio
None
}
// f"select * from table where val = {val}"
ExprKind::JoinedStr { .. } => Some(unparse_expr(expr, checker.stylist)),
ExprKind::JoinedStr(_) => Some(unparse_expr(expr, checker.stylist)),
_ => None,
}
}
/// S608
pub fn hardcoded_sql_expression(checker: &mut Checker, expr: &Expr) {
pub(crate) fn hardcoded_sql_expression(checker: &mut Checker, expr: &Expr) {
match unparse_string_format_expression(checker, expr) {
Some(string) if matches_sql_statement(&string) => {
checker

View File
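S608 only fires when the reassembled string actually looks like a SQL statement. An illustrative approximation of that test (Ruff's real pattern differs in detail; this is only the idea):

```python
import re

# Rough stand-in for Ruff's SQL-statement regex behind S608.
SQL = re.compile(
    r"(select\s.*\sfrom\s|delete\s+from\s|insert\s+into\s.*\svalues\s|update\s.*\sset\s)",
    re.IGNORECASE,
)

def looks_like_sql(string: str) -> bool:
    return SQL.search(string) is not None

print(looks_like_sql("SELECT * FROM users WHERE name = %s"))
print(looks_like_sql("hello world"))
```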

@@ -5,7 +5,7 @@ use ruff_macros::{derive_message_formats, violation};
#[violation]
pub struct HardcodedTempFile {
pub string: String,
string: String,
}
impl Violation for HardcodedTempFile {
@@ -20,7 +20,7 @@ impl Violation for HardcodedTempFile {
}
/// S108
pub fn hardcoded_tmp_directory(
pub(crate) fn hardcoded_tmp_directory(
expr: &Expr,
value: &str,
prefixes: &[String],

View File

@@ -1,4 +1,4 @@
use rustpython_parser::ast::{Constant, Expr, ExprKind, Keyword};
use rustpython_parser::ast::{self, Constant, Expr, ExprKind, Keyword};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
@@ -10,7 +10,7 @@ use super::super::helpers::string_literal;
#[violation]
pub struct HashlibInsecureHashFunction {
pub string: String,
string: String,
}
impl Violation for HashlibInsecureHashFunction {
@@ -27,10 +27,10 @@ fn is_used_for_security(call_args: &SimpleCallArgs) -> bool {
match call_args.keyword_argument("usedforsecurity") {
Some(expr) => !matches!(
&expr.node,
ExprKind::Constant {
ExprKind::Constant(ast::ExprConstant {
value: Constant::Bool(false),
..
}
})
),
_ => true,
}
@@ -42,7 +42,7 @@ enum HashlibCall {
}
/// S324
pub fn hashlib_insecure_hash_functions(
pub(crate) fn hashlib_insecure_hash_functions(
checker: &mut Checker,
func: &Expr,
args: &[Expr],

View File
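Per the `is_used_for_security` hunk above, only a literal `usedforsecurity=False` keyword suppresses S324; any other value (or its absence) leaves the call treated as security-sensitive. A Python sketch:

```python
import ast

def is_used_for_security(call: ast.Call) -> bool:
    """Only a literal `usedforsecurity=False` opts the call out."""
    for kw in call.keywords:
        if kw.arg == "usedforsecurity":
            return not (isinstance(kw.value, ast.Constant)
                        and kw.value.value is False)
    return True

call = ast.parse("hashlib.md5(data, usedforsecurity=False)", mode="eval").body
print(is_used_for_security(call))
```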

@@ -1,4 +1,4 @@
use rustpython_parser::ast::{Constant, Expr, ExprKind, Keyword};
use rustpython_parser::ast::{self, Constant, Expr, ExprKind, Keyword};
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
@@ -8,7 +8,7 @@ use crate::checkers::ast::Checker;
#[violation]
pub struct Jinja2AutoescapeFalse {
pub value: bool,
value: bool,
}
impl Violation for Jinja2AutoescapeFalse {
@@ -30,7 +30,7 @@ impl Violation for Jinja2AutoescapeFalse {
}
/// S701
pub fn jinja2_autoescape_false(
pub(crate) fn jinja2_autoescape_false(
checker: &mut Checker,
func: &Expr,
args: &[Expr],
@@ -47,13 +47,13 @@ pub fn jinja2_autoescape_false(
if let Some(autoescape_arg) = call_args.keyword_argument("autoescape") {
match &autoescape_arg.node {
ExprKind::Constant {
ExprKind::Constant(ast::ExprConstant {
value: Constant::Bool(true),
..
} => (),
ExprKind::Call { func, .. } => {
if let ExprKind::Name { id, .. } = &func.node {
if id.as_str() != "select_autoescape" {
}) => (),
ExprKind::Call(ast::ExprCall { func, .. }) => {
if let ExprKind::Name(ast::ExprName { id, .. }) = &func.node {
if id != "select_autoescape" {
checker.diagnostics.push(Diagnostic::new(
Jinja2AutoescapeFalse { value: true },
autoescape_arg.range(),

View File
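The S701 logic above accepts `autoescape=True` or a call to `select_autoescape(...)` and flags everything else, including an omitted keyword (Jinja2 defaults to no autoescaping). Sketched in Python:

```python
import ast

def jinja2_autoescape_ok(call: ast.Call) -> bool:
    """True if the `autoescape` keyword is a literal True or a
    `select_autoescape(...)` call; False otherwise (would be flagged)."""
    for kw in call.keywords:
        if kw.arg != "autoescape":
            continue
        v = kw.value
        if isinstance(v, ast.Constant) and v.value is True:
            return True
        if (isinstance(v, ast.Call)
                and isinstance(v.func, ast.Name)
                and v.func.id == "select_autoescape"):
            return True
        return False
    return False  # keyword omitted: defaults to no escaping, flagged

src = "Environment(loader=loader, autoescape=select_autoescape())"
print(jinja2_autoescape_ok(ast.parse(src, mode="eval").body))
```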

@@ -17,7 +17,7 @@ impl Violation for LoggingConfigInsecureListen {
}
/// S612
pub fn logging_config_insecure_listen(
pub(crate) fn logging_config_insecure_listen(
checker: &mut Checker,
func: &Expr,
args: &[Expr],

View File

@@ -1,35 +1,37 @@
pub use assert_used::{assert_used, Assert};
pub use bad_file_permissions::{bad_file_permissions, BadFilePermissions};
pub use exec_used::{exec_used, ExecBuiltin};
pub use hardcoded_bind_all_interfaces::{
pub(crate) use assert_used::{assert_used, Assert};
pub(crate) use bad_file_permissions::{bad_file_permissions, BadFilePermissions};
pub(crate) use exec_used::{exec_used, ExecBuiltin};
pub(crate) use hardcoded_bind_all_interfaces::{
hardcoded_bind_all_interfaces, HardcodedBindAllInterfaces,
};
pub use hardcoded_password_default::{hardcoded_password_default, HardcodedPasswordDefault};
pub use hardcoded_password_func_arg::{hardcoded_password_func_arg, HardcodedPasswordFuncArg};
pub use hardcoded_password_string::{
pub(crate) use hardcoded_password_default::{hardcoded_password_default, HardcodedPasswordDefault};
pub(crate) use hardcoded_password_func_arg::{
hardcoded_password_func_arg, HardcodedPasswordFuncArg,
};
pub(crate) use hardcoded_password_string::{
assign_hardcoded_password_string, compare_to_hardcoded_password_string, HardcodedPasswordString,
};
pub use hardcoded_sql_expression::{hardcoded_sql_expression, HardcodedSQLExpression};
pub use hardcoded_tmp_directory::{hardcoded_tmp_directory, HardcodedTempFile};
pub use hashlib_insecure_hash_functions::{
pub(crate) use hardcoded_sql_expression::{hardcoded_sql_expression, HardcodedSQLExpression};
pub(crate) use hardcoded_tmp_directory::{hardcoded_tmp_directory, HardcodedTempFile};
pub(crate) use hashlib_insecure_hash_functions::{
hashlib_insecure_hash_functions, HashlibInsecureHashFunction,
};
pub use jinja2_autoescape_false::{jinja2_autoescape_false, Jinja2AutoescapeFalse};
pub use logging_config_insecure_listen::{
pub(crate) use jinja2_autoescape_false::{jinja2_autoescape_false, Jinja2AutoescapeFalse};
pub(crate) use logging_config_insecure_listen::{
logging_config_insecure_listen, LoggingConfigInsecureListen,
};
pub use request_with_no_cert_validation::{
pub(crate) use request_with_no_cert_validation::{
request_with_no_cert_validation, RequestWithNoCertValidation,
};
pub use request_without_timeout::{request_without_timeout, RequestWithoutTimeout};
pub use shell_injection::{
pub(crate) use request_without_timeout::{request_without_timeout, RequestWithoutTimeout};
pub(crate) use shell_injection::{
shell_injection, CallWithShellEqualsTrue, StartProcessWithAShell, StartProcessWithNoShell,
StartProcessWithPartialPath, SubprocessPopenWithShellEqualsTrue,
SubprocessWithoutShellEqualsTrue,
};
pub use snmp_insecure_version::{snmp_insecure_version, SnmpInsecureVersion};
pub use snmp_weak_cryptography::{snmp_weak_cryptography, SnmpWeakCryptography};
pub use suspicious_function_call::{
pub(crate) use snmp_insecure_version::{snmp_insecure_version, SnmpInsecureVersion};
pub(crate) use snmp_weak_cryptography::{snmp_weak_cryptography, SnmpWeakCryptography};
pub(crate) use suspicious_function_call::{
suspicious_function_call, SuspiciousEvalUsage, SuspiciousFTPLibUsage,
SuspiciousInsecureCipherModeUsage, SuspiciousInsecureCipherUsage, SuspiciousInsecureHashUsage,
SuspiciousMarkSafeUsage, SuspiciousMarshalUsage, SuspiciousMktempUsage,
@@ -39,9 +41,9 @@ pub use suspicious_function_call::{
SuspiciousXMLExpatReaderUsage, SuspiciousXMLMiniDOMUsage, SuspiciousXMLPullDOMUsage,
SuspiciousXMLSaxUsage,
};
pub use try_except_continue::{try_except_continue, TryExceptContinue};
pub use try_except_pass::{try_except_pass, TryExceptPass};
pub use unsafe_yaml_load::{unsafe_yaml_load, UnsafeYAMLLoad};
pub(crate) use try_except_continue::{try_except_continue, TryExceptContinue};
pub(crate) use try_except_pass::{try_except_pass, TryExceptPass};
pub(crate) use unsafe_yaml_load::{unsafe_yaml_load, UnsafeYAMLLoad};
mod assert_used;
mod bad_file_permissions;

Some files were not shown because too many files have changed in this diff.