Compare commits

...

42 Commits

Author SHA1 Message Date
Dhruv Manilawala
ee258caed7 Bump version to 0.6.3 (#13152) 2024-08-29 20:29:33 +05:30
Aditya Pal
b4d9d26020 Update faq.md to highlight changes to src (#13145)
This attempts to close https://github.com/astral-sh/ruff/issues/13134

## Summary

Documentation change to address
https://github.com/astral-sh/ruff/issues/13134

## Test Plan

Markdown changes were previewed
2024-08-29 11:57:53 +00:00
Steve C
a99832088a [ruff] - extend comment deletions for unused-noqa (RUF100) (#13105)
## Summary

Extends the RUF100 fix to delete trailing text from `noqa`
directives while preserving any subsequent comments on the same line.

When a comment is deleted up to another comment on the same line, the
whitespace between them is now included in the autofix shown in the
diagnostic. Leading whitespace before the removed comment is not.
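A rough, hypothetical sketch of the observable behavior (Ruff itself edits source ranges, not regexes; the function name and pattern below are illustrative only):

```python
import re

# Hypothetical sketch of the extended RUF100 fix: delete an unused
# `# noqa` directive plus its trailing text, keeping any comment that
# follows it on the same line.
def strip_unused_noqa(line: str) -> str:
    return re.sub(r"\s*#\s*noqa\b[^#]*", " ", line).rstrip(" ")

assert strip_unused_noqa("x = 1  # noqa: F401 stale text  # keep me") == "x = 1 # keep me"
assert strip_unused_noqa("x = 1  # noqa") == "x = 1"
```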

Fixes #12251 

## Test Plan

`cargo test`
2024-08-29 10:50:16 +05:30
Carl Meyer
770ef2ab27 [red-knot] support deferred evaluation of type expressions (#13131)
Prototype deferred evaluation of type expressions by deferring
evaluation of class bases in a stub file. This allows self-referential
class definitions, such as the definition of `str` in typeshed (which
inherits from `Sequence[str]`).
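Outside a stub, the same shape needs a string forward reference, since class bases are evaluated eagerly at runtime; a stub with deferred evaluation can use the name directly. A minimal sketch (`MyStr` stands in for typeshed's `str`):

```python
from collections.abc import Sequence

# In builtins.pyi, `str` inherits `Sequence[str]` -- a self-reference
# that only works if evaluation of class bases is deferred. At runtime,
# the reference must be spelled as a string instead:
class MyStr(Sequence["MyStr"]):
    def __getitem__(self, index):
        return self

    def __len__(self) -> int:
        return 0

assert issubclass(MyStr, Sequence)
```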

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-08-28 11:41:01 -07:00
Alex Waygood
c6023c03a2 [red-knot] Add docs on using RAYON_NUM_THREADS for better logging (#13140)
Follow-up to #13049. We check files concurrently now; to get readable
logs, you probably want to switch that off.
2024-08-28 17:14:56 +01:00
Adam Kuhn
df694ca1c1 [FastAPI] Avoid introducing invalid syntax in fix for fast-api-non-annotated-dependency (FAST002) (#13133) 2024-08-28 15:29:00 +00:00
Calum Young
2e75cfbfe7 Format PYI examples in docs as .pyi-file snippets (#13116) 2024-08-28 13:20:40 +01:00
Alex Waygood
cfafaa7637 [red-knot] Remove very noisy tracing call when resolving ImportFrom statements (#13136) 2024-08-28 10:05:00 +00:00
Jonathan Plasse
3e9c7adeee Replace crates by dependi for VS Code Dev Container (#13125)
## Summary
The crates extension now recommends migrating to dependi.
![Screenshot from 2024-08-27
19-33-39](https://github.com/user-attachments/assets/c8f6480e-a07c-41a5-8bd0-d808c5a987a0)

## Test Plan
Opening the dev container correctly installs dependi.
2024-08-28 09:53:27 +05:30
Chris Krycho
81cd438d88 red-knot: infer and display ellipsis type (#13124)
## Summary

Just what it says on the tin: adds basic `EllipsisType` inference for
any time `...` appears in the AST.

## Test Plan

Test that `x = ...` produces exactly what we would expect.

---------

Co-authored-by: Carl Meyer <carl@oddbird.net>
2024-08-27 20:52:53 +01:00
Dylan
483748c188 [flake8-implicit-str-concat] Normalize octals before merging concatenated strings in single-line-implicit-string-concatenation (ISC001) (#13118)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-08-27 18:53:27 +01:00
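Why octal normalization matters before merging, as a minimal illustration (assumed, not taken from the PR): naively joining two implicitly concatenated literals can fuse characters into a longer octal escape and change the value.

```python
# "\0" followed by the character '0' is two characters; merging the
# literals naively into "\00" produces a single octal escape (NUL),
# changing the value. Normalizing "\0" to "\x00" first keeps the
# merge safe, since "\x" consumes exactly two hex digits.
pair = "\0" "0"   # NUL byte, then '0' -> length 2
naive = "\00"     # one octal escape: just a NUL byte -> length 1
safe = "\x000"    # "\x00" then '0' -> length 2

assert len(pair) == 2 and len(naive) == 1
assert pair == safe
```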
Calum Young
eb3dc37faa Add note about how Ruff handles PYI files wrt target version (#13111)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-08-27 17:28:22 +00:00
Chris Krycho
aba1802828 red-knot: infer multiplication for strings and integers (#13117)
## Summary

The resulting type when multiplying a string literal by an integer
literal is one of two types:

- `StringLiteral`, containing the fully expanded string, when the result
is reasonably small (arbitrarily bounded here to 4096 bytes, roughly a
page on many operating systems).
- `LiteralString`, matching Pyright etc., for strings larger than that.

Additionally:

- Switch to using `Box<str>` instead of `String` for the internal value
of `StringLiteral`, saving some non-trivial byte overhead (and keeping
the total number of allocations the same).
- Be clearer and more accurate about which types we ought to defer to in
`StringLiteral` and `LiteralString` member lookup.

## Test Plan

Added a test case covering multiplication times integers: positive,
negative, zero, and in and out of bounds.

---------

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
2024-08-27 09:00:36 -07:00
Tom Kuson
96b42b0c8f [DOC201] Permit explicit None in functions that only return None (#13064)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-08-27 16:00:18 +00:00
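The newly permitted pattern, sketched (the function and its name are illustrative, not from the PR):

```python
# DOC201 normally flags a docstring "Returns" section on a function
# with no meaningful return value; an explicit `None` is now permitted
# when the function only ever returns None.
def log_event(message: str) -> None:
    """Record a message.

    Returns:
        None
    """
    print(message)
    return None

assert log_event("hello") is None
```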
Niels Wouda
e6d0c4a65d Add time, tzinfo, and timezone as immutable function calls (#13109) 2024-08-27 15:51:32 +01:00
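RUF009 flags function calls in dataclass field defaults unless the call is known to be immutable; the `datetime` constructors named in the title are now on that list. A sketch of code the rule now leaves alone (the dataclass itself is illustrative):

```python
from dataclasses import dataclass
from datetime import time, timedelta, timezone

@dataclass
class Meeting:
    # Calls to immutable constructors are safe as field defaults, so
    # RUF009 no longer flags these:
    start: time = time(9, 30)
    tz: timezone = timezone(timedelta(hours=2))

m = Meeting()
assert m.start.hour == 9
assert m.tz.utcoffset(None) == timedelta(hours=2)
```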
Calum Young
4e1b289a67 Disable E741 in stub files (#13119)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-08-27 15:02:14 +01:00
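What the change means in practice: E741 flags ambiguous single-character names (`l`, `O`, `I`), but stub files must mirror the runtime API they describe, so the check is now skipped for `.pyi` files. Illustrative only:

```python
# In a .py file this still triggers E741 (ambiguous variable name);
# in a .pyi stub it is now ignored, since the stub has no choice but
# to reproduce the name used at runtime.
l = 1  # the kind of name E741 targets
assert l == 1
```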
Alex Waygood
a5ef124201 [red-knot] Improve the accuracy of the unresolved-import check (#13055) 2024-08-27 14:17:22 +01:00
Chris Krycho
390bb43276 red-knot: flatten match expression in infer_binary_expression (#13115)
## Summary

This fixes the outstanding TODO and makes it easier to work with new
cases. (Tidy first, *then* implement, basically!)

## Test Plan

After making this change all the existing tests still pass. A classic
refactor win. 🎉
2024-08-26 12:34:07 -07:00
Chris Krycho
fe8b15291f red-knot: implement unary minus on integer literals (#13114)
## Summary

Add support for the first unary operator: negating integer literals. The
resulting type is another integer literal, with the value being the
negated value of the literal. All other types continue to return
`Type::Unknown` for now, but this is designed to make it easy to extend
with other combinations of operator and operand.
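The runtime semantics the new inference mirrors, including the cases from the test plan below:

```python
# Negating an integer literal yields another integer literal with the
# negated value; double negation round-trips, even for very large ints.
assert -5 == 0 - 5
big = 10**100
assert -(-big) == big
assert -0 == 0
```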

Contributes to #12701.

## Test Plan

Add tests with basic negation, including of very large integers and
double negation.
2024-08-26 12:08:18 -07:00
Dhruv Manilawala
c8e01d7c53 Update dependency in insider requirements.txt (#13112) 2024-08-26 19:02:27 +00:00
Chris Krycho
c4d628cc4c red-knot: infer string literal types (#13113)
## Summary

Introduce a `StringLiteralType` with corresponding `Display` type and a
relatively basic test that the resulting representation is as expected.

Note: we currently always allocate for `StringLiteral` types. This may
end up being a perf issue later, at which point we may want to look at
other ways of representing `value` here, i.e. with some kind of smarter
string structure which can reuse types. That is most likely to show up
with e.g. concatenation.
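The cases from the test plan, executed at runtime; a `StringLiteral` type would carry these exact values:

```python
# Single- and double-quoted literals have the same type and value, and
# adjacent literals are concatenated at parse time -- the case most
# likely to stress the always-allocating representation.
assert 'abc' == "abc"
joined = "hello " 'world'
assert joined == "hello world"
```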

Contributes to #12701.

## Test Plan

Added a test for individual strings with both single and double quotes
as well as concatenated strings with both forms.
2024-08-26 11:42:34 -07:00
Calum Young
ab3648c4c5 Format docs with ruff formatter (#13087)
## Summary

Now that Ruff provides a formatter, there is no need to rely on Black to
check that the docs are formatted correctly in
`check_docs_formatted.py`. This PR swaps out Black for the Ruff
formatter and updates inconsistencies between the two.

This PR will be a precursor to another PR
([branch](https://github.com/calumy/ruff/tree/format-pyi-in-docs)),
updating the `check_docs_formatted.py` script to check for pyi files,
fixing #11568.

## Test Plan

- CI to check that the docs are formatted correctly using the updated
script.
2024-08-26 21:25:10 +05:30
Teodoro Freund
a822fd6642 Fixed benchmarking section in Contributing guide (#13107)
## Summary

Noticed an incorrect tip in the Contributing guide: `cargo benchmark
lexer` wouldn't run any benches. Probably a missed update in #9535.

It may make sense to remove the `cargo benchmark` command from the guide
altogether, but that's up to the maintainers.
2024-08-26 18:49:01 +05:30
Calum Young
f8f2e2a442 Add anchor tags to README headers (#13083)
## Summary

This pull request adds anchor tags to the elements referenced in the
table of contents section of the readme used on
[PyPI](https://pypi.org/project/ruff/) as an attempt to fix #7257. This
update follows [this
suggestion](https://github.com/pypa/readme_renderer/issues/169#issuecomment-808577486)
to add anchor tags (with no spaces) after the title that is to be linked
to.

## Test Plan

- This has been tested on GitHub to check that the additional tags do
not interfere with how the README is rendered; see:
https://github.com/calumy/ruff/blob/add-links-to-pypi-docs/README.md
- MkDocs pages were generated using the `generate_mkdocs.py` script;
however, as the added tags fall after the comment `<!-- End section:
Overview -->`, they are excluded and will not change how the docs are
rendered.
- I was unable to verify how PyPI renders this change; any suggestions
would be appreciated, and I can follow up on this. Hopefully, the four
thumbs up/heart on [this
comment](https://github.com/pypa/readme_renderer/issues/169#issuecomment-808577486)
and [this
suggestion](https://github.com/pypa/readme_renderer/issues/169#issuecomment-1765616890)
all suggest that this approach should work.
2024-08-26 12:42:35 +05:30
Steve C
0b5828a1e8 [flake8-simplify] - extend open-file-with-context-handler to work with dbm.sqlite3 (SIM115) (#13104)
## Summary

Adds the upcoming `dbm.sqlite3` module to the rule that suggests opening
resources with a context manager.

See: https://docs.python.org/3.13/library/dbm.html#module-dbm.sqlite3
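`dbm.sqlite3` only arrives in Python 3.13, so this sketch uses the long-standing `dbm.dumb` backend, which exposes the same `open` interface; the point is the context-manager pattern SIM115 asks for:

```python
import dbm.dumb
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store")

# SIM115 prefers opening via a context manager so the handle is closed
# even on error; dbm objects have supported `with` since Python 3.4.
with dbm.dumb.open(path, "c") as db:
    db[b"answer"] = b"42"

with dbm.dumb.open(path, "r") as db:
    stored = db[b"answer"]

assert stored == b"42"
```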

## Test Plan

`cargo test`
2024-08-26 08:11:03 +01:00
Steve C
5af48337a5 [pylint] - fix incorrect starred expression replacement for nested-min-max (PLW3301) (#13089)
## Summary

Moves the min/max detection up, and fixes #13088 
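The equivalence PLW3301's fix relies on, and the hazard the bug fix addresses (the names below are illustrative):

```python
nums = (3, 1, 2)

# PLW3301 flattens nested min()/max() calls via a starred expression;
# the rewrite is only valid when the inner call really is min()/max():
assert min(1, min(nums)) == min(1, *nums) == 1

# The bug let the starred replacement fire on other calls; rewriting
# max(1, sum(nums)) to max(1, *nums) would change the result:
assert max(1, sum(nums)) == 6
assert max(1, *nums) == 3
```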

## Test Plan

`cargo test`
2024-08-26 10:01:38 +05:30
renovate[bot]
39ad6b9472 Update tj-actions/changed-files action to v45 (#13102) 2024-08-25 22:11:24 -04:00
renovate[bot]
41dec93cd2 Update dependency monaco-editor to ^0.51.0 (#13101) 2024-08-25 22:11:15 -04:00
renovate[bot]
aee2caa733 Update NPM Development dependencies (#13100) 2024-08-25 22:11:07 -04:00
renovate[bot]
fe5544e137 Update dependency react-resizable-panels to v2.1.1 (#13098) 2024-08-25 22:11:01 -04:00
renovate[bot]
14c014a48b Update Rust crate syn to v2.0.76 (#13097) 2024-08-25 22:10:57 -04:00
renovate[bot]
ecd0597d6b Update Rust crate serde_json to v1.0.127 (#13096) 2024-08-25 22:10:50 -04:00
renovate[bot]
202271fba6 Update Rust crate serde to v1.0.209 (#13095) 2024-08-25 22:10:45 -04:00
renovate[bot]
4bdb0b4f86 Update Rust crate quote to v1.0.37 (#13094) 2024-08-25 22:10:38 -04:00
renovate[bot]
2286f916c1 Update Rust crate libc to v0.2.158 (#13093) 2024-08-25 22:10:32 -04:00
renovate[bot]
1e4c944251 Update pre-commit dependencies (#13099)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-26 01:49:41 +01:00
Calum Young
f50f8732e9 [flake8-pytest-style] Improve help message for pytest-incorrect-mark-parentheses-style (PT023) (#13092) 2024-08-26 01:37:57 +01:00
Micha Reiser
ecab04e338 Basic concurrent checking (#13049) 2024-08-24 09:53:27 +01:00
Dylan
8c09496b07 [red-knot] Resolve function annotations before adding function symbol (#13084)
This PR has the `SemanticIndexBuilder` visit function definition
annotations before adding the function symbol/name to the builder.

For example, the following snippet no longer causes a panic:

```python
def bool(x) -> bool:
    return True
```

Note: This fix changes the ordering of the global symbol table.

Closes #13069
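The runtime evaluation order the builder now mirrors: parameter defaults (and annotations) are evaluated at `def` time, before the function's own name is bound in the enclosing scope. A small sketch with illustrative names:

```python
trace = []

def note(tag):
    trace.append(tag)
    return tag

# The default runs at `def` time, before `f` is bound; the semantic
# index builder now visits the definition in the same order.
def f(x=note("default")):
    return x

trace.append("after-def")
assert trace == ["default", "after-def"]
assert f() == "default"
```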
2024-08-23 19:31:36 -07:00
Alex Waygood
d19fd1b91c [red-knot] Add symbols for for loop variables (#13075)
## Summary

This PR adds symbols introduced by `for` loops to red-knot:
- `x` in `for x in range(10): pass`
- `x` and `y` in `for x, y in d.items(): pass`
- `a`, `b`, `c` and `d` in `for [((a,), b), (c, d)] in foo: pass`
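The nested-target case from the last bullet, executed once to show which names the loop binds:

```python
# Each name in an arbitrarily nested target list/tuple becomes a symbol;
# running the loop over a one-element iterable shows the bindings.
for [((a,), b), (c, d)] in [[((1,), 2), (3, 4)]]:
    pass

assert (a, b, c, d) == (1, 2, 3, 4)
```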

## Test Plan

Several tests added, and the assertion in the benchmarks has been
updated.

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2024-08-23 23:40:27 +01:00
Dhruv Manilawala
99df859e20 Include all required keys for Zed settings (#13082)
## Summary

Closes: #13081
2024-08-23 16:18:05 +00:00
Micha Reiser
2d5fe9a6d3 Fix dark theme on initial page load (#13077) 2024-08-23 12:53:43 +00:00
117 changed files with 2836 additions and 828 deletions

View File

@@ -20,7 +20,7 @@
"extensions": [
"ms-python.python",
"rust-lang.rust-analyzer",
"serayuzgur.crates",
"fill-labs.dependi",
"tamasfe.even-better-toml",
"Swellaby.vscode-rust-test-adapter",
"charliermarsh.ruff"

View File

@@ -37,7 +37,7 @@ jobs:
with:
fetch-depth: 0
- uses: tj-actions/changed-files@v44
- uses: tj-actions/changed-files@v45
id: changed
with:
files_yaml: |

View File

@@ -45,7 +45,7 @@ repos:
)$
- repo: https://github.com/crate-ci/typos
rev: v1.23.6
rev: v1.24.1
hooks:
- id: typos
@@ -59,7 +59,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.1
rev: v0.6.2
hooks:
- id: ruff-format
- id: ruff

View File

@@ -1,5 +1,27 @@
# Changelog
## 0.6.3
### Preview features
- \[`flake8-simplify`\] Extend `open-file-with-context-handler` to work with `dbm.sqlite3` (`SIM115`) ([#13104](https://github.com/astral-sh/ruff/pull/13104))
- \[`pycodestyle`\] Disable `E741` in stub files (`.pyi`) ([#13119](https://github.com/astral-sh/ruff/pull/13119))
- \[`pydoclint`\] Avoid `DOC201` on explicit returns in functions that only return `None` ([#13064](https://github.com/astral-sh/ruff/pull/13064))
### Rule changes
- \[`flake8-async`\] Disable check for `asyncio` before Python 3.11 (`ASYNC109`) ([#13023](https://github.com/astral-sh/ruff/pull/13023))
### Bug fixes
- \[`FastAPI`\] Avoid introducing invalid syntax in fix for `fast-api-non-annotated-dependency` (`FAST002`) ([#13133](https://github.com/astral-sh/ruff/pull/13133))
- \[`flake8-implicit-str-concat`\] Normalize octals before merging concatenated strings in `single-line-implicit-string-concatenation` (`ISC001`) ([#13118](https://github.com/astral-sh/ruff/pull/13118))
- \[`flake8-pytest-style`\] Improve help message for `pytest-incorrect-mark-parentheses-style` (`PT023`) ([#13092](https://github.com/astral-sh/ruff/pull/13092))
- \[`pylint`\] Avoid autofix for calls that aren't `min` or `max` as starred expression (`PLW3301`) ([#13089](https://github.com/astral-sh/ruff/pull/13089))
- \[`ruff`\] Add `datetime.time`, `datetime.tzinfo`, and `datetime.timezone` as immutable function calls (`RUF009`) ([#13109](https://github.com/astral-sh/ruff/pull/13109))
- \[`ruff`\] Extend comment deletion for `RUF100` to include trailing text from `noqa` directives while preserving any comments that follow on the same line ([#13105](https://github.com/astral-sh/ruff/pull/13105))
- Fix dark theme on initial page load for the Ruff playground ([#13077](https://github.com/astral-sh/ruff/pull/13077))
## 0.6.2
### Preview features

View File

@@ -530,6 +530,8 @@ You can run the benchmarks with
cargo benchmark
```
`cargo benchmark` is an alias for `cargo bench -p ruff_benchmark --bench linter --bench formatter --`
#### Benchmark-driven Development
Ruff uses [Criterion.rs](https://bheisler.github.io/criterion.rs/book/) for benchmarks. You can use
@@ -568,7 +570,7 @@ cargo install critcmp
#### Tips
- Use `cargo bench -p ruff_benchmark <filter>` to only run specific benchmarks. For example: `cargo benchmark lexer`
- Use `cargo bench -p ruff_benchmark <filter>` to only run specific benchmarks. For example: `cargo bench -p ruff_benchmark lexer`
to only run the lexer benchmarks.
- Use `cargo bench -p ruff_benchmark -- --quiet` for a more cleaned up output (without statistical relevance)
- Use `cargo bench -p ruff_benchmark -- --quick` to get faster results (more prone to noise)

Cargo.lock generated (32 changes)
View File

@@ -1256,9 +1256,9 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.157"
version = "0.2.158"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "374af5f94e54fa97cf75e945cce8a6b201e88a1a07e688b47dfd2a59c66dbd86"
checksum = "d8adc4bb1803a324070e64a98ae98f38934d91957a99cfb3a43dcbc01bc56439"
[[package]]
name = "libcst"
@@ -1827,9 +1827,9 @@ dependencies = [
[[package]]
name = "quote"
version = "1.0.36"
version = "1.0.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fa76aaf39101c457836aec0ce2316dbdc3ab723cdda1c6bd4e6ad4208acaca7"
checksum = "b5b9d34b8991d19d98081b46eacdd8eb58c6f2b201139f7c5f643cc155a633af"
dependencies = [
"proc-macro2",
]
@@ -1989,6 +1989,7 @@ dependencies = [
"anyhow",
"crossbeam",
"notify",
"rayon",
"red_knot_python_semantic",
"ruff_cache",
"ruff_db",
@@ -2089,7 +2090,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.6.2"
version = "0.6.3"
dependencies = [
"anyhow",
"argfile",
@@ -2147,6 +2148,7 @@ dependencies = [
"criterion",
"mimalloc",
"once_cell",
"rayon",
"red_knot_python_semantic",
"red_knot_workspace",
"ruff_db",
@@ -2281,7 +2283,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.6.2"
version = "0.6.3"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2601,7 +2603,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.6.2"
version = "0.6.3"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -2828,9 +2830,9 @@ checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b"
[[package]]
name = "serde"
version = "1.0.208"
version = "1.0.209"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cff085d2cb684faa248efb494c39b68e522822ac0de72ccf08109abde717cfb2"
checksum = "99fce0ffe7310761ca6bf9faf5115afbc19688edd00171d81b1bb1b116c63e09"
dependencies = [
"serde_derive",
]
@@ -2848,9 +2850,9 @@ dependencies = [
[[package]]
name = "serde_derive"
version = "1.0.208"
version = "1.0.209"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24008e81ff7613ed8e5ba0cfaf24e2c2f1e5b8a0495711e44fcd4882fca62bcf"
checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170"
dependencies = [
"proc-macro2",
"quote",
@@ -2870,9 +2872,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.125"
version = "1.0.127"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83c8e735a073ccf5be70aa8066aa984eaf2fa000db6c8d0100ae605b366d31ed"
checksum = "8043c06d9f82bd7271361ed64f415fe5e12a77fdb52e573e7f06a516dea329ad"
dependencies = [
"itoa",
"memchr",
@@ -3031,9 +3033,9 @@ checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
[[package]]
name = "syn"
version = "2.0.75"
version = "2.0.76"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6af063034fc1935ede7be0122941bafa9bacb949334d090b77ca98b5817c7d9"
checksum = "578e081a14e0cefc3279b0472138c513f37b41a08d5a3cca9b6e4e8ceb6cd525"
dependencies = [
"proc-macro2",
"quote",

View File

@@ -110,7 +110,7 @@ For more, see the [documentation](https://docs.astral.sh/ruff/).
1. [Who's Using Ruff?](#whos-using-ruff)
1. [License](#license)
## Getting Started
## Getting Started<a id="getting-started"></a>
For more, see the [documentation](https://docs.astral.sh/ruff/).
@@ -136,8 +136,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.6.2/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.6.2/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.6.3/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.6.3/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -170,7 +170,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.6.2
rev: v0.6.3
hooks:
# Run the linter.
- id: ruff
@@ -195,7 +195,7 @@ jobs:
- uses: chartboost/ruff-action@v1
```
### Configuration
### Configuration<a id="configuration"></a>
Ruff can be configured through a `pyproject.toml`, `ruff.toml`, or `.ruff.toml` file (see:
[_Configuration_](https://docs.astral.sh/ruff/configuration/), or [_Settings_](https://docs.astral.sh/ruff/settings/)
@@ -291,7 +291,7 @@ features that may change prior to stabilization.
See `ruff help` for more on Ruff's top-level commands, or `ruff help check` and `ruff help format`
for more on the linting and formatting commands, respectively.
## Rules
## Rules<a id="rules"></a>
<!-- Begin section: Rules -->
@@ -367,21 +367,21 @@ quality tools, including:
For a complete enumeration of the supported rules, see [_Rules_](https://docs.astral.sh/ruff/rules/).
## Contributing
## Contributing<a id="contributing"></a>
Contributions are welcome and highly appreciated. To get started, check out the
[**contributing guidelines**](https://docs.astral.sh/ruff/contributing/).
You can also join us on [**Discord**](https://discord.com/invite/astral-sh).
## Support
## Support<a id="support"></a>
Having trouble? Check out the existing issues on [**GitHub**](https://github.com/astral-sh/ruff/issues),
or feel free to [**open a new one**](https://github.com/astral-sh/ruff/issues/new).
You can also ask for help on [**Discord**](https://discord.com/invite/astral-sh).
## Acknowledgements
## Acknowledgements<a id="acknowledgements"></a>
Ruff's linter draws on both the APIs and implementation details of many other
tools in the Python ecosystem, especially [Flake8](https://github.com/PyCQA/flake8), [Pyflakes](https://github.com/PyCQA/pyflakes),
@@ -405,7 +405,7 @@ Ruff is the beneficiary of a large number of [contributors](https://github.com/a
Ruff is released under the MIT license.
## Who's Using Ruff?
## Who's Using Ruff?<a id="whos-using-ruff"></a>
Ruff is used by a number of major open-source projects and companies, including:
@@ -524,7 +524,7 @@ If you're using Ruff, consider adding the Ruff badge to your project's `README.m
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
```
## License
## License<a id="license"></a>
This repository is licensed under the [MIT License](https://github.com/astral-sh/ruff/blob/main/LICENSE)

View File

@@ -13,12 +13,17 @@ The CLI supports different verbosity levels.
- `-vv` activates `debug!` and timestamps: This should be enough information to get to the bottom of bug reports. When you're processing many packages or files, you'll get pages and pages of output, but each line is linked to a specific action or state change.
- `-vvv` activates `trace!` (only in debug builds) and shows tracing-spans: At this level, you're logging everything. Most of this is wasted, it's really slow, and we dump e.g. the entire resolution graph. It is only useful to developers, and you almost certainly want to use `RED_KNOT_LOG` to filter it down to the area you're investigating.
## `RED_KNOT_LOG`
## Better logging with `RED_KNOT_LOG` and `RAYON_NUM_THREADS`
By default, the CLI shows messages from the `ruff` and `red_knot` crates. Tracing messages from other crates are not shown.
The `RED_KNOT_LOG` environment variable allows you to customize which messages are shown by specifying one
or more [filter directives](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#directives).
The `RAYON_NUM_THREADS` environment variable, meanwhile, can be used to control the level of concurrency red-knot uses.
By default, red-knot will attempt to parallelize its work so that multiple files are checked simultaneously,
but this can result in confusing log output where messages from different threads are interleaved.
To switch off concurrency entirely and have more readable logs, use `RAYON_NUM_THREADS=1`.
### Examples
#### Show all debug messages

View File

@@ -575,7 +575,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let index = semantic_index(&db, file);
let global_table = symbol_table(&db, global_scope(&db, file));
assert_eq!(names(&global_table), vec!["f", "str", "int"]);
assert_eq!(names(&global_table), vec!["str", "int", "f"]);
let [(function_scope_id, _function_scope)] = index
.child_scopes(FileScopeId::global())
@@ -1095,4 +1095,56 @@ match subject:
vec!["subject", "a", "b", "c", "d", "f", "e", "h", "g", "Foo", "i", "j", "k", "l"]
);
}
#[test]
fn for_loops_single_assignment() {
let TestCase { db, file } = test_case("for x in a: pass");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
assert_eq!(&names(&global_table), &["a", "x"]);
let use_def = use_def_map(&db, scope);
let definition = use_def
.first_public_definition(global_table.symbol_id_by_name("x").unwrap())
.unwrap();
assert!(matches!(definition.node(&db), DefinitionKind::For(_)));
}
#[test]
fn for_loops_simple_unpacking() {
let TestCase { db, file } = test_case("for (x, y) in a: pass");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
assert_eq!(&names(&global_table), &["a", "x", "y"]);
let use_def = use_def_map(&db, scope);
let x_definition = use_def
.first_public_definition(global_table.symbol_id_by_name("x").unwrap())
.unwrap();
let y_definition = use_def
.first_public_definition(global_table.symbol_id_by_name("y").unwrap())
.unwrap();
assert!(matches!(x_definition.node(&db), DefinitionKind::For(_)));
assert!(matches!(y_definition.node(&db), DefinitionKind::For(_)));
}
#[test]
fn for_loops_complex_unpacking() {
let TestCase { db, file } = test_case("for [((a,), b), (c, d)] in e: pass");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
assert_eq!(&names(&global_table), &["e", "a", "b", "c", "d"]);
let use_def = use_def_map(&db, scope);
let definition = use_def
.first_public_definition(global_table.symbol_id_by_name("a").unwrap())
.unwrap();
assert!(matches!(definition.node(&db), DefinitionKind::For(_)));
}
}

View File

@@ -15,7 +15,7 @@ use crate::semantic_index::ast_ids::node_key::ExpressionNodeKey;
use crate::semantic_index::ast_ids::AstIdsBuilder;
use crate::semantic_index::definition::{
AssignmentDefinitionNodeRef, ComprehensionDefinitionNodeRef, Definition, DefinitionNodeKey,
DefinitionNodeRef, ImportFromDefinitionNodeRef,
DefinitionNodeRef, ForStmtDefinitionNodeRef, ImportFromDefinitionNodeRef,
};
use crate::semantic_index::expression::Expression;
use crate::semantic_index::symbol::{
@@ -392,20 +392,6 @@ where
self.visit_decorator(decorator);
}
let symbol = self
.add_or_update_symbol(function_def.name.id.clone(), SymbolFlags::IS_DEFINED);
self.add_definition(symbol, function_def);
// The default value of the parameters needs to be evaluated in the
// enclosing scope.
for default in function_def
.parameters
.iter_non_variadic_params()
.filter_map(|param| param.default.as_deref())
{
self.visit_expr(default);
}
self.with_type_params(
NodeWithScopeRef::FunctionTypeParameters(function_def),
function_def.type_params.as_deref(),
@@ -426,6 +412,21 @@ where
builder.pop_scope()
},
);
// The default value of the parameters needs to be evaluated in the
// enclosing scope.
for default in function_def
.parameters
.iter_non_variadic_params()
.filter_map(|param| param.default.as_deref())
{
self.visit_expr(default);
}
// The symbol for the function name itself has to be evaluated
// at the end to match the runtime evaluation of parameter defaults
// and return-type annotations.
let symbol = self
.add_or_update_symbol(function_def.name.id.clone(), SymbolFlags::IS_DEFINED);
self.add_definition(symbol, function_def);
}
ast::Stmt::ClassDef(class) => {
for decorator in &class.decorator_list {
@@ -578,6 +579,27 @@ where
ast::Stmt::Break(_) => {
self.loop_break_states.push(self.flow_snapshot());
}
ast::Stmt::For(
for_stmt @ ast::StmtFor {
range: _,
is_async: _,
target,
iter,
body,
orelse,
},
) => {
// TODO add control flow similar to `ast::Stmt::While` above
self.add_standalone_expression(iter);
self.visit_expr(iter);
debug_assert!(self.current_assignment.is_none());
self.current_assignment = Some(for_stmt.into());
self.visit_expr(target);
self.current_assignment = None;
self.visit_body(body);
self.visit_body(orelse);
}
_ => {
walk_stmt(self, stmt);
}
@@ -624,6 +646,15 @@ where
Some(CurrentAssignment::AugAssign(aug_assign)) => {
self.add_definition(symbol, aug_assign);
}
Some(CurrentAssignment::For(node)) => {
self.add_definition(
symbol,
ForStmtDefinitionNodeRef {
iterable: &node.iter,
target: name_node,
},
);
}
Some(CurrentAssignment::Named(named)) => {
// TODO(dhruvmanila): If the current scope is a comprehension, then the
// named expression is implicitly nonlocal. This is yet to be
@@ -796,6 +827,7 @@ enum CurrentAssignment<'a> {
Assign(&'a ast::StmtAssign),
AnnAssign(&'a ast::StmtAnnAssign),
AugAssign(&'a ast::StmtAugAssign),
For(&'a ast::StmtFor),
Named(&'a ast::ExprNamed),
Comprehension {
node: &'a ast::Comprehension,
@@ -822,6 +854,12 @@ impl<'a> From<&'a ast::StmtAugAssign> for CurrentAssignment<'a> {
}
}
impl<'a> From<&'a ast::StmtFor> for CurrentAssignment<'a> {
fn from(value: &'a ast::StmtFor) -> Self {
Self::For(value)
}
}
impl<'a> From<&'a ast::ExprNamed> for CurrentAssignment<'a> {
fn from(value: &'a ast::ExprNamed) -> Self {
Self::Named(value)

View File

@@ -39,6 +39,7 @@ impl<'db> Definition<'db> {
pub(crate) enum DefinitionNodeRef<'a> {
Import(&'a ast::Alias),
ImportFrom(ImportFromDefinitionNodeRef<'a>),
For(ForStmtDefinitionNodeRef<'a>),
Function(&'a ast::StmtFunctionDef),
Class(&'a ast::StmtClassDef),
NamedExpression(&'a ast::ExprNamed),
@@ -92,6 +93,12 @@ impl<'a> From<ImportFromDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
}
}
impl<'a> From<ForStmtDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(value: ForStmtDefinitionNodeRef<'a>) -> Self {
Self::For(value)
}
}
impl<'a> From<AssignmentDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: AssignmentDefinitionNodeRef<'a>) -> Self {
Self::Assignment(node_ref)
@@ -134,6 +141,12 @@ pub(crate) struct WithItemDefinitionNodeRef<'a> {
pub(crate) target: &'a ast::ExprName,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ForStmtDefinitionNodeRef<'a> {
pub(crate) iterable: &'a ast::Expr,
pub(crate) target: &'a ast::ExprName,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ComprehensionDefinitionNodeRef<'a> {
pub(crate) node: &'a ast::Comprehension,
@@ -174,6 +187,12 @@ impl DefinitionNodeRef<'_> {
DefinitionNodeRef::AugmentedAssignment(augmented_assignment) => {
DefinitionKind::AugmentedAssignment(AstNodeRef::new(parsed, augmented_assignment))
}
DefinitionNodeRef::For(ForStmtDefinitionNodeRef { iterable, target }) => {
DefinitionKind::For(ForStmtDefinitionKind {
iterable: AstNodeRef::new(parsed.clone(), iterable),
target: AstNodeRef::new(parsed, target),
})
}
DefinitionNodeRef::Comprehension(ComprehensionDefinitionNodeRef { node, first }) => {
DefinitionKind::Comprehension(ComprehensionDefinitionKind {
node: AstNodeRef::new(parsed, node),
@@ -212,6 +231,10 @@ impl DefinitionNodeRef<'_> {
}) => target.into(),
Self::AnnotatedAssignment(node) => node.into(),
Self::AugmentedAssignment(node) => node.into(),
Self::For(ForStmtDefinitionNodeRef {
iterable: _,
target,
}) => target.into(),
Self::Comprehension(ComprehensionDefinitionNodeRef { node, first: _ }) => node.into(),
Self::Parameter(node) => match node {
ast::AnyParameterRef::Variadic(parameter) => parameter.into(),
@@ -232,6 +255,7 @@ pub enum DefinitionKind {
Assignment(AssignmentDefinitionKind),
AnnotatedAssignment(AstNodeRef<ast::StmtAnnAssign>),
AugmentedAssignment(AstNodeRef<ast::StmtAugAssign>),
For(ForStmtDefinitionKind),
Comprehension(ComprehensionDefinitionKind),
Parameter(AstNodeRef<ast::Parameter>),
ParameterWithDefault(AstNodeRef<ast::ParameterWithDefault>),
@@ -302,6 +326,22 @@ impl WithItemDefinitionKind {
}
}
#[derive(Clone, Debug)]
pub struct ForStmtDefinitionKind {
iterable: AstNodeRef<ast::Expr>,
target: AstNodeRef<ast::ExprName>,
}
impl ForStmtDefinitionKind {
pub(crate) fn iterable(&self) -> &ast::Expr {
self.iterable.node()
}
pub(crate) fn target(&self) -> &ast::ExprName {
self.target.node()
}
}
#[derive(Copy, Clone, Eq, PartialEq, Hash, Debug)]
pub(crate) struct DefinitionNodeKey(NodeKey);
@@ -347,6 +387,12 @@ impl From<&ast::StmtAugAssign> for DefinitionNodeKey {
}
}
impl From<&ast::StmtFor> for DefinitionNodeKey {
fn from(value: &ast::StmtFor) -> Self {
Self(NodeKey::from_node(value))
}
}
impl From<&ast::Comprehension> for DefinitionNodeKey {
fn from(node: &ast::Comprehension) -> Self {
Self(NodeKey::from_node(node))


@@ -1,8 +1,9 @@
use ruff_db::files::File;
use ruff_python_ast::name::Name;
use ruff_python_ast as ast;
use crate::builtins::builtins_scope;
use crate::semantic_index::definition::Definition;
use crate::semantic_index::ast_ids::HasScopedAstId;
use crate::semantic_index::definition::{Definition, DefinitionKind};
use crate::semantic_index::symbol::{ScopeId, ScopedSymbolId};
use crate::semantic_index::{
global_scope, semantic_index, symbol_table, use_def_map, DefinitionWithConstraints,
@@ -14,7 +15,8 @@ use crate::{Db, FxOrderSet};
pub(crate) use self::builder::{IntersectionBuilder, UnionBuilder};
pub(crate) use self::diagnostic::TypeCheckDiagnostics;
pub(crate) use self::infer::{
infer_definition_types, infer_expression_types, infer_scope_types, TypeInference,
infer_deferred_types, infer_definition_types, infer_expression_types, infer_scope_types,
TypeInference,
};
mod builder;
@@ -88,6 +90,24 @@ pub(crate) fn definition_ty<'db>(db: &'db dyn Db, definition: Definition<'db>) -
inference.definition_ty(definition)
}
/// Infer the type of a (possibly deferred) sub-expression of a [`Definition`].
///
/// ## Panics
/// If the given expression is not a sub-expression of the given [`Definition`].
pub(crate) fn definition_expression_ty<'db>(
db: &'db dyn Db,
definition: Definition<'db>,
expression: &ast::Expr,
) -> Type<'db> {
let expr_id = expression.scoped_ast_id(db, definition.scope(db));
let inference = infer_definition_types(db, definition);
if let Some(ty) = inference.try_expression_ty(expr_id) {
ty
} else {
infer_deferred_types(db, definition).expression_ty(expr_id)
}
}
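The lookup order in `definition_expression_ty` (consult the eager inference results first, then fall back to the deferred-types query) can be sketched with plain maps. The `Inference` struct and string type names below are illustrative stand-ins, not red-knot's API:

```rust
use std::collections::HashMap;

// Stand-ins for the two query results consulted above: types inferred
// eagerly for a definition, and types from its deferred pass.
struct Inference {
    eager: HashMap<u32, &'static str>,
    deferred: HashMap<u32, &'static str>,
}

impl Inference {
    // Prefer the eager result; fall back to the deferred map on a miss.
    fn expression_ty(&self, id: u32) -> Option<&'static str> {
        self.eager.get(&id).or_else(|| self.deferred.get(&id)).copied()
    }
}

fn main() {
    let inference = Inference {
        eager: HashMap::from([(0, "int")]),
        deferred: HashMap::from([(1, "str")]), // e.g. a class base in a stub
    };
    assert_eq!(inference.expression_ty(0), Some("int"));
    assert_eq!(inference.expression_ty(1), Some("str"));
    assert_eq!(inference.expression_ty(2), None);
}
```

The real function panics instead of returning `None`, because asking for an expression outside the definition is a caller bug.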
/// Infer the combined type of an array of [`Definition`]s, plus one optional "unbound type".
///
/// Will return a union if there is more than one definition, or at least one plus an unbound
@@ -181,6 +201,11 @@ pub enum Type<'db> {
IntLiteral(i64),
/// A boolean literal, either `True` or `False`.
BooleanLiteral(bool),
/// A string literal
StringLiteral(StringLiteralType<'db>),
/// A string known to originate only from literal values, but whose value is not known (unlike
/// `StringLiteral` above).
LiteralString,
/// A bytes literal
BytesLiteral(BytesLiteralType<'db>),
// TODO protocols, callable types, overloads, generics, type vars
@@ -238,7 +263,7 @@ impl<'db> Type<'db> {
/// us to explicitly consider whether to handle an error or propagate
/// it up the call stack.
#[must_use]
pub fn member(&self, db: &'db dyn Db, name: &Name) -> Type<'db> {
pub fn member(&self, db: &'db dyn Db, name: &ast::name::Name) -> Type<'db> {
match self {
Type::Any => Type::Any,
Type::Never => {
@@ -278,6 +303,16 @@ impl<'db> Type<'db> {
Type::Unknown
}
Type::BooleanLiteral(_) => Type::Unknown,
Type::StringLiteral(_) => {
// TODO defer to `typing.LiteralString`/`builtins.str` methods
// from typeshed's stubs
Type::Unknown
}
Type::LiteralString => {
// TODO defer to `typing.LiteralString`/`builtins.str` methods
// from typeshed's stubs
Type::Unknown
}
Type::BytesLiteral(_) => {
// TODO defer to Type::Instance(<bytes from typeshed>).member
Type::Unknown
@@ -299,7 +334,7 @@ impl<'db> Type<'db> {
#[salsa::interned]
pub struct FunctionType<'db> {
/// name of the function at definition
pub name: Name,
pub name: ast::name::Name,
/// types of all decorators on this function
decorators: Vec<Type<'db>>,
@@ -314,19 +349,33 @@ impl<'db> FunctionType<'db> {
#[salsa::interned]
pub struct ClassType<'db> {
/// Name of the class at definition
pub name: Name,
pub name: ast::name::Name,
/// Types of all class bases
bases: Vec<Type<'db>>,
definition: Definition<'db>,
body_scope: ScopeId<'db>,
}
impl<'db> ClassType<'db> {
/// Return an iterator over the types of this class's bases.
///
/// # Panics
/// If `definition` is not a `DefinitionKind::Class`.
pub fn bases(&self, db: &'db dyn Db) -> impl Iterator<Item = Type<'db>> {
let definition = self.definition(db);
let DefinitionKind::Class(class_stmt_node) = definition.node(db) else {
panic!("Class type definition must have DefinitionKind::Class");
};
class_stmt_node
.bases()
.iter()
.map(move |base_expr| definition_expression_ty(db, definition, base_expr))
}
/// Returns the class member of this class named `name`.
///
/// The member resolves to a member of the class itself or any of its bases.
pub fn class_member(self, db: &'db dyn Db, name: &Name) -> Type<'db> {
pub fn class_member(self, db: &'db dyn Db, name: &ast::name::Name) -> Type<'db> {
let member = self.own_class_member(db, name);
if !member.is_unbound() {
return member;
@@ -336,12 +385,12 @@ impl<'db> ClassType<'db> {
}
/// Returns the inferred type of the class member named `name`.
pub fn own_class_member(self, db: &'db dyn Db, name: &Name) -> Type<'db> {
pub fn own_class_member(self, db: &'db dyn Db, name: &ast::name::Name) -> Type<'db> {
let scope = self.body_scope(db);
symbol_ty_by_name(db, scope, name)
}
pub fn inherited_class_member(self, db: &'db dyn Db, name: &Name) -> Type<'db> {
pub fn inherited_class_member(self, db: &'db dyn Db, name: &ast::name::Name) -> Type<'db> {
for base in self.bases(db) {
let member = base.member(db, name);
if !member.is_unbound() {
@@ -378,6 +427,12 @@ pub struct IntersectionType<'db> {
negative: FxOrderSet<Type<'db>>,
}
#[salsa::interned]
pub struct StringLiteralType<'db> {
#[return_ref]
value: Box<str>,
}
#[salsa::interned]
pub struct BytesLiteralType<'db> {
#[return_ref]
@@ -432,7 +487,7 @@ mod tests {
let foo = system_path_to_file(&db, "src/foo.py").context("Failed to resolve foo.py")?;
let diagnostics = super::check_types(&db, foo);
assert_diagnostic_messages(&diagnostics, &["Import 'bar' could not be resolved."]);
assert_diagnostic_messages(&diagnostics, &["Cannot resolve import 'bar'."]);
Ok(())
}
@@ -445,7 +500,7 @@ mod tests {
.unwrap();
let foo = system_path_to_file(&db, "src/foo.py").unwrap();
let diagnostics = super::check_types(&db, foo);
assert_diagnostic_messages(&diagnostics, &["Import 'bar' could not be resolved."]);
assert_diagnostic_messages(&diagnostics, &["Cannot resolve import 'bar'."]);
}
#[test]
@@ -457,15 +512,9 @@ mod tests {
let b_file = system_path_to_file(&db, "/src/b.py").unwrap();
let b_file_diagnostics = super::check_types(&db, b_file);
assert_diagnostic_messages(
&b_file_diagnostics,
&["Could not resolve import of 'thing' from 'a'"],
);
assert_diagnostic_messages(&b_file_diagnostics, &["Module 'a' has no member 'thing'"]);
}
#[ignore = "\
A spurious second 'Unresolved import' diagnostic message is emitted on `b.py`, \
despite the symbol existing in the symbol table for `a.py`"]
#[test]
fn resolved_import_of_symbol_from_unresolved_import() {
let mut db = setup_db();
@@ -478,10 +527,7 @@ despite the symbol existing in the symbol table for `a.py`"]
let a_file = system_path_to_file(&db, "/src/a.py").unwrap();
let a_file_diagnostics = super::check_types(&db, a_file);
assert_diagnostic_messages(
&a_file_diagnostics,
&["Import 'foo' could not be resolved."],
);
assert_diagnostic_messages(&a_file_diagnostics, &["Cannot resolve import 'foo'."]);
// Importing the unresolved import into a second first-party file should not trigger
// an additional "unresolved import" violation


@@ -41,6 +41,12 @@ impl Display for DisplayType<'_> {
Type::BooleanLiteral(boolean) => {
write!(f, "Literal[{}]", if *boolean { "True" } else { "False" })
}
Type::StringLiteral(string) => write!(
f,
r#"Literal["{}"]"#,
string.value(self.db).replace('"', r#"\""#)
),
Type::LiteralString => write!(f, "LiteralString"),
Type::BytesLiteral(bytes) => {
let escape =
AsciiEscape::with_preferred_quote(bytes.value(self.db).as_ref(), Quote::Double);


@@ -1,4 +1,4 @@
//! We have three Salsa queries for inferring types at three different granularities: scope-level,
//! We have Salsa queries for inferring types at three different granularities: scope-level,
//! definition-level, and expression-level.
//!
//! Scope-level inference is for when we are actually checking a file, and need to check types for
@@ -11,15 +11,21 @@
//! allows us to handle import cycles without getting into a cycle of scope-level inference
//! queries.
//!
//! The expression-level inference query is needed in only a few cases. Since an assignment
//! statement can have multiple targets (via `x = y = z` or unpacking `(x, y) = z`, it can be
//! associated with multiple definitions. In order to avoid inferring the type of the right-hand
//! side once per definition, we infer it as a standalone query, so its result will be cached by
//! Salsa. We also need the expression-level query for inferring types in type guard expressions
//! (e.g. the test clause of an `if` statement.)
//! The expression-level inference query is needed in only a few cases. Since some assignments can
//! have multiple targets (via `x = y = z` or unpacking `(x, y) = z`), they can be associated with
//! multiple definitions (one per assigned symbol). In order to avoid inferring the type of the
//! right-hand side once per definition, we infer it as a standalone query, so its result will be
//! cached by Salsa. We also need the expression-level query for inferring types in type guard
//! expressions (e.g. the test clause of an `if` statement).
//!
//! Inferring types at any of the three region granularities returns a [`TypeInference`], which
//! holds types for every [`Definition`] and expression within the inferred region.
//!
//! Some type expressions can require deferred evaluation. This includes all type expressions in
//! stub files, or annotation expressions in modules with `from __future__ import annotations`, or
//! stringified annotations. We have a fourth Salsa query for inferring the deferred types
//! associated with a particular definition. Scope-level inference infers deferred types for all
//! definitions once the rest of the types in the scope have been inferred.
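The scheduling described above (deferred types are inferred only after the rest of the scope) can be sketched as a toy two-pass walk; the names and shapes here are illustrative, not red-knot's query machinery:

```rust
// Toy model: eager definitions are typed on the first pass; deferred
// ones (e.g. class bases in a stub file) are queued and resolved in a
// second pass, once every name in the scope is already known. This is
// what allows self-referential definitions like typeshed's `str`.
fn infer_scope(defs: &[(&str, bool)]) -> Vec<(String, &'static str)> {
    let mut results = Vec::new();
    let mut deferred = Vec::new();
    for &(name, is_deferred) in defs {
        if is_deferred {
            deferred.push(name);
        } else {
            results.push((name.to_string(), "eager"));
        }
    }
    // Second pass: the whole scope has been seen.
    for name in deferred {
        results.push((name.to_string(), "deferred"));
    }
    results
}

fn main() {
    let order = infer_scope(&[("str", true), ("int", false)]);
    assert_eq!(
        order,
        vec![("int".to_string(), "eager"), ("str".to_string(), "deferred")]
    );
}
```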
use std::num::NonZeroU32;
use rustc_hash::FxHashMap;
@@ -28,8 +34,7 @@ use salsa::plumbing::AsId;
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_python_ast as ast;
use ruff_python_ast::{AnyNodeRef, ExprContext};
use ruff_python_ast::{self as ast, AnyNodeRef, ExprContext, UnaryOp};
use ruff_text_size::Ranged;
use crate::builtins::builtins_scope;
@@ -43,8 +48,8 @@ use crate::semantic_index::symbol::{FileScopeId, NodeWithScopeKind, NodeWithScop
use crate::semantic_index::SemanticIndex;
use crate::types::diagnostic::{TypeCheckDiagnostic, TypeCheckDiagnostics};
use crate::types::{
builtins_symbol_ty_by_name, definitions_ty, global_symbol_ty_by_name, BytesLiteralType,
ClassType, FunctionType, Name, Type, UnionBuilder,
builtins_symbol_ty_by_name, definitions_ty, global_symbol_ty_by_name, symbol_ty,
BytesLiteralType, ClassType, FunctionType, StringLiteralType, Type, UnionBuilder,
};
use crate::Db;
@@ -97,6 +102,28 @@ pub(crate) fn infer_definition_types<'db>(
TypeInferenceBuilder::new(db, InferenceRegion::Definition(definition), index).finish()
}
/// Infer types for all deferred type expressions in a [`Definition`].
///
/// Deferred expressions are type expressions (annotations, base classes, aliases...) in a stub
/// file, or in a file with `from __future__ import annotations`, or stringified annotations.
#[salsa::tracked(return_ref)]
pub(crate) fn infer_deferred_types<'db>(
db: &'db dyn Db,
definition: Definition<'db>,
) -> TypeInference<'db> {
let file = definition.file(db);
let _span = tracing::trace_span!(
"infer_deferred_types",
definition = ?definition.as_id(),
file = %file.path(db)
)
.entered();
let index = semantic_index(db, file);
TypeInferenceBuilder::new(db, InferenceRegion::Deferred(definition), index).finish()
}
/// Infer all types for an [`Expression`] (including sub-expressions).
/// Use rarely; only for cases where we'd otherwise risk double-inferring an expression: RHS of an
/// assignment, which might be unpacking/multi-target and thus part of multiple definitions, or a
@@ -119,8 +146,13 @@ pub(crate) fn infer_expression_types<'db>(
/// A region within which we can infer types.
pub(crate) enum InferenceRegion<'db> {
/// infer types for a standalone [`Expression`]
Expression(Expression<'db>),
/// infer types for a [`Definition`]
Definition(Definition<'db>),
/// infer deferred types for a [`Definition`]
Deferred(Definition<'db>),
/// infer types for an entire [`ScopeId`]
Scope(ScopeId<'db>),
}
@@ -135,14 +167,20 @@ pub(crate) struct TypeInference<'db> {
/// The diagnostics for this region.
diagnostics: TypeCheckDiagnostics,
/// Are there deferred type expressions in this region?
has_deferred: bool,
}
impl<'db> TypeInference<'db> {
#[allow(unused)]
pub(crate) fn expression_ty(&self, expression: ScopedExpressionId) -> Type<'db> {
self.expressions[&expression]
}
pub(crate) fn try_expression_ty(&self, expression: ScopedExpressionId) -> Option<Type<'db>> {
self.expressions.get(&expression).copied()
}
pub(crate) fn definition_ty(&self, definition: Definition<'db>) -> Type<'db> {
self.definitions[&definition]
}
@@ -218,6 +256,12 @@ struct TypeInferenceBuilder<'db> {
}
impl<'db> TypeInferenceBuilder<'db> {
/// How big a string do we build before bailing?
///
/// This is a fairly arbitrary number. It should be *far* more than enough
/// for most use cases, but we can reevaluate it later if useful.
const MAX_STRING_LITERAL_SIZE: usize = 4096;
/// Creates a new builder for inferring types in a region.
pub(super) fn new(
db: &'db dyn Db,
@@ -226,7 +270,9 @@ impl<'db> TypeInferenceBuilder<'db> {
) -> Self {
let (file, scope) = match region {
InferenceRegion::Expression(expression) => (expression.file(db), expression.scope(db)),
InferenceRegion::Definition(definition) => (definition.file(db), definition.scope(db)),
InferenceRegion::Definition(definition) | InferenceRegion::Deferred(definition) => {
(definition.file(db), definition.scope(db))
}
InferenceRegion::Scope(scope) => (scope.file(db), scope),
};
@@ -246,6 +292,17 @@ impl<'db> TypeInferenceBuilder<'db> {
self.types.definitions.extend(inference.definitions.iter());
self.types.expressions.extend(inference.expressions.iter());
self.types.diagnostics.extend(&inference.diagnostics);
self.types.has_deferred |= inference.has_deferred;
}
/// Are we currently inferring types in a stub file?
fn is_stub(&self) -> bool {
self.file.is_stub(self.db.upcast())
}
/// Are we currently inferring deferred types?
fn is_deferred(&self) -> bool {
matches!(self.region, InferenceRegion::Deferred(_))
}
/// Infers types in the given [`InferenceRegion`].
@@ -253,6 +310,7 @@ impl<'db> TypeInferenceBuilder<'db> {
match self.region {
InferenceRegion::Scope(scope) => self.infer_region_scope(scope),
InferenceRegion::Definition(definition) => self.infer_region_definition(definition),
InferenceRegion::Deferred(definition) => self.infer_region_deferred(definition),
InferenceRegion::Expression(expression) => self.infer_region_expression(expression),
}
}
@@ -286,6 +344,20 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_generator_expression_scope(generator.node());
}
}
if self.types.has_deferred {
let mut deferred_expression_types: FxHashMap<ScopedExpressionId, Type<'db>> =
FxHashMap::default();
for definition in self.types.definitions.keys() {
if infer_definition_types(self.db, *definition).has_deferred {
let deferred = infer_deferred_types(self.db, *definition);
deferred_expression_types.extend(deferred.expressions.iter());
}
}
self.types
.expressions
.extend(deferred_expression_types.iter());
}
}
fn infer_region_definition(&mut self, definition: Definition<'db>) {
@@ -317,6 +389,13 @@ impl<'db> TypeInferenceBuilder<'db> {
DefinitionKind::AugmentedAssignment(augmented_assignment) => {
self.infer_augment_assignment_definition(augmented_assignment.node(), definition);
}
DefinitionKind::For(for_statement_definition) => {
self.infer_for_statement_definition(
for_statement_definition.target(),
for_statement_definition.iterable(),
definition,
);
}
DefinitionKind::NamedExpression(named_expression) => {
self.infer_named_expression_definition(named_expression.node(), definition);
}
@@ -339,6 +418,19 @@ impl<'db> TypeInferenceBuilder<'db> {
}
}
fn infer_region_deferred(&mut self, definition: Definition<'db>) {
match definition.node(self.db) {
DefinitionKind::Function(_function) => {
// TODO self.infer_function_deferred(function.node());
}
DefinitionKind::Class(class) => self.infer_class_deferred(class.node()),
DefinitionKind::AnnotatedAssignment(_annotated_assignment) => {
// TODO self.infer_annotated_assignment_deferred(annotated_assignment.node());
}
_ => {}
}
}
fn infer_region_expression(&mut self, expression: Expression<'db>) {
self.infer_expression(expression.node_ref(self.db));
}
@@ -368,9 +460,9 @@ impl<'db> TypeInferenceBuilder<'db> {
let Some(type_params) = function.type_params.as_deref() else {
panic!("function type params scope without type params");
};
self.infer_optional_expression(function.returns.as_deref());
self.infer_type_parameters(type_params);
self.infer_parameters(&function.parameters);
self.infer_optional_expression(function.returns.as_deref());
}
fn infer_function_body(&mut self, function: &ast::StmtFunctionDef) {
@@ -543,7 +635,7 @@ impl<'db> TypeInferenceBuilder<'db> {
name,
type_params: _,
decorator_list,
arguments,
arguments: _,
body: _,
} = class;
@@ -551,21 +643,40 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_decorator(decorator);
}
// TODO if there are type params, the bases should be inferred inside that scope (only)
let bases = arguments
.as_deref()
.map(|arguments| self.infer_arguments(arguments))
.unwrap_or(Vec::new());
let body_scope = self
.index
.node_scope(NodeWithScopeRef::Class(class))
.to_scope_id(self.db, self.file);
let class_ty = Type::Class(ClassType::new(self.db, name.id.clone(), bases, body_scope));
let class_ty = Type::Class(ClassType::new(
self.db,
name.id.clone(),
definition,
body_scope,
));
self.types.definitions.insert(definition, class_ty);
for keyword in class.keywords() {
self.infer_expression(&keyword.value);
}
// inference of bases deferred in stubs
// TODO also defer stringified generic type parameters
if !self.is_stub() {
for base in class.bases() {
self.infer_expression(base);
}
}
}
fn infer_class_deferred(&mut self, class: &ast::StmtClassDef) {
if self.is_stub() {
self.types.has_deferred = true;
for base in class.bases() {
self.infer_expression(base);
}
}
}
fn infer_if_statement(&mut self, if_statement: &ast::StmtIf) {
@@ -865,11 +976,48 @@ impl<'db> TypeInferenceBuilder<'db> {
} = for_statement;
self.infer_expression(iter);
self.infer_expression(target);
// TODO more complex assignment targets
if let ast::Expr::Name(name) = &**target {
self.infer_definition(name);
} else {
self.infer_expression(target);
}
self.infer_body(body);
self.infer_body(orelse);
}
fn infer_for_statement_definition(
&mut self,
target: &ast::ExprName,
iterable: &ast::Expr,
definition: Definition<'db>,
) {
let expression = self.index.expression(iterable);
let result = infer_expression_types(self.db, expression);
self.extend(result);
let iterable_ty = self
.types
.expression_ty(iterable.scoped_ast_id(self.db, self.scope));
// TODO(Alex): only a valid iterable if the *type* of `iterable_ty` has an `__iter__`
// member (dunders are never looked up on an instance)
let _dunder_iter_ty = iterable_ty.member(self.db, &ast::name::Name::from("__iter__"));
// TODO(Alex):
// - infer the return type of the `__iter__` method, which gives us the iterator
// - lookup the `__next__` method on the iterator
// - infer the return type of the iterator's `__next__` method,
// which gives us the type of the variable being bound here
// (...or the type of the object being unpacked into multiple definitions, if it's something like
// `for k, v in d.items(): ...`)
let loop_var_value_ty = Type::Unknown;
self.types
.expressions
.insert(target.scoped_ast_id(self.db, self.scope), loop_var_value_ty);
self.types.definitions.insert(definition, loop_var_value_ty);
}
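The TODO above describes a concrete lookup chain for typing the loop variable. A toy model of that chain (the member table and all type names are purely illustrative):

```rust
use std::collections::HashMap;

// Toy member table keyed by (type name, attribute name). Typing a `for`
// target follows the chain sketched in the TODO: iterable type ->
// `__iter__` return type (the iterator) -> `__next__` return type.
fn loop_var_ty(
    members: &HashMap<(&str, &str), &'static str>,
    iterable_ty: &str,
) -> Option<&'static str> {
    let iterator_ty = members.get(&(iterable_ty, "__iter__"))?;
    members.get(&(*iterator_ty, "__next__")).copied()
}

fn main() {
    let members = HashMap::from([
        (("list[int]", "__iter__"), "list_iterator[int]"),
        (("list_iterator[int]", "__next__"), "int"),
    ]);
    assert_eq!(loop_var_ty(&members, "list[int]"), Some("int"));
    // No `__iter__` member: not a valid iterable.
    assert_eq!(loop_var_ty(&members, "float"), None);
}
```

As the comment in the diff notes, dunders must be looked up on the *type*, never on the instance, which is why this chain starts from the iterable's type.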
fn infer_while_statement(&mut self, while_statement: &ast::StmtWhile) {
let ast::StmtWhile {
range: _,
@@ -891,31 +1039,23 @@ impl<'db> TypeInferenceBuilder<'db> {
}
}
fn infer_import_definition(&mut self, alias: &ast::Alias, definition: Definition<'db>) {
fn infer_import_definition(&mut self, alias: &'db ast::Alias, definition: Definition<'db>) {
let ast::Alias {
range: _,
name,
asname: _,
} = alias;
let module_ty = ModuleName::new(name)
.ok_or(ModuleResolutionError::InvalidSyntax)
.and_then(|module_name| self.module_ty_from_name(module_name));
let module_ty = match module_ty {
Ok(ty) => ty,
Err(ModuleResolutionError::InvalidSyntax) => {
tracing::debug!("Failed to resolve import due to invalid syntax");
Type::Unknown
}
Err(ModuleResolutionError::UnresolvedModule) => {
self.add_diagnostic(
AnyNodeRef::Alias(alias),
"unresolved-import",
format_args!("Import '{name}' could not be resolved."),
);
let module_ty = if let Some(module_name) = ModuleName::new(name) {
if let Some(module) = self.module_ty_from_name(module_name) {
module
} else {
self.unresolved_module_diagnostic(alias, 0, Some(name));
Type::Unknown
}
} else {
tracing::debug!("Failed to resolve import due to invalid syntax");
Type::Unknown
};
self.types.definitions.insert(definition, module_ty);
@@ -955,6 +1095,23 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_optional_expression(cause.as_deref());
}
fn unresolved_module_diagnostic(
&mut self,
import_node: impl Into<AnyNodeRef<'db>>,
level: u32,
module: Option<&str>,
) {
self.add_diagnostic(
import_node.into(),
"unresolved-import",
format_args!(
"Cannot resolve import '{}{}'.",
".".repeat(level as usize),
module.unwrap_or_default()
),
);
}
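The message formatting above prefixes one dot per relative-import level. A minimal standalone version (the function name is illustrative):

```rust
// Render the module portion of a "Cannot resolve import" message:
// one leading dot per level of a relative import, then the module
// name if there is one (`from .. import x` has no module part).
fn import_display(level: u32, module: Option<&str>) -> String {
    format!("{}{}", ".".repeat(level as usize), module.unwrap_or_default())
}

fn main() {
    assert_eq!(import_display(0, Some("foo")), "foo");
    assert_eq!(import_display(2, Some("bar.baz")), "..bar.baz");
    assert_eq!(import_display(1, None), ".");
}
```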
/// Given a `from .foo import bar` relative import, resolve the relative module
/// we're importing `bar` from into an absolute [`ModuleName`]
/// using the name of the module we're currently analyzing.
@@ -969,15 +1126,9 @@ impl<'db> TypeInferenceBuilder<'db> {
&self,
tail: Option<&str>,
level: NonZeroU32,
) -> Result<ModuleName, ModuleResolutionError> {
let Some(module) = file_to_module(self.db, self.file) else {
tracing::debug!(
"Relative module resolution '{}' failed; could not resolve file '{}' to a module",
format_import_from_module(level.get(), tail),
self.file.path(self.db)
);
return Err(ModuleResolutionError::UnresolvedModule);
};
) -> Result<ModuleName, ModuleNameResolutionError> {
let module = file_to_module(self.db, self.file)
.ok_or(ModuleNameResolutionError::UnknownCurrentModule)?;
let mut level = level.get();
if module.kind().is_package() {
level -= 1;
@@ -986,22 +1137,18 @@ impl<'db> TypeInferenceBuilder<'db> {
for _ in 0..level {
module_name = module_name
.parent()
.ok_or(ModuleResolutionError::UnresolvedModule)?;
.ok_or(ModuleNameResolutionError::TooManyDots)?;
}
if let Some(tail) = tail {
if let Some(valid_tail) = ModuleName::new(tail) {
module_name.extend(&valid_tail);
} else {
tracing::debug!("Relative module resolution failed: invalid syntax");
return Err(ModuleResolutionError::InvalidSyntax);
}
let tail = ModuleName::new(tail).ok_or(ModuleNameResolutionError::InvalidSyntax)?;
module_name.extend(&tail);
}
Ok(module_name)
}
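A hedged sketch of the relative-name resolution above, with plain string components standing in for `ModuleName`. For a regular module, each level pops one trailing component; a package consumes one fewer level because its first dot refers to itself. `None` corresponds to the `TooManyDots` error:

```rust
// All names here are illustrative, not red-knot's API.
fn relative_module(
    current: &[&str],
    is_package: bool,
    level: u32,
    tail: Option<&str>,
) -> Option<Vec<String>> {
    let mut parts: Vec<String> = current.iter().map(|s| (*s).to_string()).collect();
    let mut level = if is_package { level.saturating_sub(1) } else { level };
    while level > 0 {
        if parts.len() <= 1 {
            return None; // walked past the top-level package: too many dots
        }
        parts.pop();
        level -= 1;
    }
    if let Some(tail) = tail {
        parts.extend(tail.split('.').map(str::to_string));
    }
    Some(parts)
}

fn main() {
    // `from .other import x` inside plain module `pkg.sub.mod`:
    assert_eq!(
        relative_module(&["pkg", "sub", "mod"], false, 1, Some("other")),
        Some(vec!["pkg".into(), "sub".into(), "other".into()])
    );
    // Too many leading dots from a top-level module:
    assert_eq!(relative_module(&["pkg"], false, 2, None), None);
}
```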
fn infer_import_from_definition(
&mut self,
import_from: &ast::StmtImportFrom,
import_from: &'db ast::StmtImportFrom,
alias: &ast::Alias,
definition: Definition<'db>,
) {
@@ -1015,8 +1162,8 @@ impl<'db> TypeInferenceBuilder<'db> {
// `follow_relative_import_bare_to_module()` and
// `follow_nonexistent_import_bare_to_module()`.
let ast::StmtImportFrom { module, level, .. } = import_from;
tracing::trace!("Resolving imported object {alias:?} from statement {import_from:?}");
let module = module.as_deref();
let module_name = if let Some(level) = NonZeroU32::new(*level) {
tracing::trace!(
"Resolving imported object '{}' from module '{}' relative to file '{}'",
@@ -1033,10 +1180,41 @@ impl<'db> TypeInferenceBuilder<'db> {
);
module
.and_then(ModuleName::new)
.ok_or(ModuleResolutionError::InvalidSyntax)
.ok_or(ModuleNameResolutionError::InvalidSyntax)
};
let module_ty = module_name.and_then(|module_name| self.module_ty_from_name(module_name));
let module_ty = match module_name {
Ok(name) => {
if let Some(ty) = self.module_ty_from_name(name) {
ty
} else {
self.unresolved_module_diagnostic(import_from, *level, module);
Type::Unknown
}
}
Err(ModuleNameResolutionError::InvalidSyntax) => {
tracing::debug!("Failed to resolve import due to invalid syntax");
// Invalid syntax diagnostics are emitted elsewhere.
Type::Unknown
}
Err(ModuleNameResolutionError::TooManyDots) => {
tracing::debug!(
"Relative module resolution '{}' failed: too many leading dots",
format_import_from_module(*level, module),
);
self.unresolved_module_diagnostic(import_from, *level, module);
Type::Unknown
}
Err(ModuleNameResolutionError::UnknownCurrentModule) => {
tracing::debug!(
"Relative module resolution '{}' failed; could not resolve file '{}' to a module",
format_import_from_module(*level, module),
self.file.path(self.db)
);
self.unresolved_module_diagnostic(import_from, *level, module);
Type::Unknown
}
};
let ast::Alias {
range: _,
@@ -1044,39 +1222,30 @@ impl<'db> TypeInferenceBuilder<'db> {
asname: _,
} = alias;
// If a symbol is unbound in the module the symbol was originally defined in,
// when we're trying to import the symbol from that module into "our" module,
// the runtime error will occur immediately (rather than when the symbol is *used*,
// as would be the case for a symbol with type `Unbound`), so it's appropriate to
// think of the type of the imported symbol as `Unknown` rather than `Unbound`
let member_ty = module_ty
.unwrap_or(Type::Unbound)
.member(self.db, &Name::new(&name.id))
.replace_unbound_with(self.db, Type::Unknown);
let member_ty = module_ty.member(self.db, &ast::name::Name::new(&name.id));
if matches!(module_ty, Err(ModuleResolutionError::UnresolvedModule)) {
self.add_diagnostic(
AnyNodeRef::StmtImportFrom(import_from),
"unresolved-import",
format_args!(
"Import '{}{}' could not be resolved.",
".".repeat(*level as usize),
module.unwrap_or_default()
),
);
} else if module_ty.is_ok() && member_ty.is_unknown() {
// TODO: What if it's a union where one of the elements is `Unbound`?
if member_ty.is_unbound() {
self.add_diagnostic(
AnyNodeRef::Alias(alias),
"unresolved-import",
format_args!(
"Could not resolve import of '{name}' from '{}{}'",
"Module '{}{}' has no member '{name}'",
".".repeat(*level as usize),
module.unwrap_or_default()
),
);
}
self.types.definitions.insert(definition, member_ty);
// If a symbol is unbound in the module the symbol was originally defined in,
// when we're trying to import the symbol from that module into "our" module,
// the runtime error will occur immediately (rather than when the symbol is *used*,
// as would be the case for a symbol with type `Unbound`), so it's appropriate to
// think of the type of the imported symbol as `Unknown` rather than `Unbound`
self.types.definitions.insert(
definition,
member_ty.replace_unbound_with(self.db, Type::Unknown),
);
}
fn infer_return_statement(&mut self, ret: &ast::StmtReturn) {
@@ -1090,13 +1259,8 @@ impl<'db> TypeInferenceBuilder<'db> {
}
}
fn module_ty_from_name(
&self,
module_name: ModuleName,
) -> Result<Type<'db>, ModuleResolutionError> {
resolve_module(self.db, module_name)
.map(|module| Type::Module(module.file()))
.ok_or(ModuleResolutionError::UnresolvedModule)
fn module_ty_from_name(&self, module_name: ModuleName) -> Option<Type<'db>> {
resolve_module(self.db, module_name).map(|module| Type::Module(module.file()))
}
fn infer_decorator(&mut self, decorator: &ast::Decorator) -> Type<'db> {
@@ -1199,13 +1363,16 @@ impl<'db> TypeInferenceBuilder<'db> {
Type::BooleanLiteral(*value)
}
#[allow(clippy::unused_self)]
fn infer_string_literal_expression(&mut self, _literal: &ast::ExprStringLiteral) -> Type<'db> {
// TODO Literal["..."] or str
Type::Unknown
fn infer_string_literal_expression(&mut self, literal: &ast::ExprStringLiteral) -> Type<'db> {
let value = if literal.value.len() <= Self::MAX_STRING_LITERAL_SIZE {
literal.value.to_str().into()
} else {
Box::default()
};
Type::StringLiteral(StringLiteralType::new(self.db, value))
}
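The size guard above can be isolated as follows; the constant matches `MAX_STRING_LITERAL_SIZE` in the diff, while the function name is illustrative:

```rust
// Cap mirrors `TypeInferenceBuilder::MAX_STRING_LITERAL_SIZE`.
const MAX_STRING_LITERAL_SIZE: usize = 4096;

// Keep a string literal's value only when it is small enough to be
// worth tracking; otherwise store an empty placeholder (the real code
// still records a string-literal type either way).
fn literal_value(source: &str) -> Box<str> {
    if source.len() <= MAX_STRING_LITERAL_SIZE {
        source.into()
    } else {
        Box::default()
    }
}

fn main() {
    assert_eq!(&*literal_value("hello"), "hello");
    assert_eq!(&*literal_value(&"x".repeat(10_000)), "");
}
```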
#[allow(clippy::unused_self)]
fn infer_bytes_literal_expression(&mut self, literal: &ast::ExprBytesLiteral) -> Type<'db> {
// TODO: ignoring r/R prefixes for now, should normalize bytes values
Type::BytesLiteral(BytesLiteralType::new(
@@ -1268,8 +1435,7 @@ impl<'db> TypeInferenceBuilder<'db> {
&mut self,
_literal: &ast::ExprEllipsisLiteral,
) -> Type<'db> {
// TODO Ellipsis
Type::Unknown
builtins_symbol_ty_by_name(self.db, "Ellipsis")
}
fn infer_tuple_expression(&mut self, tuple: &ast::ExprTuple) -> Type<'db> {
@@ -1601,10 +1767,19 @@ impl<'db> TypeInferenceBuilder<'db> {
fn infer_name_expression(&mut self, name: &ast::ExprName) -> Type<'db> {
let ast::ExprName { range: _, id, ctx } = name;
let file_scope_id = self.scope.file_scope_id(self.db);
// if we're inferring types of deferred expressions, always treat them as public symbols
if self.is_deferred() {
let symbols = self.index.symbol_table(file_scope_id);
let symbol = symbols
.symbol_id_by_name(id)
.expect("Expected the symbol table to create a symbol for every Name node");
return symbol_ty(self.db, self.scope, symbol);
}
match ctx {
ExprContext::Load => {
let file_scope_id = self.scope.file_scope_id(self.db);
let use_def = self.index.use_def_map(file_scope_id);
let use_id = name.scoped_use_id(self.db, self.scope);
let may_be_unbound = use_def.use_may_be_unbound(use_id);
@@ -1654,7 +1829,7 @@ impl<'db> TypeInferenceBuilder<'db> {
} = attribute;
let value_ty = self.infer_expression(value);
let member_ty = value_ty.member(self.db, &Name::new(&attr.id));
let member_ty = value_ty.member(self.db, &ast::name::Name::new(&attr.id));
match ctx {
ExprContext::Load => member_ty,
@@ -1666,14 +1841,14 @@ impl<'db> TypeInferenceBuilder<'db> {
fn infer_unary_expression(&mut self, unary: &ast::ExprUnaryOp) -> Type<'db> {
let ast::ExprUnaryOp {
range: _,
op: _,
op,
operand,
} = unary;
self.infer_expression(operand);
// TODO unary op types
Type::Unknown
match (op, self.infer_expression(operand)) {
(UnaryOp::USub, Type::IntLiteral(value)) => Type::IntLiteral(-value),
_ => Type::Unknown, // TODO other unary op types
}
}
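The fold above negates the literal value directly. A standalone sketch (enum and names illustrative) that uses `checked_neg` so the one overflowing input, `i64::MIN`, degrades gracefully instead of panicking in debug builds:

```rust
#[derive(Debug, PartialEq)]
enum Ty {
    IntLiteral(i64),
    Unknown,
}

// Constant-fold unary minus on an int literal. `checked_neg` covers the
// single case that overflows two's-complement negation: `i64::MIN`.
fn fold_usub(operand: Ty) -> Ty {
    match operand {
        Ty::IntLiteral(v) => v.checked_neg().map(Ty::IntLiteral).unwrap_or(Ty::Unknown),
        _ => Ty::Unknown,
    }
}

fn main() {
    assert_eq!(fold_usub(Ty::IntLiteral(42)), Ty::IntLiteral(-42));
    assert_eq!(fold_usub(Ty::IntLiteral(i64::MIN)), Ty::Unknown);
    assert_eq!(fold_usub(Ty::Unknown), Ty::Unknown);
}
```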
fn infer_binary_expression(&mut self, binary: &ast::ExprBinOp) -> Type<'db> {
@@ -1687,61 +1862,71 @@ impl<'db> TypeInferenceBuilder<'db> {
let left_ty = self.infer_expression(left);
let right_ty = self.infer_expression(right);
// TODO flatten the matches by matching on (left_ty, right_ty, op)
-        match left_ty {
-            Type::Any => Type::Any,
-            Type::Unknown => Type::Unknown,
-            Type::IntLiteral(n) => {
-                match right_ty {
-                    Type::IntLiteral(m) => {
-                        match op {
-                            ast::Operator::Add => {
-                                n.checked_add(m).map(Type::IntLiteral).unwrap_or_else(|| {
-                                    builtins_symbol_ty_by_name(self.db, "int").instance()
-                                })
-                            }
-                            ast::Operator::Sub => {
-                                n.checked_sub(m).map(Type::IntLiteral).unwrap_or_else(|| {
-                                    builtins_symbol_ty_by_name(self.db, "int").instance()
-                                })
-                            }
-                            ast::Operator::Mult => {
-                                n.checked_mul(m).map(Type::IntLiteral).unwrap_or_else(|| {
-                                    builtins_symbol_ty_by_name(self.db, "int").instance()
-                                })
-                            }
-                            ast::Operator::Div => {
-                                n.checked_div(m).map(Type::IntLiteral).unwrap_or_else(|| {
-                                    builtins_symbol_ty_by_name(self.db, "int").instance()
-                                })
-                            }
-                            ast::Operator::Mod => n
-                                .checked_rem(m)
-                                .map(Type::IntLiteral)
-                                // TODO division by zero error
-                                .unwrap_or(Type::Unknown),
-                            _ => Type::Unknown, // TODO
-                        }
+        match (left_ty, right_ty, op) {
+            (Type::Any, _, _) | (_, Type::Any, _) => Type::Any,
+            (Type::Unknown, _, _) | (_, Type::Unknown, _) => Type::Unknown,
+            (Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::Add) => n
+                .checked_add(m)
+                .map(Type::IntLiteral)
+                .unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int").instance()),
+            (Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::Sub) => n
+                .checked_sub(m)
+                .map(Type::IntLiteral)
+                .unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int").instance()),
+            (Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::Mult) => n
+                .checked_mul(m)
+                .map(Type::IntLiteral)
+                .unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int").instance()),
+            (Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::Div) => n
+                .checked_div(m)
+                .map(Type::IntLiteral)
+                .unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int").instance()),
+            (Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::Mod) => n
+                .checked_rem(m)
+                .map(Type::IntLiteral)
+                // TODO division by zero error
+                .unwrap_or(Type::Unknown),
+            (Type::BytesLiteral(lhs), Type::BytesLiteral(rhs), ast::Operator::Add) => {
+                Type::BytesLiteral(BytesLiteralType::new(
+                    self.db,
+                    [lhs.value(self.db).as_ref(), rhs.value(self.db).as_ref()]
+                        .concat()
+                        .into_boxed_slice(),
+                ))
+            }
+            (Type::StringLiteral(lhs), Type::StringLiteral(rhs), ast::Operator::Add) => {
+                Type::StringLiteral(StringLiteralType::new(self.db, {
+                    let lhs_value = lhs.value(self.db).to_string();
+                    let rhs_value = rhs.value(self.db).as_ref();
+                    (lhs_value + rhs_value).into()
+                }))
+            }
+            (Type::StringLiteral(s), Type::IntLiteral(n), ast::Operator::Mult)
+            | (Type::IntLiteral(n), Type::StringLiteral(s), ast::Operator::Mult) => {
+                if n < 1 {
+                    Type::StringLiteral(StringLiteralType::new(self.db, Box::default()))
+                } else if let Ok(n) = usize::try_from(n) {
+                    if n.checked_mul(s.value(self.db).len())
+                        .is_some_and(|new_length| new_length <= Self::MAX_STRING_LITERAL_SIZE)
+                    {
+                        let new_literal = s.value(self.db).repeat(n);
+                        Type::StringLiteral(StringLiteralType::new(self.db, new_literal.into()))
+                    } else {
+                        Type::LiteralString
+                    }
-                    _ => Type::Unknown, // TODO
-                }
-            }
-            Type::BytesLiteral(lhs) => {
-                match right_ty {
-                    Type::BytesLiteral(rhs) => {
-                        match op {
-                            ast::Operator::Add => Type::BytesLiteral(BytesLiteralType::new(
-                                self.db,
-                                [lhs.value(self.db).as_ref(), rhs.value(self.db).as_ref()]
-                                    .concat()
-                                    .into_boxed_slice(),
-                            )),
-                            _ => Type::Unknown, // TODO
-                        }
-                    }
-                    _ => Type::Unknown, // TODO
+                } else {
+                    Type::LiteralString
+                }
+            }
_ => Type::Unknown, // TODO
}
}
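The literal-folding arms above mirror CPython's runtime semantics for constant operands. A quick sanity check of those semantics in plain Python (this is not red-knot code, just the behavior being modeled):

```python
# CPython semantics mirrored by the literal-folding match arms above.
assert 2 + 3 == 5                            # IntLiteral + IntLiteral
assert 7 % 3 == 1                            # IntLiteral % IntLiteral
assert b"Guten " + b"tag" == b"Guten tag"    # BytesLiteral + BytesLiteral
assert "bon " + "jour" == "bon jour"         # StringLiteral + StringLiteral
assert "ab" * 3 == "ababab"                  # StringLiteral * IntLiteral...
assert 3 * "ab" == "ababab"                  # ...in either operand order
assert -2 * "ab" == ""                       # n < 1 yields the empty string
```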
@@ -1882,14 +2067,26 @@ fn format_import_from_module(level: u32, module: Option<&str>) -> String {
)
}
/// Various ways in which resolving a [`ModuleName`]
/// from an [`ast::StmtImport`] or [`ast::StmtImportFrom`] node might fail
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
-enum ModuleResolutionError {
+enum ModuleNameResolutionError {
    /// The import statement has invalid syntax
    InvalidSyntax,
-    UnresolvedModule,
/// We couldn't resolve the file we're currently analyzing back to a module
/// (Only necessary for relative import statements)
UnknownCurrentModule,
/// The relative import statement seems to take us outside of the module search path
/// (e.g. our current module is `foo.bar`, and the relative import statement in `foo.bar`
/// is `from ....baz import spam`)
TooManyDots,
}
#[cfg(test)]
mod tests {
use anyhow::Context;
use ruff_db::files::{system_path_to_file, File};
@@ -1908,6 +2105,8 @@ mod tests {
use crate::types::{global_symbol_ty_by_name, infer_definition_types, symbol_ty_by_name, Type};
use crate::{HasTy, ProgramSettings, SemanticModel};
use super::TypeInferenceBuilder;
fn setup_db() -> TestDb {
let db = TestDb::new();
@@ -2168,7 +2367,6 @@ mod tests {
let base_names: Vec<_> = class
.bases(&db)
.iter()
.map(|base_ty| format!("{}", base_ty.display(&db)))
.collect();
@@ -2243,6 +2441,26 @@ mod tests {
Ok(())
}
#[test]
fn negated_int_literal() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"src/a.py",
"
x = -1
y = -1234567890987654321
z = --987
",
)?;
assert_public_ty(&db, "src/a.py", "x", "Literal[-1]");
assert_public_ty(&db, "src/a.py", "y", "Literal[-1234567890987654321]");
assert_public_ty(&db, "src/a.py", "z", "Literal[987]");
Ok(())
}
#[test]
fn boolean_literal() -> anyhow::Result<()> {
let mut db = setup_db();
@@ -2255,6 +2473,86 @@ mod tests {
Ok(())
}
#[test]
fn string_type() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"src/a.py",
r#"
w = "Hello"
x = 'world'
y = "Guten " + 'tag'
z = 'bon ' + "jour"
"#,
)?;
assert_public_ty(&db, "src/a.py", "w", r#"Literal["Hello"]"#);
assert_public_ty(&db, "src/a.py", "x", r#"Literal["world"]"#);
assert_public_ty(&db, "src/a.py", "y", r#"Literal["Guten tag"]"#);
assert_public_ty(&db, "src/a.py", "z", r#"Literal["bon jour"]"#);
Ok(())
}
#[test]
fn string_type_with_nested_quotes() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"src/a.py",
r#"
x = 'I say "hello" to you'
y = "You say \"hey\" back"
z = 'No "closure here'
"#,
)?;
assert_public_ty(&db, "src/a.py", "x", r#"Literal["I say \"hello\" to you"]"#);
assert_public_ty(&db, "src/a.py", "y", r#"Literal["You say \"hey\" back"]"#);
assert_public_ty(&db, "src/a.py", "z", r#"Literal["No \"closure here"]"#);
Ok(())
}
#[test]
fn multiplied_string() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"src/a.py",
&format!(
r#"
w = 2 * "hello"
x = "goodbye" * 3
y = "a" * {y}
z = {z} * "b"
a = 0 * "hello"
b = -3 * "hello"
"#,
y = TypeInferenceBuilder::MAX_STRING_LITERAL_SIZE,
z = TypeInferenceBuilder::MAX_STRING_LITERAL_SIZE + 1
),
)?;
assert_public_ty(&db, "src/a.py", "w", r#"Literal["hellohello"]"#);
assert_public_ty(&db, "src/a.py", "x", r#"Literal["goodbyegoodbyegoodbye"]"#);
assert_public_ty(
&db,
"src/a.py",
"y",
&format!(
r#"Literal["{}"]"#,
"a".repeat(TypeInferenceBuilder::MAX_STRING_LITERAL_SIZE)
),
);
assert_public_ty(&db, "src/a.py", "z", "LiteralString");
assert_public_ty(&db, "src/a.py", "a", r#"Literal[""]"#);
assert_public_ty(&db, "src/a.py", "b", r#"Literal[""]"#);
Ok(())
}
#[test]
fn bytes_type() -> anyhow::Result<()> {
let mut db = setup_db();
@@ -2277,6 +2575,24 @@ mod tests {
Ok(())
}
#[test]
fn ellipsis_type() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"src/a.py",
"
x = ...
",
)?;
// TODO: update this once `infer_ellipsis_literal_expression` correctly
// infers `types.EllipsisType`.
assert_public_ty(&db, "src/a.py", "x", "Unknown | Literal[EllipsisType]");
Ok(())
}
#[test]
fn resolve_union() -> anyhow::Result<()> {
let mut db = setup_db();
@@ -2605,14 +2921,14 @@ mod tests {
let Type::Class(c_class) = c_ty else {
panic!("C is not a Class")
};
-    let c_bases = c_class.bases(&db);
-    let b_ty = c_bases.first().unwrap();
+    let mut c_bases = c_class.bases(&db);
+    let b_ty = c_bases.next().unwrap();
let Type::Class(b_class) = b_ty else {
panic!("B is not a Class")
};
assert_eq!(b_class.name(&db), "B");
-    let b_bases = b_class.bases(&db);
-    let a_ty = b_bases.first().unwrap();
+    let mut b_bases = b_class.bases(&db);
+    let a_ty = b_bases.next().unwrap();
let Type::Class(a_class) = a_ty else {
panic!("A is not a Class")
};
@@ -2844,6 +3160,24 @@ mod tests {
Ok(())
}
/// A class's bases can be self-referential; this looks silly but a slightly more complex
/// version of it actually occurs in typeshed: `class str(Sequence[str]): ...`
#[test]
fn cyclical_class_pyi_definition() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_file("/src/a.pyi", "class C(C): ...")?;
assert_public_ty(&db, "/src/a.pyi", "C", "Literal[C]");
Ok(())
}
#[test]
fn str_builtin() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_file("/src/a.py", "x = str")?;
assert_public_ty(&db, "/src/a.py", "x", "Literal[str]");
Ok(())
}
#[test]
fn narrow_not_none() -> anyhow::Result<()> {
let mut db = setup_db();


@@ -11,14 +11,14 @@ fn check() {
};
let mut workspace = Workspace::new("/", &settings).expect("Workspace to be created");
-    let test = workspace
+    workspace
        .open_file("test.py", "import random22\n")
        .expect("File to be opened");
-    let result = workspace.check_file(&test).expect("Check to succeed");
+    let result = workspace.check().expect("Check to succeed");
    assert_eq!(
        result,
-        vec!["/test.py:1:8: Import 'random22' could not be resolved.",]
+        vec!["/test.py:1:8: Cannot resolve import 'random22'."]
);
}


@@ -22,12 +22,13 @@ ruff_text_size = { workspace = true }
anyhow = { workspace = true }
crossbeam = { workspace = true }
notify = { workspace = true }
rayon = { workspace = true }
rustc-hash = { workspace = true }
salsa = { workspace = true }
tracing = { workspace = true }
[dev-dependencies]
-ruff_db = { workspace = true, features = ["testing"]}
+ruff_db = { workspace = true, features = ["testing"] }
[lints]
workspace = true


@@ -0,0 +1,11 @@
def bool(x) -> bool:
return True
class MyClass: ...
def MyClass() -> MyClass: ...
def x(self) -> x: ...


@@ -0,0 +1,2 @@
def bool(x=bool):
return x


@@ -56,6 +56,8 @@ impl RootDatabase {
}
pub fn check_file(&self, file: File) -> Result<Vec<String>, Cancelled> {
let _span = tracing::debug_span!("check_file", file=%file.path(self)).entered();
self.with_db(|db| check_file(db, file))
}


@@ -15,7 +15,8 @@ use ruff_db::{
use ruff_python_ast::{name::Name, PySourceType};
use ruff_text_size::Ranged;
-use crate::workspace::files::{Index, Indexed, PackageFiles};
+use crate::db::RootDatabase;
+use crate::workspace::files::{Index, Indexed, IndexedIter, PackageFiles};
use crate::{
db::Db,
lint::{lint_semantic, lint_syntax},
@@ -190,23 +191,35 @@ impl Workspace {
}
/// Checks all open files in the workspace and its dependencies.
-    #[tracing::instrument(level = "debug", skip_all)]
-    pub fn check(self, db: &dyn Db) -> Vec<String> {
+    pub fn check(self, db: &RootDatabase) -> Vec<String> {
+        let workspace_span = tracing::debug_span!("check_workspace");
+        let _span = workspace_span.enter();
+        tracing::debug!("Checking workspace");
+        let files = WorkspaceFiles::new(db, self);
+        let result = Arc::new(std::sync::Mutex::new(Vec::new()));
+        let inner_result = Arc::clone(&result);
-        let mut result = Vec::new();
+        let db = db.snapshot();
+        let workspace_span = workspace_span.clone();
-        if let Some(open_files) = self.open_files(db) {
-            for file in open_files {
-                result.extend_from_slice(&check_file(db, *file));
+        rayon::scope(move |scope| {
+            for file in &files {
+                let result = inner_result.clone();
+                let db = db.snapshot();
+                let workspace_span = workspace_span.clone();
+                scope.spawn(move |_| {
+                    let check_file_span = tracing::debug_span!(parent: &workspace_span, "check_file", file=%file.path(&db));
+                    let _entered = check_file_span.entered();
+                    let file_diagnostics = check_file(&db, file);
+                    result.lock().unwrap().extend(file_diagnostics);
+                });
            }
-        } else {
-            for package in self.packages(db) {
-                result.extend(package.check(db));
-            }
-        }
+        });
-        result
+        Arc::into_inner(result).unwrap().into_inner().unwrap()
}
/// Opens a file in the workspace.
@@ -324,19 +337,6 @@ impl Package {
index.insert(file);
}
-    #[tracing::instrument(level = "debug", skip(db))]
-    pub(crate) fn check(self, db: &dyn Db) -> Vec<String> {
-        tracing::debug!("Checking package '{}'", self.root(db));
-        let mut result = Vec::new();
-        for file in &self.files(db) {
-            let diagnostics = check_file(db, file);
-            result.extend_from_slice(&diagnostics);
-        }
-        result
-    }
/// Returns the files belonging to this package.
pub fn files(self, db: &dyn Db) -> Indexed<'_> {
let files = self.file_set(db);
@@ -384,9 +384,7 @@ impl Package {
#[salsa::tracked]
pub(super) fn check_file(db: &dyn Db, file: File) -> Vec<String> {
-    let path = file.path(db);
-    let _span = tracing::debug_span!("check_file", file=%path).entered();
-    tracing::debug!("Checking file '{path}'");
+    tracing::debug!("Checking file '{path}'", path = file.path(db));
let mut diagnostics = Vec::new();
@@ -474,6 +472,73 @@ fn discover_package_files(db: &dyn Db, path: &SystemPath) -> FxHashSet<File> {
files
}
#[derive(Debug)]
enum WorkspaceFiles<'a> {
OpenFiles(&'a FxHashSet<File>),
PackageFiles(Vec<Indexed<'a>>),
}
impl<'a> WorkspaceFiles<'a> {
fn new(db: &'a dyn Db, workspace: Workspace) -> Self {
if let Some(open_files) = workspace.open_files(db) {
WorkspaceFiles::OpenFiles(open_files)
} else {
WorkspaceFiles::PackageFiles(
workspace
.packages(db)
.map(|package| package.files(db))
.collect(),
)
}
}
}
impl<'a> IntoIterator for &'a WorkspaceFiles<'a> {
type Item = File;
type IntoIter = WorkspaceFilesIter<'a>;
fn into_iter(self) -> Self::IntoIter {
match self {
WorkspaceFiles::OpenFiles(files) => WorkspaceFilesIter::OpenFiles(files.iter()),
WorkspaceFiles::PackageFiles(package_files) => {
let mut package_files = package_files.iter();
WorkspaceFilesIter::PackageFiles {
current: package_files.next().map(IntoIterator::into_iter),
package_files,
}
}
}
}
}
enum WorkspaceFilesIter<'db> {
OpenFiles(std::collections::hash_set::Iter<'db, File>),
PackageFiles {
package_files: std::slice::Iter<'db, Indexed<'db>>,
current: Option<IndexedIter<'db>>,
},
}
impl Iterator for WorkspaceFilesIter<'_> {
type Item = File;
fn next(&mut self) -> Option<Self::Item> {
match self {
WorkspaceFilesIter::OpenFiles(files) => files.next().copied(),
WorkspaceFilesIter::PackageFiles {
package_files,
current,
} => loop {
if let Some(file) = current.as_mut().and_then(Iterator::next) {
return Some(file);
}
*current = Some(package_files.next()?.into_iter());
},
}
}
}
#[cfg(test)]
mod tests {
use ruff_db::files::system_path_to_file;


@@ -158,9 +158,11 @@ impl Deref for Indexed<'_> {
}
}
pub(super) type IndexedIter<'a> = std::iter::Copied<std::collections::hash_set::Iter<'a, File>>;
impl<'a> IntoIterator for &'a Indexed<'_> {
type Item = File;
-    type IntoIter = std::iter::Copied<std::collections::hash_set::Iter<'a, File>>;
+    type IntoIter = IndexedIter<'a>;
fn into_iter(self) -> Self::IntoIter {
self.files.iter().copied()


@@ -1,6 +1,6 @@
[package]
name = "ruff"
-version = "0.6.2"
+version = "0.6.3"
publish = true
authors = { workspace = true }
edition = { workspace = true }


@@ -37,13 +37,14 @@ name = "red_knot"
harness = false
[dependencies]
+codspeed-criterion-compat = { workspace = true, default-features = false, optional = true }
+criterion = { workspace = true, default-features = false }
once_cell = { workspace = true }
+rayon = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
url = { workspace = true }
ureq = { workspace = true }
-criterion = { workspace = true, default-features = false }
-codspeed-criterion-compat = { workspace = true, default-features = false, optional = true }
[dev-dependencies]
ruff_db = { workspace = true }


@@ -1,5 +1,6 @@
#![allow(clippy::disallowed_names)]
use rayon::ThreadPoolBuilder;
use red_knot_python_semantic::PythonVersion;
use red_knot_workspace::db::RootDatabase;
use red_knot_workspace::watch::{ChangeEvent, ChangedKind};
@@ -20,18 +21,9 @@ struct Case {
const TOMLLIB_312_URL: &str = "https://raw.githubusercontent.com/python/cpython/8e8a4baf652f6e1cee7acde9d78c4b6154539748/Lib/tomllib";
-// This first "unresolved import" is because we don't understand `*` imports yet.
-// The following "unresolved import" violations are because we can't distinguish currently from
-// "Symbol exists in the module but its type is unknown" and
-// "Symbol does not exist in the module"
+// The "unresolved import" is because we don't understand `*` imports yet.
static EXPECTED_DIAGNOSTICS: &[&str] = &[
-    "/src/tomllib/_parser.py:7:29: Could not resolve import of 'Iterable' from 'collections.abc'",
-    "/src/tomllib/_parser.py:10:20: Could not resolve import of 'Any' from 'typing'",
-    "/src/tomllib/_parser.py:13:5: Could not resolve import of 'RE_DATETIME' from '._re'",
-    "/src/tomllib/_parser.py:14:5: Could not resolve import of 'RE_LOCALTIME' from '._re'",
-    "/src/tomllib/_parser.py:15:5: Could not resolve import of 'RE_NUMBER' from '._re'",
-    "/src/tomllib/_parser.py:20:21: Could not resolve import of 'Key' from '._types'",
-    "/src/tomllib/_parser.py:20:26: Could not resolve import of 'ParseFloat' from '._types'",
+    "/src/tomllib/_parser.py:7:29: Module 'collections.abc' has no member 'Iterable'",
"Line 69 is too long (89 characters)",
"Use double quotes for strings",
"Use double quotes for strings",
@@ -40,23 +32,8 @@ static EXPECTED_DIAGNOSTICS: &[&str] = &[
"Use double quotes for strings",
"Use double quotes for strings",
"Use double quotes for strings",
-    "/src/tomllib/_parser.py:153:22: Name 'key' used when not defined.",
-    "/src/tomllib/_parser.py:153:27: Name 'flag' used when not defined.",
-    "/src/tomllib/_parser.py:159:16: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:161:25: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:168:16: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:169:22: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:170:25: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:180:16: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:182:31: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:206:16: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:207:22: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:208:25: Name 'k' used when not defined.",
-    "/src/tomllib/_parser.py:330:32: Name 'header' used when not defined.",
-    "/src/tomllib/_parser.py:330:41: Name 'key' used when not defined.",
-    "/src/tomllib/_parser.py:333:26: Name 'cont_key' used when not defined.",
-    "/src/tomllib/_parser.py:334:71: Name 'cont_key' used when not defined.",
-    "/src/tomllib/_parser.py:337:31: Name 'cont_key' used when not defined.",
-    "/src/tomllib/_parser.py:628:75: Name 'e' used when not defined.",
-    "/src/tomllib/_parser.py:686:23: Name 'parse_float' used when not defined.",
];
@@ -112,7 +89,25 @@ fn setup_case() -> Case {
}
}
static RAYON_INITIALIZED: std::sync::Once = std::sync::Once::new();
fn setup_rayon() {
// Initialize the rayon thread pool outside the benchmark because it has a significant cost.
// We limit the thread pool to only one (the current thread) because we're focused on
// where red knot spends time and less about how well the code runs concurrently.
// We might want to add a benchmark focusing on concurrency to detect congestion in the future.
RAYON_INITIALIZED.call_once(|| {
ThreadPoolBuilder::new()
.num_threads(1)
.use_current_thread()
.build_global()
.unwrap();
});
}
fn benchmark_incremental(criterion: &mut Criterion) {
setup_rayon();
criterion.bench_function("red_knot_check_file[incremental]", |b| {
b.iter_batched_ref(
|| {
@@ -149,6 +144,8 @@ fn benchmark_incremental(criterion: &mut Criterion) {
}
fn benchmark_cold(criterion: &mut Criterion) {
setup_rayon();
criterion.bench_function("red_knot_check_file[cold]", |b| {
b.iter_batched_ref(
setup_case,


@@ -8,6 +8,7 @@ use salsa::{Durability, Setter};
pub use file_root::{FileRoot, FileRootKind};
pub use path::FilePath;
use ruff_notebook::{Notebook, NotebookError};
use ruff_python_ast::PySourceType;
use crate::file_revision::FileRevision;
use crate::files::file_root::FileRoots;
@@ -424,6 +425,13 @@ impl File {
pub fn exists(self, db: &dyn Db) -> bool {
self.status(db) == FileStatus::Exists
}
/// Returns `true` if the file should be analyzed as a type stub.
pub fn is_stub(self, db: &dyn Db) -> bool {
self.path(db)
.extension()
.is_some_and(|extension| PySourceType::from_extension(extension).is_stub())
}
}
/// A virtual file that doesn't exist on the file system.


@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
-version = "0.6.2"
+version = "0.6.3"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -17,7 +17,7 @@ app = FastAPI()
router = APIRouter()
-# Errors
+# Fixable errors
@app.get("/items/")
def get_items(
@@ -40,6 +40,34 @@ def do_stuff(
# do stuff
pass
@app.get("/users/")
def get_users(
skip: int,
limit: int,
current_user: User = Depends(get_current_user),
):
pass
@app.get("/users/")
def get_users(
current_user: User = Depends(get_current_user),
skip: int = 0,
limit: int = 10,
):
pass
# Non fixable errors
@app.get("/users/")
def get_users(
skip: int = 0,
limit: int = 10,
current_user: User = Depends(get_current_user),
):
pass
# Unchanged


@@ -79,3 +79,15 @@ _ = f"a {f"first"
+ f"second"} d"
_ = f"a {f"first {f"middle"}"
+ f"second"} d"
# See https://github.com/astral-sh/ruff/issues/12936
_ = "\12""0" # fix should be "\0120"
_ = "\\12""0" # fix should be "\\120"
_ = "\\\12""0" # fix should be "\\\0120"
_ = "\12 0""0" # fix should be "\12 00"
_ = r"\12"r"0" # fix should be r"\120"
_ = "\12 and more""0" # fix should be "\12 and more0"
_ = "\8""0" # fix should be "\80"
_ = "\12""8" # fix should be "\128"
_ = "\12""foo" # fix should be "\12foo"
_ = "\12" "" # fix should be "\12"
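These fixture comments hinge on how CPython parses escape sequences: an octal escape ends at the closing quote, so the implicit-concatenation fix cannot simply join the two source strings. A plain-Python check of that premise:

```python
# "\12" is the octal escape for chr(0o12) == "\n"; the escape ends at the quote.
assert "\12" "0" == "\n0"
# Naively joining the source text would produce "\120", a *different* escape.
assert "\120" == "P"  # chr(0o120)
# Hence the suggested fix pads the escape to three digits: "\0120".
assert "\0120" == "\n0"
```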


@@ -243,3 +243,16 @@ def aliased():
from tarfile import TarFile as TF
f = TF("foo").open()
f.close()
import dbm.sqlite3
# OK
with dbm.sqlite3.open("foo.db") as f:
print(f.keys())
# OK
dbm.sqlite3.open("foo.db").close()
# SIM115
f = dbm.sqlite3.open("foo.db")
f.close()


@@ -0,0 +1,75 @@
from contextlib import contextmanager
l = 0
I = 0
O = 0
l: int = 0
a, l = 0, 1
[a, l] = 0, 1
a, *l = 0, 1, 2
a = l = 0
o = 0
i = 0
for l in range(3):
pass
for a, l in zip(range(3), range(3)):
pass
def f1():
global l
l = 0
def f2():
l = 0
def f3():
nonlocal l
l = 1
f3()
return l
def f4(l, /, I):
return l, I, O
def f5(l=0, *, I=1):
return l, I
def f6(*l, **I):
return l, I
@contextmanager
def ctx1():
yield 0
with ctx1() as l:
pass
@contextmanager
def ctx2():
yield 0, 1
with ctx2() as (a, l):
pass
try:
pass
except ValueError as l:
pass
if (l := 5) > 0:
pass


@@ -119,3 +119,91 @@ class A(metaclass=abc.abcmeta):
def f(self):
"""Lorem ipsum."""
return True
# OK - implicit None early return
def foo(obj: object) -> None:
"""A very helpful docstring.
Args:
obj (object): An object.
"""
if obj is None:
return
print(obj)
# OK - explicit None early return
def foo(obj: object) -> None:
"""A very helpful docstring.
Args:
obj (object): An object.
"""
if obj is None:
return None
print(obj)
# OK - explicit None early return w/o useful type annotations
def foo(obj):
"""A very helpful docstring.
Args:
obj (object): An object.
"""
if obj is None:
return None
print(obj)
# OK - multiple explicit None early returns
def foo(obj: object) -> None:
"""A very helpful docstring.
Args:
obj (object): An object.
"""
if obj is None:
return None
if obj == "None":
return
if obj == 0:
return None
print(obj)
# DOC201 - non-early return explicit None
def foo(x: int) -> int | None:
"""A very helpful docstring.
Args:
x (int): An interger.
"""
if x < 0:
return None
else:
return x
# DOC201 - non-early return explicit None w/o useful type annotations
def foo(x):
"""A very helpful docstring.
Args:
x (int): An interger.
"""
if x < 0:
return None
else:
return x
# DOC201 - only returns None, but return annotation is not None
def foo(s: str) -> str | None:
"""A very helpful docstring.
Args:
s (str): A string.
"""
return None


@@ -85,3 +85,105 @@ class A(metaclass=abc.abcmeta):
def f(self):
"""Lorem ipsum."""
return True
# OK - implicit None early return
def foo(obj: object) -> None:
"""A very helpful docstring.
Parameters
----------
obj : object
An object.
"""
if obj is None:
return
print(obj)
# OK - explicit None early return
def foo(obj: object) -> None:
"""A very helpful docstring.
Parameters
----------
obj : object
An object.
"""
if obj is None:
return None
print(obj)
# OK - explicit None early return w/o useful type annotations
def foo(obj):
"""A very helpful docstring.
Parameters
----------
obj : object
An object.
"""
if obj is None:
return None
print(obj)
# OK - multiple explicit None early returns
def foo(obj: object) -> None:
"""A very helpful docstring.
Parameters
----------
obj : object
An object.
"""
if obj is None:
return None
if obj == "None":
return
if obj == 0:
return None
print(obj)
# DOC201 - non-early return explicit None
def foo(x: int) -> int | None:
"""A very helpful docstring.
Parameters
----------
x : int
An interger.
"""
if x < 0:
return None
else:
return x
# DOC201 - non-early return explicit None w/o useful type annotations
def foo(x):
"""A very helpful docstring.
Parameters
----------
x : int
An interger.
"""
if x < 0:
return None
else:
return x
# DOC201 - only returns None, but return annotation is not None
def foo(s: str) -> str | None:
"""A very helpful docstring.
Parameters
----------
x : str
A string.
"""
return None


@@ -42,3 +42,16 @@ max(1, max(*a))
import builtins
builtins.min(1, min(2, 3))
# PLW3301
max_word_len = max(
max(len(word) for word in "blah blah blah".split(" ")),
len("Done!"),
)
# OK
max_word_len = max(
*(len(word) for word in "blah blah blah".split(" ")),
len("Done!"),
)
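The fixture's two spellings differ only in where the inner iterable is flattened; a quick plain-Python check that the unpacked form suggested by PLW3301 is equivalent to the nested one:

```python
words = "blah blah blah".split(" ")
# Nested call (flagged by PLW3301) vs. a single max over the unpacked iterable.
nested = max(max(len(word) for word in words), len("Done!"))
flat = max(*(len(word) for word in words), len("Done!"))
assert nested == flat == 5
```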


@@ -9,3 +9,13 @@ dictionary = {
#import os # noqa: E501
def f():
data = 1
# line below should autofix to `return data # fmt: skip`
return data # noqa: RET504 # fmt: skip
def f():
data = 1
# line below should autofix to `return data`
return data # noqa: RET504 - intentional incorrect noqa, will be removed


@@ -70,11 +70,11 @@ pub(crate) fn except_handler(except_handler: &ExceptHandler, checker: &mut Check
}
if let Some(name) = name {
if checker.enabled(Rule::AmbiguousVariableName) {
-            if let Some(diagnostic) =
-                pycodestyle::rules::ambiguous_variable_name(name.as_str(), name.range())
-            {
-                checker.diagnostics.push(diagnostic);
-            }
+            pycodestyle::rules::ambiguous_variable_name(
+                checker,
+                name.as_str(),
+                name.range(),
+            );
}
if checker.enabled(Rule::BuiltinVariableShadowing) {
flake8_builtins::rules::builtin_variable_shadowing(checker, name, name.range());


@@ -259,11 +259,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
}
}
if checker.enabled(Rule::AmbiguousVariableName) {
-                if let Some(diagnostic) =
-                    pycodestyle::rules::ambiguous_variable_name(id, expr.range())
-                {
-                    checker.diagnostics.push(diagnostic);
-                }
+                pycodestyle::rules::ambiguous_variable_name(checker, id, expr.range());
}
if !checker.semantic.current_scope().kind.is_class() {
if checker.enabled(Rule::BuiltinVariableShadowing) {


@@ -8,11 +8,11 @@ use crate::rules::{flake8_builtins, pep8_naming, pycodestyle};
/// Run lint rules over a [`Parameter`] syntax node.
pub(crate) fn parameter(parameter: &Parameter, checker: &mut Checker) {
if checker.enabled(Rule::AmbiguousVariableName) {
-        if let Some(diagnostic) =
-            pycodestyle::rules::ambiguous_variable_name(&parameter.name, parameter.name.range())
-        {
-            checker.diagnostics.push(diagnostic);
-        }
+        pycodestyle::rules::ambiguous_variable_name(
+            checker,
+            &parameter.name,
+            parameter.name.range(),
+        );
}
if checker.enabled(Rule::InvalidArgumentName) {
if let Some(diagnostic) = pep8_naming::rules::invalid_argument_name(


@@ -24,16 +24,16 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pylint::rules::global_at_module_level(checker, stmt);
}
if checker.enabled(Rule::AmbiguousVariableName) {
-                checker.diagnostics.extend(names.iter().filter_map(|name| {
-                    pycodestyle::rules::ambiguous_variable_name(name, name.range())
-                }));
+                for name in names {
+                    pycodestyle::rules::ambiguous_variable_name(checker, name, name.range());
+                }
}
}
Stmt::Nonlocal(nonlocal @ ast::StmtNonlocal { names, range: _ }) => {
if checker.enabled(Rule::AmbiguousVariableName) {
-                checker.diagnostics.extend(names.iter().filter_map(|name| {
-                    pycodestyle::rules::ambiguous_variable_name(name, name.range())
-                }));
+                for name in names {
+                    pycodestyle::rules::ambiguous_variable_name(checker, name, name.range());
+                }
}
if checker.enabled(Rule::NonlocalWithoutBinding) {
if !checker.semantic.scope_id.is_global() {


@@ -118,10 +118,10 @@ pub(crate) fn check_noqa(
match &line.directive {
Directive::All(directive) => {
if line.matches.is_empty() {
+                    let edit = delete_comment(directive.range(), locator);
                    let mut diagnostic =
                        Diagnostic::new(UnusedNOQA { codes: None }, directive.range());
-                    diagnostic
-                        .set_fix(Fix::safe_edit(delete_comment(directive.range(), locator)));
+                    diagnostic.set_fix(Fix::safe_edit(edit));
diagnostics.push(diagnostic);
}
@@ -172,6 +172,14 @@ pub(crate) fn check_noqa(
&& unknown_codes.is_empty()
&& unmatched_codes.is_empty())
{
let edit = if valid_codes.is_empty() {
delete_comment(directive.range(), locator)
} else {
Edit::range_replacement(
format!("# noqa: {}", valid_codes.join(", ")),
directive.range(),
)
};
let mut diagnostic = Diagnostic::new(
UnusedNOQA {
codes: Some(UnusedCodes {
@@ -195,17 +203,7 @@ pub(crate) fn check_noqa(
},
directive.range(),
);
-                    if valid_codes.is_empty() {
-                        diagnostic.set_fix(Fix::safe_edit(delete_comment(
-                            directive.range(),
-                            locator,
-                        )));
-                    } else {
-                        diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
-                            format!("# noqa: {}", valid_codes.join(", ")),
-                            directive.range(),
-                        )));
-                    }
+                    diagnostic.set_fix(Fix::safe_edit(edit));
diagnostics.push(diagnostic);
}
}


@@ -99,11 +99,8 @@ pub(crate) fn delete_comment(range: TextRange, locator: &Locator) -> Edit {
}
// Ex) `x = 1 # noqa here`
else {
-        // Replace `# noqa here` with `# here`.
-        Edit::range_replacement(
-            "# ".to_string(),
-            TextRange::new(range.start(), range.end() + trailing_space_len),
-        )
+        // Remove `# noqa here` and whitespace
+        Edit::deletion(range.start() - leading_space_len, line_range.end())
}
}
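The extended deletion behavior can be approximated in plain Python; `strip_noqa` below is a hypothetical sketch of the edit the RUF100 fix produces (delete the `noqa` comment and its trailing text, but keep a following comment), not ruff's actual implementation:

```python
import re

def strip_noqa(line: str) -> str:
    # Delete the `# noqa ...` comment, including trailing text, up to (but not
    # including) any following `#` comment on the same line; trim trailing
    # whitespace when nothing follows.
    match = re.search(r"#\s*noqa\b[^#]*", line)
    if match is None:
        return line
    return (line[: match.start()] + line[match.end():]).rstrip()

assert strip_noqa("x = 1  # noqa here") == "x = 1"
assert strip_noqa("return data  # noqa: RET504 # fmt: skip") == "return data  # fmt: skip"
```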


@@ -1,4 +1,4 @@
-use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
+use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::helpers::map_callable;
@@ -59,14 +59,16 @@ use crate::settings::types::PythonVersion;
#[violation]
pub struct FastApiNonAnnotatedDependency;
-impl AlwaysFixableViolation for FastApiNonAnnotatedDependency {
+impl Violation for FastApiNonAnnotatedDependency {
+    const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
    #[derive_message_formats]
    fn message(&self) -> String {
        format!("FastAPI dependency without `Annotated`")
    }
-    fn fix_title(&self) -> String {
-        "Replace with `Annotated`".to_string()
+    fn fix_title(&self) -> Option<String> {
+        Some("Replace with `Annotated`".to_string())
}
}
@@ -75,64 +77,95 @@ pub(crate) fn fastapi_non_annotated_dependency(
checker: &mut Checker,
function_def: &ast::StmtFunctionDef,
) {
-    if !checker.semantic().seen_module(Modules::FASTAPI) {
-        return;
-    }
-    if !is_fastapi_route(function_def, checker.semantic()) {
+    if !checker.semantic().seen_module(Modules::FASTAPI)
+        || !is_fastapi_route(function_def, checker.semantic())
+    {
return;
}
+    let mut updatable_count = 0;
+    let mut has_non_updatable_default = false;
+    let total_params = function_def.parameters.args.len();
    for parameter in &function_def.parameters.args {
-        if let (Some(annotation), Some(default)) =
-            (&parameter.parameter.annotation, &parameter.default)
-        {
-            if checker
-                .semantic()
-                .resolve_qualified_name(map_callable(default))
-                .is_some_and(|qualified_name| {
-                    matches!(
-                        qualified_name.segments(),
-                        [
-                            "fastapi",
-                            "Query"
-                                | "Path"
-                                | "Body"
-                                | "Cookie"
-                                | "Header"
-                                | "File"
-                                | "Form"
-                                | "Depends"
-                                | "Security"
-                        ]
-                    )
-                })
-            {
-                let mut diagnostic =
-                    Diagnostic::new(FastApiNonAnnotatedDependency, parameter.range);
+        let needs_update = matches!(
+            (&parameter.parameter.annotation, &parameter.default),
+            (Some(_annotation), Some(default)) if is_fastapi_dependency(checker, default)
+        );
-                diagnostic.try_set_fix(|| {
-                    let module = if checker.settings.target_version >= PythonVersion::Py39 {
-                        "typing"
-                    } else {
-                        "typing_extensions"
-                    };
-                    let (import_edit, binding) = checker.importer().get_or_import_symbol(
-                        &ImportRequest::import_from(module, "Annotated"),
-                        function_def.start(),
-                        checker.semantic(),
-                    )?;
-                    let content = format!(
-                        "{}: {}[{}, {}]",
-                        parameter.parameter.name.id,
-                        binding,
-                        checker.locator().slice(annotation.range()),
-                        checker.locator().slice(default.range())
-                    );
-                    let parameter_edit = Edit::range_replacement(content, parameter.range());
-                    Ok(Fix::unsafe_edits(import_edit, [parameter_edit]))
-                });
-                checker.diagnostics.push(diagnostic);
-            }
+        if needs_update {
+            updatable_count += 1;
+            // Determine if it's safe to update this parameter:
+            // - if all parameters are updatable its safe.
+            // - if we've encountered a non-updatable parameter with a default value, it's no longer
+            //   safe. (https://github.com/astral-sh/ruff/issues/12982)
+            let safe_to_update = updatable_count == total_params || !has_non_updatable_default;
+            create_diagnostic(checker, parameter, safe_to_update);
+        } else if parameter.default.is_some() {
+            has_non_updatable_default = true;
+        }
}
}
fn is_fastapi_dependency(checker: &Checker, expr: &ast::Expr) -> bool {
checker
.semantic()
.resolve_qualified_name(map_callable(expr))
.is_some_and(|qualified_name| {
matches!(
qualified_name.segments(),
[
"fastapi",
"Query"
| "Path"
| "Body"
| "Cookie"
| "Header"
| "File"
| "Form"
| "Depends"
| "Security"
]
)
})
}
fn create_diagnostic(
checker: &mut Checker,
parameter: &ast::ParameterWithDefault,
safe_to_update: bool,
) {
let mut diagnostic = Diagnostic::new(FastApiNonAnnotatedDependency, parameter.range);
if safe_to_update {
if let (Some(annotation), Some(default)) =
(&parameter.parameter.annotation, &parameter.default)
{
diagnostic.try_set_fix(|| {
let module = if checker.settings.target_version >= PythonVersion::Py39 {
"typing"
} else {
"typing_extensions"
};
let (import_edit, binding) = checker.importer().get_or_import_symbol(
&ImportRequest::import_from(module, "Annotated"),
parameter.range.start(),
checker.semantic(),
)?;
let content = format!(
"{}: {}[{}, {}]",
parameter.parameter.name.id,
binding,
checker.locator().slice(annotation.range()),
checker.locator().slice(default.range())
);
let parameter_edit = Edit::range_replacement(content, parameter.range);
Ok(Fix::unsafe_edits(import_edit, [parameter_edit]))
});
}
} else {
diagnostic.fix = None;
}
checker.diagnostics.push(diagnostic);
}
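The `module` selection above (`typing` on Python 3.9+, else `typing_extensions`) mirrors a common runtime import-fallback pattern; a sketch of the equivalent user-side code:

```python
import sys

# `Annotated` landed in `typing` in Python 3.9; older interpreters need the
# `typing_extensions` backport, which is what the fix's module choice mirrors.
if sys.version_info >= (3, 9):
    from typing import Annotated
else:
    from typing_extensions import Annotated

x: Annotated[int, "metadata"] = 1
```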


@@ -261,3 +261,72 @@ FAST002.py:38:5: FAST002 [*] FastAPI dependency without `Annotated`
39 40 | ):
40 41 | # do stuff
41 42 | pass
FAST002.py:47:5: FAST002 [*] FastAPI dependency without `Annotated`
|
45 | skip: int,
46 | limit: int,
47 | current_user: User = Depends(get_current_user),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FAST002
48 | ):
49 | pass
|
= help: Replace with `Annotated`
Unsafe fix
12 12 | Security,
13 13 | )
14 14 | from pydantic import BaseModel
15 |+from typing import Annotated
15 16 |
16 17 | app = FastAPI()
17 18 | router = APIRouter()
--------------------------------------------------------------------------------
44 45 | def get_users(
45 46 | skip: int,
46 47 | limit: int,
47 |- current_user: User = Depends(get_current_user),
48 |+ current_user: Annotated[User, Depends(get_current_user)],
48 49 | ):
49 50 | pass
50 51 |
FAST002.py:53:5: FAST002 [*] FastAPI dependency without `Annotated`
|
51 | @app.get("/users/")
52 | def get_users(
53 | current_user: User = Depends(get_current_user),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FAST002
54 | skip: int = 0,
55 | limit: int = 10,
|
= help: Replace with `Annotated`
Unsafe fix
12 12 | Security,
13 13 | )
14 14 | from pydantic import BaseModel
15 |+from typing import Annotated
15 16 |
16 17 | app = FastAPI()
17 18 | router = APIRouter()
--------------------------------------------------------------------------------
50 51 |
51 52 | @app.get("/users/")
52 53 | def get_users(
53 |- current_user: User = Depends(get_current_user),
54 |+ current_user: Annotated[User, Depends(get_current_user)],
54 55 | skip: int = 0,
55 56 | limit: int = 10,
56 57 | ):
FAST002.py:67:5: FAST002 FastAPI dependency without `Annotated`
|
65 | skip: int = 0,
66 | limit: int = 10,
67 | current_user: User = Depends(get_current_user),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FAST002
68 | ):
69 | pass
|
= help: Replace with `Annotated`


@@ -1,3 +1,5 @@
use std::borrow::Cow;
use itertools::Itertools;
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
@@ -172,9 +174,16 @@ fn concatenate_strings(a_range: TextRange, b_range: TextRange, locator: &Locator
return None;
}
let a_body = &a_text[a_leading_quote.len()..a_text.len() - a_trailing_quote.len()];
let mut a_body =
Cow::Borrowed(&a_text[a_leading_quote.len()..a_text.len() - a_trailing_quote.len()]);
let b_body = &b_text[b_leading_quote.len()..b_text.len() - b_trailing_quote.len()];
if a_leading_quote.find(['r', 'R']).is_none()
&& matches!(b_body.bytes().next(), Some(b'0'..=b'7'))
{
normalize_ending_octal(&mut a_body);
}
let concatenation = format!("{a_leading_quote}{a_body}{b_body}{a_trailing_quote}");
let range = TextRange::new(a_range.start(), b_range.end());
@@ -183,3 +192,39 @@ fn concatenate_strings(a_range: TextRange, b_range: TextRange, locator: &Locator
range,
)))
}
/// Pads an octal escape at the end of the string
/// to three digits, if necessary.
fn normalize_ending_octal(text: &mut Cow<'_, str>) {
// Early return for short strings
if text.len() < 2 {
return;
}
let mut rev_bytes = text.bytes().rev();
if let Some(last_byte @ b'0'..=b'7') = rev_bytes.next() {
// "\y" -> "\00y"
if has_odd_consecutive_backslashes(&mut rev_bytes.clone()) {
let prefix = &text[..text.len() - 2];
*text = Cow::Owned(format!("{prefix}\\00{}", last_byte as char));
}
// "\xy" -> "\0xy"
else if let Some(penultimate_byte @ b'0'..=b'7') = rev_bytes.next() {
if has_odd_consecutive_backslashes(&mut rev_bytes.clone()) {
let prefix = &text[..text.len() - 3];
*text = Cow::Owned(format!(
"{prefix}\\0{}{}",
penultimate_byte as char, last_byte as char
));
}
}
}
}
fn has_odd_consecutive_backslashes(mut itr: impl Iterator<Item = u8>) -> bool {
let mut odd_backslashes = false;
while let Some(b'\\') = itr.next() {
odd_backslashes = !odd_backslashes;
}
odd_backslashes
}
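The padding performed by `normalize_ending_octal` can be checked directly in Python; a short demonstration of why naive concatenation would change the string's value:

```python
# "\12" is the octal escape for chr(0o12), a newline. Gluing it directly onto
# "0" would form "\120", i.e. chr(0o120) == "P" -- a different string.
# Padding the escape to three digits ("\012") keeps the original value.
naive = "\120"    # "P": the escape absorbed the following digit
padded = "\0120"  # a newline followed by a literal "0"

assert "\12" + "0" == padded
assert "\12" + "0" != naive
print("padding preserves value")
```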


@@ -296,4 +296,202 @@ ISC.py:73:20: ISC001 [*] Implicitly concatenated string literals on one line
75 75 | f"def"} g"
76 76 |
ISC.py:84:5: ISC001 [*] Implicitly concatenated string literals on one line
|
83 | # See https://github.com/astral-sh/ruff/issues/12936
84 | _ = "\12""0" # fix should be "\0120"
| ^^^^^^^^ ISC001
85 | _ = "\\12""0" # fix should be "\\120"
86 | _ = "\\\12""0" # fix should be "\\\0120"
|
= help: Combine string literals
Safe fix
81 81 | + f"second"} d"
82 82 |
83 83 | # See https://github.com/astral-sh/ruff/issues/12936
84 |-_ = "\12""0" # fix should be "\0120"
84 |+_ = "\0120" # fix should be "\0120"
85 85 | _ = "\\12""0" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
ISC.py:85:5: ISC001 [*] Implicitly concatenated string literals on one line
|
83 | # See https://github.com/astral-sh/ruff/issues/12936
84 | _ = "\12""0" # fix should be "\0120"
85 | _ = "\\12""0" # fix should be "\\120"
| ^^^^^^^^^ ISC001
86 | _ = "\\\12""0" # fix should be "\\\0120"
87 | _ = "\12 0""0" # fix should be "\12 00"
|
= help: Combine string literals
Safe fix
82 82 |
83 83 | # See https://github.com/astral-sh/ruff/issues/12936
84 84 | _ = "\12""0" # fix should be "\0120"
85 |-_ = "\\12""0" # fix should be "\\120"
85 |+_ = "\\120" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
ISC.py:86:5: ISC001 [*] Implicitly concatenated string literals on one line
|
84 | _ = "\12""0" # fix should be "\0120"
85 | _ = "\\12""0" # fix should be "\\120"
86 | _ = "\\\12""0" # fix should be "\\\0120"
| ^^^^^^^^^^ ISC001
87 | _ = "\12 0""0" # fix should be "\12 00"
88 | _ = r"\12"r"0" # fix should be r"\120"
|
= help: Combine string literals
Safe fix
83 83 | # See https://github.com/astral-sh/ruff/issues/12936
84 84 | _ = "\12""0" # fix should be "\0120"
85 85 | _ = "\\12""0" # fix should be "\\120"
86 |-_ = "\\\12""0" # fix should be "\\\0120"
86 |+_ = "\\\0120" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
ISC.py:87:5: ISC001 [*] Implicitly concatenated string literals on one line
|
85 | _ = "\\12""0" # fix should be "\\120"
86 | _ = "\\\12""0" # fix should be "\\\0120"
87 | _ = "\12 0""0" # fix should be "\12 00"
| ^^^^^^^^^^ ISC001
88 | _ = r"\12"r"0" # fix should be r"\120"
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
|
= help: Combine string literals
Safe fix
84 84 | _ = "\12""0" # fix should be "\0120"
85 85 | _ = "\\12""0" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 |-_ = "\12 0""0" # fix should be "\12 00"
87 |+_ = "\12 00" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
ISC.py:88:5: ISC001 [*] Implicitly concatenated string literals on one line
|
86 | _ = "\\\12""0" # fix should be "\\\0120"
87 | _ = "\12 0""0" # fix should be "\12 00"
88 | _ = r"\12"r"0" # fix should be r"\120"
| ^^^^^^^^^^ ISC001
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 | _ = "\8""0" # fix should be "\80"
|
= help: Combine string literals
Safe fix
85 85 | _ = "\\12""0" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 |-_ = r"\12"r"0" # fix should be r"\120"
88 |+_ = r"\120" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
ISC.py:89:5: ISC001 [*] Implicitly concatenated string literals on one line
|
87 | _ = "\12 0""0" # fix should be "\12 00"
88 | _ = r"\12"r"0" # fix should be r"\120"
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
| ^^^^^^^^^^^^^^^^^ ISC001
90 | _ = "\8""0" # fix should be "\80"
91 | _ = "\12""8" # fix should be "\128"
|
= help: Combine string literals
Safe fix
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 |-_ = "\12 and more""0" # fix should be "\12 and more0"
89 |+_ = "\12 and more0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
ISC.py:90:5: ISC001 [*] Implicitly concatenated string literals on one line
|
88 | _ = r"\12"r"0" # fix should be r"\120"
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 | _ = "\8""0" # fix should be "\80"
| ^^^^^^^ ISC001
91 | _ = "\12""8" # fix should be "\128"
92 | _ = "\12""foo" # fix should be "\12foo"
|
= help: Combine string literals
Safe fix
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 |-_ = "\8""0" # fix should be "\80"
90 |+_ = "\80" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
93 93 | _ = "\12" "" # fix should be "\12"
ISC.py:91:5: ISC001 [*] Implicitly concatenated string literals on one line
|
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 | _ = "\8""0" # fix should be "\80"
91 | _ = "\12""8" # fix should be "\128"
| ^^^^^^^^ ISC001
92 | _ = "\12""foo" # fix should be "\12foo"
93 | _ = "\12" "" # fix should be "\12"
|
= help: Combine string literals
Safe fix
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 |-_ = "\12""8" # fix should be "\128"
91 |+_ = "\128" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
93 93 | _ = "\12" "" # fix should be "\12"
ISC.py:92:5: ISC001 [*] Implicitly concatenated string literals on one line
|
90 | _ = "\8""0" # fix should be "\80"
91 | _ = "\12""8" # fix should be "\128"
92 | _ = "\12""foo" # fix should be "\12foo"
| ^^^^^^^^^^ ISC001
93 | _ = "\12" "" # fix should be "\12"
|
= help: Combine string literals
Safe fix
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 |-_ = "\12""foo" # fix should be "\12foo"
92 |+_ = "\12foo" # fix should be "\12foo"
93 93 | _ = "\12" "" # fix should be "\12"
ISC.py:93:5: ISC001 [*] Implicitly concatenated string literals on one line
|
91 | _ = "\12""8" # fix should be "\128"
92 | _ = "\12""foo" # fix should be "\12foo"
93 | _ = "\12" "" # fix should be "\12"
| ^^^^^^^^ ISC001
|
= help: Combine string literals
Safe fix
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
93 |-_ = "\12" "" # fix should be "\12"
93 |+_ = "\12" # fix should be "\12"


@@ -50,6 +50,6 @@ ISC.py:80:10: ISC003 Explicitly concatenated string should be implicitly concate
| __________^
81 | | + f"second"} d"
| |_______________^ ISC003
82 |
83 | # See https://github.com/astral-sh/ruff/issues/12936
|


@@ -296,4 +296,202 @@ ISC.py:73:20: ISC001 [*] Implicitly concatenated string literals on one line
75 75 | f"def"} g"
76 76 |
ISC.py:84:5: ISC001 [*] Implicitly concatenated string literals on one line
|
83 | # See https://github.com/astral-sh/ruff/issues/12936
84 | _ = "\12""0" # fix should be "\0120"
| ^^^^^^^^ ISC001
85 | _ = "\\12""0" # fix should be "\\120"
86 | _ = "\\\12""0" # fix should be "\\\0120"
|
= help: Combine string literals
Safe fix
81 81 | + f"second"} d"
82 82 |
83 83 | # See https://github.com/astral-sh/ruff/issues/12936
84 |-_ = "\12""0" # fix should be "\0120"
84 |+_ = "\0120" # fix should be "\0120"
85 85 | _ = "\\12""0" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
ISC.py:85:5: ISC001 [*] Implicitly concatenated string literals on one line
|
83 | # See https://github.com/astral-sh/ruff/issues/12936
84 | _ = "\12""0" # fix should be "\0120"
85 | _ = "\\12""0" # fix should be "\\120"
| ^^^^^^^^^ ISC001
86 | _ = "\\\12""0" # fix should be "\\\0120"
87 | _ = "\12 0""0" # fix should be "\12 00"
|
= help: Combine string literals
Safe fix
82 82 |
83 83 | # See https://github.com/astral-sh/ruff/issues/12936
84 84 | _ = "\12""0" # fix should be "\0120"
85 |-_ = "\\12""0" # fix should be "\\120"
85 |+_ = "\\120" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
ISC.py:86:5: ISC001 [*] Implicitly concatenated string literals on one line
|
84 | _ = "\12""0" # fix should be "\0120"
85 | _ = "\\12""0" # fix should be "\\120"
86 | _ = "\\\12""0" # fix should be "\\\0120"
| ^^^^^^^^^^ ISC001
87 | _ = "\12 0""0" # fix should be "\12 00"
88 | _ = r"\12"r"0" # fix should be r"\120"
|
= help: Combine string literals
Safe fix
83 83 | # See https://github.com/astral-sh/ruff/issues/12936
84 84 | _ = "\12""0" # fix should be "\0120"
85 85 | _ = "\\12""0" # fix should be "\\120"
86 |-_ = "\\\12""0" # fix should be "\\\0120"
86 |+_ = "\\\0120" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
ISC.py:87:5: ISC001 [*] Implicitly concatenated string literals on one line
|
85 | _ = "\\12""0" # fix should be "\\120"
86 | _ = "\\\12""0" # fix should be "\\\0120"
87 | _ = "\12 0""0" # fix should be "\12 00"
| ^^^^^^^^^^ ISC001
88 | _ = r"\12"r"0" # fix should be r"\120"
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
|
= help: Combine string literals
Safe fix
84 84 | _ = "\12""0" # fix should be "\0120"
85 85 | _ = "\\12""0" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 |-_ = "\12 0""0" # fix should be "\12 00"
87 |+_ = "\12 00" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
ISC.py:88:5: ISC001 [*] Implicitly concatenated string literals on one line
|
86 | _ = "\\\12""0" # fix should be "\\\0120"
87 | _ = "\12 0""0" # fix should be "\12 00"
88 | _ = r"\12"r"0" # fix should be r"\120"
| ^^^^^^^^^^ ISC001
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 | _ = "\8""0" # fix should be "\80"
|
= help: Combine string literals
Safe fix
85 85 | _ = "\\12""0" # fix should be "\\120"
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 |-_ = r"\12"r"0" # fix should be r"\120"
88 |+_ = r"\120" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
ISC.py:89:5: ISC001 [*] Implicitly concatenated string literals on one line
|
87 | _ = "\12 0""0" # fix should be "\12 00"
88 | _ = r"\12"r"0" # fix should be r"\120"
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
| ^^^^^^^^^^^^^^^^^ ISC001
90 | _ = "\8""0" # fix should be "\80"
91 | _ = "\12""8" # fix should be "\128"
|
= help: Combine string literals
Safe fix
86 86 | _ = "\\\12""0" # fix should be "\\\0120"
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 |-_ = "\12 and more""0" # fix should be "\12 and more0"
89 |+_ = "\12 and more0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
ISC.py:90:5: ISC001 [*] Implicitly concatenated string literals on one line
|
88 | _ = r"\12"r"0" # fix should be r"\120"
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 | _ = "\8""0" # fix should be "\80"
| ^^^^^^^ ISC001
91 | _ = "\12""8" # fix should be "\128"
92 | _ = "\12""foo" # fix should be "\12foo"
|
= help: Combine string literals
Safe fix
87 87 | _ = "\12 0""0" # fix should be "\12 00"
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 |-_ = "\8""0" # fix should be "\80"
90 |+_ = "\80" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
93 93 | _ = "\12" "" # fix should be "\12"
ISC.py:91:5: ISC001 [*] Implicitly concatenated string literals on one line
|
89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 | _ = "\8""0" # fix should be "\80"
91 | _ = "\12""8" # fix should be "\128"
| ^^^^^^^^ ISC001
92 | _ = "\12""foo" # fix should be "\12foo"
93 | _ = "\12" "" # fix should be "\12"
|
= help: Combine string literals
Safe fix
88 88 | _ = r"\12"r"0" # fix should be r"\120"
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 |-_ = "\12""8" # fix should be "\128"
91 |+_ = "\128" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
93 93 | _ = "\12" "" # fix should be "\12"
ISC.py:92:5: ISC001 [*] Implicitly concatenated string literals on one line
|
90 | _ = "\8""0" # fix should be "\80"
91 | _ = "\12""8" # fix should be "\128"
92 | _ = "\12""foo" # fix should be "\12foo"
| ^^^^^^^^^^ ISC001
93 | _ = "\12" "" # fix should be "\12"
|
= help: Combine string literals
Safe fix
89 89 | _ = "\12 and more""0" # fix should be "\12 and more0"
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 |-_ = "\12""foo" # fix should be "\12foo"
92 |+_ = "\12foo" # fix should be "\12foo"
93 93 | _ = "\12" "" # fix should be "\12"
ISC.py:93:5: ISC001 [*] Implicitly concatenated string literals on one line
|
91 | _ = "\12""8" # fix should be "\128"
92 | _ = "\12""foo" # fix should be "\12foo"
93 | _ = "\12" "" # fix should be "\12"
| ^^^^^^^^ ISC001
|
= help: Combine string literals
Safe fix
90 90 | _ = "\8""0" # fix should be "\80"
91 91 | _ = "\12""8" # fix should be "\128"
92 92 | _ = "\12""foo" # fix should be "\12foo"
93 |-_ = "\12" "" # fix should be "\12"
93 |+_ = "\12" # fix should be "\12"


@@ -27,14 +27,14 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// class Foo:
/// def __eq__(self, obj: typing.Any) -> bool: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// class Foo:
/// def __eq__(self, obj: object) -> bool: ...
/// ```


@@ -34,19 +34,17 @@ use crate::registry::Rule;
/// ```
///
/// ## Example
/// ```python
/// ```pyi
/// import sys
///
/// if sys.version_info > (3, 8):
/// ...
/// if sys.version_info > (3, 8): ...
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// import sys
///
/// if sys.version_info >= (3, 9):
/// ...
/// if sys.version_info >= (3, 9): ...
/// ```
#[violation]
pub struct BadVersionInfoComparison;
@@ -70,27 +68,23 @@ impl Violation for BadVersionInfoComparison {
///
/// ## Example
///
/// ```python
/// ```pyi
/// import sys
///
/// if sys.version_info < (3, 10):
///
/// def read_data(x, *, preserve_order=True): ...
///
/// else:
///
/// def read_data(x): ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// if sys.version_info >= (3, 10):
///
/// def read_data(x): ...
///
/// else:
///
/// def read_data(x, *, preserve_order=True): ...
/// ```
#[violation]


@@ -21,17 +21,16 @@ use crate::checkers::ast::Checker;
/// precisely.
///
/// ## Example
/// ```python
/// ```pyi
/// from collections import namedtuple
///
/// person = namedtuple("Person", ["name", "age"])
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from typing import NamedTuple
///
///
/// class Person(NamedTuple):
/// name: str
/// age: int


@@ -20,27 +20,24 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// from typing import TypeAlias
///
/// a = b = int
///
///
/// class Klass: ...
///
///
/// Klass.X: TypeAlias = int
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// from typing import TypeAlias
///
/// a: TypeAlias = int
/// b: TypeAlias = int
///
///
/// class Klass:
/// X: TypeAlias = int
/// ```


@@ -16,19 +16,17 @@ use crate::checkers::ast::Checker;
/// analyze your code.
///
/// ## Example
/// ```python
/// ```pyi
/// import sys
///
/// if (3, 10) <= sys.version_info < (3, 12):
/// ...
/// if (3, 10) <= sys.version_info < (3, 12): ...
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// import sys
///
/// if sys.version_info >= (3, 10) and sys.version_info < (3, 12):
/// ...
/// if sys.version_info >= (3, 10) and sys.version_info < (3, 12): ...
/// ```
///
/// ## References


@@ -25,27 +25,22 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// class Foo:
/// def __new__(cls: type[_S], *args: str, **kwargs: int) -> _S: ...
///
/// def foo(self: _S, arg: bytes) -> _S: ...
///
/// @classmethod
/// def bar(cls: type[_S], arg: int) -> _S: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// from typing import Self
///
///
/// class Foo:
/// def __new__(cls, *args: str, **kwargs: int) -> Self: ...
///
/// def foo(self, arg: bytes) -> Self: ...
///
/// @classmethod
/// def bar(cls, arg: int) -> Self: ...
/// ```


@@ -15,7 +15,7 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def func(param: int) -> str:
/// """This is a docstring."""
/// ...
@@ -23,7 +23,7 @@ use crate::checkers::ast::Checker;
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def func(param: int) -> str: ...
/// ```
#[violation]


@@ -15,14 +15,14 @@ use crate::fix;
/// is redundant.
///
/// ## Example
/// ```python
/// ```pyi
/// class Foo:
/// ...
/// value: int
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// class Foo:
/// value: int
/// ```


@@ -24,10 +24,9 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// from types import TracebackType
///
///
/// class Foo:
/// def __exit__(
/// self, typ: BaseException, exc: BaseException, tb: TracebackType
@@ -36,10 +35,9 @@ use crate::checkers::ast::Checker;
///
/// Use instead:
///
/// ```python
/// ```pyi
/// from types import TracebackType
///
///
/// class Foo:
/// def __exit__(
/// self,


@@ -22,14 +22,14 @@ use crate::settings::types::PythonVersion::Py311;
/// members).
///
/// ## Example
/// ```python
/// ```pyi
/// from typing import NoReturn
///
/// def foo(x: NoReturn): ...
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from typing import Never
///
/// def foo(x: Never): ...


@@ -15,13 +15,13 @@ use crate::checkers::ast::Checker;
/// for this purpose.
///
/// ## Example
/// ```python
/// ```pyi
/// def double(x: int) -> int:
/// return x * 2
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// def double(x: int) -> int: ...
/// ```
///


@@ -51,30 +51,23 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// class Foo:
/// def __new__(cls, *args: Any, **kwargs: Any) -> Foo: ...
///
/// def __enter__(self) -> Foo: ...
///
/// async def __aenter__(self) -> Foo: ...
///
/// def __iadd__(self, other: Foo) -> Foo: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// from typing_extensions import Self
///
///
/// class Foo:
/// def __new__(cls, *args: Any, **kwargs: Any) -> Self: ...
///
/// def __enter__(self) -> Self: ...
///
/// async def __aenter__(self) -> Self: ...
///
/// def __iadd__(self, other: Foo) -> Self: ...
/// ```
/// ## References


@@ -20,13 +20,13 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def foo(arg: int = 693568516352839939918568862861217771399698285293568) -> None: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def foo(arg: int = ...) -> None: ...
/// ```
#[violation]


@@ -15,14 +15,14 @@ use crate::fix;
/// stubs.
///
/// ## Example
/// ```python
/// ```pyi
/// class MyClass:
/// x: int
/// pass
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// class MyClass:
/// x: int
/// ```


@@ -13,12 +13,12 @@ use crate::checkers::ast::Checker;
/// in stub files.
///
/// ## Example
/// ```python
/// ```pyi
/// def foo(bar: int) -> list[int]: pass
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// def foo(bar: int) -> list[int]: ...
/// ```
///


@@ -17,13 +17,13 @@ use crate::settings::types::PythonVersion;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def foo(__x: int) -> None: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def foo(x: int, /) -> None: ...
/// ```
///


@@ -33,14 +33,14 @@ impl fmt::Display for VarKind {
/// internal to the stub.
///
/// ## Example
/// ```python
/// ```pyi
/// from typing import TypeVar
///
/// T = TypeVar("T")
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from typing import TypeVar
///
/// _T = TypeVar("_T")


@@ -16,13 +16,13 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def function() -> "int": ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def function() -> int: ...
/// ```
///


@@ -16,13 +16,13 @@ use crate::fix::snippet::SourceCodeSnippet;
///
/// ## Example
///
/// ```python
/// ```pyi
/// x: Final[Literal[42]]
/// y: Final[Literal[42]] = 42
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// x: Final = 42
/// y: Final = 42
/// ```


@@ -24,14 +24,14 @@ use crate::fix::snippet::SourceCodeSnippet;
/// supertypes of `"A"` and `1` respectively.
///
/// ## Example
/// ```python
/// ```pyi
/// from typing import Literal
///
/// x: Literal["A", b"B"] | str
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from typing import Literal
///
/// x: Literal[b"B"] | str


@@ -27,13 +27,13 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def foo(x: float | int | str) -> None: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def foo(x: float | str) -> None: ...
/// ```
///


@@ -31,13 +31,13 @@ use crate::settings::types::PythonVersion;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def foo(arg: list[int] = list(range(10_000))) -> None: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def foo(arg: list[int] = ...) -> None: ...
/// ```
///
@@ -77,13 +77,13 @@ impl AlwaysFixableViolation for TypedArgumentDefaultInStub {
///
/// ## Example
///
/// ```python
/// ```pyi
/// def foo(arg=[]) -> None: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def foo(arg=...) -> None: ...
/// ```
///
@@ -122,12 +122,12 @@ impl AlwaysFixableViolation for ArgumentDefaultInStub {
/// or varies according to the current platform or Python version.
///
/// ## Example
/// ```python
/// ```pyi
/// foo: str = "..."
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// foo: str = ...
/// ```
///
@@ -176,12 +176,12 @@ impl Violation for UnannotatedAssignmentInStub {
/// runtime counterparts.
///
/// ## Example
/// ```python
/// ```pyi
/// __all__: list[str]
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// __all__: list[str] = ["foo", "bar"]
/// ```
#[violation]
@@ -210,12 +210,12 @@ impl Violation for UnassignedSpecialVariableInStub {
/// to a normal variable assignment.
///
/// ## Example
/// ```python
/// ```pyi
/// Vector = list[float]
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from typing import TypeAlias
///
/// Vector: TypeAlias = list[float]


@@ -19,7 +19,7 @@ use crate::fix::edits::delete_stmt;
///
/// ## Example
///
/// ```python
/// ```pyi
/// class Foo:
/// def __repr__(self) -> str: ...
/// ```


@@ -23,13 +23,13 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def foo(arg: str = "51 character stringgggggggggggggggggggggggggggggggg") -> None: ...
/// ```
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def foo(arg: str = ...) -> None: ...
/// ```
#[violation]


@@ -16,7 +16,7 @@ use crate::checkers::ast::Checker;
///
/// ## Example
///
/// ```python
/// ```pyi
/// def function():
/// x = 1
/// y = 2
@@ -25,7 +25,7 @@ use crate::checkers::ast::Checker;
///
/// Use instead:
///
/// ```python
/// ```pyi
/// def function(): ...
/// ```
#[violation]


@@ -13,12 +13,12 @@ use crate::checkers::ast::Checker;
/// to distinguish them from other variables.
///
/// ## Example
/// ```python
/// ```pyi
/// type_alias_name: TypeAlias = int
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// TypeAliasName: TypeAlias = int
/// ```
#[violation]
@@ -45,14 +45,14 @@ impl Violation for SnakeCaseTypeAlias {
/// be avoided.
///
/// ## Example
/// ```python
/// ```pyi
/// from typing import TypeAlias
///
/// _MyTypeT: TypeAlias = int
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from typing import TypeAlias
///
/// _MyType: TypeAlias = int


@@ -16,12 +16,12 @@ use ruff_macros::{derive_message_formats, violation};
/// stub files are not executed at runtime. The one exception is `# type: ignore`.
///
/// ## Example
/// ```python
/// ```pyi
/// x = 1 # type: int
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// x: int = 1
/// ```
#[violation]


@@ -21,12 +21,12 @@ use crate::renamer::Renamer;
/// `set` builtin.
///
/// ## Example
/// ```python
/// ```pyi
/// from collections.abc import Set
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from collections.abc import Set as AbstractSet
/// ```
///


@@ -15,14 +15,14 @@ use crate::checkers::ast::Checker;
/// `Literal["foo"] | Literal[42]`, but is clearer and more concise.
///
/// ## Example
/// ```python
/// ```pyi
/// from typing import Literal
///
/// field: Literal[1] | Literal[2] | str
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// from typing import Literal
///
/// field: Literal[1, 2] | str


@@ -17,12 +17,12 @@ use crate::checkers::ast::Checker;
/// annotation, but is cleaner and more concise.
///
/// ## Example
/// ```python
/// ```pyi
/// field: type[int] | type[float] | str
/// ```
///
/// Use instead:
/// ```python
/// ```pyi
/// field: type[int | float] | str
/// ```
#[violation]


@@ -20,7 +20,7 @@ use crate::registry::Rule;
/// `if sys.platform == "linux"`.
///
/// ## Example
/// ```python
/// ```pyi
/// if sys.platform.startswith("linux"):
/// # Linux specific definitions
/// ...
@@ -30,7 +30,7 @@ use crate::registry::Rule;
/// ```
///
/// Instead, use a simple string comparison, such as `==` or `!=`:
/// ```python
/// ```pyi
/// if sys.platform == "linux":
/// # Linux specific definitions
/// ...
@@ -64,15 +64,13 @@ impl Violation for UnrecognizedPlatformCheck {
/// The list of known platforms is: "linux", "win32", "cygwin", "darwin".
///
/// ## Example
/// ```python
/// if sys.platform == "linus":
/// ...
/// ```pyi
/// if sys.platform == "linus": ...
/// ```
///
/// Use instead:
/// ```python
/// if sys.platform == "linux":
/// ...
/// ```pyi
/// if sys.platform == "linux": ...
/// ```
///
/// ## References
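For context, the reason the exact comparison matters: type checkers special-case `sys.platform == "..."` when narrowing platform-specific code, while `startswith()` is an ordinary method call they cannot reason about. A standalone sketch:

```python
import sys

# sys.platform is an exact lowercase string such as "linux", "darwin",
# or "win32"; type checkers only narrow on `==` / `!=` comparisons.
if sys.platform == "linux":
    print("Linux-specific definitions")
else:
    print("other platform:", sys.platform)
```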

View File

@@ -17,19 +17,17 @@ use crate::registry::Rule;
/// For example, comparing against a string can lead to unexpected behavior.
///
/// ## Example
-/// ```python
+/// ```pyi
/// import sys
///
-/// if sys.version_info[0] == "2":
-///     ...
+/// if sys.version_info[0] == "2": ...
/// ```
///
/// Use instead:
-/// ```python
+/// ```pyi
/// import sys
///
-/// if sys.version_info[0] == 2:
-///     ...
+/// if sys.version_info[0] == 2: ...
/// ```
///
/// ## References
@@ -58,19 +56,17 @@ impl Violation for UnrecognizedVersionInfoCheck {
/// and minor versions.
///
/// ## Example
-/// ```python
+/// ```pyi
/// import sys
///
-/// if sys.version_info >= (3, 4, 3):
-///     ...
+/// if sys.version_info >= (3, 4, 3): ...
/// ```
///
/// Use instead:
-/// ```python
+/// ```pyi
/// import sys
///
-/// if sys.version_info >= (3, 4):
-///     ...
+/// if sys.version_info >= (3, 4): ...
/// ```
///
/// ## References
@@ -96,19 +92,17 @@ impl Violation for PatchVersionComparison {
/// behavior.
///
/// ## Example
-/// ```python
+/// ```pyi
/// import sys
///
-/// if sys.version_info[:2] == (3,):
-///     ...
+/// if sys.version_info[:2] == (3,): ...
/// ```
///
/// Use instead:
-/// ```python
+/// ```pyi
/// import sys
///
-/// if sys.version_info[0] == 3:
-///     ...
+/// if sys.version_info[0] == 3: ...
/// ```
///
/// ## References
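The underlying pitfall is easy to demonstrate: `sys.version_info` holds integers, so comparing a component to a string is always false. A standalone sketch, not part of the rule's fix:

```python
import sys

major = sys.version_info[0]
print(type(major).__name__)  # → int
print(major == "2")          # → False on every interpreter: int never equals str
print(major == 2)            # compare against an int instead
```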

View File

@@ -16,7 +16,7 @@ use crate::checkers::ast::Checker;
/// `__all__`, which is known to be supported by all major type checkers.
///
/// ## Example
-/// ```python
+/// ```pyi
/// import sys
///
/// __all__ = ["A", "B"]
@@ -29,7 +29,7 @@ use crate::checkers::ast::Checker;
/// ```
///
/// Use instead:
-/// ```python
+/// ```pyi
/// import sys
///
/// __all__ = ["A"]

View File

@@ -16,7 +16,7 @@ use crate::checkers::ast::Checker;
/// should either be used, made public, or removed to avoid confusion.
///
/// ## Example
-/// ```python
+/// ```pyi
/// import typing
/// import typing_extensions
///
@@ -50,24 +50,21 @@ impl Violation for UnusedPrivateTypeVar {
///
/// ## Example
///
-/// ```python
+/// ```pyi
/// import typing
///
///
/// class _PrivateProtocol(typing.Protocol):
/// foo: int
/// ```
///
/// Use instead:
///
-/// ```python
+/// ```pyi
/// import typing
///
///
/// class _PrivateProtocol(typing.Protocol):
/// foo: int
///
///
/// def func(arg: _PrivateProtocol) -> None: ...
/// ```
#[violation]
@@ -93,7 +90,7 @@ impl Violation for UnusedPrivateProtocol {
///
/// ## Example
///
-/// ```python
+/// ```pyi
/// import typing
///
/// _UnusedTypeAlias: typing.TypeAlias = int
@@ -101,12 +98,11 @@ impl Violation for UnusedPrivateProtocol {
///
/// Use instead:
///
-/// ```python
+/// ```pyi
/// import typing
///
/// _UsedTypeAlias: typing.TypeAlias = int
///
///
/// def func(arg: _UsedTypeAlias) -> _UsedTypeAlias: ...
/// ```
#[violation]
@@ -132,24 +128,21 @@ impl Violation for UnusedPrivateTypeAlias {
///
/// ## Example
///
-/// ```python
+/// ```pyi
/// import typing
///
///
/// class _UnusedPrivateTypedDict(typing.TypedDict):
/// foo: list[int]
/// ```
///
/// Use instead:
///
-/// ```python
+/// ```pyi
/// import typing
///
///
/// class _UsedPrivateTypedDict(typing.TypedDict):
/// foo: set[str]
///
///
/// def func(arg: _UsedPrivateTypedDict) -> _UsedPrivateTypedDict: ...
/// ```
#[violation]
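The "use instead" pattern also works structurally at runtime; a minimal sketch with a hypothetical implementing class:

```python
import typing


class _PrivateProtocol(typing.Protocol):
    foo: int


def read_foo(arg: _PrivateProtocol) -> int:
    return arg.foo


class Impl:  # hypothetical: any class with an int `foo` satisfies the protocol
    foo = 7


print(read_foo(Impl()))  # → 7
```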

View File

@@ -1,5 +1,3 @@
-use std::fmt;
use ruff_diagnostics::{AlwaysFixableViolation, Violation};
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
@@ -20,6 +18,7 @@ use crate::registry::Rule;
use super::helpers::{
get_mark_decorators, is_pytest_fixture, is_pytest_yield_fixture, keyword_is_literal,
Parentheses,
};
/// ## What it does
@@ -605,21 +604,6 @@ impl AlwaysFixableViolation for PytestUnnecessaryAsyncioMarkOnFixture {
}
}
-#[derive(Debug, PartialEq, Eq)]
-enum Parentheses {
-    None,
-    Empty,
-}
-impl fmt::Display for Parentheses {
-    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
-        match self {
-            Parentheses::None => fmt.write_str(""),
-            Parentheses::Empty => fmt.write_str("()"),
-        }
-    }
-}
/// Visitor that skips functions
#[derive(Debug, Default)]
struct SkipFunctionsVisitor<'a> {

View File

@@ -1,3 +1,5 @@
+use std::fmt;
use ruff_python_ast::helpers::map_callable;
use ruff_python_ast::name::UnqualifiedName;
use ruff_python_ast::{self as ast, Decorator, Expr, Keyword};
@@ -93,3 +95,18 @@ pub(super) fn split_names(names: &str) -> Vec<&str> {
})
.collect::<Vec<&str>>()
}
+#[derive(Debug, PartialEq, Eq, Copy, Clone)]
+pub(super) enum Parentheses {
+    None,
+    Empty,
+}
+impl fmt::Display for Parentheses {
+    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
+        match self {
+            Parentheses::None => fmt.write_str(""),
+            Parentheses::Empty => fmt.write_str("()"),
+        }
+    }
+}

View File

@@ -7,7 +7,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::registry::Rule;
-use super::helpers::get_mark_decorators;
+use super::helpers::{get_mark_decorators, Parentheses};
/// ## What it does
/// Checks for argument-free `@pytest.mark.<marker>()` decorators with or
@@ -52,8 +52,8 @@ use super::helpers::get_mark_decorators;
#[violation]
pub struct PytestIncorrectMarkParenthesesStyle {
mark_name: String,
-    expected_parens: String,
-    actual_parens: String,
+    expected_parens: Parentheses,
+    actual_parens: Parentheses,
}
impl AlwaysFixableViolation for PytestIncorrectMarkParenthesesStyle {
@@ -71,7 +71,10 @@ impl AlwaysFixableViolation for PytestIncorrectMarkParenthesesStyle {
}
fn fix_title(&self) -> String {
-        "Add/remove parentheses".to_string()
+        match &self.expected_parens {
+            Parentheses::None => "Remove parentheses".to_string(),
+            Parentheses::Empty => "Add parentheses".to_string(),
+        }
}
}
@@ -121,14 +124,14 @@ fn pytest_mark_parentheses(
decorator: &Decorator,
marker: &str,
fix: Fix,
-    preferred: &str,
-    actual: &str,
+    preferred: Parentheses,
+    actual: Parentheses,
) {
let mut diagnostic = Diagnostic::new(
PytestIncorrectMarkParenthesesStyle {
mark_name: marker.to_string(),
-            expected_parens: preferred.to_string(),
-            actual_parens: actual.to_string(),
+            expected_parens: preferred,
+            actual_parens: actual,
},
decorator.range(),
);
@@ -153,13 +156,30 @@ fn check_mark_parentheses(checker: &mut Checker, decorator: &Decorator, marker:
&& keywords.is_empty()
{
let fix = Fix::safe_edit(Edit::deletion(func.end(), decorator.end()));
-pytest_mark_parentheses(checker, decorator, marker, fix, "", "()");
+pytest_mark_parentheses(
+    checker,
+    decorator,
+    marker,
+    fix,
+    Parentheses::None,
+    Parentheses::Empty,
+);
}
}
_ => {
if checker.settings.flake8_pytest_style.mark_parentheses {
-let fix = Fix::safe_edit(Edit::insertion("()".to_string(), decorator.end()));
-pytest_mark_parentheses(checker, decorator, marker, fix, "()", "");
+let fix = Fix::safe_edit(Edit::insertion(
+    Parentheses::Empty.to_string(),
+    decorator.end(),
+));
+pytest_mark_parentheses(
+    checker,
+    decorator,
+    marker,
+    fix,
+    Parentheses::Empty,
+    Parentheses::None,
+);
}
}
}

View File

@@ -8,7 +8,7 @@ PT023.py:46:1: PT023 [*] Use `@pytest.mark.foo` over `@pytest.mark.foo()`
47 | def test_something():
48 | pass
|
-= help: Add/remove parentheses
+= help: Remove parentheses
Safe fix
43 43 | # With parentheses
@@ -27,7 +27,7 @@ PT023.py:51:1: PT023 [*] Use `@pytest.mark.foo` over `@pytest.mark.foo()`
52 | class TestClass:
53 | def test_something():
|
-= help: Add/remove parentheses
+= help: Remove parentheses
Safe fix
48 48 | pass
@@ -47,7 +47,7 @@ PT023.py:58:5: PT023 [*] Use `@pytest.mark.foo` over `@pytest.mark.foo()`
59 | def test_something():
60 | pass
|
-= help: Add/remove parentheses
+= help: Remove parentheses
Safe fix
55 55 |
@@ -67,7 +67,7 @@ PT023.py:64:5: PT023 [*] Use `@pytest.mark.foo` over `@pytest.mark.foo()`
65 | class TestNestedClass:
66 | def test_something():
|
-= help: Add/remove parentheses
+= help: Remove parentheses
Safe fix
61 61 |
@@ -88,7 +88,7 @@ PT023.py:72:9: PT023 [*] Use `@pytest.mark.foo` over `@pytest.mark.foo()`
73 | def test_something():
74 | pass
|
-= help: Add/remove parentheses
+= help: Remove parentheses
Safe fix
69 69 |

View File

@@ -8,7 +8,7 @@ PT023.py:12:1: PT023 [*] Use `@pytest.mark.foo()` over `@pytest.mark.foo`
13 | def test_something():
14 | pass
|
-= help: Add/remove parentheses
+= help: Add parentheses
Safe fix
9 9 | # Without parentheses
@@ -27,7 +27,7 @@ PT023.py:17:1: PT023 [*] Use `@pytest.mark.foo()` over `@pytest.mark.foo`
18 | class TestClass:
19 | def test_something():
|
-= help: Add/remove parentheses
+= help: Add parentheses
Safe fix
14 14 | pass
@@ -47,7 +47,7 @@ PT023.py:24:5: PT023 [*] Use `@pytest.mark.foo()` over `@pytest.mark.foo`
25 | def test_something():
26 | pass
|
-= help: Add/remove parentheses
+= help: Add parentheses
Safe fix
21 21 |
@@ -67,7 +67,7 @@ PT023.py:30:5: PT023 [*] Use `@pytest.mark.foo()` over `@pytest.mark.foo`
31 | class TestNestedClass:
32 | def test_something():
|
-= help: Add/remove parentheses
+= help: Add parentheses
Safe fix
27 27 |
@@ -88,7 +88,7 @@ PT023.py:38:9: PT023 [*] Use `@pytest.mark.foo()` over `@pytest.mark.foo`
39 | def test_something():
40 | pass
|
-= help: Add/remove parentheses
+= help: Add parentheses
Safe fix
35 35 |

View File

@@ -165,7 +165,7 @@ fn is_open_preview(semantic: &SemanticModel, call: &ast::ExprCall) -> bool {
| "tokenize"
| "wave",
"open"
-] | ["dbm", "gnu" | "ndbm" | "dumb", "open"]
+] | ["dbm", "gnu" | "ndbm" | "dumb" | "sqlite3", "open"]
| ["fileinput", "FileInput" | "input"]
| ["io", "open" | "open_code"]
| ["lzma", "LZMAFile" | "open"]

View File

@@ -324,3 +324,11 @@ SIM115.py:244:9: SIM115 Use a context manager for opening files
| ^^^^^^^^^^^^^^ SIM115
245 | f.close()
|
SIM115.py:257:5: SIM115 Use a context manager for opening files
|
256 | # SIM115
257 | f = dbm.sqlite3.open("foo.db")
| ^^^^^^^^^^^^^^^^ SIM115
258 | f.close()
|
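The fix SIM115 asks for is the context-manager form, which closes the handle even if an exception occurs between opening and closing. A self-contained sketch with a temporary file (plain `open` rather than `dbm.sqlite3`, which requires Python 3.13):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")

# Instead of `f = open(path, "w"); ...; f.close()`:
with open(path, "w") as f:
    f.write("hello")

with open(path) as f:
    print(f.read())  # → hello
```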

View File

@@ -26,6 +26,9 @@ mod tests {
#[test_case(Rule::AmbiguousClassName, Path::new("E742.py"))]
#[test_case(Rule::AmbiguousFunctionName, Path::new("E743.py"))]
#[test_case(Rule::AmbiguousVariableName, Path::new("E741.py"))]
// E741 has different behaviour for `.pyi` files in preview mode;
// this test case checks it still has the old behaviour in stable mode
#[test_case(Rule::AmbiguousVariableName, Path::new("E741.pyi"))]
#[test_case(Rule::LambdaAssignment, Path::new("E731.py"))]
#[test_case(Rule::BareExcept, Path::new("E722.py"))]
#[test_case(Rule::BlankLineWithWhitespace, Path::new("W29.py"))]
@@ -75,6 +78,8 @@ mod tests {
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_2.py"))]
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_3.py"))]
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_4.py"))]
// E741 has different behaviour for `.pyi` files in preview mode
#[test_case(Rule::AmbiguousVariableName, Path::new("E741.pyi"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",

View File

@@ -3,6 +3,7 @@ use ruff_text_size::TextRange;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use crate::checkers::ast::Checker;
use crate::rules::pycodestyle::helpers::is_ambiguous_name;
/// ## What it does
@@ -25,6 +26,13 @@ use crate::rules::pycodestyle::helpers::is_ambiguous_name;
/// o = 123
/// i = 42
/// ```
///
/// ## Preview mode behavior for stub files
/// In [preview] mode, this rule is automatically disabled for all stub files
/// (files with `.pyi` extensions). The rule has little relevance for authors
/// of stubs: a well-written stub should aim to faithfully represent the
/// interface of the equivalent `.py` file as it exists at runtime, including any
/// ambiguously named variables in the runtime module.
#[violation]
pub struct AmbiguousVariableName(pub String);
@@ -38,13 +46,14 @@ impl Violation for AmbiguousVariableName {
}
/// E741
-pub(crate) fn ambiguous_variable_name(name: &str, range: TextRange) -> Option<Diagnostic> {
+pub(crate) fn ambiguous_variable_name(checker: &mut Checker, name: &str, range: TextRange) {
if checker.settings.preview.is_enabled() && checker.source_type.is_stub() {
return;
}
if is_ambiguous_name(name) {
-        Some(Diagnostic::new(
+        checker.diagnostics.push(Diagnostic::new(
AmbiguousVariableName(name.to_string()),
range,
-        ))
-    } else {
-        None
+        ));
}
}

View File

@@ -0,0 +1,211 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
E741.pyi:3:1: E741 Ambiguous variable name: `l`
|
1 | from contextlib import contextmanager
2 |
3 | l = 0
| ^ E741
4 | I = 0
5 | O = 0
|
E741.pyi:4:1: E741 Ambiguous variable name: `I`
|
3 | l = 0
4 | I = 0
| ^ E741
5 | O = 0
6 | l: int = 0
|
E741.pyi:5:1: E741 Ambiguous variable name: `O`
|
3 | l = 0
4 | I = 0
5 | O = 0
| ^ E741
6 | l: int = 0
|
E741.pyi:6:1: E741 Ambiguous variable name: `l`
|
4 | I = 0
5 | O = 0
6 | l: int = 0
| ^ E741
7 |
8 | a, l = 0, 1
|
E741.pyi:8:4: E741 Ambiguous variable name: `l`
|
6 | l: int = 0
7 |
8 | a, l = 0, 1
| ^ E741
9 | [a, l] = 0, 1
10 | a, *l = 0, 1, 2
|
E741.pyi:9:5: E741 Ambiguous variable name: `l`
|
8 | a, l = 0, 1
9 | [a, l] = 0, 1
| ^ E741
10 | a, *l = 0, 1, 2
11 | a = l = 0
|
E741.pyi:10:5: E741 Ambiguous variable name: `l`
|
8 | a, l = 0, 1
9 | [a, l] = 0, 1
10 | a, *l = 0, 1, 2
| ^ E741
11 | a = l = 0
|
E741.pyi:11:5: E741 Ambiguous variable name: `l`
|
9 | [a, l] = 0, 1
10 | a, *l = 0, 1, 2
11 | a = l = 0
| ^ E741
12 |
13 | o = 0
|
E741.pyi:16:5: E741 Ambiguous variable name: `l`
|
14 | i = 0
15 |
16 | for l in range(3):
| ^ E741
17 | pass
|
E741.pyi:20:8: E741 Ambiguous variable name: `l`
|
20 | for a, l in zip(range(3), range(3)):
| ^ E741
21 | pass
|
E741.pyi:25:12: E741 Ambiguous variable name: `l`
|
24 | def f1():
25 | global l
| ^ E741
26 | l = 0
|
E741.pyi:26:5: E741 Ambiguous variable name: `l`
|
24 | def f1():
25 | global l
26 | l = 0
| ^ E741
|
E741.pyi:30:5: E741 Ambiguous variable name: `l`
|
29 | def f2():
30 | l = 0
| ^ E741
31 |
32 | def f3():
|
E741.pyi:33:18: E741 Ambiguous variable name: `l`
|
32 | def f3():
33 | nonlocal l
| ^ E741
34 | l = 1
|
E741.pyi:34:9: E741 Ambiguous variable name: `l`
|
32 | def f3():
33 | nonlocal l
34 | l = 1
| ^ E741
35 |
36 | f3()
|
E741.pyi:40:8: E741 Ambiguous variable name: `l`
|
40 | def f4(l, /, I):
| ^ E741
41 | return l, I, O
|
E741.pyi:40:14: E741 Ambiguous variable name: `I`
|
40 | def f4(l, /, I):
| ^ E741
41 | return l, I, O
|
E741.pyi:44:8: E741 Ambiguous variable name: `l`
|
44 | def f5(l=0, *, I=1):
| ^ E741
45 | return l, I
|
E741.pyi:44:16: E741 Ambiguous variable name: `I`
|
44 | def f5(l=0, *, I=1):
| ^ E741
45 | return l, I
|
E741.pyi:48:9: E741 Ambiguous variable name: `l`
|
48 | def f6(*l, **I):
| ^ E741
49 | return l, I
|
E741.pyi:48:14: E741 Ambiguous variable name: `I`
|
48 | def f6(*l, **I):
| ^ E741
49 | return l, I
|
E741.pyi:57:16: E741 Ambiguous variable name: `l`
|
57 | with ctx1() as l:
| ^ E741
58 | pass
|
E741.pyi:66:20: E741 Ambiguous variable name: `l`
|
66 | with ctx2() as (a, l):
| ^ E741
67 | pass
|
E741.pyi:71:22: E741 Ambiguous variable name: `l`
|
69 | try:
70 | pass
71 | except ValueError as l:
| ^ E741
72 | pass
|
E741.pyi:74:5: E741 Ambiguous variable name: `l`
|
72 | pass
73 |
74 | if (l := 5) > 0:
| ^ E741
75 | pass
|

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---

View File

@@ -25,7 +25,8 @@ use crate::rules::pydocstyle::settings::Convention;
/// Docstrings missing return sections are a sign of incomplete documentation
/// or refactors.
///
-/// This rule is not enforced for abstract methods and stubs functions.
+/// This rule is not enforced for abstract methods, stub functions, or
+/// functions that only return `None`.
///
/// ## Example
/// ```python
@@ -494,13 +495,26 @@ fn parse_entries_numpy(content: &str) -> Vec<QualifiedName> {
entries
}
-/// An individual documentable statement in a function body.
+/// An individual `yield` expression in a function body.
#[derive(Debug)]
-struct Entry {
+struct YieldEntry {
range: TextRange,
}
-impl Ranged for Entry {
+impl Ranged for YieldEntry {
fn range(&self) -> TextRange {
self.range
}
}
/// An individual `return` statement in a function body.
#[derive(Debug)]
struct ReturnEntry {
range: TextRange,
is_none_return: bool,
}
impl Ranged for ReturnEntry {
fn range(&self) -> TextRange {
self.range
}
@@ -522,15 +536,15 @@ impl Ranged for ExceptionEntry<'_> {
/// A summary of documentable statements from the function body
#[derive(Debug)]
struct BodyEntries<'a> {
-    returns: Vec<Entry>,
-    yields: Vec<Entry>,
+    returns: Vec<ReturnEntry>,
+    yields: Vec<YieldEntry>,
raised_exceptions: Vec<ExceptionEntry<'a>>,
}
/// An AST visitor to extract a summary of documentable statements from a function body.
struct BodyVisitor<'a> {
-    returns: Vec<Entry>,
-    yields: Vec<Entry>,
+    returns: Vec<ReturnEntry>,
+    yields: Vec<YieldEntry>,
currently_suspended_exceptions: Option<&'a ast::Expr>,
raised_exceptions: Vec<ExceptionEntry<'a>>,
semantic: &'a SemanticModel<'a>,
@@ -623,9 +637,12 @@ impl<'a> Visitor<'a> for BodyVisitor<'a> {
}
Stmt::Return(ast::StmtReturn {
range,
-                value: Some(_),
+                value: Some(value),
}) => {
-                self.returns.push(Entry { range: *range });
+                self.returns.push(ReturnEntry {
+                    range: *range,
+                    is_none_return: value.is_none_literal_expr(),
+                });
}
Stmt::FunctionDef(_) | Stmt::ClassDef(_) => return,
_ => {}
@@ -640,10 +657,10 @@ impl<'a> Visitor<'a> for BodyVisitor<'a> {
range,
value: Some(_),
}) => {
-                self.yields.push(Entry { range: *range });
+                self.yields.push(YieldEntry { range: *range });
}
Expr::YieldFrom(ast::ExprYieldFrom { range, .. }) => {
-                self.yields.push(Entry { range: *range });
+                self.yields.push(YieldEntry { range: *range });
}
Expr::Lambda(_) => return,
_ => {}
@@ -737,8 +754,22 @@ pub(crate) fn check_docstring(
let extra_property_decorators = checker.settings.pydocstyle.property_decorators();
if !definition.is_property(extra_property_decorators, checker.semantic()) {
if let Some(body_return) = body_entries.returns.first() {
-            let diagnostic = Diagnostic::new(DocstringMissingReturns, body_return.range());
-            diagnostics.push(diagnostic);
+            match function_def.returns.as_deref() {
+                Some(returns) if !Expr::is_none_literal_expr(returns) => diagnostics.push(
+                    Diagnostic::new(DocstringMissingReturns, body_return.range()),
+                ),
+                None if body_entries
+                    .returns
+                    .iter()
+                    .any(|entry| !entry.is_none_return) =>
+                {
+                    diagnostics.push(Diagnostic::new(
+                        DocstringMissingReturns,
+                        body_return.range(),
+                    ));
+                }
+                _ => {}
+            }
}
}
}
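The new behavior can be illustrated with two hypothetical functions: every `return` in the first is a bare `return None`, so no "Returns" section is demanded; the second also returns a real value, so its docstring still needs one:

```python
def reset(cache: dict) -> None:
    """Clear the cache in place."""
    if not cache:
        return None  # only-None returns: no "Returns" section required
    cache.clear()
    return None


def head(items: list):
    """Get the first item.

    Returns:
        The first element, or None if `items` is empty.
    """
    if not items:
        return None
    return items[0]  # a value-returning path: DOC201 still applies


print(head([1, 2]))  # → 1
```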

View File

@@ -38,3 +38,34 @@ DOC201_google.py:121:9: DOC201 `return` is not documented in docstring
| ^^^^^^^^^^^ DOC201
|
= help: Add a "Returns" section to the docstring
DOC201_google.py:184:9: DOC201 `return` is not documented in docstring
|
182 | """
183 | if x < 0:
184 | return None
| ^^^^^^^^^^^ DOC201
185 | else:
186 | return x
|
= help: Add a "Returns" section to the docstring
DOC201_google.py:197:9: DOC201 `return` is not documented in docstring
|
195 | """
196 | if x < 0:
197 | return None
| ^^^^^^^^^^^ DOC201
198 | else:
199 | return x
|
= help: Add a "Returns" section to the docstring
DOC201_google.py:209:5: DOC201 `return` is not documented in docstring
|
207 | s (str): A string.
208 | """
209 | return None
| ^^^^^^^^^^^ DOC201
|
= help: Add a "Returns" section to the docstring

View File

@@ -27,3 +27,34 @@ DOC201_numpy.py:87:9: DOC201 `return` is not documented in docstring
| ^^^^^^^^^^^ DOC201
|
= help: Add a "Returns" section to the docstring
DOC201_numpy.py:160:9: DOC201 `return` is not documented in docstring
|
158 | """
159 | if x < 0:
160 | return None
| ^^^^^^^^^^^ DOC201
161 | else:
162 | return x
|
= help: Add a "Returns" section to the docstring
DOC201_numpy.py:175:9: DOC201 `return` is not documented in docstring
|
173 | """
174 | if x < 0:
175 | return None
| ^^^^^^^^^^^ DOC201
176 | else:
177 | return x
|
= help: Add a "Returns" section to the docstring
DOC201_numpy.py:189:5: DOC201 `return` is not documented in docstring
|
187 | A string.
188 | """
189 | return None
| ^^^^^^^^^^^ DOC201
|
= help: Add a "Returns" section to the docstring

View File

@@ -57,7 +57,7 @@ impl AlwaysFixableViolation for InvalidCharacterBackspace {
///
/// Use instead:
/// ```python
-/// x = "\x1A"
+/// x = "\x1a"
/// ```
#[violation]
pub struct InvalidCharacterSub;
@@ -90,7 +90,7 @@ impl AlwaysFixableViolation for InvalidCharacterSub {
///
/// Use instead:
/// ```python
-/// x = "\x1B"
+/// x = "\x1b"
/// ```
#[violation]
pub struct InvalidCharacterEsc;
@@ -155,7 +155,7 @@ impl AlwaysFixableViolation for InvalidCharacterNul {
///
/// Use instead:
/// ```python
-/// x = "Dear Sir\u200B/\u200BMadam"  # zero width space
+/// x = "Dear Sir\u200b/\u200bMadam"  # zero width space
/// ```
#[violation]
pub struct InvalidCharacterZeroWidthSpace;
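The casing change is purely cosmetic: hex and Unicode escapes in Python string literals are case-insensitive in their digits, so both spellings denote the same characters:

```python
# Lowercase and uppercase hex digits in escapes produce identical strings.
print("\x1b" == "\x1B")      # → True (both are ESC, code point 27)
print("\u200b" == "\u200B")  # → True (both are ZERO WIDTH SPACE)
print(ord("\x1a"))           # → 26 (SUB)
```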

View File

@@ -101,18 +101,18 @@ fn collect_nested_args(min_max: MinMax, args: &[Expr], semantic: &SemanticModel)
range: _,
}) = arg
{
-        if let [arg] = &**args {
-            if arg.as_starred_expr().is_none() {
-                let new_arg = Expr::Starred(ast::ExprStarred {
-                    value: Box::new(arg.clone()),
-                    ctx: ast::ExprContext::Load,
-                    range: TextRange::default(),
-                });
-                new_args.push(new_arg);
-                continue;
-            }
-        }
        if MinMax::try_from_call(func, keywords, semantic) == Some(min_max) {
+            if let [arg] = &**args {
+                if arg.as_starred_expr().is_none() {
+                    let new_arg = Expr::Starred(ast::ExprStarred {
+                        value: Box::new(arg.clone()),
+                        ctx: ast::ExprContext::Load,
+                        range: TextRange::default(),
+                    });
+                    new_args.push(new_arg);
+                    continue;
+                }
+            }
inner(min_max, args, semantic, new_args);
continue;
}

View File

@@ -312,3 +312,33 @@ nested_min_max.py:44:1: PLW3301 [*] Nested `min` calls can be flattened
43 43 | import builtins
44 |-builtins.min(1, min(2, 3))
44 |+builtins.min(1, 2, 3)
45 45 |
46 46 |
47 47 | # PLW3301
nested_min_max.py:48:16: PLW3301 [*] Nested `max` calls can be flattened
|
47 | # PLW3301
48 | max_word_len = max(
| ________________^
49 | | max(len(word) for word in "blah blah blah".split(" ")),
50 | | len("Done!"),
51 | | )
| |_^ PLW3301
52 |
53 | # OK
|
= help: Flatten nested `max` calls
Unsafe fix
45 45 |
46 46 |
47 47 | # PLW3301
48 |-max_word_len = max(
49 |- max(len(word) for word in "blah blah blah".split(" ")),
50 |- len("Done!"),
51 |-)
48 |+max_word_len = max(*(len(word) for word in "blah blah blah".split(" ")), len("Done!"))
52 49 |
53 50 | # OK
54 51 | max_word_len = max(
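The flattened fix relies on argument splatting being equivalent to the nested call; a standalone check of that equivalence, with the strings taken from the snapshot:

```python
words = "blah blah blah".split(" ")

# Nested form, as written in the source:
nested = max(max(len(word) for word in words), len("Done!"))
# Flattened form, as produced by the fix (the generator is splatted):
flat = max(*(len(word) for word in words), len("Done!"))

print(nested, flat)  # → 5 5
```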

View File

@@ -112,7 +112,7 @@ RUF100_3.py:6:10: RUF100 [*] Unused blanket `noqa` directive
4 4 | print() # noqa # comment
5 5 | print() # noqa # comment
6 |-print() # noqa comment
6 |+print() # comment
6 |+print()
7 7 | print() # noqa comment
8 8 | print(a) # noqa
9 9 | print(a) # noqa # comment
@@ -133,7 +133,7 @@ RUF100_3.py:7:10: RUF100 [*] Unused blanket `noqa` directive
5 5 | print() # noqa # comment
6 6 | print() # noqa comment
7 |-print() # noqa comment
7 |+print() # comment
7 |+print()
8 8 | print(a) # noqa
9 9 | print(a) # noqa # comment
10 10 | print(a) # noqa # comment
@@ -257,7 +257,7 @@ RUF100_3.py:19:10: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
17 17 | print() # noqa: E501, F821 # comment
18 18 | print() # noqa: E501, F821 # comment
19 |-print() # noqa: E501, F821 comment
19 |+print() # comment
19 |+print()
20 20 | print() # noqa: E501, F821 comment
21 21 | print(a) # noqa: E501, F821
22 22 | print(a) # noqa: E501, F821 # comment
@@ -278,7 +278,7 @@ RUF100_3.py:20:10: RUF100 [*] Unused `noqa` directive (unused: `E501`, `F821`)
18 18 | print() # noqa: E501, F821 # comment
19 19 | print() # noqa: E501, F821 comment
20 |-print() # noqa: E501, F821 comment
20 |+print() # comment
20 |+print()
21 21 | print(a) # noqa: E501, F821
22 22 | print(a) # noqa: E501, F821 # comment
23 23 | print(a) # noqa: E501, F821 # comment
@@ -428,5 +428,3 @@ RUF100_3.py:28:39: RUF100 [*] Unused `noqa` directive (unused: `E501`)
27 27 | print(a) # comment with unicode µ # noqa: E501
28 |-print(a) # comment with unicode µ # noqa: E501, F821
28 |+print(a) # comment with unicode µ # noqa: F821

View File

@@ -24,6 +24,8 @@ RUF100_5.py:11:1: ERA001 Found commented-out code
|
11 | #import os # noqa: E501
| ^^^^^^^^^^^^^^^^^^^^^^^^ ERA001
12 |
13 | def f():
|
= help: Remove commented-out code
@@ -32,11 +34,16 @@ RUF100_5.py:11:1: ERA001 Found commented-out code
9 9 |
10 10 |
11 |-#import os # noqa: E501
12 11 |
13 12 | def f():
14 13 | data = 1
RUF100_5.py:11:13: RUF100 [*] Unused `noqa` directive (unused: `E501`)
|
11 | #import os # noqa: E501
| ^^^^^^^^^^^^ RUF100
12 |
13 | def f():
|
= help: Remove unused `noqa` directive
@@ -46,5 +53,43 @@ RUF100_5.py:11:13: RUF100 [*] Unused `noqa` directive (unused: `E501`)
10 10 |
11 |-#import os # noqa: E501
11 |+#import os
12 12 |
13 13 | def f():
14 14 | data = 1
RUF100_5.py:16:18: RUF100 [*] Unused `noqa` directive (non-enabled: `RET504`)
|
14 | data = 1
15 | # line below should autofix to `return data # fmt: skip`
16 | return data # noqa: RET504 # fmt: skip
| ^^^^^^^^^^^^^^ RUF100
17 |
18 | def f():
|
= help: Remove unused `noqa` directive
Safe fix
13 13 | def f():
14 14 | data = 1
15 15 | # line below should autofix to `return data # fmt: skip`
16 |- return data # noqa: RET504 # fmt: skip
16 |+ return data # fmt: skip
17 17 |
18 18 | def f():
19 19 | data = 1
RUF100_5.py:21:18: RUF100 [*] Unused `noqa` directive (non-enabled: `RET504`)
|
19 | data = 1
20 | # line below should autofix to `return data`
21 | return data # noqa: RET504 - intentional incorrect noqa, will be removed
| ^^^^^^^^^^^^^^ RUF100
|
= help: Remove unused `noqa` directive
Safe fix
18 18 | def f():
19 19 | data = 1
20 20 | # line below should autofix to `return data`
21 |- return data # noqa: RET504 - intentional incorrect noqa, will be removed
21 |+ return data

View File

@@ -299,8 +299,10 @@ pub fn is_mutable_return_type(qualified_name: &[&str]) -> bool {
pub fn is_immutable_return_type(qualified_name: &[&str]) -> bool {
matches!(
qualified_name,
-        ["datetime", "date" | "datetime" | "timedelta"]
-            | ["decimal", "Decimal"]
+        [
+            "datetime",
+            "date" | "datetime" | "time" | "timedelta" | "timezone" | "tzinfo"
+        ] | ["decimal", "Decimal"]
| ["fractions", "Fraction"]
| ["operator", "attrgetter" | "itemgetter" | "methodcaller"]
| ["pathlib", "Path"]
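The additions are sound because these `datetime` types are all immutable (and therefore hashable), which is what makes values returned from them safe to treat as immutable; a quick runtime check with concrete instances (`tzinfo` itself is abstract):

```python
import datetime

for obj in (
    datetime.date(2024, 1, 1),
    datetime.datetime(2024, 1, 1),
    datetime.time(12, 0),
    datetime.timedelta(days=1),
    datetime.timezone.utc,  # a concrete tzinfo instance
):
    hash(obj)  # immutable types are hashable; no mutation APIs exist
    print(type(obj).__name__)
```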

Some files were not shown because too many files have changed in this diff.