## Summary
This PR fixes the `show_*_msg` macros to pass all the tokens instead of
just a single token. This allows using various expressions directly in
the macro, similar to how it would be in `format_args!`.
## Test Plan
`cargo clippy`
## Summary
This PR creates separate functions to check whether the document path is
excluded for linting or formatting. The main motivation is to avoid the
double `Option` at the call sites and to make passing the correct
settings simpler.
## Summary
This changeset adds new tests for public uses of symbols,
considering all possible declaredness and boundness states.
Note that this is a mere documentation of the current behavior. There is
still an [open ticket] questioning some of these choices (or unintentional
behaviors).
## Test plan
Made sure that the respective test fails if I add the questionable case
again in `symbol_by_id`:
```rs
Symbol::Type(inferred_ty, Boundness::Bound) => {
    Symbol::Type(inferred_ty, Boundness::Bound)
}
```
[open ticket]: https://github.com/astral-sh/ruff/issues/14297
## Summary
Replace the typo "security_managr" in AIR303 with "security_manager".
## Test Plan
A test fixture has been updated.
## Summary
Simplification follow-up to #15413.
There's no need to have a dedicated `CallOutcome` variant for every
known function, it's only necessary if the special-cased behavior of the
known function includes emitting extra diagnostics. For `typing.cast`,
there's no such need; we can use the regular `Callable` outcome variant,
and update the return type according to the cast. (This is the same way
we already handle `len`.)
One reason to avoid proliferating unnecessary `CallOutcome` variants is
that currently we have to explicitly add call-binding diagnostic emission
for each outcome variant. So we were previously wrongly silencing any
binding diagnostics on calls to `typing.cast`. Fixing this revealed a
separate bug: we were emitting a bogus error anytime more than one
keyword argument mapped to a `**kwargs` parameter. So this PR also adds a
test and a fix for that bug.
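For illustration, a call of the kind that previously triggered the bogus
`**kwargs` error (a minimal sketch, not one of the actual test cases):
```py
def f(**kwargs: int) -> None: ...

# Both `a` and `b` map to the `**kwargs` parameter; this is a valid call
# and should not produce any call-binding diagnostic.
f(a=1, b=2)
```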
## Test Plan
Existing `cast` tests pass unchanged; added a new test for the `**kwargs` bug.
## Summary
While trying out https://github.com/astral-sh/ruff-vscode/issues/665, I
noticed that we use the `Display` implementation to show the error, which
hides the context. This PR changes it to use the `Debug` implementation
and adds the message as context.
## Test Plan
**Before:**
```
0.001228084s ERROR main ruff_server::session::index::ruff_settings: Unable to find editor-specified configuration file: Failed to parse /private/tmp/hatch-test/ruff.toml
```
**After:**
```
0.002348750s ERROR main ruff_server::session::index::ruff_settings: Unable to load editor-specified configuration file
Caused by:
0: Failed to parse /private/tmp/hatch-test/ruff.toml
1: TOML parse error at line 2, column 18
|
2 | extend-select = ["ASYNC101"]
| ^^^^^^^^^^
Unknown rule selector: `ASYNC101`
```
## Summary
In `SymbolState` merging, use `BitSet::union` instead of inserting
declarations one by one. This used to be the case but was changed in
https://github.com/astral-sh/ruff/pull/15019 because we had to iterate
over declarations anyway.
This is an alternative to https://github.com/astral-sh/ruff/pull/15419
by @MichaReiser. It's similar in performance, but a bit more
declarative and less imperative.
## Summary
Follow-up PR from https://github.com/astral-sh/ruff/pull/15415 🥲
The exact same property test already exists:
`intersection_assignable_to_both` and
`all_type_pairs_can_be_assigned_from_their_intersection`
## Test Plan
`cargo test -p red_knot_python_semantic -- --ignored
types::property_tests::flaky`
## Summary
Implements upstream diagnostics `PT029`, `PT030`, and `PT031`, which
function as `pytest.warns` corollaries of `PT010`, `PT011`, and `PT012`
respectively.
Most of the implementation and documentation is designed to mirror those
existing diagnostics.
Closes #14239
## Test Plan
Tests for `PT029`, `PT030`, `PT031` largely copied from `PT010`,
`PT011`, `PT012` respectively.
`cargo nextest run`
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
A small PR to reduce some of the code duplication between the various
branches, make it a little more readable, and move the API closer to
what we already have for `KnownClass`.
## Summary
This bug was introduced by https://github.com/astral-sh/ruff/pull/12575,
which was itself a bug fix, but that fix wasn't completely correct.
fixes: #14768
fixes: https://github.com/astral-sh/ruff-vscode/issues/644
## Test Plan
Consider the following three cells:
1.
```python
class Foo:
def __init__(self):
self.x = 1
def __str__(self):
return f"Foo({self.x})"
```
2.
```python
def hello():
print("hello world")
```
3.
```python
y = 1
```
The test case moves cell 2 to the top, i.e., cell 2 goes to position 1
and cell 1 goes to position 2.
Before this fix, the cells were pushed to the end of the vector:
```
12.643269917s INFO ruff:main ruff_server::edit:📓 Before update: [
NotebookCell {
document: TextDocument {
contents: "class Foo:\n def __init__(self):\n self.x = 1\n\n def __str__(self):\n return f\"Foo({self.x})\"",
},
},
NotebookCell {
document: TextDocument {
contents: "def hello():\n print(\"hello world\")",
},
},
NotebookCell {
document: TextDocument {
contents: "y = 1",
},
},
]
12.643777667s INFO ruff:main ruff_server::edit:📓 After update: [
NotebookCell {
document: TextDocument {
contents: "y = 1",
},
},
NotebookCell {
document: TextDocument {
contents: "class Foo:\n def __init__(self):\n self.x = 1\n\n def __str__(self):\n return f\"Foo({self.x})\"",
},
},
NotebookCell {
document: TextDocument {
contents: "def hello():\n print(\"hello world\")",
},
},
]
```
After the fix in this PR, the cells are inserted at the correct `start`
index:
```
6.520570917s INFO ruff:main ruff_server::edit:📓 Before update: [
NotebookCell {
document: TextDocument {
contents: "class Foo:\n def __init__(self):\n self.x = 1\n\n def __str__(self):\n return f\"Foo({self.x})\"",
},
},
NotebookCell {
document: TextDocument {
contents: "def hello():\n print(\"hello world\")",
},
},
NotebookCell {
document: TextDocument {
contents: "y = 1",
},
},
]
6.521084792s INFO ruff:main ruff_server::edit:📓 After update: [
NotebookCell {
document: TextDocument {
contents: "def hello():\n print(\"hello world\")",
},
},
NotebookCell {
document: TextDocument {
contents: "class Foo:\n def __init__(self):\n self.x = 1\n\n def __str__(self):\n return f\"Foo({self.x})\"",
},
},
NotebookCell {
document: TextDocument {
contents: "y = 1",
},
},
]
```
## Summary
[**Rendered version of the new test
suite**](https://github.com/astral-sh/ruff/blob/david/intersection-type-tests/crates/red_knot_python_semantic/resources/mdtest/intersection_types.md)
Moves most of our existing intersection-types tests to a dedicated
Markdown test suite, extends the test coverage, unifies the notation for
these tests, groups tests into a proper structure, and adds some
explanations for various simplification strategies.
This changeset also:
- Adds a new simplification where `~Never` is removed from
intersections.
- Adds a new simplification where adding `~object` simplifies the whole
intersection to `Never`.
- Avoids unnecessary assignment-checks between inferred and declared
type. This was added to this changeset to avoid many false positive
errors in this test suite.
Resolves the task described in this old comment
[here](e01da82a5a..e7e432bca2 (r1819924085)).
## Test Plan
Running the new Markdown tests
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Prompted by
> One nit: I think we need to consider `Any` and `Unknown` and `Todo` as
> all (gradually) equivalent to each other, and thus `type & Any` and
> `type & Unknown` and `type & Todo` as also equivalent. The distinction
> between `Any` vs `Unknown` vs `Todo` is entirely about
> provenance/debugging, there is no type level distinction. (And I've been
> wondering if the `Any` vs `Unknown` distinction is really worth it.)
The thought here is that _most_ places want to treat `Any`, `Unknown`,
and `Todo` identically. So this PR simplifies things by having a single
`Type::Any` variant, and moves the provenance part into a new `AnyType`
type. If you need to treat e.g. `Todo` differently, you still can by
pattern-matching into the `AnyType`. But if you don't, you can just use
`Type::Any(_)`.
(This would also allow us to (more easily) distinguish "unknown via an
unannotated value" from "unknown because of a typing error" should we
want to do that in the future)
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This moves almost all of our existing `UnionBuilder` tests to a
Markdown-based test suite.
I see how this could be a more controversial change, since these tests
were written specifically for `UnionBuilder`, and by creating the union
types using Python type expressions, we add an additional layer on top
(parsing and inference of these expressions) that moves these tests away
from clean unit tests more in the direction of integration tests. Also,
there are probably a few implementation details of `UnionBuilder` hidden
in the test assertions (e.g. order of union elements after
simplifications).
That said, I think we would like to see all those properties that are
being tested here from *any* implementation of union types. And the
Markdown tests come with the usual advantages:
- More concise
- Better readability
- No re-compilation when working on tests
- Easier to add additional explanations and structure to the test suite
This changeset adds a few additional tests, but keeps the logic of the
existing tests except for a few minor modifications for consistency.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: T-256 <132141463+T-256@users.noreply.github.com>
## Summary
The symlink-approach in the typeshed-sync workflow caused some problems
on Windows, even though it seemed to work fine in CI:
https://github.com/astral-sh/ruff/pull/15138#issuecomment-2578642129
Here, we rely on `build.rs` to patch typeshed instead, which allows us
to get rid of the modifications in the workflow (thank you
@MichaReiser for the idea).
## Test Plan
- Made sure that changes to `knot_extensions.pyi` result in a recompile
of `red_knot_vendored`.
Stabilise [`slice-to-remove-prefix-or-suffix`](https://docs.astral.sh/ruff/rules/slice-to-remove-prefix-or-suffix/) (`FURB188`) for the Ruff 0.9 release.
This is a stylistic rule, but I think it's a pretty uncontroversial one. There are no open issues or PRs regarding it and it's been in preview for a while now.
More refinements to the panic messages for failing mdtests to mimic the
output of the default panic hook more closely:
- We now print out `Box<dyn Any>` if the panic payload is not a string
(which is typically the case for salsa panics).
- We now include the panic's backtrace if you set the `RUST_BACKTRACE`
environment variable.
## Summary
- Add a workflow to run property tests on a daily basis (based on
`daily_fuzz.yaml`)
- Mark `assignable_to_is_reflexive` as flaky (related to #14899)
- Add new (failing) `intersection_assignable_to_both` test (also related
to #14899)
## Test Plan
Ran:
```bash
export QUICKCHECK_TESTS=100000
while cargo test --release -p red_knot_python_semantic -- \
--ignored types::property_tests::stable; do :; done
```
Observed successful property_tests CI run
## Summary
This changeset migrates all existing `is_assignable_to` tests to a
Markdown-based test. It also increases our test coverage in a hopefully
meaningful way (not claiming to be complete in any sense). But at least
I found and fixed one bug while doing so.
## Test Plan
Ran property tests to make sure the new test succeeds after fixing it.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
This fixes #15317. Our `catch_unwind` wrapper installs a panic hook that
captures (the rendered contents of) the panic info when a panic occurs.
Since the intent is that the caller will render the panic info in some
custom way, the hook silences the default stderr panic output.
However, the panic hook is a global resource, so if any one thread was
in the middle of a `catch_unwind` call, we would silence the default
panic output for _all_ threads.
The solution is to also keep a thread local that indicates whether the
current thread is in the middle of our `catch_unwind`, and to fall back
on the default panic hook if not.
## Test Plan
Artificially added an mdtest parse error, ran tests via `cargo test -p
red_knot_python_semantic` to run a large number of tests in parallel.
Before this patch, the panic message was swallowed as reported in
#15317. After, the panic message was shown.
## Summary
Fixes the infinite loop issue reported in #15248.
The issue was caused by the `break` inside the `if` block, which caused
the flow to exit in an unforeseen way. This caused other issues,
eventually leading to an infinite loop.
Resolves #15248. Resolves #15336.
## Test Plan
Added failing code to fixture.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: dylwil3 <dylwil3@gmail.com>
## Summary
Refer to the VS Code PR
(https://github.com/astral-sh/ruff-vscode/pull/659) for details on the
change.
This PR changes the following:
1. Add tracing span for both request (request id and method name) and
notification (method name) handler
2. Remove the `RUFF_TRACE` environment variable. This was being used to
turn on / off logging for the server
3. Similarly, remove reading the `trace` value from the initialization
options
4. Remove handling the `$/setTrace` notification
5. Remove the specialized `TraceLogWriter` used for Zed and VS Code
(https://github.com/astral-sh/ruff/pull/12564)
Regarding (5) for the Zed editor: that was originally implemented
because there was no way of looking at the stderr messages in the
editor, which has since changed. Zed now captures stderr as part of
the "Server Logs".
(82492d74a8/crates/language_tools/src/lsp_log.rs (L548-L552))
### Question
Regarding (1), I think having just a simple trace level message should
be good for now as the spans are not hierarchical. This could be tackled
with #12744. The difference between the two:
<details><summary>Using <code>tracing::trace</code></summary>
<p>
```
0.019243416s DEBUG ThreadId(08) ruff_server::session::index::ruff_settings: Ignored path via `exclude`: /Users/dhruv/playground/ruff/.vscode
0.026398750s INFO main ruff_server::session::index: Registering workspace: /Users/dhruv/playground/ruff
0.026802125s TRACE ruff:main ruff_server::server::api: Received notification "textDocument/didOpen"
0.026930666s TRACE ruff:main ruff_server::server::api: Received notification "textDocument/didOpen"
0.026962333s TRACE ruff:main ruff_server::server::api: Received request "textDocument/diagnostic" (1)
0.027042875s TRACE ruff:main ruff_server::server::api: Received request "textDocument/diagnostic" (2)
0.027097500s TRACE ruff:main ruff_server::server::api: Received request "textDocument/codeAction" (3)
0.027107458s DEBUG ruff:worker:0 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
0.027123541s DEBUG ruff:worker:3 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/organize_imports.py
0.027514875s INFO ruff:main ruff_server::server: Configuration file watcher successfully registered
0.285689833s TRACE ruff:main ruff_server::server::api: Received request "textDocument/codeAction" (4)
45.741101666s TRACE ruff:main ruff_server::server::api: Received notification "textDocument/didClose"
47.108745500s TRACE ruff:main ruff_server::server::api: Received notification "textDocument/didOpen"
47.109802041s TRACE ruff:main ruff_server::server::api: Received request "textDocument/diagnostic" (5)
47.109926958s TRACE ruff:main ruff_server::server::api: Received request "textDocument/codeAction" (6)
47.110027791s DEBUG ruff:worker:6 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
51.863679125s TRACE ruff:main ruff_server::server::api: Received request "textDocument/hover" (7)
```
</p>
</details>
<details><summary>Using <code>tracing::trace_span</code></summary>
<p>
Only logging the enter event:
```
0.018638750s DEBUG ThreadId(11) ruff_server::session::index::ruff_settings: Ignored path via `exclude`: /Users/dhruv/playground/ruff/.vscode
0.025895791s INFO main ruff_server::session::index: Registering workspace: /Users/dhruv/playground/ruff
0.026378791s TRACE ruff:main notification{method="textDocument/didOpen"}: ruff_server::server::api: enter
0.026531208s TRACE ruff:main notification{method="textDocument/didOpen"}: ruff_server::server::api: enter
0.026567583s TRACE ruff:main request{id=1 method="textDocument/diagnostic"}: ruff_server::server::api: enter
0.026652541s TRACE ruff:main request{id=2 method="textDocument/diagnostic"}: ruff_server::server::api: enter
0.026711041s DEBUG ruff:worker:2 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/organize_imports.py
0.026729166s DEBUG ruff:worker:1 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
0.027023083s INFO ruff:main ruff_server::server: Configuration file watcher successfully registered
5.197554750s TRACE ruff:main notification{method="textDocument/didClose"}: ruff_server::server::api: enter
6.534458000s TRACE ruff:main notification{method="textDocument/didOpen"}: ruff_server::server::api: enter
6.535027958s TRACE ruff:main request{id=3 method="textDocument/diagnostic"}: ruff_server::server::api: enter
6.535271166s DEBUG ruff:worker:3 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/organize_imports.py
6.544240583s TRACE ruff:main request{id=4 method="textDocument/codeAction"}: ruff_server::server::api: enter
7.049692458s TRACE ruff:main request{id=5 method="textDocument/codeAction"}: ruff_server::server::api: enter
7.508142541s TRACE ruff:main request{id=6 method="textDocument/hover"}: ruff_server::server::api: enter
7.872421958s TRACE ruff:main request{id=7 method="textDocument/hover"}: ruff_server::server::api: enter
8.024498583s TRACE ruff:main request{id=8 method="textDocument/codeAction"}: ruff_server::server::api: enter
13.895063666s TRACE ruff:main request{id=9 method="textDocument/codeAction"}: ruff_server::server::api: enter
14.774706083s TRACE ruff:main request{id=10 method="textDocument/hover"}: ruff_server::server::api: enter
16.058918958s TRACE ruff:main notification{method="textDocument/didChange"}: ruff_server::server::api: enter
16.060562208s TRACE ruff:main request{id=11 method="textDocument/diagnostic"}: ruff_server::server::api: enter
16.061109083s DEBUG ruff:worker:8 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
21.561742875s TRACE ruff:main notification{method="textDocument/didChange"}: ruff_server::server::api: enter
21.563573791s TRACE ruff:main request{id=12 method="textDocument/diagnostic"}: ruff_server::server::api: enter
21.564206750s DEBUG ruff:worker:4 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
21.826691375s TRACE ruff:main request{id=13 method="textDocument/codeAction"}: ruff_server::server::api: enter
22.091080125s TRACE ruff:main request{id=14 method="textDocument/codeAction"}: ruff_server::server::api: enter
```
</p>
</details>
**Todo**
- [x] Update documentation (I'll be adding a troubleshooting section
under "Editors" as a follow-up which is for all editors)
- [x] Check for backwards compatibility. I don't think this should break
backwards compatibility as it's mainly targeted towards improving the
debugging experience.
~**Before I go on to updating the documentation, I'd appreciate initial
review on the chosen approach.**~
resolves: #14959
## Test Plan
Refer to the test plan in
https://github.com/astral-sh/ruff-vscode/pull/659.
Example logs at `debug` level:
```
0.010770083s DEBUG ThreadId(15) ruff_server::session::index::ruff_settings: Ignored path via `exclude`: /Users/dhruv/playground/ruff/.vscode
0.018101916s INFO main ruff_server::session::index: Registering workspace: /Users/dhruv/playground/ruff
0.018559916s DEBUG ruff:worker:4 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
0.018992375s INFO ruff:main ruff_server::server: Configuration file watcher successfully registered
23.408802375s DEBUG ruff:worker:11 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
24.329127416s DEBUG ruff:worker:6 ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
```
Example logs at `trace` level:
```
0.010296375s DEBUG ThreadId(13) ruff_server::session::index::ruff_settings: Ignored path via `exclude`: /Users/dhruv/playground/ruff/.vscode
0.017422583s INFO main ruff_server::session::index: Registering workspace: /Users/dhruv/playground/ruff
0.018034458s TRACE ruff:main notification{method="textDocument/didOpen"}: ruff_server::server::api: enter
0.018199708s TRACE ruff:worker:0 request{id=1 method="textDocument/diagnostic"}: ruff_server::server::api: enter
0.018251167s DEBUG ruff:worker:0 request{id=1 method="textDocument/diagnostic"}: ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
0.018528708s INFO ruff:main ruff_server::server: Configuration file watcher successfully registered
1.611798417s TRACE ruff:worker:1 request{id=2 method="textDocument/codeAction"}: ruff_server::server::api: enter
1.861757542s TRACE ruff:worker:4 request{id=3 method="textDocument/codeAction"}: ruff_server::server::api: enter
7.027361792s TRACE ruff:worker:2 request{id=4 method="textDocument/codeAction"}: ruff_server::server::api: enter
7.851361500s TRACE ruff:worker:5 request{id=5 method="textDocument/codeAction"}: ruff_server::server::api: enter
7.901690875s TRACE ruff:main notification{method="textDocument/didChange"}: ruff_server::server::api: enter
7.903063167s TRACE ruff:worker:10 request{id=6 method="textDocument/diagnostic"}: ruff_server::server::api: enter
7.903183500s DEBUG ruff:worker:10 request{id=6 method="textDocument/diagnostic"}: ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
8.702385292s TRACE ruff:main notification{method="textDocument/didChange"}: ruff_server::server::api: enter
8.704106625s TRACE ruff:worker:3 request{id=7 method="textDocument/diagnostic"}: ruff_server::server::api: enter
8.704304875s DEBUG ruff:worker:3 request{id=7 method="textDocument/diagnostic"}: ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/play.py
8.966853458s TRACE ruff:worker:9 request{id=8 method="textDocument/codeAction"}: ruff_server::server::api: enter
9.229622792s TRACE ruff:worker:6 request{id=9 method="textDocument/codeAction"}: ruff_server::server::api: enter
10.513111583s TRACE ruff:worker:7 request{id=10 method="textDocument/codeAction"}: ruff_server::server::api: enter
```
## Summary
Adds a type-check-time Python API that allows us to create and
manipulate types and to test various of their properties. For example,
this can be used to write a Markdown test to make sure that `A & B` is a
subtype of `A` and `B`, but not of an unrelated class `C` (something
that requires quite a bit more code to do in Rust):
```py
from knot_extensions import Intersection, is_subtype_of, static_assert
class A: ...
class B: ...
type AB = Intersection[A, B]
static_assert(is_subtype_of(AB, A))
static_assert(is_subtype_of(AB, B))
class C: ...
static_assert(not is_subtype_of(AB, C))
```
I think this functionality is also helpful for interactive debugging
sessions, in order to query various properties of Red Knot's type
system. This is something that otherwise requires a custom Rust unit
test, some boilerplate code, and constant re-compilation.
## Test Plan
- New Markdown tests
- Tested the modified typeshed_sync workflow locally
## Summary
`Type[Any]` should be assignable to `object`. All types should be
assignable to `object`.
We specifically didn't understand the former; this PR adds a test for
it, and a case to ensure that `Type[Any]` is assignable to anything that
`type` is assignable to (which includes `object`).
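For example (a minimal sketch of the behavior being tested, not the
actual test case):
```py
from typing import Any

def f(x: type[Any], y: type):
    a: object = x  # `type[Any]` is assignable to `object` ...
    b: type = x    # ... and to anything that `type` is assignable to
    c: object = y  # more generally, every type is assignable to `object`
```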
This PR also adds a property test that all types are assignable to
object. In order to make it pass, I added a special case to check early
if we are assigning to `object` and just return `true`. In principle,
once we get all the more general cases correct, this special case might
be removable. But having the special case for now allows the property
test to pass.
We also add a property test that all types are subtypes of `object`. This
failed for the case of an intersection with no positive elements (that
is, a negation type). This really does need to be a special case for
`object`, because there is no other type we can know that a negation
type is a subtype of.
## Test Plan
Added unit test and property test.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
When removing `int` in calls like `int(expr)` we may need to keep
parentheses around `expr` even when it is a function call or subscript,
since there may be newlines in between the function/value name and the
opening parentheses/bracket of the argument.
This PR implements that logic.
Closes #15263
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
We now support class patterns in a match statement, adding a narrowing
constraint that within the body of that match arm, we can assume that
the subject is an instance of that class.
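For example (an illustrative sketch; the revealed type shown is
indicative, not an exact test assertion):
```py
def f(subject: object):
    match subject:
        case str():
            # Within this arm, `subject` is narrowed to an instance of `str`.
            reveal_type(subject)  # str
        case _:
            pass
```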
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This implements checking of calls.
I ended up following Micha's original suggestion from back when the
signature representation was first introduced, and flattening it to a
single array of parameters. This turned out to be easier to manage,
because we can represent parameters using indices into that array, and
represent the bound argument types as an array of the same length.
Starred and double-starred arguments are still TODO; these won't be very
useful until we have generics.
The handling of diagnostics is just hacked into `return_ty_result`,
which was already inconsistent about whether it emitted diagnostics or
not; now it's even more inconsistent. This needs to be addressed, but
could be a follow-up.
The new benchmark errors here surface the need for intersection support
in `is_assignable_to`.
Fixes #14161.
## Test Plan
Added mdtests.
Note: `PLW0101` remains in testing rather than preview, so this PR does
not modify any public behavior (hence the title beginning with
`internal` rather than `pylint`, for the sake of the changelog).
Fixes an error in the processing of `try` statements in the control flow
graph builder.
When processing a try statement, the block following a `return` was
forced to point to the `finally` block. However, if the return was _in_
the `finally` block, this caused the block to point to itself. In the
case where the whole `try`-`finally` statement was also included inside
a loop, this caused an infinite loop in the builder for the control flow
graph as it attempted to resolve edges.
Closes #15248
## Test function
### Source
```python
def l():
while T:
try:
while ():
if 3:
break
finally:
return
```
### Control Flow Graph
```mermaid
flowchart TD
start(("Start"))
return(("End"))
block0[["`*(empty)*`"]]
block1[["Loop continue"]]
block2["return\n"]
block3[["Loop continue"]]
block4["break\n"]
block5["if 3:
break\n"]
block6["while ():
if 3:
break\n"]
block7[["Exception raised"]]
block8["try:
while ():
if 3:
break
finally:
return\n"]
block9["while T:
try:
while ():
if 3:
break
finally:
return\n"]
start --> block9
block9 -- "T" --> block8
block9 -- "else" --> block0
block8 -- "Exception raised" --> block7
block8 -- "else" --> block6
block7 --> block2
block6 -- "()" --> block5
block6 -- "else" --> block2
block5 -- "3" --> block4
block5 -- "else" --> block3
block4 --> block2
block3 --> block6
block2 --> return
block1 --> block9
block0 --> return
```
## Summary
When debugging, I frequently want to know which symbols are being looked
up. `symbol_by_id` adds tracing information, but it only shows the
`ScopedSymbolId`. Since `symbol_by_id` is only called from `symbol`, it
seems reasonable to move the tracing call one level up from
`symbol_by_id` to `symbol`, where we can also show the name of the
symbol.
**Before**:
```
6 └─┐red_knot_python_semantic::types::infer::infer_expression_types{expression=Id(60de), file=/home/shark/tomllib_modified/_parser.py}
6 └─┐red_knot_python_semantic::types::symbol_by_id{symbol=ScopedSymbolId(33)}
6 ┌─┘
6 └─┐red_knot_python_semantic::types::symbol_by_id{symbol=ScopedSymbolId(123)}
6 ┌─┘
6 └─┐red_knot_python_semantic::types::symbol_by_id{symbol=ScopedSymbolId(54)}
6 ┌─┘
6 └─┐red_knot_python_semantic::types::symbol_by_id{symbol=ScopedSymbolId(122)}
6 ┌─┘
6 └─┐red_knot_python_semantic::types::symbol_by_id{symbol=ScopedSymbolId(165)}
6 ┌─┘
6 ┌─┘
6 └─┐red_knot_python_semantic::types::symbol_by_id{symbol=ScopedSymbolId(32)}
6 ┌─┘
6 └─┐red_knot_python_semantic::types::symbol_by_id{symbol=ScopedSymbolId(232)}
6 ┌─┘
6 ┌─┘
6 ┌─┘
6┌─┘
```
**After**:
```
5 └─┐red_knot_python_semantic::types::infer::infer_expression_types{expression=Id(60de), file=/home/shark/tomllib_modified/_parser.py}
5 └─┐red_knot_python_semantic::types::symbol{name="dict"}
5 ┌─┘
5 └─┐red_knot_python_semantic::types::symbol{name="dict"}
5 ┌─┘
5 └─┐red_knot_python_semantic::types::symbol{name="list"}
5 ┌─┘
5 └─┐red_knot_python_semantic::types::symbol{name="list"}
5 ┌─┘
5 └─┐red_knot_python_semantic::types::symbol{name="isinstance"}
5 ┌─┘
5 └─┐red_knot_python_semantic::types::symbol{name="isinstance"}
5 ┌─┘
5 ┌─┘
5 └─┐red_knot_python_semantic::types::symbol{name="ValueError"}
5 ┌─┘
5 └─┐red_knot_python_semantic::types::symbol{name="ValueError"}
5 ┌─┘
5 ┌─┘
5 ┌─┘
5┌─┘
```
## Test Plan
```
cargo run --bin red_knot -- --current-directory path/to/tomllib -vvv
```
## Summary
While looking at #14899, I looked into whether I could get shrinking on
the examples. It turned out to be straightforward, with a couple of
caveats.
I'm calling `clone` a lot during shrinking. Since by the shrink step
we're already looking at a test failure, this feels fine? Unless I
misunderstood `quickcheck`'s core loop.
When shrinking `Intersection`s, in order to just rely on `quickcheck`'s
`Vec` shrinking without thinking about it too much, the shrinking
strategy is:
- try to shrink the negative side (keeping the positive side the same)
- try to shrink the positive side (keeping the negative side the same)
This means that you can't shrink from `(A & B & ~C & ~D)` directly to
`(A & ~C)`! You would first need an intermediate failure at `(A & B &
~C)` or `(A & ~C & ~D)`. This feels good enough. Shrinking the negative
side first also has the benefit of trying to strip down negative
elements in these intersections.
## Test Plan
`cargo test -p red_knot_python_semantic -- --ignored
types::property_tests::stable` still fails as it currently does on `main`,
but now the errors seem more minimal.
## Summary
Adds the `class-as-data-structure` rule (`B903`). Also compare pylint's `too-few-public-methods` (`PLR0903`).
Took some creative liberty with this by allowing the class to have any
decorators or base classes. There are years-old issues on pylint that
don't approve of the strictness when it comes to these things,
especially considering that `dataclass` is a decorator and `namedtuple`
_can be_ a base class. I feel ignoring those explicitly is redundant all
things considered, but it's not a hill I'm willing to die on!
See: #970
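To illustrate (a rough sketch of the kind of class the rule targets, not
an actual fixture):
```py
from dataclasses import dataclass

# Likely flagged by B903: a class that exists only to carry data.
class Point:
    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y

# A dataclass expresses the same intent more directly.
@dataclass
class PointData:
    x: int
    y: int
```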
## Test Plan
`cargo test`
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: dylwil3 <dylwil3@gmail.com>
Just like #15045 did for unary expressions: in binary expressions, we
were only looking up dunder methods for `Type::Instance` types. We
had some special cases for coercing the various `Literal` types into
their corresponding `Instance` types before doing the lookup. But we can
side-step all of that by using the existing `Type::to_meta_type` and
`Type::to_instance` methods.
## Summary
This PR upgrades zizmor to the latest release in our CI. zizmor is a
static analyzer checking for security issues in GitHub workflows. The
new release finds some new issues in our workflows; this PR fixes some
of the issues, and adds ignores for some other issues.
The issues fixed in this PR are new cases of zizmor's
[`template-injection`](https://woodruffw.github.io/zizmor/audits/#template-injection)
rule being emitted. The issues I'm ignoring for now are all to do with
the
[`cache-poisoning`](https://woodruffw.github.io/zizmor/audits/#cache-poisoning)
rule. The main reason I'm fixing some but ignoring others is that I'm
confident fixing the template-injection diagnostics won't have any
impact on how our workflows operate in CI, but I'm worried that fixing
the cache-poisoning diagnostics could slow down our CI a fair bit. I
don't mind if somebody else is motivated to try to fix these
diagnostics, but for now I think I'd prefer to just ignore them; it
doesn't seem high-priority enough to try to fix them right now :-)
## Test Plan
- `uvx pre-commit run -a --hook-stage=manual` passes locally
- Let's see if CI passes on this PR...
Resolves #14840
## Summary
Usage of the ellipsis literal as a default argument is allowed in stub files.
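For example (a minimal sketch):
```pyi
# In a stub file (`.pyi`), `...` is allowed as a default value even though
# it is not an instance of the annotated parameter type.
def connect(timeout: float = ...) -> None: ...
```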
## Test Plan
Added mdtest for both python files and stub files.
---------
Co-authored-by: Carl Meyer <carl@oddbird.net>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
The test expression in an `elif` clause is evaluated whether or not we
take the branch. Our control flow model for if/elif chains failed to
reflect this, causing wrong inference in cases where an assignment
expression occurs inside an `elif` test expression. Our "no branch taken
yet" snapshot (which is the starting state for every new elif branch)
can't simply be the pre-if state, it must be updated after visiting each
test expression.
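A small example of the kind of case this affects (illustrative; the exact
revealed type is indicative):
```py
def f(cond: bool, x: int):
    if cond:
        y = 0
    elif (y := x) > 1:
        pass
    else:
        pass
    # When `cond` is false, the `elif` test is evaluated even if its branch
    # is not taken, so `y` is bound on every path through this statement.
    reveal_type(y)  # int
```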
Once we do this, it also means we no longer need to track a vector of
narrowing constraints to reapply for each new branch, since our "branch
not taken" state (which is the initial state for each branch) is
continuously updated to include the negative narrowing constraints of
all previous branches.
Fixes #15033.
## Test Plan
Added mdtests.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
We understand `sys.version_info` branches now! As such, I _believe_ this
branch is no longer required; all tests pass without it. I also ran
`QUICKCHECK_TESTS=100000 cargo test -p red_knot_python_semantic --
--ignored types::property_tests::stable`, and no tests failed except for
the known issue with `Type::is_assignable_to()`
(https://github.com/astral-sh/ruff/issues/14899).
## Test Plan
See above
This updates the mdtest harness to catch any panics that occur during
type checking, and to display the panic message as an mdtest failure.
(We don't know which specific line causes the failure, so we attribute
panics to the first line of the test case.)
The default logging level for diagnostics includes logs written using
the `log` crate with level `error`, `warn`, and `info`. An unsuccessful
fix attached to a diagnostic via `try_set_fix` or `try_set_optional_fix`
was logged at level `error`. Note that the user would see these messages
even without passing `--fix`, and possibly also on lines with `noqa`
comments.
This PR changes the logging level here to `debug`. We also found
ad-hoc instances of error logging in the implementations of several
rules, and have replaced those with either a `debug` log or a call to
`try_set{_optional}_fix`.
Closes #15229
## Summary
This PR re-introduces the control-flow graph implementation which was
first introduced in #5384, and then removed in #9463 due to not being
feature-complete. Mainly, it lacked the ability to process
`try`-`except` blocks, and it had some more minor bugs.
Closes #8958, #8959, and #14881.
## Overview of Changes
I will now highlight the major changes implemented in this PR, in order
of implementation.
1. Introduced a post-processing step in loop handling to find any
`continue` or `break` statements within the loop body and redirect them
appropriately.
2. Introduced a loop-continue block which is always placed at the end of
loop blocks, and ensures proper looping regardless of the internal logic
of the block. This resolves #8958.
3. Implemented `try` processing with the following logic (resolves
#8959):
1. In the example below the cfg first encounters a conditional
`ExceptionRaised` forking if an exception was (or will be) raised in the
try block. This is not possible to know (except for trivial cases) so we
assume both paths can be taken unconditionally.
2. Going down the `try` path the cfg goes `try`->`else`->`finally`
unconditionally.
3. Going down the `except` path the cfg will meet several conditional
`ExceptionCaught` which fork depending on the nature of the exception
caught. Again there's no way to know which exceptions may be raised so
both paths are assumed to be taken unconditionally.
4. If none of the exception blocks catch the exception then the cfg
terminates by raising a new exception.
5. A post-processing step is also implemented to redirect any `raises`
or `returns` within the blocks appropriately.
```python
def func():
try:
print("try")
except Exception:
print("Exception")
except OtherException as e:
print("OtherException")
else:
print("else")
finally:
print("finally")
```
```mermaid
flowchart TD
start(("Start"))
return(("End"))
block0[["`*(empty)*`"]]
block1["print(#quot;finally#quot;)\n"]
block2["print(#quot;else#quot;)\n"]
block3["print(#quot;try#quot;)\n"]
block4[["Exception raised"]]
block5["print(#quot;OtherException#quot;)\n"]
block6["try:
print(#quot;try#quot;)
except Exception:
print(#quot;Exception#quot;)
except OtherException as e:
print(#quot;OtherException#quot;)
else:
print(#quot;else#quot;)
finally:
print(#quot;finally#quot;)\n"]
block7["print(#quot;Exception#quot;)\n"]
block8["try:
print(#quot;try#quot;)
except Exception:
print(#quot;Exception#quot;)
except OtherException as e:
print(#quot;OtherException#quot;)
else:
print(#quot;else#quot;)
finally:
print(#quot;finally#quot;)\n"]
block9["try:
print(#quot;try#quot;)
except Exception:
print(#quot;Exception#quot;)
except OtherException as e:
print(#quot;OtherException#quot;)
else:
print(#quot;else#quot;)
finally:
print(#quot;finally#quot;)\n"]
start --> block9
block9 -- "Exception raised" --> block8
block9 -- "else" --> block3
block8 -- "Exception" --> block7
block8 -- "else" --> block6
block7 --> block1
block6 -- "OtherException" --> block5
block6 -- "else" --> block4
block5 --> block1
block4 --> return
block3 --> block2
block2 --> block1
block1 --> block0
block0 --> return
```
6. Implemented `with` processing with the following logic:
1. `with` statements have no conditional execution (apart from the
hidden logic handling the enter and exit), so the block is assumed to
execute unconditionally.
2. The one exception is that exceptions raised within the block may
result in control flow resuming at the end of the block. Since it is not
possible to know if an exception will be raised, or if it will be handled
by the context manager, we assume that execution always continues after
`with` blocks even if the blocks contain `raise` or `return` statements.
This is handled in a post-processing step.
## Test Plan
Additional test fixtures and control-flow fixtures were added.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: dylwil3 <dylwil3@gmail.com>
## Summary
Remove `Type::tuple` in favor of `TupleType::from_elements`, avoiding a
few intermediate `Vec`tors. Resolves an old [review
comment](https://github.com/astral-sh/ruff/pull/14744#discussion_r1867493706).
## Test Plan
New regression test for something I ran into while implementing this.
## Summary
In https://github.com/astral-sh/ruff/pull/15209, extra spaces were
accidentally added to the rule entry
`airflow.operators.latest_only.LatestOnlyOperator`. This PR fixes that
issue.
## Test Plan
A test fixture has been included for the rule.
## Summary
Airflow 3.0 removes various deprecated functions, members, modules, and
other values. They have been deprecated in 2.x, but the removal causes
incompatibilities that we want to detect. This PR add rules for the
following.
* Removed class attribute
* `airflow.providers_manager.ProvidersManager.dataset_factories` →
`airflow.providers_manager.ProvidersManager.asset_factories`
* `airflow.providers_manager.ProvidersManager.dataset_uri_handlers` →
`airflow.providers_manager.ProvidersManager.asset_uri_handlers`
*
`airflow.providers_manager.ProvidersManager.dataset_to_openlineage_converters`
→
`airflow.providers_manager.ProvidersManager.asset_to_openlineage_converters`
* `airflow.lineage.hook.DatasetLineageInfo.dataset` →
`airflow.lineage.hook.AssetLineageInfo.asset`
* Removed class method (subclasses in airflow should also be checked)
* `airflow.secrets.base_secrets.BaseSecretsBackend.get_conn_uri` →
`airflow.secrets.base_secrets.BaseSecretsBackend.get_conn_value`
* `airflow.secrets.base_secrets.BaseSecretsBackend.get_connections` →
`airflow.secrets.base_secrets.BaseSecretsBackend.get_connection`
* `airflow.hooks.base.BaseHook.get_connections` → use `get_connection`
* `airflow.datasets.BaseDataset.iter_datasets` →
`airflow.sdk.definitions.asset.BaseAsset.iter_assets`
* `airflow.datasets.BaseDataset.iter_dataset_aliases` →
`airflow.sdk.definitions.asset.BaseAsset.iter_asset_aliases`
* Removed constructor args (subclasses in airflow should also be checked)
* argument `filename_template`
in `airflow.utils.log.file_task_handler.FileTaskHandler`
* in `BaseOperator`
* `sla`
* `task_concurrency` → `max_active_tis_per_dag`
* in `BaseAuthManager`
* `appbuilder`
* Removed class variable (subclasses anywhere should be checked)
* in `airflow.plugins_manager.AirflowPlugin`
* `executors` (from #43289)
* `hooks`
* `operators`
* `sensors`
* Replaced names
* `airflow.hooks.base_hook.BaseHook` → `airflow.hooks.base.BaseHook`
* `airflow.operators.dagrun_operator.TriggerDagRunLink` →
`airflow.operators.trigger_dagrun.TriggerDagRunLink`
* `airflow.operators.dagrun_operator.TriggerDagRunOperator` →
`airflow.operators.trigger_dagrun.TriggerDagRunOperator`
* `airflow.operators.python_operator.BranchPythonOperator` →
`airflow.operators.python.BranchPythonOperator`
* `airflow.operators.python_operator.PythonOperator` →
`airflow.operators.python.PythonOperator`
* `airflow.operators.python_operator.PythonVirtualenvOperator` →
`airflow.operators.python.PythonVirtualenvOperator`
* `airflow.operators.python_operator.ShortCircuitOperator` →
`airflow.operators.python.ShortCircuitOperator`
* `airflow.operators.latest_only_operator.LatestOnlyOperator` →
`airflow.operators.latest_only.LatestOnlyOperator`
In addition to the changes above, this PR also adds utility functions
and improves docstrings.
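For example, one of the replaced names above would be reported on code
like this (an illustrative sketch, not one of the fixtures):
```py
# Deprecated location, removed in Airflow 3.0 (flagged by the new checks):
from airflow.hooks.base_hook import BaseHook

# Airflow 3 replacement:
# from airflow.hooks.base import BaseHook
```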
## Test Plan
A test fixture is included in the PR.
## Summary
Changes two things about the entry:
* make the example valid TOML - inline tables must be a single line, at
least until v1.1.0 is released. And even if a future TOML version used
by Ruff handles multi-line inline tables, it would probably be good to
stick to a spec that's readable by the vast majority of other tools and
versions as well, especially if people are using `pyproject.toml`. The
current example makes `ruff` fail.
See https://github.com/toml-lang/toml/pull/904
* adds a line about the ability to add non-Python files to the map,
which I think is a specific and important feature people should know
about (in fact, I would assume this could potentially become the single
biggest use case for this).
## Test Plan
Ran doc creation as described in the
[contribution](https://docs.astral.sh/ruff/contributing/#mkdocs) guide.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Part of #13773
This PR adds diagnostics when there is a length mismatch during
unpacking between the number of target expressions and the number of
types for the unpack value expression.
There are 3 cases of diagnostics here, where the first two occur when
there isn't a starred expression and the last one occurs when there's a
starred expression:
1. The number of target expressions is **less** than the number of types
that need to be unpacked
2. The number of target expressions is **greater** than the number of
types that need to be unpacked
3. There's a starred expression as one of the target expressions and
the number of target expressions is greater than the number of types
Examples for each of the above cases:
```py
# red-knot: Too many values to unpack (expected 2, got 3) [lint:invalid-assignment]
a, b = (1, 2, 3)
# red-knot: Not enough values to unpack (expected 2, got 1) [lint:invalid-assignment]
a, b = (1,)
# red-knot: Not enough values to unpack (expected 3 or more, got 2) [lint:invalid-assignment]
a, *b, c, d = (1, 2)
```
The (3) case is a bit special because it uses a distinct wording
"expected n or more" instead of "expected n" because of the starred
expression.
### Location
The diagnostic location is the target expression that's being unpacked.
For nested targets, the location will be the nested expression. For
example:
```py
(a, (b, c), d) = (1, (2, 3, 4), 5)
# ^^^^^^
# red-knot: Too many values to unpack (expected 2, got 3) [lint:invalid-assignment]
```
For future improvements, it would be useful to show the context for why
this unpacking failed. For example, to explain why the expected number
of targets is `n`, we could highlight the relevant elements of the value
expression.
In the **ecosystem**, **Pyright** uses the target expressions for
location while **mypy** uses the value expression for the location. For
example:
```py
if 1:
# mypy: Too many values to unpack (2 expected, 3 provided) [misc]
# vvvvvvvvv
a, b = (1, 2, 3)
# ^^^^
# Pyright: Expression with type "tuple[Literal[1], Literal[2], Literal[3]]" cannot be assigned to target tuple
# Type "tuple[Literal[1], Literal[2], Literal[3]]" is incompatible with target tuple
# Tuple size mismatch; expected 2 but received 3 [reportAssignmentType]
# red-knot: Too many values to unpack (expected 2, got 3) [lint:invalid-assignment]
```
## Test Plan
Update the existing test cases' TODOs with the error directives.
Fixes: #15176
## Summary
Neither of these rules makes any sense in stub files. Technically TC007
should already not have triggered, due to the typing only context of the
binding, but it's better to be explicit.
Keeping TC008 enabled on the other hand makes sense to me, although we
could probably be more aggressive with unquoting in a typing runtime
context.
## Test Plan
`cargo nextest run`
## Summary
Ref:
3533d7f5b4 (r150651102)
This PR removes the `Ranged` implementation on `DefinitionKind` and
instead uses a method called `target_range` to avoid any confusion about
which range this refers to, i.e., it's not the range of the node that
represents the definition.
## Summary
Related to #13773
This PR adds support for unpacking `for` statement targets.
This involves updating the `value` field in the `Unpack` target to use
an enum which specifies where the value expression came from. This is
because for an iterable expression we need to unpack the iterator type,
while for an assignment statement we need to unpack the value type
itself. And this needs to be done in the unpack query.
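For example (an illustrative sketch; the revealed types are indicative):
```py
def f(t: tuple[tuple[int, str], tuple[int, bytes]]):
    # Iterating over `t` yields elements of type
    # `tuple[int, str] | tuple[int, bytes]`, so it is the iterator's
    # element type that gets unpacked here, not the type of `t` itself.
    for a, b in t:
        reveal_type(a)  # int
        reveal_type(b)  # str | bytes
```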
### Question
One of the ways unpacking works in a `for` statement is by looking at
the union of the types, because if the iterable expression is a tuple
then the iterator type will be the union of all the types in the tuple.
This means that the test cases that test unpacking in a `for` statement
will also implicitly test the union-unpacking logic. I was
wondering if it makes sense to merge these cases and only add the ones
that are specific to the union unpacking or for statement unpacking
logic.
## Test Plan
Add test cases involving iterating over a tuple type. I've intentionally
left out certain cases for now and I'm curious to know any thoughts on
the above query.
## Summary
Closes #14975 by modifying the docstring of the `InvalidPyprojectToml`
rule. Previously the docs incorrectly stated that author names and
emails must be individual items in the `authors` list, rather than part
of a single object for each respective author.
## Test Plan
This was a docstring change, no tests needed.
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [env_logger](https://redirect.github.com/rust-cli/env_logger) | workspace.dependencies | patch | `0.11.5` -> `0.11.6` |
---
### Release Notes
<details>
<summary>rust-cli/env_logger (env_logger)</summary>
###
[`v0.11.6`](https://redirect.github.com/rust-cli/env_logger/blob/HEAD/CHANGELOG.md#0116---2024-12-20)
[Compare
Source](https://redirect.github.com/rust-cli/env_logger/compare/v0.11.5...v0.11.6)
##### Features
- Opt-in file and line rendering
</details>
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [lsp-server](https://redirect.github.com/rust-lang/rust-analyzer) ([source](https://redirect.github.com/rust-lang/rust-analyzer/tree/HEAD/lib/lsp-server)) | workspace.dependencies | patch | `0.7.7` -> `0.7.8` |
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [serde_json](https://redirect.github.com/serde-rs/json) | workspace.dependencies | patch | `1.0.133` -> `1.0.134` |
---
### Release Notes
<details>
<summary>serde-rs/json (serde_json)</summary>
###
[`v1.0.134`](https://redirect.github.com/serde-rs/json/releases/tag/v1.0.134)
[Compare
Source](https://redirect.github.com/serde-rs/json/compare/v1.0.133...v1.0.134)
- Add `RawValue` associated constants for literal `null`, `true`,
`false`
([#​1221](https://redirect.github.com/serde-rs/json/issues/1221),
thanks [@​bheylin](https://redirect.github.com/bheylin))
</details>
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [syn](https://redirect.github.com/dtolnay/syn) | workspace.dependencies | patch | `2.0.90` -> `2.0.91` |
---
### Release Notes
<details>
<summary>dtolnay/syn (syn)</summary>
###
[`v2.0.91`](https://redirect.github.com/dtolnay/syn/releases/tag/2.0.91)
[Compare
Source](https://redirect.github.com/dtolnay/syn/compare/2.0.90...2.0.91)
- Support parsing `Vec<Arm>` using `parse_quote!`
([#​1796](https://redirect.github.com/dtolnay/syn/issues/1796),
[#​1797](https://redirect.github.com/dtolnay/syn/issues/1797))
</details>
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
## Summary
This changeset adds support for precise type-inference and
boundness-handling of definitions inside control-flow branches with
statically-known conditions, i.e. test-expressions whose truthiness we
can unambiguously infer as *always false* or *always true*.
This branch also includes:
- `sys.platform` support
- statically-known branches handling for Boolean expressions and while
loops
- new `target-version` requirements in some Markdown tests, which are
now required due to the understanding of `sys.version_info` branches.
closes #12700, closes #15034
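For example (an illustrative sketch; the exact diagnostics and wording
are indicative):
```py
import sys

if sys.platform == "linux":
    only_on_linux = 1

# With `sys.platform` statically known to be "linux", this use is not
# reported as possibly unbound; on another platform the branch is known
# to be dead and the name would be reported as unresolved.
print(only_on_linux)
```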
## Performance
### `tomllib`, -7%, needs to resolve one additional module (sys)
| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `./red_knot_main --project /home/shark/tomllib` | 22.2 ± 1.3 | 19.1 | 25.6 | 1.00 |
| `./red_knot_feature --project /home/shark/tomllib` | 23.8 ± 1.6 | 20.8 | 28.6 | 1.07 ± 0.09 |
### `black`, -6%
| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `./red_knot_main --project /home/shark/black` | 129.3 ± 5.1 | 119.0 | 137.8 | 1.00 |
| `./red_knot_feature --project /home/shark/black` | 136.5 ± 6.8 | 123.8 | 147.5 | 1.06 ± 0.07 |
## Test Plan
- New Markdown tests for the main feature in
`statically-known-branches.md`
- New Markdown tests for `sys.platform`
- Adapted tests for `EllipsisType`, `Never`, etc.
## Summary
This PR fixes an issue where Ruff's `D403` rule
(`first-word-uncapitalized`) was not detecting some single-word edge
cases that are picked up by `pydocstyle`.
The change involves extracting the first word of the docstring by
identifying the first whitespace character. This is consistent with
`pydocstyle` which uses `.split()` - see
8d0cdfc93e/src/pydocstyle/checker.py (L581C13-L581C64)
## Example
Here is a playground example -
https://play.ruff.rs/eab9ea59-92cf-4e44-b1a9-b54b7f69b178
```py
def example1():
    """foo"""

def example2():
    """foo

    Hello world!
    """

def example3():
    """foo bar

    Hello world!
    """

def example4():
    """
    foo
    """

def example5():
    """
    foo bar
    """
```
`pydocstyle` detects all five cases:
```bash
$ pydocstyle test.py --select D403
dev/test.py:2 in public function `example1`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:5 in public function `example2`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:11 in public function `example3`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:17 in public function `example4`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:22 in public function `example5`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
```
Ruff (`0.8.4`) fails to catch example2 and example4.
## Test Plan
* Added two new test cases to cover the previously missed single-word
docstring cases.
## Summary
Refer:
https://github.com/astral-sh/ruff/issues/13773#issuecomment-2548020368
This PR adds support for unpacking union types.
Unpacking a union type requires us to first distribute the types for all
the targets that are involved in an unpacking. For example, if there are
two targets and a union type that needs to be unpacked, each target will
get a type from each element in the union type.
For example, if the type is `tuple[int, int] | tuple[int, str]` and the
target has two elements `(a, b)`, then
* The type of `a` will be the union of the types at index 0 in the first and
second tuple respectively (`int` and `int`), which resolves to `int`.
* Similarly, the type of `b` will be the union of the types at index 1 in the
first and second tuple respectively (`int` and `str`), which is `int | str`
(see the sketch below).
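For illustration, here is a minimal sketch of the behavior described above; the revealed types in the comments are an assumption about the exact rendering:

```py
def f(v: tuple[int, int] | tuple[int, str]):
    (a, b) = v
    reveal_type(a)  # int
    reveal_type(b)  # int | str
```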
### Refactors
There are a couple of refactors included in this PR:
* Add a `debug_assertion` to validate that the unpack target is a list
or a tuple
* Add a separate method to handle starred expression
## Test Plan
Update `unpacking.md` with additional test cases that use union types.
This is done using the parameter type hints style.
## Summary
This PR adds initial support for `type: ignore`. It doesn't do anything
fancy yet like:
* Detecting invalid type ignore comments
* Detecting type ignore comments that are part of another suppression
comment: `# fmt: skip # type: ignore`
* Suppressing specific lints `type: ignore [code]`
* Detecting unused type ignore comments
* ...
The goal is to add this functionality in separate PRs.
## Test Plan
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
<!--
Thank you for contributing to Ruff! To help us out with reviewing,
please consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
Fix #11482. Applies
https://github.com/adamchainz/flake8-comprehensions/pull/205 to ruff.
`C416` should be skipped if comprehension contains unpacking. Here's an
example:
```python
list_of_lists = [[1, 2], [3, 4]]
# ruff suggests `list(list_of_lists)` here, but that would change the result.
# `list(list_of_lists)` is not `[(1, 2), (3, 4)]`
a = [(x, y) for x, y in list_of_lists]
# This is equivalent to `list(list_of_lists)`
b = [x for x in list_of_lists]
```
## Test Plan
<!-- How was it tested? -->
Existing checks
---------
Signed-off-by: harupy <hkawamura0130@gmail.com>
## Summary
resolves#14883
This PR removes the known limitation section in the documentation of
`eq-without-hash`. That is not actually a limitation as a subclass
overriding the `__eq__` method would have its `__hash__` set to `None`
implicitly. The user should explicitly inherit the `__hash__` method
from the parent class.
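For reference, a minimal sketch of the recommended pattern (class names are illustrative):

```py
class Base:
    def __eq__(self, other: object) -> bool: ...
    def __hash__(self) -> int: ...


class Child(Base):
    # Overriding __eq__ implicitly sets Child.__hash__ to None,
    # so the subclass must explicitly re-inherit the parent's __hash__.
    def __eq__(self, other: object) -> bool: ...

    __hash__ = Base.__hash__
```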
## Test Plan
<img width="619" alt="Screenshot 2024-12-20 at 2 02 47 PM"
src="https://github.com/user-attachments/assets/552defcd-25e1-4153-9ab9-e5b9d5fbe8cc"
/>
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Airflow 3.0 removes various deprecated functions, members, modules, and
other values. They have been deprecated in 2.x, but the removal causes
incompatibilities that we want to detect. This PR deprecates the
following names and adds a function for removed methods:
* `airflow.datasets.manager.DatasetManager.register_dataset_change` →
`airflow.assets.manager.AssetManager.register_asset_change`
* `airflow.datasets.manager.DatasetManager.create_datasets` →
`airflow.assets.manager.AssetManager.create_assets`
* `airflow.datasets.manager.DatasetManager.notify_dataset_created` →
`airflow.assets.manager.AssetManager.notify_asset_created`
* `airflow.datasets.manager.DatasetManager.notify_dataset_changed` →
`airflow.assets.manager.AssetManager.notify_asset_changed`
* `airflow.datasets.manager.DatasetManager.notify_dataset_alias_created`
→ `airflow.assets.manager.AssetManager.notify_asset_alias_created`
*
`airflow.providers.amazon.auth_manager.aws_auth_manager.AwsAuthManager.is_authorized_dataset`
→
`airflow.providers.amazon.auth_manager.aws_auth_manager.AwsAuthManager.is_authorized_asset`
* `airflow.lineage.hook.HookLineageCollector.create_dataset` →
`airflow.lineage.hook.HookLineageCollector.create_asset`
* `airflow.lineage.hook.HookLineageCollector.add_input_dataset` →
`airflow.lineage.hook.HookLineageCollector.add_input_asset`
* `airflow.lineage.hook.HookLineageCollector.add_output_dataset` →
`airflow.lineage.hook.HookLineageCollector.add_output_asset`
* `airflow.lineage.hook.HookLineageCollector.collected_datasets` →
`airflow.lineage.hook.HookLineageCollector.collected_assets`
*
`airflow.providers_manager.ProvidersManager.initialize_providers_dataset_uri_resources`
→
`airflow.providers_manager.ProvidersManager.initialize_providers_asset_uri_resources`
## Test Plan
A test fixture is included in the PR.
When confronted with `raise from exc` the parser will now create a
`StmtRaise` that has `None` for the exception and `exc` for the cause.
Before, the parser created a `StmtRaise` with `from` for the exception,
no cause, and a spurious expression `exc` afterwards.
## Summary
A follow up PR on https://github.com/astral-sh/ruff/issues/14991
Ruff ignores hardcoded passwords for typed variables. Add a rule to
catch passwords in typed code bases.
## Test Plan
Includes 2 more typed test variables.
We have a handy `to_meta_type` that does the right thing for class
instances, and also works for all of the other types that are “instances
of” something. Unless I'm missing something, this should let us get rid
of the catch-all clause in one fell swoop.
cf #14548
## Summary
I'm currently on the fence about landing the #14760 PR because it's
unclear how we'd support tracking used and unused suppression comments
in a performant way:
* Salsa adds an "untracked" dependency to every query reading
accumulated values. This has the effect that the query re-runs on every
revision. For example, a possible future query
`unused_suppression_comments(db, file)` would re-run on every
incremental change and for every file. I don't expect the operation
itself to be expensive, but it all adds up in a project with 100k+ files
* Salsa collects the accumulated values by traversing the entire query
dependency graph. It can skip over sub-graphs if it is known that they
contain no accumulated values. This makes accumulators a great tool for
when they are rare; diagnostics are a good example. Unfortunately,
suppressions are more common, and they often appear in many different
files, making the "skip over subgraphs" optimization less effective.
Because of that, I want to wait to adopt salsa accumulators for type
check diagnostics (we could start using them for other diagnostics)
until we have very specific reasons that justify regressing incremental
check performance.
This PR does a "small" refactor that brings us closer to what I have in
#14760 but without using accumulators. To emit a diagnostic, a method
needs:
* Access to the db
* Access to the currently checked file
This PR introduces a new `InferContext` that holds on to the db, the
current file, and the reported diagnostics. It replaces the
`TypeCheckDiagnosticsBuilder`. We pass the `InferContext` instead of the
`db` to methods that *might* emit diagnostics. This simplifies some of
the `Outcome` methods, which can now be called with a context instead of
a `db` and the diagnostics builder. Having the `db` and the file on a
single type like this would also be useful when using accumulators.
This PR doesn't solve the issue that the `Outcome` types feel somewhat
complicated nor that it can be annoying when you need to report a
`Diagnostic`, but you don't have access to an `InferContext` (or the
file). However, I also believe that accumulators won't solve these
problems because:
* Even with accumulators, it's necessary to have a reference to the file
that's being checked. The struggle would be to get a reference to that
file rather than getting a reference to `InferContext`.
* Users of the `HasTy` trait (e.g., a linter) don't want to bother
getting the `File` when calling `Type::return_ty` because they aren't
interested in the created diagnostics. They just want to know what
calling the current expression would return (and if it even is a
callable). This is what the different methods of `Outcome` enable today.
I can ask for the return type without needing extra data that's only
relevant for emitting a diagnostic.
A shortcoming of this approach is that it is now a bit confusing when to
pass `db` and when an `InferContext`. An option is that we'd make the
`file` on `InferContext` optional (it won't collect any diagnostics if
`None`) and change all methods on `Type` to take `InferContext` as the
first argument instead of a `db`. I'm interested in your opinion on
this.
Accumulators are definitely harder to use incorrectly because they
remove the need to merge the diagnostics explicitly and there's no risk
that we accidentally merge the diagnostics twice, resulting in
duplicated diagnostics. I still value performance more over making our
life slightly easier.
Closes #14000
## Summary
For typing-context bindings, we know that they won't be available at
runtime. We shouldn't recommend a fix that will result in name errors
at runtime.
## Test Plan
`cargo nextest run`
This tweaks the new semantics from #15026 a bit when a symbol could be
interpreted both as an attribute and a submodule of a package. For
`from...import`, we should actually prioritize the attribute, because of
how the statement itself is implemented [1].
> 1. check if the imported module has an attribute by that name
> 2. if not, attempt to import a submodule with that name and then check
the imported module again for that attribute
[1] https://docs.python.org/3/reference/simple_stmts.html#the-import-statement
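A minimal sketch of the priority described above (module layout and names are illustrative):

```py
# pkg/__init__.py
sub = "attribute"

# pkg/sub.py also exists as a submodule.

# main.py
from pkg import sub  # resolves to the attribute defined in pkg/__init__.py,
                     # not to the submodule pkg.sub
```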
## Summary
Fixes #14550.
Add `AlwaysTruthy` and `AlwaysFalsy` types, representing the set of objects whose `__bool__` method can only ever return `True` or `False`, respectively, and narrow `if x` and `if not x` accordingly.
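A minimal sketch of the narrowing, assuming the intersection types are rendered as shown in the comments:

```py
def f(x: str):
    if x:
        reveal_type(x)  # str & AlwaysTruthy
    else:
        reveal_type(x)  # str & AlwaysFalsy
```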
## Test Plan
- New Markdown test for truthiness narrowing in `narrow/truthiness.md`
- unit tests in `types.rs` and `builders.rs` (`cargo test --package
red_knot_python_semantic --lib -- types`)
## Summary
Fixes https://github.com/astral-sh/ruff/issues/15027
The `MemoryFileSystem::write_file` API automatically creates
non-existing ancestor directories, but we failed to update the status of
the now-created ancestor directories in the `Files` data structure.
## Test Plan
Tested that the case in https://github.com/astral-sh/ruff/issues/15027
now passes regardless of whether the *Simple* case is commented out or
not
Fixes #15012.
```python
def f():
    # panics when the code can't find the loop variable
    values = [1, 2, 3]
    result = []
    for i in values:
        result.append(i + 1)
    del i
```
I'm not sure exactly why this test case panics, but I suspect the `del
i` removes the binding from the semantic model's symbols.
I changed the code to search for the correct binding by directly
iterating through the bindings. Since we know exactly which binding we
want, this should find the loop variable without any complications.
## Summary
This PR updates the logic when raising conflicting declarations
diagnostic to avoid the undeclared path if present.
The conflicting-declarations diagnostic is added when there are two or
more declarations in the control flow path of a definition whose types
aren't equivalent to each other. This can be seen in the following
example:
```py
if flag:
    x: int

x = 1  # conflicting-declarations: Unknown, int
```
After this PR, we'd avoid considering "Unknown" as part of the
conflicting declarations. This means we'd still flag it for the
following case:
```py
if flag:
    x: int
else:
    x: str

x = 1  # conflicting-declarations: int, str
```
A solution that's local to the exception control flow was also explored
which required updating the logic for merging the flow snapshot to avoid
considering declarations using a flag. This is preserved here:
https://github.com/astral-sh/ruff/compare/dhruv/control-flow-no-declarations?expand=1.
The main motivation to avoid that is that we don't really understand the
user experience w.r.t. the Unknown type and the conflicting-declarations
diagnostic. This makes us unsure about the right semantics, i.e., whether
and when that diagnostic should be raised. For now, we've decided to move
forward with this PR and could decide to adopt another solution or remove
the conflicting-declarations diagnostic in the future.
Closes: #13966
## Test Plan
Update the existing mdtest case. Add an additional case specific to
exception control flow to verify that the diagnostic is no longer raised.
When importing a nested module, we were correctly creating a binding for
the top-most parent, but we were binding that to the nested module, not
to that parent module. Moreover, we weren't treating those submodules as
members of their containing parents. This PR addresses both issues, so
that nested imports work as expected.
As discussed in ~Slack~ whatever chat app I find myself in these days
😄, this requires keeping track of which modules have been imported
within the current file, so that when we resolve member access on a
module reference, we can see if that member has been imported as a
submodule. If so, we return the submodule reference immediately, instead
of checking whether the parent module's definition defines the symbol.
This is currently done in a flow insensitive manner. The `SemanticIndex`
now tracks all of the modules that are imported (via `import`, not via
`from...import`). The member access logic mentioned above currently only
considers module imports in the file containing the attribute
expression.
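A minimal sketch of the fixed behavior, using a stdlib package for illustration (the revealed renderings in the comments are assumptions):

```py
import collections.abc

# `import collections.abc` binds the top-level name `collections` to the parent
# module, and `collections.abc` resolves as a member because the submodule was
# imported within this file.
reveal_type(collections)      # <module 'collections'>
reveal_type(collections.abc)  # <module 'collections.abc'>
```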
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
This PR introduces three changes to `D403`, which has to do with
capitalizing the first word in a docstring.
1. The diagnostic and fix now skip leading whitespace when determining
what counts as "the first word" (see the sketch after this list).
2. The name has been changed to `first-word-uncapitalized` from
`first-line-capitalized`, for both clarity and compliance with our rule
naming policy.
3. The diagnostic message and documentation have been modified slightly
to reflect this.
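A minimal sketch of the first change (hypothetical docstring):

```py
def f():
    """   foo bar."""  # leading whitespace is skipped; `foo` is still flagged as uncapitalized
```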
Closes #14890
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [colored](https://redirect.github.com/mackwic/colored) | workspace.dependencies | minor | `2.1.0` -> `2.2.0` |
---
### Release Notes
<details>
<summary>mackwic/colored (colored)</summary>
### [`v2.2.0`](https://redirect.github.com/mackwic/colored/compare/v2.1.0...v2.2.0)
[Compare
Source](https://redirect.github.com/mackwic/colored/compare/v2.1.0...v2.2.0)
</details>
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Fixes #14969.
The issue was that this line:
```rust
let from_assign_to_loop = TextRange::new(binding_stmt.end(), for_stmt.start());
```
was not safe if the binding was after the target. The only way (at least
that I can think of) this can happen is if they are in different scopes,
so it now checks for that before checking if there are usages between
the two.
## Summary
The summary is misleading, as are the `whitespace-after-open-bracket` and
`whitespace-before-close-bracket` names: the rules apply not only to
brackets, but also to parentheses and braces. Don't change the names, but
align the documentation with the actual behaviour.
## Test Plan
No test (documentation).
## Summary
This change adds `name` and `default` functions to `TypeParam` to access
the corresponding attributes more conveniently. I currently have these
as helper functions in code built on top of ruff_python_ast, and they
seemed like they might be generally useful.
## Test Plan
Ran the checks listed in CONTRIBUTING.md#development.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
A class is an instance of its metaclass, so `ClassLiteral("ABC")` is not
disjoint from `Instance("ABCMeta")`. However, we erroneously consider
the two types disjoint on the `main` branch. This PR fixes that.
This bug was uncovered by adding some more core types to the property
tests that provide coverage for classes that have custom metaclasses.
The additions to the property tests are included in this PR.
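A minimal sketch of why the two types overlap, using the stdlib `abc` module for illustration:

```py
from abc import ABC, ABCMeta

# The class object `ABC` is itself an instance of its metaclass `ABCMeta`, so the
# class literal for `ABC` and instances of `ABCMeta` are not disjoint.
x: ABCMeta = ABC
assert isinstance(ABC, ABCMeta)
```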
## Test Plan
New unit tests and property tests added. Tested with:
- `cargo test -p red_knot_python_semantic`
- `QUICKCHECK_TESTS=100000 cargo test -p red_knot_python_semantic --
--ignored types::property_tests::stable`
The assignability property test fails on this branch, but that's a known
issue that exists on `main`, due to
https://github.com/astral-sh/ruff/issues/14899.
## Summary
Teach red-knot that `type[...]` is always disjoint from `None` and from
`LiteralString`. Fixes #14925.
This should properly be generalized to "all instances of final types
which are not subclasses of `type`", but until we support finality,
hardcoding `None` (which is known to be final) allows us to fix the
subtype transitivity property test.
## Test Plan
Existing tests pass, added new unit tests for `is_disjoint_from` and
`is_subtype_of`.
`QUICKCHECK_TESTS=100000 cargo test -p red_knot_python_semantic --
--ignored types::property_tests::stable` fails only the "assignability
is reflexive" test, which is known to fail on `main` (#14899).
The same command, with `property_tests.rs` edited to prevent generating
intersection tests (the cause of #14899), passes all quickcheck tests.
## Summary
This is not strictly required yet, but makes these tests future-proof.
They need a `python-version` requirement as they rely on language
features that are not available in 3.9.
## Summary
Many core Airflow features have been deprecated and moved to Airflow
Providers. Since users might need to install an additional package (e.g.,
`apache-airflow-provider-fab==1.0.0`), a separate rule (AIR303) is
created for this.
As some of the changes only relate to the module/package moved, instead
of listing out all the functions, variables, and classes in a module or
a package, it warns the user to import from the new path instead of the
specific name.
The following are the ones that have been moved to
`apache-airflow-provider-fab==1.0.0`:
* module moved
* `airflow.api.auth.backend.basic_auth` →
`airflow.providers.fab.auth_manager.api.auth.backend.basic_auth`
* `airflow.api.auth.backend.kerberos_auth` →
`airflow.providers.fab.auth_manager.api.auth.backend.kerberos_auth`
* `airflow.auth.managers.fab.api.auth.backend.kerberos_auth` →
`airflow.providers.fab.auth_manager.api.auth.backend.kerberos_auth`
* `airflow.auth.managers.fab.security_manager.override` →
`airflow.providers.fab.auth_manager.security_manager.override`
* names (e.g., functions, classes) moved
* `airflow.www.security.FabAirflowSecurityManagerOverride` →
`airflow.providers.fab.auth_manager.security_manager.override.FabAirflowSecurityManagerOverride`
* `airflow.auth.managers.fab.fab_auth_manager.FabAuthManager` →
`airflow.providers.fab.auth_manager.security_manager.FabAuthManager`
## Test Plan
A test fixture has been included for the rule.
## Summary
Add support for `typing.TYPE_CHECKING` and
`typing_extensions.TYPE_CHECKING`.
relates to: https://github.com/astral-sh/ruff/issues/14170
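A minimal sketch of the kind of code this lets the checker understand (the imported module is illustrative):

```py
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only imported for type checking; the checker treats this branch as taken.
    from collections.abc import Sequence


def total(xs: "Sequence[int]") -> int:
    return sum(xs)
```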
## Test Plan
New Markdown-based tests
## Summary
This PR extends the mdtest configuration with a `log` setting that can
be any of:
* `true`: Enables tracing
* `false`: Disables tracing (default)
* String: An ENV_FILTER similar to `RED_KNOT_LOG`
```toml
log = true
```
Closes https://github.com/astral-sh/ruff/issues/13865
## Test Plan
I changed a test and tried `log=true`, `log=false`, and `log=INFO`
## Summary
This PR renames the `--custom-typeshed-dir`, `target-version`, and
`--current-directory` CLI options to `--typeshed`,
`--python-version`, and `--project` as discussed in the CLI proposal
document.
I added aliases for `--target-version` (for Ruff compat) and
`--custom-typeshed-dir` (for Alex)
## Test Plan
Long help
```
An extremely fast Python type checker.
Usage: red_knot [OPTIONS] [COMMAND]
Commands:
server Start the language server
help Print this message or the help of the given subcommand(s)
Options:
--project <PROJECT>
Run the command within the given project directory.
All `pyproject.toml` files will be discovered by walking up the directory tree from the project root, as will the project's virtual environment (`.venv`).
Other command-line arguments (such as relative paths) will be resolved relative to the current working directory."#,
--venv-path <PATH>
Path to the virtual environment the project uses.
If provided, red-knot will use the `site-packages` directory of this virtual environment to resolve type information for the project's third-party dependencies.
--typeshed-path <PATH>
Custom directory to use for stdlib typeshed stubs
--extra-search-path <PATH>
Additional path to use as a module-resolution source (can be passed multiple times)
--python-version <VERSION>
Python version to assume when resolving types
[possible values: 3.7, 3.8, 3.9, 3.10, 3.11, 3.12, 3.13]
-v, --verbose...
Use verbose output (or `-vv` and `-vvv` for more verbose output)
-W, --watch
Run in watch mode by re-running whenever files change
-h, --help
Print help (see a summary with '-h')
-V, --version
Print version
```
Short help
```
An extremely fast Python type checker.
Usage: red_knot [OPTIONS] [COMMAND]
Commands:
server Start the language server
help Print this message or the help of the given subcommand(s)
Options:
--project <PROJECT> Run the command within the given project directory
--venv-path <PATH> Path to the virtual environment the project uses
--typeshed-path <PATH> Custom directory to use for stdlib typeshed stubs
--extra-search-path <PATH> Additional path to use as a module-resolution source (can be passed multiple times)
--python-version <VERSION> Python version to assume when resolving types [possible values: 3.7, 3.8, 3.9, 3.10, 3.11, 3.12, 3.13]
-v, --verbose... Use verbose output (or `-vv` and `-vvv` for more verbose output)
-W, --watch Run in watch mode by re-running whenever files change
-h, --help Print help (see more with '--help')
-V, --version Print version
```
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Closes https://github.com/astral-sh/ruff/issues/14892, by adding
`sqlmodel.SQLModel` to the list of classes with default copy semantics.
## Test Plan
Added a test into `RUF012.py` containing the example from the original
issue.
## Summary
Regression test(s) for something that broke while implementing #14759.
We have similar tests for other control flow elements, but feel free to
let me know if this seems superfluous.
## Test Plan
New mdtests
## Summary
`PTH210` was renamed to `invalid-pathlib-with-suffix` and extended to check for `.with_suffix(".")`. This caused the fix availability to be downgraded to "Sometimes", since no fix is offered in this case.
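A minimal sketch of the newly covered case, for which no fix is offered:

```py
from pathlib import Path

path = Path("example.txt")
path.with_suffix(".")  # flagged by invalid-pathlib-with-suffix; raises ValueError at runtime
```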
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Dylan <53534755+dylwil3@users.noreply.github.com>
## Summary
This PR changes our zizmor configuration to also flag low-severity
security issues in our GitHub Actions workflows. It's a followup to
https://github.com/astral-sh/ruff/pull/14844. The issues being fixed
here were all flagged by [zizmor's `template-injection`
rule](https://woodruffw.github.io/zizmor/audits/#template-injection):
> Detects potential sources of code injection via template expansion.
>
> GitHub Actions allows workflows to define template expansions, which
occur within special `${{ ... }}` delimiters. These expansions happen
before workflow and job execution, meaning the expansion of a given
expression appears verbatim in whatever context it was performed in.
>
> Template expansions aren't syntax-aware, meaning that they can result
in unintended shell injection vectors. This is especially true when
they're used with attacker-controllable expression contexts, such as
`github.event.issue.title` (which the attacker can fully control by
supplying a new issue title).
[...]
> To fully remediate the vulnerability, you should not use `${{
env.VARNAME }}`, since that is still a template expansion. Instead, you
should use `${VARNAME}` to ensure that the shell itself performs the
variable expansion.
## Test Plan
I tested that this resolves all zizmor warnings by running `pre-commit
run -a zizmor` locally. The other test is obviously to check that the
workflows all still run correctly in CI 😄
## Summary
Using `typing.LiteralString` breaks as soon as we understand
`sys.version_info` branches, as it's only available in 3.11 and later.
## Test Plan
Made sure it didn't fail on my #14759 branch anymore.
We support using `typing.Type[]` as a base class (and we have tests for
it), but not yet `builtins.type[]`. At some point we should fix that,
but I don't think it's worth spending much time on now (and it might be
easier once we've implemented generics?). This PR just adds a failing
test with a TODO.
## Summary
Fixes a small scoping issue in `DiagnosticId::matches`
Note: I don't think we should use `lint:id` in mdtests just yet. I worry
that it could lead to a lot of unnecessary churn if we decide **not** to
use `lint:<id>` as the format (e.g., `lint/id`).
The reason users even see `lint:<rule>` is that the mdtest framework uses
the diagnostic infrastructure.
Closes #14910
## Test Plan
Added tests
## Summary
This is the third and last PR in this stack that adds support for
toggling lints at a per-rule level.
This PR introduces a new `LintRegistry`, a central index of known lints.
The registry is required because we want to support lint rules from many
different crates but need a way to look them up by name, e.g., when
resolving a lint from a name in the configuration or analyzing a
suppression comment.
Adding a lint now requires two steps:
1. Declare the lint with `declare_lint`
2. Register the lint in the registry inside the `register_lints`
function.
I considered some more involved macros to avoid changes in two places.
Still, I ultimately decided against it because a) it's just two places
and b) I'd expect that registering a type checker lint will differ from
registering a lint that runs as a rule in the linter. I worry that any
more opinionated design could limit our options when working on the
linter, so I kept it simple.
The second part of this PR is the `RuleSelection`. It stores which lints
are enabled and what severity they should use for created diagnostics.
For now, the `RuleSelection` always gets initialized with all known
lints and it uses their default level.
## Linter crates
Each crate that defines lints should export a `register_lints` function
that accepts a `&mut LintRegistryBuilder` to register all its known
lints in the registry. This should make registering all known lints in a
top-level crate easy: Just call `register_lints` of every crate that
defines lint rules.
I considered defining a `LintCollection` trait and even some fancy
macros to accomplish the same but decided to go for this very simplistic
approach for now. We can add more abstraction once needed.
## Lint rules
This is a bit hand-wavy. I don't have a good sense of what our linter
infrastructure will look like, but I expect we'll need a way to register
the rules that should run as part of the red knot linter. One way is to
keep doing what Ruff does by having one massive `checker` and each lint
rule adds a call to itself in the relevant AST visitor methods. An
alternative is that we have a `LintRule` trait that provides common
hooks and implementations will be called at the "right time". Such a
design would need a way to register all known lint implementations,
possibly with the lint. This is where we'd probably want a dedicated
`register_rule` method. A third option is that lint rules are handled
separately from the `LintRegistry` and are specific to the linter crate.
The current design should be flexible enough to support the three
options.
## Documentation generation
The documentation for all known lints can be generated by creating a
factory, registering all lints by calling the `register_lints` methods,
and then querying the registry for the metadata.
## Deserialization and Schema generation
I haven't fully decided what the best approach is when it comes to
deserializing lint rule names:
* Reject invalid names in the deserializer. This gives us error messages
with line and column numbers (by serde)
* Don't validate lint rule names during deserialization; defer the
validation until the configuration is resolved. This gives us more
control over handling the error, e.g. emit a warning diagnostic instead
of aborting when a rule isn't known.
One technical challenge for both deserialization and schema generation
is that the `Deserialize` and `JSONSchema` traits do not allow passing
the `LintRegistry`, which is required to look up the lints by name. I
suggest that we either rely on the salsa db being set for the current
thread (`salsa::Attach`) or build our own thread-local storage for the
`LintRegistry`. It's the caller's responsibility to make the lint
registry available before calling `Deserialize` or `JSONSchema`.
## CLI support
I prefer deferring adding support for enabling and disabling lints from
the CLI for now because I think it will be easier
to add once I've figured out how to handle configurations.
## Bitset optimization
Ruff tracks the enabled rules using a cheap copyable `Bitset` instead of
a hash map. This helped improve performance by a few percent (see
https://github.com/astral-sh/ruff/pull/3606). However, this approach is
no longer possible because lints have no "cheap" way to compute their
index inside the registry (other than using a hash map).
We could consider doing something similar to Salsa where each
`LintMetadata` stores a `LazyLintIndex`.
```rust
pub struct LazyLintIndex {
    cached: OnceLock<(Nonce, LintIndex)>,
}

impl LazyLintIndex {
    pub fn get(&self, registry: &LintRegistry, lint: &'static LintMetadata) -> LintIndex {
        let (nonce, index) = self
            .cached
            .get_or_init(|| (registry.nonce(), registry.lint_index(lint)));

        if registry.nonce() == *nonce {
            *index
        } else {
            registry.lint_index(lint)
        }
    }
}
```
Each registry keeps a map from `LintId` to `LintIndex` where `LintIndex`
is in the range of `0...registry.len()`. The `LazyLintIndex` is based on
the assumption that every program has exactly **one** registry. This
assumption allows caching the `LintIndex` directly on the
`LintMetadata`. The implementation falls back to the "slow" path if
there is more than one registry at runtime.
I was very close to implementing this optimization because it's kind of
fun to implement. I ultimately decided against it because it adds
complexity and I don't think it's worth doing in Red Knot today:
* Red Knot only queries the rule selection when deciding whether or not
to emit a diagnostic. It is rarely used to detect if a certain code
block should run. This is different from Ruff where the rule selection
is queried many times for every single AST node to determine which rules
*should* run.
* I'm not sure if a 2-3% performance improvement is worth the complexity.

I suggest revisiting this decision when working on the linter, where a
fast path for deciding if a rule is enabled might be more important (but
that depends on how lint rules are implemented).
## Test Plan
I removed a lint from the default rule registry, and the MD tests
started failing because the diagnostics were no longer emitted.
This PR adds a syntax error if the parser encounters a `TryStmt` that
has except clauses both with and without a star.
The displayed error points to each except clause that contradicts the
original except clause kind. So, for example,
```python
try:
    ....
except:  # <-- we assume this is the desired except kind
    ....
except*:  # <--- error will point here
    ....
except*:  # <--- and here
    ....
```
Closes #14860
This adds support for `type[Any]`, which represents an unknown type (not
an instance of an unknown type), and `type`, which we are choosing to
interpret as `type[object]`.
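A minimal sketch of the two annotations; the revealed types in the comments are assumptions about the exact rendering:

```py
from typing import Any


def f(x: type[Any], y: type):
    reveal_type(x)  # type[Any]: some class object, but which class is unknown
    reveal_type(y)  # treated as type[object]
```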
Closes #14546
## Summary
This is already several hundred lines of code, and it will get more
complex with call-signature checking.
## Test Plan
This is a pure code move; the moved code wasn't changed, just imports.
Existing tests pass.
## Summary
Add a `is_fully_static` premise to the equivalence on subtyping property tests.
## Test Plan
```
cargo test -p red_knot_python_semantic -- --ignored types::property_tests::stable
```
Without this, `cargo insta test` re-compiles every time it is run, even
if there are no changes. With this, I can re-run `cargo insta test` (or
other `cargo build` commands) without it resulting in re-compiles.
I made an identical change to uv a while back:
https://github.com/astral-sh/uv/pull/6825
## Summary
This is the second PR out of three that adds support for
enabling/disabling lint rules in Red Knot. You may want to take a look
at the [first PR](https://github.com/astral-sh/ruff/pull/14869) in this
stack to familiarize yourself with the used terminology.
This PR adds a new syntax to define a lint:
```rust
declare_lint! {
    /// ## What it does
    /// Checks for references to names that are not defined.
    ///
    /// ## Why is this bad?
    /// Using an undefined variable will raise a `NameError` at runtime.
    ///
    /// ## Example
    ///
    /// ```python
    /// print(x) # NameError: name 'x' is not defined
    /// ```
    pub(crate) static UNRESOLVED_REFERENCE = {
        summary: "detects references to names that are not defined",
        status: LintStatus::preview("1.0.0"),
        default_level: Level::Warn,
    }
}
```
A lint has a name and metadata about its status (preview, stable,
removed, deprecated), the default diagnostic level (unless the
configuration changes), and documentation. I use a macro here to derive
the kebab-case name and extract the documentation automatically.
This PR doesn't yet add any mechanism to discover all known lints. This
will be added in the next and last PR in this stack.
## Documentation
I documented some rules but then decided that it's probably not my best
use of time if I document all of them now (it also means that I play
catch-up with all of you forever). That's why I left some rules
undocumented (marked with TODO)
## Where is the best place to define all lints?
I'm not sure. I think what I have in this PR is fine but I also don't
love it because most lints are in a single place but not all of them. If
you have ideas, let me know.
## Why is the message not part of the lint, unlike Ruff's `Violation`
I understand that the main motivation for defining `message` on
`Violation` in Ruff is to remove the need to repeat the same message
over and over again. I'm not sure if this is an actual problem. Most
rules only emit a diagnostic in a single place and they commonly use
different messages if they emit diagnostics in different code paths,
requiring extra fields on the `Violation` struct.
That's why I'm not convinced that there's an actual need for it and
there are alternatives that can reduce the repetition when creating a
diagnostic:
* Create a helper function. We already do this in red knot with the
`add_xy` methods
* Create a custom `Diagnostic` implementation that tailors the entire
diagnostic and pre-codes e.g. the message
Avoiding an extra field on the `Violation` also removes the need to
allocate intermediate strings, as is commonly the case in Ruff.
Instead, Red Knot can use a borrowed string with `format_args`.
## Test Plan
`cargo test`
## Summary
This PR introduces a structured `DiagnosticId` instead of using a plain
`&'static str`. It is the first of three in a stack that implements a
basic rules infrastructure for Red Knot.
`DiagnosticId` is an enum over all known diagnostic codes. A closed enum
reduces the risk of accidentally introducing two identical diagnostic
codes. It also opens the possibility of generating reference
documentation from the enum in the future (not part of this PR).
The enum isn't *fully closed* because it uses a `&'static str` for lint
names. This is because we want the flexibility to define lints in
different crates, and all names are only known in `red_knot_linter` or
above. Still, lower-level crates must already reference the lint names
to emit diagnostics. We could define all lint-names in `DiagnosticId`
but I decided against it because:
* We probably want to share the `DiagnosticId` type between Ruff and Red
Knot to avoid extra complexity in the diagnostic crate, and both tools
use different lint names.
* Lints require a lot of extra metadata beyond just the name. That's why
I think defining them close to their implementation is important.
In the long term, we may also want to support plugins, which would make
it impossible to know all lint names at compile time. The next PR in the
stack introduces extra syntax for defining lints.
A closed enum does have a few disadvantages:
* rustc can't help us detect unused diagnostic codes because the enum is
public
* Adding a new diagnostic in the workspace crate now requires changes to
at least two crates: It requires changing the workspace crate to add the
diagnostic and the `ruff_db` crate to define the diagnostic ID. I
consider this an acceptable trade. We may want to move `DiagnosticId` to
its own crate or into a shared `red_knot_diagnostic` crate.
## Preventing duplicate diagnostic identifiers
One goal of this PR is to make it harder to introduce ambiguous
diagnostic IDs, which is achieved by defining a closed enum. However,
the enum isn't fully "closed" because it doesn't explicitly list the IDs
for all lint rules. That leaves the possibility that a lint rule and a
diagnostic ID share the same name.
I made the names unambiguous in this PR by separating them into
different namespaces by using `lint/<rule>` for lint rule codes. I don't
mind the `lint` prefix in a *Ruff next* context, but it is a bit weird
for a standalone type checker. I'd like to not overfocus on this for now
because I see a few different options:
* We remove the `lint` prefix and add a unit test in a top-level crate
that iterates over all known lint rules and diagnostic IDs to ensure the
names are non-overlapping.
* We only render `[lint]` as the error code and add a note to the
diagnostic mentioning the lint rule. This is similar to clippy and has
the advantage that the header line remains short
(`lint/some-long-rule-name` is very long ;))
* Any other form of adjusting the diagnostic rendering to make the
distinction clear
I think we can defer this decision for now because the `DiagnosticId`
contains all the relevant information to change the rendering
accordingly.
## Why `Lint` and not `LintRule`
I see three kinds of diagnostics in Red Knot:
* Non-suppressable: Reveal type, IO errors, configuration errors, etc.
(any `DiagnosticId`)
* Lints: code-related diagnostics that are suppressable.
* Lint rules: The same as lints, but they can be enabled or disabled in
the configuration. The majority of lints in Red Knot and the Ruff
linter.
Our current implementation doesn't distinguish between lints and lint
rules because we aren't aware of a suppressible code-related lint that
can't be configured in the configuration. The only lint that comes to my
mind is maybe `division-by-zero` if we're 99.99% sure that it is always
right. However, I want to keep the door open to making this distinction
in the future if it proves useful.
Another reason why I chose lint over lint rule (or just rule) is that I
want to leave room for a future lint rule and lint phase concept:
* lint is the *what*: a specific code smell, pattern, or violation
* the lint rule is the *how*: I could see a future `LintRule` trait in
`red_knot_python_linter` that provides the necessary hooks to run as
part of the linter. A lint rule produces diagnostics for exactly one
lint. A lint rule differs from all lints in `red_knot_python_semantic`
because they don't run as "rules" in the Ruff sense. Instead, they're a
side-product of type inference.
* the lint phase is a different form of *how*: A lint phase can produce
many different lints in a single pass. This is a somewhat common pattern
in Ruff where running one analysis collects the necessary information
for finding many different lints
* diagnostic is the *presentation*: Unlike a lint, the diagnostic isn't
the what, but how a specific lint gets presented. I expect that many
lints can use one generic `LintDiagnostic`, but a few lints might need
more flexibility and implement their custom diagnostic rendering (at
least custom `Diagnostic` implementation).
## Test Plan
`cargo test`
## Summary
Add replacement fixes to deprecated arguments of a DAG.
Ref #14582, #14626
## Test Plan
Diff was verified and snapshots were updated.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Per suggestion in
https://github.com/astral-sh/ruff/pull/14802#discussion_r1875455417
This is a bit less error-prone and allows us to handle both expressions
in the current scope or a different scope. Also, there's currently no
need for this method outside of `TypeInferenceBuilder`, so no reason to
expose it in `types.rs`.
## Test Plan
Pure refactor, no functional change; existing tests pass.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Part 1 of the big change introduced in #14828. This temporarily causes
all fixes for `round(...)` to be considered unsafe, but they will
eventually be enhanced.
## Test Plan
`cargo nextest run` and `cargo insta test`.
## Summary
Upgrades to React 19. Closes
https://github.com/astral-sh/ruff/issues/14859
## Test Plan
I ran the playground locally and clicked through the different panels. I
didn't see any warning or error.
## Summary
Close #11243. Fix `pytest-parametrize-names-wrong-type (PT006)` to edit
both `argnames` and `argvalues` if both of them are single-element
tuples/lists.
```python
# Before fix
@pytest.mark.parametrize(("x",), [(1,), (2,)])
def test_foo(x):
    ...


# After fix:
@pytest.mark.parametrize("x", [1, 2])
def test_foo(x):
    ...
```
## Test Plan
New test cases
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [python-jsonschema/check-jsonschema](https://redirect.github.com/python-jsonschema/check-jsonschema) | repository | minor | `0.29.4` -> `0.30.0` |
Note: The `pre-commit` manager in Renovate is not supported by the
`pre-commit` maintainers or community. Please do not report any problems
there, instead [create a Discussion in the Renovate
repository](https://redirect.github.com/renovatebot/renovate/discussions/new)
if you have any questions.
---
### Release Notes
<details>
<summary>python-jsonschema/check-jsonschema
(python-jsonschema/check-jsonschema)</summary>
### [`v0.30.0`](https://redirect.github.com/python-jsonschema/check-jsonschema/blob/HEAD/CHANGELOG.rst#0300)
[Compare
Source](https://redirect.github.com/python-jsonschema/check-jsonschema/compare/0.29.4...0.30.0)
- Update vendored schemas: azure-pipelines, bitbucket-pipelines,
buildkite,
circle-ci, cloudbuild, dependabot, github-workflows, gitlab-ci, mergify,
readthedocs, renovate, taskfile, woodpecker-ci (2024-11-29)
- Fix caching behavior to always use URL hashes as cache keys. This
fixes a
cache confusion bug in which the wrong schema could be retrieved from
the
cache. This resolves :cve:`2024-53848`. Thanks :user:`sethmlarson` for
reporting!
- Deprecate the `--cache-filename` flag. It no longer has any effect and
will
be removed in a future release.
</details>
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR introduces three changes to the diagnostic and fix behavior
(still under preview) for [boolean-chained-comparison
(PLR1716)](https://docs.astral.sh/ruff/rules/boolean-chained-comparison/#boolean-chained-comparison-plr1716).
1. We now offer a _fix_ in the case of parenthesized expressions like
`(a < b) and b < c`. The fix will merge the chains of comparisons and
then balance parentheses by _adding_ parentheses to one side of the
expression.
2. We now trigger a diagnostic (and fix) in the case where some
comparisons have multiple comparators like `a < b < c and c < d`.
3. When adjacent comparators are parenthesized, we prefer the left
parenthesization and apply the replacement to the whole parenthesized
range. So, for example, `a < (b) and ((b)) < c` becomes `a < (b) < c`.
While these seem like somewhat disconnected changes, they are actually
related. If we only offered (1), then we would see the following fix
behavior:
```diff
- (a < b) and b < c and ((c < d))
+ (a < b < c) and ((c < d))
```
This is because the fix which adds parentheses to the first pair of
comparisons overlaps with the fix that removes the `and` between the
second two comparisons. So the latter fix is deferred. However, the
latter fix does not get a second chance because, upon the next lint
iteration, there is no violation of `PLR1716`.
Upon adopting (2), however, both fixes occur by the time ruff completes
several iterations and we get:
```diff
- (a < b) and b < c and ((c < d))
+ ((a < b < c < d))
```
Finally, (3) fixes a previously unobserved bug wherein the autofix for
`a < (b) and b < c` used to result in `a<(b<c` which gives a syntax
error. It could in theory have been fixed in a separate PR, but seems to
be on theme here.
----------
- Closes #13524
- (1), (2), and (3) are implemented in separate commits for ease of
review and modification.
- Technically a user can trigger an error in ruff (by reaching max
iterations) if they have a humongous boolean chained comparison with
differing parentheses levels.
## Summary
Minor change to the documentation of the COM818 rule. There was a block
titled “In the event that a tuple is intended”, but the suggested change
did not produce a tuple.
## Test Plan
```python
>>> import json
>>> (json.dumps({"bar": 1}),) # this is a tuple
('{"bar": 1}',)
>>> (json.dumps({"bar": 1})) # not a tuple
'{"bar": 1}'
```
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [pep440_rs](https://redirect.github.com/konstin/pep440-rs) | workspace.dependencies | patch | `0.7.2` -> `0.7.3` |
---
### Release Notes
<details>
<summary>konstin/pep440-rs (pep440_rs)</summary>
### [`v0.7.3`](https://redirect.github.com/konstin/pep440-rs/blob/HEAD/Changelog.md#073)
[Compare
Source](https://redirect.github.com/konstin/pep440-rs/compare/v0.7.2...v0.7.3)
- Use once_cell to lower MSRV
</details>
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
## Summary
A [recent exploit](https://github.com/advisories/GHSA-7x29-qqmq-v6qc)
brought attention to how easy it can be for attackers to use template
expansion in GitHub Actions workflows to inject arbitrary code into a
repository. That vulnerability [would have been caught by the zizmor
linter](https://blog.yossarian.net/2024/12/06/zizmor-ultralytics-injection),
which looks for potential security vulnerabilities in GitHub Actions
workflows. This PR adds [zizmor](https://github.com/woodruffw/zizmor) as
a pre-commit hook and fixes the high- and medium-severity warnings
flagged by the tool.
All the warnings fixed in this PR are related to this zizmor check:
https://woodruffw.github.io/zizmor/audits/#artipacked. The summary of
the check is that `actions/checkout` will by default persist git
configuration for the duration of the workflow, which can be insecure.
It's unnecessary unless you actually need to do things with `git` later
on in the workflow. None of our workflows do except for
`publish-docs.yml` and `sync-typeshed.yml`, so I set
`persist-credentials: true` for those two but `persist-credentials:
false` for all other uses of `actions/checkout`.
Unfortunately there are several warnings in `release.yml`, including
four high-severity warnings. However, this is a generated workflow file,
so I have deliberately excluded this file from the check. These are the
findings in `release.yml`:
<details>
<summary>release.yml findings</summary>
```
warning[artipacked]: credential persistence through GitHub Actions artifacts
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:62:9
|
62 | - uses: actions/checkout@v4
| _________-
63 | | with:
64 | | submodules: recursive
| |_______________________________- does not set persist-credentials: false
|
= note: audit confidence → Low
warning[artipacked]: credential persistence through GitHub Actions artifacts
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:124:9
|
124 | - uses: actions/checkout@v4
| _________-
125 | | with:
126 | | submodules: recursive
| |_______________________________- does not set persist-credentials: false
|
= note: audit confidence → Low
warning[artipacked]: credential persistence through GitHub Actions artifacts
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:174:9
|
174 | - uses: actions/checkout@v4
| _________-
175 | | with:
176 | | submodules: recursive
| |_______________________________- does not set persist-credentials: false
|
= note: audit confidence → Low
warning[artipacked]: credential persistence through GitHub Actions artifacts
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:249:9
|
249 | - uses: actions/checkout@v4
| _________-
250 | | with:
251 | | submodules: recursive
252 | | # Create a GitHub Release while uploading all files to it
| |_______________________________________________________________- does not set persist-credentials: false
|
= note: audit confidence → Low
error[excessive-permissions]: overly broad workflow or job-level permissions
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:17:1
|
17 | / permissions:
18 | | "contents": "write"
... |
39 | | # If there's a prerelease-style suffix to the version, then the release(s)
40 | | # will be marked as a prerelease.
| |_________________________________^ contents: write is overly broad at the workflow level
|
= note: audit confidence → High
error[template-injection]: code injection via template expansion
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:80:9
|
80 | - id: plan
| _________^
81 | | run: |
| |_________^
82 | || dist ${{ (inputs.tag && inputs.tag != 'dry-run' && format('host --steps=create --tag={0}', inputs.tag)) || 'plan' }} --out...
83 | || echo "dist ran successfully"
84 | || cat plan-dist-manifest.json
85 | || echo "manifest=$(jq -c "." plan-dist-manifest.json)" >> "$GITHUB_OUTPUT"
| ||__________________________________________________________________________________^ this step
| ||__________________________________________________________________________________^ inputs.tag may expand into attacker-controllable code
|
= note: audit confidence → Low
error[template-injection]: code injection via template expansion
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:80:9
|
80 | - id: plan
| _________^
81 | | run: |
| |_________^
82 | || dist ${{ (inputs.tag && inputs.tag != 'dry-run' && format('host --steps=create --tag={0}', inputs.tag)) || 'plan' }} --out...
83 | || echo "dist ran successfully"
84 | || cat plan-dist-manifest.json
85 | || echo "manifest=$(jq -c "." plan-dist-manifest.json)" >> "$GITHUB_OUTPUT"
| ||__________________________________________________________________________________^ this step
| ||__________________________________________________________________________________^ inputs.tag may expand into attacker-controllable code
|
= note: audit confidence → Low
error[template-injection]: code injection via template expansion
--> /Users/alexw/dev/ruff/.github/workflows/release.yml:80:9
|
80 | - id: plan
| _________^
81 | | run: |
| |_________^
82 | || dist ${{ (inputs.tag && inputs.tag != 'dry-run' && format('host --steps=create --tag={0}', inputs.tag)) || 'plan' }} --out...
83 | || echo "dist ran successfully"
84 | || cat plan-dist-manifest.json
85 | || echo "manifest=$(jq -c "." plan-dist-manifest.json)" >> "$GITHUB_OUTPUT"
| ||__________________________________________________________________________________^ this step
| ||__________________________________________________________________________________^ inputs.tag may expand into attacker-controllable code
|
= note: audit confidence → Low
```
</details>
## Test Plan
`uvx pre-commit run -a`
Improves error messages for [except*](https://peps.python.org/pep-0654/)
(Rules: B025, B029, B030, B904)
Example python snippet:
```python
try:
    a = 1
except* ValueError:
    a = 2
except* ValueError:
    a = 2

try:
    pass
except* ():
    pass

try:
    pass
except* 1:  # error
    pass

try:
    raise ValueError
except* ValueError:
    raise UserWarning
```
Error messages
Before:
```
$ ruff check --select=B foo.py
foo.py:6:9: B025 try-except block with duplicate exception `ValueError`
foo.py:11:1: B029 Using `except ():` with an empty tuple does not catch anything; add exceptions to handle
foo.py:16:9: B030 `except` handlers should only be exception classes or tuples of exception classes
foo.py:22:5: B904 Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling
Found 4 errors.
```
After:
```
$ ruff check --select=B foo.py
foo.py:6:9: B025 try-except* block with duplicate exception `ValueError`
foo.py:11:1: B029 Using `except* ():` with an empty tuple does not catch anything; add exceptions to handle
foo.py:16:9: B030 `except*` handlers should only be exception classes or tuples of exception classes
foo.py:22:5: B904 Within an `except*` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling
Found 4 errors.
```
Closes https://github.com/astral-sh/ruff/issues/14791
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
This adds support for `type[a.X]`, where the `type` special form is
applied to a qualified name that resolves to a class literal. This works
for both nested classes and classes imported from another module.
Closes #14545
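A minimal sketch of the kind of code this enables (names and the revealed output are illustrative, not taken from the test suite):
```py
class Outer:
    class Inner: ...

# `type` applied to a qualified name that resolves to a class literal
def f(cls: type[Outer.Inner]) -> Outer.Inner:
    return cls()

reveal_type(f(Outer.Inner))  # revealed: Inner
```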
## Summary
Inferred and declared types for function parameters, in the function
body scope.
Fixes #13693.
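As an illustrative sketch (the exact revealed forms are assumptions, not snapshots from the added mdtests):
```py
def f(x: int, y: str = "hello", *args: bytes, **kwargs: float):
    reveal_type(x)       # revealed: int
    reveal_type(y)       # revealed: str
    reveal_type(args)    # revealed: tuple[bytes, ...]
    reveal_type(kwargs)  # revealed: dict[str, float]
```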
## Test Plan
Added mdtests.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Airflow 3.0 removes various deprecated functions, members, modules, and
other values. They were deprecated in 2.x, but the removal causes
incompatibilities that we want to detect. This PR adds detection for the
following removed names:
* in `DAG`
  * `sla_miss_callback` was removed
* in `airflow.operators.trigger_dagrun.TriggerDagRunOperator`
  * `execution_date` was removed
* in `airflow.operators.weekday.DayOfWeekSensor`,
  `airflow.operators.datetime.BranchDateTimeOperator` and
  `airflow.operators.weekday.BranchDayOfWeekOperator`
  * `use_task_execution_day` was removed in favor of `use_task_logical_date`
The full list of rules we will extend is tracked in
https://github.com/apache/airflow/issues/44556
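For illustration, code like the following would be flagged (a hypothetical snippet, not one of the included fixtures):
```py
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

dag = DAG(dag_id="example", sla_miss_callback=None)  # `sla_miss_callback` removed in 3.0
op = TriggerDagRunOperator(
    task_id="trigger",
    trigger_dag_id="other_dag",
    execution_date="2024-01-01",  # `execution_date` removed in 3.0
)
```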
## Test Plan
<!-- How was it tested? -->
A test fixture is included in the PR.
## Summary
`typing.Never` and `typing.LiteralString` are only conditionally
exported from `typing` for Python versions 3.11 and later. We run the
Markdown tests with the default Python version of 3.9, so here we change
the import to `typing_extensions` instead, and add a new test to make
sure we'll continue to understand the `typing`-version of these symbols
for newer versions.
This didn't cause problems so far, as we don't understand
`sys.version_info` branches yet.
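A sketch of the adjusted import (illustrative only):
```py
# On the default 3.9 target, import from `typing_extensions`;
# a separate test checks the `typing` versions on newer targets.
from typing_extensions import LiteralString, Never

def fail() -> Never:
    raise RuntimeError

x: LiteralString = "hello"
```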
## Test Plan
New Markdown tests to make sure this will continue to work in the
future.
## Summary
Fixes https://github.com/astral-sh/ruff/issues/14778
The formatter incorrectly removed the inner implicitly concatenated
string for the following single-line f-string:
```py
f"{'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' 'a' if True else ""}"
# formatted
f"{ if True else ''}"
```
This happened because I changed the `RemoveSoftlinesBuffer` in
https://github.com/astral-sh/ruff/pull/14489 to remove any content
wrapped in `if_group_breaks`. After all, it emulates an *all flat*
layout. This works fine when `if_group_breaks` is only used to **add**
content if the group breaks. It doesn't work if the same content is
rendered differently depending on whether the group fits, using
`if_group_breaks` and `if_group_fits`, because the enclosing `group`
might still *break* if the entire content exceeds the line-length limit.
This PR fixes this by unwrapping any `if_group_fits` content by removing
the `if_group_fits` start and end tags.
## Test Plan
added test
## Summary
This adds support for specifying the target Python version from a
Markdown test. It is a somewhat limited ad-hoc solution, but designed to
be future-compatible. TOML blocks can be added to arbitrary sections in
the Markdown block. They have the following format:
````markdown
```toml
[tool.knot.environment]
target-version = "3.13"
```
````
So far, there is nothing else that can be configured, but it should be
straightforward to extend this to things like a custom typeshed path.
This is in preparation for the statically-known branches feature where
we are going to have to specify the target version for lots of tests.
## Test Plan
- New Markdown test that fails without the explicitly specified
`target-version`.
- Manually tested various error paths when specifying a wrong
`target-version` field.
- Made sure that running tests is as fast as before.
## Summary
Fixes https://github.com/astral-sh/ruff/issues/14807
I suspect that this broke when we updated notify, although I'm not quite
sure how this *ever* worked...
The problem was that the file watcher didn't skip over `Access` events,
but Ruff itself accesses the `pyproject.toml` when checking the project.
That means Ruff triggers `Access` events, but it also schedules a
re-check on every `Access` event... and this goes on forever.
This PR skips over `Access` and `Other` events. `Access` events are
uninteresting because they're only reads, they don't change any file
metadata or content.
The `Other` events should be rare and are mainly to inform about file
watcher changes... we don't need those.
I also added explicit handling for the `Rescan` event. File watchers
emit a `Rescan` event if they failed to capture some file-watching
changes; it signals that the program should assume that all files might
have changed (the program should do a rescan to *get up to date*).
## Test Plan
I tested that Ruff no longer loops when running `check --watch`. I
verified that Ruff rechecks files after making content changes.
<!--
Thank you for contributing to Ruff! To help us out with reviewing,
please consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
At some point, older versions became broken for windows-gnullvm
targets. This update shouldn't break anything.
<!-- What's the purpose of the change? What does it do, and why? -->
## Test Plan
Successfully built for windows-gnullvm with `cargo build`.
<!-- How was it tested? -->
<!--
Thank you for contributing to Ruff! To help us out with reviewing,
please consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
Upgraded https://docs.astral.sh/ruff/integrations/#github-actions for the
latest https://github.com/astral-sh/ruff-action/releases
<!-- What's the purpose of the change? What does it do, and why? -->
## Test Plan
<!-- How was it tested? -->
@eifinger Your review, please.
## Summary
This is related to #13778, more specifically
https://github.com/astral-sh/ruff/issues/13778#issuecomment-2513556004.
This PR adds various test cases where a keyword is used where an
identifier is expected. The tests make sure that red knot doesn't panic,
raises the syntax error, and still adds the identifier to the symbol
table. The last part enables editor-related features like renaming the
symbol.
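The inputs look roughly like this (deliberately invalid syntax; the exact cases live in the test files):
```py
def f(for): ...  # syntax error, but `for` should still be recorded as a parameter symbol
class True: ...  # syntax error, but `True` should still be recorded as a class symbol
```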
## Summary
`typing_extensions` has a `>=3.13` re-export for the `typing.NoDefault`
singleton, but not for `typing._NoDefaultType`. This causes problems as
soon as we understand `sys.version_info` branches, so we explicitly
switch to `typing._NoDefaultType` for Python 3.13 and later.
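Sketched as plain Python, the version-dependent split looks like this (illustrative only):
```py
import sys

if sys.version_info >= (3, 13):
    # `typing_extensions` has no re-export for `typing._NoDefaultType`,
    # so references to the type itself must go through `typing` on 3.13+.
    from typing import NoDefault
else:
    from typing_extensions import NoDefault
```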
This is a part of #14759 that I thought might make sense to break out
and merge in isolation.
## Test Plan
New test that will become more meaningful with #12700
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
- Instead of seven (more or less similar) `setup_db` functions, use just
one in a single central place.
- For every test that needs customization beyond that, offer a
`TestDbBuilder` that can control the Python target version, custom
typeshed, and pre-existing files.
The main motivation for this is that we're soon going to need
customization of the Python version, and I didn't feel like adding this
to each of the existing `setup_db` functions.
## Summary
This changeset contains various improvements concerning non-fully-static
types and their relationships:
- Make sure that non-fully-static types do not participate in
equivalence or subtyping.
- Clarify what `Type::is_equivalent_to` actually implements.
- Introduce `Type::is_fully_static`
- New tests making sure that multiple `Any`/`Unknown`s inside unions and
intersections are collapsed.
closes #14524
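As a rough illustration of the intended semantics (a sketch, not one of the added tests):
```py
from typing import Any

def f(x: list[Any], y: list[int]):
    # Gradual (non-fully-static) types are assignable in both directions...
    y = x
    x = y
    # ...but `list[Any]` is neither a subtype of nor equivalent to `list[int]`,
    # because non-fully-static types do not participate in subtyping or equivalence.
```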
## Test Plan
- Added new unit tests for union and intersection builder
- Added new unit tests for `Type::is_equivalent_to`
- Added new unit tests for `Type::is_subtype_of`
- Added new property test making sure that non-fully-static types do not
participate in subtyping
We already had a representation for the Any type, which we would use
e.g. for expressions without type annotations. We now recognize
`typing.Any` as a way to refer to this type explicitly. Like other
special forms, this is tracked correctly through aliasing, and isn't
confused with local definitions that happen to have the same name.
Closes #14544
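A sketch of what this means in practice (the revealed output is an assumption about the display):
```py
from typing import Any as TheAny

def f(x: TheAny):
    reveal_type(x)  # revealed: Any  (the special form, tracked through the alias)

class Any: ...  # a local class that merely shares the name

def g(y: Any):
    reveal_type(y)  # revealed: Any  (an instance of the local class, not the special form)
```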
## Summary
Minor change that uses two plain classes `A` and `B` instead of
`typing.Sized` and `typing.Hashable`.
The motivation is twofold: I remember that I was confused when I first
saw this test. Was there anything specific to `Sized` and `Hashable`
that was relevant here? (there is, these classes are not overlapping;
and you can build a proper intersection from them; but that's true for
almost all non-builtin classes).
I now ran into another problem while working on #14758: `Sized` and
`Hashable` are protocols that we don't fully understand yet. This was
causing some trouble when trying to infer whether these are fully-static
types or not.
Closes: #14676
I think the consensus generally was to keep the rule as-is, but expand
the docs.
## Summary
Expands the docs for TC006 with an explanation for why the type
expression is always quoted, including mention of another potential
benefit to this style.
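For reference, a minimal example of the style the rule enforces (a hypothetical snippet, not taken from the docs text):
```py
from typing import cast

def parse(raw: object) -> list[int]:
    # `cast()` does not use its first argument at runtime, so TC006 prefers
    # the quoted form, which avoids evaluating the type expression.
    return cast("list[int]", raw)
```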
When fixing an invalid escape sequence in an f-string, each f-string
element is analyzed for valid escape characters prior to creating the
diagnostic and fix. This allows us to safely prefix with `r` to create a
raw string if no valid escape characters were found anywhere in the
f-string, and otherwise insert backslashes.
This fixes a bug in the original implementation: each "f-string part"
was treated separately, so it was not possible to tell whether a valid
escape character was or would be used elsewhere in the f-string.
Progress towards #11491 but format specifiers are not handled in this
PR.
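A sketch of both fix shapes (assumed output; format specifiers are out of scope here):
```py
value = "42"

# No valid escape anywhere in the f-string: the whole string can become raw.
before = f"\d+{value}"   # W605: invalid escape sequence `\d`
after = rf"\d+{value}"

# A valid escape (`\n`) elsewhere in the f-string: insert a backslash instead.
before2 = f"\d+\n{value}"  # W605
after2 = f"\\d+\n{value}"
```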
## Summary
This PR adds a fuzzer harness for red knot that runs the type checker on
source code that contains invalid syntax.
Additionally, this PR updates the `init-fuzzer.sh` script to increase
the corpus size:
* Include various crates that contain Python source code
* Use the 3.13 CPython source code
It also removes any non-Python files from the final corpus, so that when
the fuzzer tries to minimize the corpus it doesn't produce files that
only contain documentation content, as that's just noise.
## Test Plan
Run `./fuzz/init-fuzzer.sh`, say no to the large dataset.
Run the fuzzer with `cargo +nightly fuzz run red_knot_check_invalid_syntax
-- -timeout=5`
## Summary
This PR makes changes to the `AIR001` rule as per
https://github.com/astral-sh/ruff/pull/14627#discussion_r1860212307.
Additionally,
* Avoid returning the `Diagnostic` and update the checker in the rule
logic for consistency
* Remove test case for different keyword position (I don't think it's
required here)
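For context, AIR001 flags task variables whose name doesn't match the `task_id`; a hypothetical example (module path and diagnostic are assumptions):
```py
from airflow.operators.empty import EmptyOperator

task = EmptyOperator(task_id="my_task")     # AIR001: variable name should match the task_id
my_task = EmptyOperator(task_id="my_task")  # OK
```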
## Test Plan
Add test cases for multiple operators from various modules.
## Summary
Just some minor followups to the recently merged RUF052 rule, that was
added in bf0fd04:
- Some small tweaks to the docs
- A minor code-style nit
- Some more tests for my peace of mind, just to check that the new
methods on the semantic model are working correctly
I'm adding the "internal" label as this doesn't deserve a changelog
entry. RUF052 is a new rule that hasn't been released yet.
## Test Plan
`cargo test -p ruff_linter`
## Summary
This PR adds a new `property_tests` module with quickcheck-based tests
that verify certain properties of types. The following properties are
currently checked:
* `is_equivalent_to`:
  * is reflexive: `T` is equivalent to itself
* `is_subtype_of`:
  * is reflexive: `T` is a subtype of `T`
  * is antisymmetric: if `S <: T` and `T <: S`, then `S` is equivalent to `T`
  * is transitive: `S <: T` & `T <: U` => `S <: U`
* `is_disjoint_from`:
  * is irreflexive: `T` is not disjoint from `T`
  * is symmetric: `S` disjoint from `T` => `T` disjoint from `S`
* `is_assignable_to`:
  * is reflexive
* `negate`:
  * is an involution: `T.negate().negate()` is equivalent to `T`
There are also some tests that validate higher-level properties like:
* `S <: T` implies that `S` is not disjoint from `T`
* `S <: T` implies that `S` is assignable to `T`
* A singleton type must also be single-valued
These tests found a few bugs so far:
- #14177
- #14195
- #14196
- #14210
- #14731
Some additional notes:
- Quickcheck-based property tests are non-deterministic and finding
counter-examples might take an arbitrarily long time. This makes them bad
candidates for running in CI (for every PR). We can think of running
them in a cron-job way from time to time, similar to fuzzing. But for
now, it's only possible to run them locally (see instructions in source
code).
- Some tests currently find false positive "counterexamples" because our
understanding of equivalence of types is not yet complete. We do not
understand that `int | str` is the same as `str | int`, for example.
These tests are in a separate `property_tests::flaky` module.
- Properties cannot be formulated in every possible direction, because
`is_disjoint_from` and `is_subtype_of` can produce false-negative
answers.
- The current shrinking implementation is very naive, which leads to
counterexamples that are very long (`str & Any & ~tuple[Any] &
~tuple[Unknown] & ~Literal[""] & ~Literal["a"] | str & int & ~tuple[Any]
& ~tuple[Unknown]`), requiring the developer to simplify manually. It
has not been a major issue so far, but there is a comment in the code on
how this can be improved.
- The tests are currently implemented using a macro. This is a single
commit on top which can easily be reverted, if we prefer the plain code
instead. With the macro:
```rs
// `S <: T` implies that `S` can be assigned to `T`.
type_property_test!(
    subtype_of_implies_assignable_to, db,
    forall types s, t. s.is_subtype_of(db, t) => s.is_assignable_to(db, t)
);
```
without the macro:
```rs
/// `S <: T` implies that `S` can be assigned to `T`.
#[quickcheck]
fn subtype_of_implies_assignable_to(s: Ty, t: Ty) -> bool {
    let db = get_cached_db();
    let s = s.into_type(&db);
    let t = t.into_type(&db);
    !s.is_subtype_of(&*db, t) || s.is_assignable_to(&*db, t)
}
```
## Test Plan
```bash
while cargo test --release -p red_knot_python_semantic --features property_tests types::property_tests; do :; done
```
## Summary
`KnownInstance::instance_fallback` may return instances of supertypes.
For example, it returns an instance of `_SpecialForm` for `Literal`.
This means it can't be used on the right-hand side of `is_subtype_of`
relationships, because it might lead to false positives.
It can lead to false negatives on the left-hand side of `is_subtype_of`,
but this is at least a known limitation. False negatives are fine for
most applications, but false positives can lead to wrong results in
intersection-simplification, for example.
closes #14731
## Test Plan
Added regression test
## Summary
Simplify tuples containing `Never` to `Never`:
```py
from typing import Never
def never() -> Never: ...
reveal_type((1, never(), "foo")) # revealed: Never
```
I should note that mypy and pyright do *not* perform this
simplification. I don't know why.
There is [only one
place](5137fcc9c8/crates/red_knot_python_semantic/src/types/infer.rs (L1477-L1484))
where we use `TupleType::new` directly (instead of `Type::tuple`, which
changes behavior here). This appears when creating `TypeVar`
constraints, and it looks to me like it should stay this way, because
we're using `TupleType` to store a list of constraints there, instead of
an actual type. We also store `tuple[constraint1, constraint2, …]` as
the type for the `constraint1, constraint2, …` tuple expression. This
would mean that we infer a type of `tuple[str, Never]` for the following
type variable constraints, without simplifying it to `Never`. This seems
like a weird edge case that's maybe not worth looking further into?!
```py
from typing import Never
# vvvvvvvvvv
def f[T: (str, Never)](x: T):
    pass
```
## Test Plan
- Added a new unit test. Did not add additional Markdown tests as that
seems superfluous.
- Tested the example above using red knot, mypy, pyright.
- Verified that this allows us to remove `contains_never` from the
property tests
(https://github.com/astral-sh/ruff/pull/14178#discussion_r1866473192)
This PR improves on #14477 by:
- Ensuring users do not require the module alias `__debug__`, which is unassignable
- Validating the linter settings for
`lint.flake8-import-conventions.extend-aliases` (whereas previously we
only did this for `lint.flake8-import-conventions.aliases`).
Closes #14662
Resolves https://github.com/astral-sh/ruff/issues/14547 by delegating
narrowing to `E` for `bool(E)` where `E` is some expression.
This change does not include other builtin class constructors which
should also work in this position, like `int(..)` or `float(..)`, as the
original issue does not mention these. It should be easy enough to add
checks for these as well if we want to.
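A sketch of the delegation (the revealed types are what the narrowing should produce, not copied from the tests):
```py
def f(x: str | None):
    if bool(x is not None):
        reveal_type(x)  # revealed: str
    else:
        reveal_type(x)  # revealed: None
```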
I don't see a lot of markdown tests for malformed input, maybe there's a
better place for the no args and too many args cases to go?
I did see after the fact that it looks like this task was intended for a
new hire... my apologies. I got here from
https://github.com/astral-sh/ruff/issues/13694, which is marked
help-wanted.
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
Seeing the fuzzing results from @dhruvmanila in #13778, I think we can
re-enable these tests. We also had one regression that would have been
caught by these tests, so there is some value in having them enabled.
## Summary
- Check if `hashlib` and `crypt` imports have been seen for `FURB181`
and `S324`
- Mark the fix for `FURB181` as safe: I think it was accidentally marked
as unsafe in the first place. The rule does not support user-defined
classes as the "fix safety" section suggests.
- Removed `hashlib._Hash`, as it's not part of the `hashlib` module.
<!-- What's the purpose of the change? What does it do, and why? -->
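A hypothetical snippet showing both rules once the `hashlib` import is in scope:
```py
import hashlib

weak = hashlib.md5(b"data")                     # S324: insecure hash function
hexed = hashlib.sha256(b"data").digest().hex()  # FURB181: use `.hexdigest()` instead
```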
## Test Plan
Updated the test snapshots
Ruff now formats your code according to the 2025 style guide. As a result, your code might now get formatted differently. See the [changelog](./CHANGELOG.md#090) for a detailed list of changes.
## 0.8.0
- **Default to Python 3.9**
The `--unsafe-fixes` flag or `unsafe-fixes` configuration option can be used to enable unsafe fixes.
See the [docs](https://docs.astral.sh/ruff/configuration/#fix-safety) for details.
### Remove formatter-conflicting rules from the default rule set ([#7900](https://github.com/astral-sh/ruff/pull/7900))
Previously, Ruff enabled all implemented rules in Pycodestyle (`E`) by default. Ruff now only includes the
Pycodestyle prefixes `E4`, `E7`, and `E9` to exclude rules that conflict with automatic formatters. Consequently,
- \[`pycodestyle`\] Run `too-many-newlines-at-end-of-file` on each cell in notebooks (`W391`) ([#15308](https://github.com/astral-sh/ruff/pull/15308))
- \[`ruff`\] Omit diagnostic for shadowed private function parameters in `used-dummy-variable` (`RUF052`) ([#15376](https://github.com/astral-sh/ruff/pull/15376))
- Preserve trailing end-of line comments for the last string literal in implicitly concatenated strings ([#15378](https://github.com/astral-sh/ruff/pull/15378))
### Server
- Fix a bug where the server and client notebooks were out of sync after reordering cells ([#15398](https://github.com/astral-sh/ruff/pull/15398))
- \[`pyupgrade`\] Handle comments and multiline expressions correctly (`UP037`) ([#15337](https://github.com/astral-sh/ruff/pull/15337))
## 0.9.0
Check out the [blog post](https://astral.sh/blog/ruff-v0.9.0) for a migration guide and overview of the changes!
### Breaking changes
Ruff now formats your code according to the 2025 style guide. As a result, your code might now get formatted differently. See the formatter section for a detailed list of changes.
This release doesn’t remove or remap any existing stable rules.
### Stabilization
The following rules have been stabilized and are no longer in preview:
- [`pytest-parametrize-names-wrong-type`](https://docs.astral.sh/ruff/rules/pytest-parametrize-names-wrong-type/) (`PT006`): Detect [`pytest.parametrize`](https://docs.pytest.org/en/7.1.x/how-to/parametrize.html#parametrize) calls outside decorators and calls with keyword arguments.
- [`module-import-not-at-top-of-file`](https://docs.astral.sh/ruff/rules/module-import-not-at-top-of-file/) (`E402`): Ignore [`pytest.importorskip`](https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest-importorskip) calls between import statements.
- [`mutable-dataclass-default`](https://docs.astral.sh/ruff/rules/mutable-dataclass-default/) (`RUF008`) and [`function-call-in-dataclass-default-argument`](https://docs.astral.sh/ruff/rules/function-call-in-dataclass-default-argument/) (`RUF009`): Add support for [`attrs`](https://www.attrs.org/en/stable/).
- [`bad-version-info-comparison`](https://docs.astral.sh/ruff/rules/bad-version-info-comparison/) (`PYI006`): Extend the rule to check non-stub files.
The following fixes or improvements to fixes have been stabilized:
This release introduces the new 2025 stable style ([#13371](https://github.com/astral-sh/ruff/issues/13371)), stabilizing the following changes:
- Format expressions in f-string elements ([#7594](https://github.com/astral-sh/ruff/issues/7594))
- Alternate quotes for strings inside f-strings ([#13860](https://github.com/astral-sh/ruff/pull/13860))
- Preserve the casing of hex codes in f-string debug expressions ([#14766](https://github.com/astral-sh/ruff/issues/14766))
- Choose the quote style for each string literal in an implicitly concatenated f-string rather than for the entire string ([#13539](https://github.com/astral-sh/ruff/pull/13539))
- Automatically join an implicitly concatenated string into a single string literal if it fits on a single line ([#9457](https://github.com/astral-sh/ruff/issues/9457))
- Remove the [`ISC001`](https://docs.astral.sh/ruff/rules/single-line-implicit-string-concatenation/) incompatibility warning ([#15123](https://github.com/astral-sh/ruff/pull/15123))
- Prefer parenthesizing the `assert` message over breaking the assertion expression ([#9457](https://github.com/astral-sh/ruff/issues/9457))
- Automatically parenthesize over-long `if` guards in `match`...`case` clauses ([#13513](https://github.com/astral-sh/ruff/pull/13513))
- More consistent formatting for `match`...`case` patterns ([#6933](https://github.com/astral-sh/ruff/issues/6933))
- Avoid unnecessary parentheses around return type annotations ([#13381](https://github.com/astral-sh/ruff/pull/13381))
- Keep the opening parentheses on the same line as the `if` keyword for comprehensions where the condition has a leading comment ([#12282](https://github.com/astral-sh/ruff/pull/12282))
- More consistent formatting for `with` statements with a single context manager for Python 3.8 or older ([#10276](https://github.com/astral-sh/ruff/pull/10276))
- Correctly calculate the line-width for code blocks in docstrings when using `max-doc-code-line-length = "dynamic"` ([#13523](https://github.com/astral-sh/ruff/pull/13523))
- \[`flake8-type-checking`\] Apply `quoted-type-alias` more eagerly in `TYPE_CHECKING` blocks and ignore it in stubs (`TC008`) ([#15180](https://github.com/astral-sh/ruff/pull/15180))
- \[`pylint`\] Ignore `eq-without-hash` in stub files (`PLW1641`) ([#15310](https://github.com/astral-sh/ruff/pull/15310))
- \[`pyupgrade`\] Split `UP007` into two individual rules: `UP007` for `Union` and `UP045` for `Optional` (`UP007`, `UP045`) ([#15313](https://github.com/astral-sh/ruff/pull/15313))
- \[`ruff`\] New rule that detects classes that are both an enum and a `dataclass` (`RUF049`) ([#15299](https://github.com/astral-sh/ruff/pull/15299))
- \[`ruff`\] Recode `RUF025` to `RUF037` (`RUF037`) ([#15258](https://github.com/astral-sh/ruff/pull/15258))
### Rule changes
- \[`flake8-builtins`\] Ignore [`stdlib-module-shadowing`](https://docs.astral.sh/ruff/rules/stdlib-module-shadowing/) in stub files (`A005`) ([#15350](https://github.com/astral-sh/ruff/pull/15350))
- \[`flake8-return`\] Add support for functions returning `typing.Never` (`RET503`) ([#15298](https://github.com/astral-sh/ruff/pull/15298))
### Server
- Improve the observability by removing the need for the ["trace" value](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#traceValue) to turn on or off logging. The server logging is solely controlled using the [`logLevel` server setting](https://docs.astral.sh/ruff/editors/settings/#loglevel)
which defaults to `info`. This addresses the issue where users were notified about an error and told to consult the log, but it didn’t contain any messages. ([#15232](https://github.com/astral-sh/ruff/pull/15232))
- Ignore diagnostics from other sources for code action requests ([#15373](https://github.com/astral-sh/ruff/pull/15373))
### CLI
- Improve the error message for `--config key=value` when the `key` is for a table and it’s a simple `value`
### Bug fixes
- \[`eradicate`\] Ignore metadata blocks directly followed by normal blocks (`ERA001`) ([#15330](https://github.com/astral-sh/ruff/pull/15330))
- \[`flake8-django`\] Recognize other magic methods (`DJ012`) ([#15365](https://github.com/astral-sh/ruff/pull/15365))
- \[`pycodestyle`\] Avoid false positives related to type aliases (`E252`) ([#15356](https://github.com/astral-sh/ruff/pull/15356))
- \[`pydocstyle`\] Avoid treating newline-separated sections as sub-sections (`D405`) ([#15311](https://github.com/astral-sh/ruff/pull/15311))
- \[`pyflakes`\] Remove call when removing final argument from `format` (`F523`) ([#15309](https://github.com/astral-sh/ruff/pull/15309))
- \[`refurb`\] Mark fix as unsafe when the right-hand side is a string (`FURB171`) ([#15273](https://github.com/astral-sh/ruff/pull/15273))
- \[`ruff`\] Treat `)` as a regex metacharacter (`RUF043`, `RUF055`) ([#15318](https://github.com/astral-sh/ruff/pull/15318))
- \[`ruff`\] Parenthesize the `int`-call argument when removing the `int` call would change semantics (`RUF046`) ([#15277](https://github.com/astral-sh/ruff/pull/15277))
- Show errors for attempted fixes only when passed `--verbose` ([#15237](https://github.com/astral-sh/ruff/pull/15237))
### Bug fixes
- \[`ruff`\] Avoid syntax error when removing int over multiple lines (`RUF046`) ([#15230](https://github.com/astral-sh/ruff/pull/15230))
- \[`pyupgrade`\] Revert "Add all PEP-585 names to `UP006` rule" ([#15250](https://github.com/astral-sh/ruff/pull/15250))
## 0.8.5
### Preview features
- \[`airflow`\] Extend names moved from core to provider (`AIR303`) ([#15145](https://github.com/astral-sh/ruff/pull/15145), [#15159](https://github.com/astral-sh/ruff/pull/15159), [#15196](https://github.com/astral-sh/ruff/pull/15196), [#15216](https://github.com/astral-sh/ruff/pull/15216))
- \[`airflow`\] Extend rule to check class attributes, methods, arguments (`AIR302`) ([#15054](https://github.com/astral-sh/ruff/pull/15054), [#15083](https://github.com/astral-sh/ruff/pull/15083))
- \[`fastapi`\] Update `FAST002` to check keyword-only arguments ([#15119](https://github.com/astral-sh/ruff/pull/15119))
- \[`flake8-type-checking`\] Disable `TC006` and `TC007` in stub files ([#15179](https://github.com/astral-sh/ruff/pull/15179))
- \[`flake8-simplify`\] More precise inference for dictionaries (`SIM300`) ([#15164](https://github.com/astral-sh/ruff/pull/15164))
- \[`flake8-use-pathlib`\] Catch redundant joins in `PTH201` and avoid syntax errors ([#15177](https://github.com/astral-sh/ruff/pull/15177))
- \[`pycodestyle`\] Preserve original value format (`E731`) ([#15097](https://github.com/astral-sh/ruff/pull/15097))
- \[`pydocstyle`\] Split on first whitespace character (`D403`) ([#15082](https://github.com/astral-sh/ruff/pull/15082))
- \[`pyupgrade`\] Add all PEP-585 names to `UP006` rule ([#5454](https://github.com/astral-sh/ruff/pull/5454))
### Configuration
- \[`flake8-type-checking`\] Improve flexibility of `runtime-evaluated-decorators` ([#15204](https://github.com/astral-sh/ruff/pull/15204))
- \[`pydocstyle`\] Add setting to ignore missing documentation for `*args` and `**kwargs` parameters (`D417`) ([#15210](https://github.com/astral-sh/ruff/pull/15210))
- \[`ruff`\] Add an allowlist for `unsafe-markup-use` (`RUF035`) ([#15076](https://github.com/astral-sh/ruff/pull/15076))
### Bug fixes
- Fix type subscript on older python versions ([#15090](https://github.com/astral-sh/ruff/pull/15090))
- Use `TypeChecker` for detecting `fastapi` routes ([#15093](https://github.com/astral-sh/ruff/pull/15093))
- \[`pycodestyle`\] Avoid false positives and negatives related to type parameter default syntax (`E225`, `E251`) ([#15214](https://github.com/astral-sh/ruff/pull/15214))
### Documentation
- Fix incorrect doc in `shebang-not-executable` (`EXE001`) and add git+windows solution to executable bit ([#15208](https://github.com/astral-sh/ruff/pull/15208))
- Rename rules currently not conforming to naming convention ([#15102](https://github.com/astral-sh/ruff/pull/15102))
## 0.8.4
### Preview features
- \[`airflow`\] Extend `AIR302` with additional functions and classes ([#15015](https://github.com/astral-sh/ruff/pull/15015))
- \[`airflow`\] Implement `moved-to-provider-in-3` for modules that has been moved to Airflow providers (`AIR303`) ([#14764](https://github.com/astral-sh/ruff/pull/14764))
- \[`flake8-use-pathlib`\] Extend check for invalid path suffix to include the case `"."` (`PTH210`) ([#14902](https://github.com/astral-sh/ruff/pull/14902))
- \[`perflint`\] Fix panic in `PERF401` when list variable is after the `for` loop ([#14971](https://github.com/astral-sh/ruff/pull/14971))
- \[`perflint`\] Simplify finding the loop target in `PERF401` ([#15025](https://github.com/astral-sh/ruff/pull/15025))
- \[`pylint`\] Preserve original value format (`PLR6104`) ([#14978](https://github.com/astral-sh/ruff/pull/14978))
- \[`ruff`\] Avoid false positives for `RUF027` for typing context bindings ([#15037](https://github.com/astral-sh/ruff/pull/15037))
- \[`ruff`\] Check for ambiguous pattern passed to `pytest.raises()` (`RUF043`) ([#14966](https://github.com/astral-sh/ruff/pull/14966))
### Rule changes
- \[`flake8-bandit`\] Check `S105` for annotated assignment ([#15059](https://github.com/astral-sh/ruff/pull/15059))
- \[`flake8-pyi`\] More autofixes for `redundant-none-literal` (`PYI061`) ([#14872](https://github.com/astral-sh/ruff/pull/14872))
- \[`pydocstyle`\] Skip leading whitespace for `D403` ([#14963](https://github.com/astral-sh/ruff/pull/14963))
- \[`ruff`\] Skip `SQLModel` base classes for `mutable-class-default` (`RUF012`) ([#14949](https://github.com/astral-sh/ruff/pull/14949))
### Bug fixes
- \[`perflint`\] Parenthesize walrus expressions in autofix for `manual-list-comprehension` (`PERF401`) ([#15050](https://github.com/astral-sh/ruff/pull/15050))
### Server
- Check diagnostic refresh support from client capability which enables dynamic configuration for various editors ([#15014](https://github.com/astral-sh/ruff/pull/15014))
## 0.8.3
### Preview features
- Fix fstring formatting removing overlong implicit concatenated string in expression part ([#14811](https://github.com/astral-sh/ruff/pull/14811))
- \[`airflow`\]: Extend rule to include deprecated names for Airflow 3.0 (`AIR302`) ([#14765](https://github.com/astral-sh/ruff/pull/14765) and [#14804](https://github.com/astral-sh/ruff/pull/14804))
- \[`flake8-bugbear`\] `itertools.batched()` without explicit `strict` (`B911`) ([#14408](https://github.com/astral-sh/ruff/pull/14408))
- \[`flake8-use-pathlib`\] Dotless suffix passed to `Path.with_suffix()` (`PTH210`) ([#14779](https://github.com/astral-sh/ruff/pull/14779))
- \[`pylint`\] Include parentheses and multiple comparators in check for `boolean-chained-comparison` (`PLR1716`) ([#14781](https://github.com/astral-sh/ruff/pull/14781))
- \[`ruff`\] Do not simplify `round()` calls (`RUF046`) ([#14832](https://github.com/astral-sh/ruff/pull/14832))
- \[`ruff`\] Don't emit `used-dummy-variable` on function parameters (`RUF052`) ([#14818](https://github.com/astral-sh/ruff/pull/14818))
- \[`ruff`\] Mark autofix for `RUF052` as always unsafe ([#14824](https://github.com/astral-sh/ruff/pull/14824))
- \[`ruff`\] Teach autofix for `used-dummy-variable` about TypeVars etc. (`RUF052`) ([#14819](https://github.com/astral-sh/ruff/pull/14819))
### Rule changes
- \[`flake8-bugbear`\] Offer unsafe autofix for `no-explicit-stacklevel` (`B028`) ([#14829](https://github.com/astral-sh/ruff/pull/14829))
- \[`flake8-pyi`\] Skip all type definitions in `string-or-bytes-too-long` (`PYI053`) ([#14797](https://github.com/astral-sh/ruff/pull/14797))
- \[`pyupgrade`\] Do not report when a UTF-8 comment is followed by a non-UTF-8 one (`UP009`) ([#14728](https://github.com/astral-sh/ruff/pull/14728))
- \[`pyupgrade`\] Mark fixes for `convert-typed-dict-functional-to-class` and `convert-named-tuple-functional-to-class` as unsafe if they will remove comments (`UP013`, `UP014`) ([#14842](https://github.com/astral-sh/ruff/pull/14842))
### Bug fixes
- Raise syntax error for mixing `except` and `except*` ([#14895](https://github.com/astral-sh/ruff/pull/14895))
- \[`flake8-bugbear`\] Fix `B028` to allow `stacklevel` to be explicitly assigned as a positional argument ([#14868](https://github.com/astral-sh/ruff/pull/14868))
- \[`flake8-bugbear`\] Skip `B028` if `warnings.warn` is called with `*args` or `**kwargs` ([#14870](https://github.com/astral-sh/ruff/pull/14870))
- \[`flake8-comprehensions`\] Skip iterables with named expressions in `unnecessary-map` (`C417`) ([#14827](https://github.com/astral-sh/ruff/pull/14827))
- \[`flake8-pyi`\] Also remove `self` and `cls`'s annotation (`PYI034`) ([#14801](https://github.com/astral-sh/ruff/pull/14801))
- \[`flake8-pytest-style`\] Fix `pytest-parametrize-names-wrong-type` (`PT006`) to edit both `argnames` and `argvalues` if both of them are single-element tuples/lists ([#14699](https://github.com/astral-sh/ruff/pull/14699))
- \[`perflint`\] Improve autofix for `PERF401` ([#14369](https://github.com/astral-sh/ruff/pull/14369))
- \[`pylint`\] Fix `PLW1508` false positive for default string created via a mult operation ([#14841](https://github.com/astral-sh/ruff/pull/14841))
- \[`airflow`\] Check `AIR001` from builtin or providers `operators` module ([#14631](https://github.com/astral-sh/ruff/pull/14631))
- \[`flake8-pytest-style`\] Remove `@` in `pytest.mark.parametrize` rule messages ([#14770](https://github.com/astral-sh/ruff/pull/14770))
- \[`pandas-vet`\] Skip rules if the `panda` module hasn't been seen ([#14671](https://github.com/astral-sh/ruff/pull/14671))
- \[`pylint`\] Fix false negatives for `ascii` and `sorted` in `len-as-condition` (`PLC1802`) ([#14692](https://github.com/astral-sh/ruff/pull/14692))
- \[`refurb`\] Guard `hashlib` imports and mark `hashlib-digest-hex` fix as safe (`FURB181`) ([#14694](https://github.com/astral-sh/ruff/pull/14694))
### Configuration
- \[`flake8-import-conventions`\] Improve syntax check for aliases supplied in configuration for `unconventional-import-alias` (`ICN001`) ([#14745](https://github.com/astral-sh/ruff/pull/14745))
- \[`pep8-naming`\] Avoid false positive for `class Bar(type(foo))` (`N804`) ([#14683](https://github.com/astral-sh/ruff/pull/14683))
- \[`pycodestyle`\] Handle f-strings properly for `invalid-escape-sequence` (`W605`) ([#14748](https://github.com/astral-sh/ruff/pull/14748))
- \[`pylint`\] Ignore `@overload` in `PLR0904` ([#14730](https://github.com/astral-sh/ruff/pull/14730))
- \[`refurb`\] Handle non-finite decimals in `verbose-decimal-constructor` (`FURB157`) ([#14596](https://github.com/astral-sh/ruff/pull/14596))
- \[`ruff`\] Avoid emitting `assignment-in-assert` when all references to the assigned variable are themselves inside `assert`s (`RUF018`) ([#14661](https://github.com/astral-sh/ruff/pull/14661))
### Documentation
- Improve docs for `flake8-use-pathlib` rules ([#14741](https://github.com/astral-sh/ruff/pull/14741))
- Improve error messages and docs for `flake8-comprehensions` rules ([#14729](https://github.com/astral-sh/ruff/pull/14729))
- \[`flake8-type-checking`\] Expands `TC006` docs to better explain itself ([#14749](https://github.com/astral-sh/ruff/pull/14749))
## 0.8.1
### Preview features
### Preview features
- Fix `E221` and `E222` to flag missing or extra whitespace around `==` operator ([#13890](https://github.com/astral-sh/ruff/pull/13890))
- Formatter: Alternate quotes for strings inside f-strings in preview ([#13860](https://github.com/astral-sh/ruff/pull/13860))
- Formatter: Join implicit concatenated strings when they fit on a line ([#13663](https://github.com/astral-sh/ruff/pull/13663))
- \[`pylint`\] Restrict `iteration-over-set` to only work on sets of literals (`PLC0208`) ([#13731](https://github.com/astral-sh/ruff/pull/13731))
### Preview features
- \[`pycodestyle`\] Ignore end-of-line comments when determining blank line rules ([#11342](https://github.com/astral-sh/ruff/pull/11342))
- \[`pylint`\] Detect `pathlib.Path.open` calls in `unspecified-encoding` (`PLW1514`) ([#11288](https://github.com/astral-sh/ruff/pull/11288))
You can run `uv venv --project ./scripts/benchmarks`, activate the venv and then run `uv sync --project ./scripts/benchmarks` to create a working environment for the
above. All reported benchmarks were computed using the versions specified by
`./scripts/benchmarks/pyproject.toml` on Python 3.11.
The package root is used to determine a file's "module path". Consider, again, `baz.py`. In that
case, `./my_project/src/foo` was identified as the package root, so the module path for `baz.py`
would resolve to `foo.bar.baz` — as computed by taking the relative path from the package root
(inclusive of the root itself). The module path can be thought of as "the path you would use to
Red Knot generates a folded stack trace to the current directory named `tracing.folded` when setting the environment variable `RED_KNOT_LOG_PROFILE` to `1` or `true`.
# error: "Object of type `Literal[1] | Literal[__call__]` is not callable (due to union element `Literal[1]`)"
reveal_type(a())# revealed: Unknown | int
a=NonCallable()
# error: "Object of type `Literal[1] | Literal[__call__]` is not callable (due to union element `Literal[1]`)"
reveal_type(a())# revealed: Unknown | int
```
## Call binding errors
### Wrong argument type
```py
class C:
    def __call__(self, x: int) -> int:
        return 1

c = C()

# error: 15 [invalid-argument-type] "Object of type `Literal["foo"]` cannot be assigned to parameter 2 (`x`) of function `__call__`; expected type `int`"
reveal_type(c("foo"))  # revealed: int
```
### Wrong argument type on `self`
```py
class C:
    # TODO this definition should also be an error; `C` must be assignable to type of `self`
    def __call__(self: int) -> int:
        return 1

c = C()

# error: 13 [invalid-argument-type] "Object of type `C` cannot be assigned to parameter 1 (`self`) of function `__call__`; expected type `int`"
c = (str_instance(), int_instance(), "foo", "different_length")
reveal_type(a == c) # revealed: Literal[False]
reveal_type(a != c) # revealed: Literal[True]
reveal_type(a < c) # revealed: bool
reveal_type(a <= c) # revealed: bool
reveal_type(a > c) # revealed: bool
reveal_type(a >= c) # revealed: bool
```
#### Error Propagation
Errors occurring within a tuple comparison should propagate outward. However, if the
comparison can clearly conclude before encountering an error, the error should not be raised.
```py
def _(n: int, s: str):
    class A: ...

    # error: [unsupported-operator] "Operator `<` is not supported for types `A` and `A`"
    A() < A()
    # error: [unsupported-operator] "Operator `<=` is not supported for types `A` and `A`"
    A() <= A()
    # error: [unsupported-operator] "Operator `>` is not supported for types `A` and `A`"
    A() > A()
    # error: [unsupported-operator] "Operator `>=` is not supported for types `A` and `A`"
    A() >= A()

    a = (0, n, A())

    # error: [unsupported-operator] "Operator `<` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
    reveal_type(a < a)  # revealed: Unknown
    # error: [unsupported-operator] "Operator `<=` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
    reveal_type(a <= a)  # revealed: Unknown
    # error: [unsupported-operator] "Operator `>` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
    reveal_type(a > a)  # revealed: Unknown
    # error: [unsupported-operator] "Operator `>=` is not supported for types `A` and `A`, in comparing `tuple[Literal[0], int, A]` with `tuple[Literal[0], int, A]`"
    reveal_type(a >= a)  # revealed: Unknown

    # Comparison between `a` and `b` should only involve the first elements, `Literal[0]` and `Literal[99999]`,
    # and should terminate immediately.
    b = (99999, n, A())

    reveal_type(a < b)  # revealed: Literal[True]
    reveal_type(a <= b)  # revealed: Literal[True]
    reveal_type(a > b)  # revealed: Literal[False]
    reveal_type(a >= b)  # revealed: Literal[False]
```
### Membership Test Comparisons
"Membership Test Comparisons" refers to the operators `in` and `not in`.
```py
def _(n: int):
    a = (1, 2)
    b = ((3, 4), (1, 2))
    c = ((1, 2, 3), (4, 5, 6))
    d = ((n, n), (n, n))

    reveal_type(a in b)      # revealed: Literal[True]
    reveal_type(a not in b)  # revealed: Literal[False]
a = 1 in 7  # error: "Operator `in` is not supported for types `Literal[1]` and `Literal[7]`"
reveal_type(a)  # revealed: bool

class A: ...

b = 0 not in 10  # error: "Operator `not in` is not supported for types `Literal[0]` and `Literal[10]`"
reveal_type(b)  # revealed: bool

# TODO: should error, once operand type check is implemented
# ("Operator `<` is not supported for types `object` and `int`")
c = object() < 5
# TODO: should be Unknown, once operand type check is implemented
reveal_type(c)  # revealed: bool

# TODO: should error, once operand type check is implemented
# ("Operator `<` is not supported for types `int` and `object`")
d = 5 < object()
# TODO: should be Unknown, once operand type check is implemented
reveal_type(d)  # revealed: bool

flag = bool_instance()
int_literal_or_str_literal = 1 if flag else "foo"
# error: "Operator `in` is not supported for types `Literal[42]` and `Literal[1]`, in comparing `Literal[42]` with `Literal[1, "foo"]`"
e = 42 in int_literal_or_str_literal
reveal_type(e)  # revealed: bool

# TODO: should error, need to check if __lt__ signature is valid for right operand
# error may be "Operator `<` is not supported for types `int` and `str`, in comparing `tuple[Literal[1], Literal[2]]` with `tuple[Literal[1], Literal["hello"]]`
f = (1, 2) < (1, "hello")
# TODO: should be Unknown, once operand type check is implemented
reveal_type(f)  # revealed: bool

# error: [unsupported-operator] "Operator `<` is not supported for types `A` and `A`, in comparing `tuple[bool, A]` with `tuple[bool, A]`"
g = (bool_instance(), A()) < (bool_instance(), A())
reveal_type(g)  # revealed: Unknown
# error: [invalid-exception-caught] "Cannot catch object of type `Literal[3]` in an exception handler (must be a `BaseException` subclass or a tuple of `BaseException` subclasses)"
except 3 as e:
    reveal_type(e)  # revealed: Unknown

try:
    pass
# error: [invalid-exception-caught] "Cannot catch object of type `Literal["foo"]` in an exception handler (must be a `BaseException` subclass or a tuple of `BaseException` subclasses)"
# error: [invalid-exception-caught] "Cannot catch object of type `Literal[b"bar"]` in an exception handler (must be a `BaseException` subclass or a tuple of `BaseException` subclasses)"
# error: [conflicting-metaclass] "The metaclass of a derived class (`C`) must be a subclass of the metaclasses of all its bases, but `M1` (metaclass of base class `A`) and `M2` (metaclass of base class `B`) have no subclass relationship"
class C(A, B): ...
reveal_type(C.__class__)  # revealed: type[Unknown]
```
## Conflict (2)
class A(metaclass=M1): ...
# error: [conflicting-metaclass] "The metaclass of a derived class (`B`) must be a subclass of the metaclasses of all its bases, but `M2` (metaclass of `B`) and `M1` (metaclass of base class `A`) have no subclass relationship"
class B(A, metaclass=M2): ...
reveal_type(B.__class__)  # revealed: type[Unknown]
```
## Common metaclass
class C(metaclass=M12): ...
# error: [conflicting-metaclass] "The metaclass of a derived class (`D`) must be a subclass of the metaclasses of all its bases, but `M1` (metaclass of base class `A`) and `M2` (metaclass of base class `B`) have no subclass relationship"
# ~A | (C & ~B) The positive side of ~A is A | B | C ->
reveal_type(x)  # revealed: B & ~A | C & ~A | C & ~B
```
## Boolean expression internal narrowing
```py
def _(x: str | None, y: str | None):
    if x is None and y is not x:
        reveal_type(y)  # revealed: str

    # Neither of the conditions alone is sufficient for narrowing y's type:
    if x is None:
        reveal_type(y)  # revealed: str | None

    if y is not x:
        reveal_type(y)  # revealed: str | None
```