Compare commits

..

19 Commits

Author SHA1 Message Date
Douglas Creager
fa4a038b02 debug 2025-09-26 09:13:41 -04:00
Douglas Creager
209ae1a39b clippy 2025-09-26 09:13:41 -04:00
Douglas Creager
8847e5e665 minimize 2025-09-26 09:13:41 -04:00
Douglas Creager
239dda8a05 more 2025-09-26 09:13:41 -04:00
Douglas Creager
ef2f49dcf1 minimize 2025-09-26 09:13:41 -04:00
David Peter
3932f7c849 [ty] Fix subtyping for dynamic specializations (#20592)
## Summary

Fixes a bug observed by @AlexWaygood where `C[Any] <: C[object]` should
hold for a class that is covariant in its type parameter (and similar
subtyping relations involving dynamic types for other variance
configurations).
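The fixed relation can be illustrated with a covariant standard-library generic (a hypothetical illustration, not ty's test code — `Sequence` is covariant in its element type):

```python
from typing import Any
from collections.abc import Sequence


def widen(xs: Sequence[Any]) -> Sequence[object]:
    # With the fix, `Sequence[Any]` is accepted where `Sequence[object]`
    # is expected, so returning `xs` here type-checks without a cast.
    return xs


print(widen([1, 2, 3]))
```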

## Test Plan

New and updated Markdown tests
2025-09-26 15:05:03 +02:00
Alex Waygood
2af8c53110 [ty] Add more tests for subtyping/assignability between two protocol types (#20573) 2025-09-26 12:07:57 +01:00
Dan Parizher
0bae7e613d Use Annotation::tags instead of hardcoded rule matching in ruff server (#20565)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-09-26 09:06:26 +02:00
Douglas Creager
02ebb2ee61 [ty] Change to BDD representation for constraint sets (#20533)
While working on #20093, I kept running into test failures due to
constraint sets not simplifying as much as they could, and therefore not
being easily testable against "always true" and "always false".

This PR updates our constraint set representation to use BDDs. Because
BDDs are reduced and ordered, they are canonical — equivalent boolean
formulas are represented by the same interned BDD node.

That said, there is a wrinkle: the "variables" that we use in these
BDDs — the individual constraints like `Lower ≤ T ≤ Upper` — are not
always independent of each other.

As an example, given types `A ≤ B ≤ C ≤ D` and a typevar `T`, the
constraints `A ≤ T ≤ C` and `B ≤ T ≤ D` "overlap" — their intersection
is non-empty. So we should be able to simplify

```
(A ≤ T ≤ C) ∧ (B ≤ T ≤ D) == (B ≤ T ≤ C)
```

That's not a simplification that the BDD structure can perform itself,
since those three constraints are modeled as separate BDD variables, and
are therefore "opaque" to the BDD algorithms.

That means we need to perform this kind of simplification ourselves. We
look at pairs of constraints that appear in a BDD and see if they can be
simplified relative to each other, and if so, replace the pair with the
simplification. A large part of the toil of getting this PR to work was
identifying all of those patterns and getting that substitution logic
correct.
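The pairwise simplification above can be sketched as interval intersection on the typevar's bounds. In this toy model (an assumption for illustration, using finite sets as "types" ordered by subset), intersecting two range constraints tightens them into one:

```python
# Toy model: a "type" is a frozenset, and subtyping is the subset relation.
# A constraint (lower, upper) means `lower <= T <= upper`.

def intersect(c1, c2):
    """Intersect two range constraints on the same typevar.

    Returns the tightened constraint, or None if the ranges are disjoint.
    """
    (lo1, hi1), (lo2, hi2) = c1, c2
    lo = lo1 | lo2  # join of the lower bounds (least upper bound)
    hi = hi1 & hi2  # meet of the upper bounds (greatest lower bound)
    return (lo, hi) if lo <= hi else None  # still satisfiable?


A = frozenset({1})
B = frozenset({1, 2})
C = frozenset({1, 2, 3})
D = frozenset({1, 2, 3, 4})

# (A <= T <= C) ∧ (B <= T <= D) simplifies to (B <= T <= C)
print(intersect((A, C), (B, D)))
```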

With this new representation, all existing tests pass, as well as some
new ones that capture test failures that were occurring on #20093.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-09-25 21:55:35 -04:00
Francesco Giacometti
e66a872c14 [ty] Coalesce allocations for parameter info in ArgumentMatcher (#20586)

## Summary

Follow-up to #20495. The improvement suggested by @AlexWaygood cannot be
applied as-is, since the `argument_matches` vector is indexed by argument
number while the two boolean vectors are indexed by parameter number.
Still, coalescing the latter two saves one allocation.
2025-09-25 20:56:59 -04:00
Dan Parizher
589a674a8d [isort] Fix infinite loop when checking equivalent imports (I002, PLR0402) (#20381)
## Summary

Fixes #20380

The fix exempts required imports from `PLR0402`
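A configuration that triggered the loop looks roughly like this (an illustrative sketch using ruff's real option names; the fixture in this PR uses the same aliased import). With `import concurrent.futures as futures` required by I002, PLR0402's fix rewrote it to `from concurrent import futures`, which I002 then re-inserted, and so on:

```toml
# pyproject.toml (illustrative)
[tool.ruff.lint]
select = ["I002", "PLR0402"]

[tool.ruff.lint.isort]
# I002 inserts this exact aliased form; before the fix, PLR0402's fix
# rewrote it to `from concurrent import futures`, re-triggering I002.
required-imports = ["import concurrent.futures as futures"]
```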
2025-09-25 16:08:15 -05:00
Brent Westbrook
e4ac9e9041 Replace two more uses of unsafe with const Option::unwrap (#20584)
I guess I missed these in #20007, but I found them today while grepping
for something else. `Option::unwrap` has been const since 1.83, so we
can use it here and avoid some unsafe code.
2025-09-25 15:35:13 -04:00
Dylan
f2b7c82534 Handle t-string prefixes in SimpleTokenizer (#20578)
The simple tokenizer is meant to skip strings, but it was recording a
`Name` token for t-strings (from the `t`). This PR fixes that.
2025-09-25 14:33:37 -05:00
Bhuminjay Soni
cfc64d1707 [syntax-errors]: future-feature-not-defined (F407) (#20554)

## Summary


This PR implements
https://docs.astral.sh/ruff/rules/future-feature-not-defined/ (F407) as
a semantic syntax error.

## Test Plan


I have written inline tests as directed in #17412

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
2025-09-25 13:52:24 -04:00
Brent Westbrook
6b7a9dc2f2 [isort] Clarify dependency between order-by-type and case-sensitive settings (#20559)
Summary
--

Fixes #20536 by linking between the isort options `case-sensitive` and
`order-by-type`. The latter takes precedence over the former, so it
seems good to clarify this somewhere.
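The interaction can be shown as a configuration sketch (illustrative; the option names are ruff's real isort settings):

```toml
[tool.ruff.lint.isort]
# `order-by-type` takes precedence: imports are first grouped by member
# type (constants, classes, functions). Within each group,
# `case-sensitive` decides whether casing affects the ordering.
order-by-type = true
case-sensitive = true
```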

I tweaked the wording slightly, but this is otherwise based on the patch
from @SkylerWittman in
https://github.com/astral-sh/ruff/issues/20536#issuecomment-3326097324
(thank you!)

Test Plan
--

N/a

---------

Co-authored-by: Skyler Wittman <skyler.wittman@gmail.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-09-25 16:25:12 +00:00
Brent Westbrook
9903104328 [pylint] Fix missing max-nested-blocks in settings display (#20574)
Summary
--

This fixes a bug pointed out in #20560 where one of the `pylint`
settings wasn't used in its `Display` implementation.

Test Plan
--

Existing tests with updated snapshots
2025-09-25 12:14:28 -04:00
Giovani Moutinho
beec2f2dbb [flake8-simplify] Improve help message clarity (SIM105) (#20548)
## Summary

Improve the SIM105 rule message to prevent user confusion about how to
properly use `contextlib.suppress`.

The previous message "Replace with `contextlib.suppress(ValueError)`"
was ambiguous and led users to incorrectly use
`contextlib.suppress(ValueError)` as a statement inside except blocks
instead of replacing the entire try-except-pass block with `with
contextlib.suppress(ValueError):`.

This change makes the message more explicit:
- **Before**: `"Use \`contextlib.suppress({exception})\` instead of
\`try\`-\`except\`-\`pass\`"`
- **After**: `"Replace \`try\`-\`except\`-\`pass\` block with \`with
contextlib.suppress({exception})\`"`

The fix title is also updated to be more specific:
- **Before**: `"Replace with \`contextlib.suppress({exception})\`"`  
- **After**: `"Replace \`try\`-\`except\`-\`pass\` with \`with
contextlib.suppress({exception})\`"`

Fixes #20462
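The distinction the new message draws can be shown directly (illustrative snippet, not from the PR):

```python
import contextlib

# Flagged by SIM105: a try-except-pass block that only suppresses an error.
try:
    int("not a number")
except ValueError:
    pass

# Correct fix: replace the *entire* block with a `with` statement...
with contextlib.suppress(ValueError):
    int("not a number")

# ...not a bare `contextlib.suppress(ValueError)` statement inside the
# except handler, which creates a context manager and discards it unused.
print("no exception escaped")
```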

## Test Plan

-  All existing SIM105 tests pass with updated snapshots
-  Cargo clippy passes without warnings  
-  Full test suite passes
-  The new messages clearly indicate that the entire try-except-pass
block should be replaced with a `with` statement, preventing the misuse
described in the issue

---------

Co-authored-by: Giovani Moutinho <e@mgiovani.dev>
2025-09-25 11:19:26 -04:00
Micha Reiser
c256c7943c [ty] Update salsa to fix hang when cycle head panics (#20577) 2025-09-25 17:13:07 +02:00
Dhruv Manilawala
35ed55ec8c [ty] Filter overloads using variadic parameters (#20547)
## Summary

Closes: https://github.com/astral-sh/ty/issues/551

This PR adds support for step 4 of the overload call evaluation
algorithm, which states:

> If the argument list is compatible with two or more overloads,
determine whether one or more of the overloads has a variadic parameter
(either `*args` or `**kwargs`) that maps to a corresponding argument
that supplies an indeterminate number of positional or keyword
arguments. If so, eliminate overloads that do not have a variadic
parameter.

With that, the overload call evaluation algorithm is now implemented
end to end, as stated in the typing spec.
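Step 4 can be sketched as a small filter (a toy model with invented names, not ty's implementation):

```python
def filter_variadic_overloads(matching, args_indeterminate):
    """Step 4 of overload call evaluation (toy model).

    If two or more overloads remain compatible and the call supplies an
    indeterminate number of arguments (e.g. via `*args` or `**kwargs`),
    keep only the overloads with a variadic parameter; otherwise leave
    the set unchanged.
    """
    if len(matching) >= 2 and args_indeterminate:
        variadic = [ov for ov in matching if ov["variadic"]]
        if variadic:
            return variadic
    return matching


overloads = [
    {"sig": "f(x: int, y: int) -> int", "variadic": False},
    {"sig": "f(*args: int) -> int", "variadic": True},
]

# A call like `f(*xs)` supplies an indeterminate number of arguments,
# so the non-variadic overload is eliminated.
print(filter_variadic_overloads(overloads, args_indeterminate=True))
```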

## Test Plan

Expand the overload call test suite.
2025-09-25 14:58:00 +00:00
55 changed files with 2220 additions and 1391 deletions

Cargo.lock generated
View File

@@ -3463,7 +3463,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "salsa"
version = "0.23.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3713cd7eb30821c0c086591832dd6f59f2af7fe7#3713cd7eb30821c0c086591832dd6f59f2af7fe7"
source = "git+https://github.com/salsa-rs/salsa.git?rev=29ab321b45d00daa4315fa2a06f7207759a8c87e#29ab321b45d00daa4315fa2a06f7207759a8c87e"
dependencies = [
"boxcar",
"compact_str",
@@ -3487,12 +3487,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.23.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3713cd7eb30821c0c086591832dd6f59f2af7fe7#3713cd7eb30821c0c086591832dd6f59f2af7fe7"
source = "git+https://github.com/salsa-rs/salsa.git?rev=29ab321b45d00daa4315fa2a06f7207759a8c87e#29ab321b45d00daa4315fa2a06f7207759a8c87e"
[[package]]
name = "salsa-macros"
version = "0.23.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3713cd7eb30821c0c086591832dd6f59f2af7fe7#3713cd7eb30821c0c086591832dd6f59f2af7fe7"
source = "git+https://github.com/salsa-rs/salsa.git?rev=29ab321b45d00daa4315fa2a06f7207759a8c87e#29ab321b45d00daa4315fa2a06f7207759a8c87e"
dependencies = [
"proc-macro2",
"quote",

View File

@@ -144,7 +144,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "3713cd7eb30821c0c086591832dd6f59f2af7fe7", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "29ab321b45d00daa4315fa2a06f7207759a8c87e", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",

View File

@@ -2445,6 +2445,7 @@ requires-python = ">= 3.11"
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -2758,6 +2759,7 @@ requires-python = ">= 3.11"
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -3070,6 +3072,7 @@ requires-python = ">= 3.11"
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -3434,6 +3437,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -3814,6 +3818,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -4142,6 +4147,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -4470,6 +4476,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -4755,6 +4762,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
@@ -5093,6 +5101,7 @@ from typing import Union;foo: Union[int, str] = 1
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false

View File

@@ -371,6 +371,7 @@ linter.pylint.max_branches = 12
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false

View File

@@ -0,0 +1,3 @@
import concurrent.futures as futures
1

View File

@@ -0,0 +1,3 @@
import concurrent.futures as futures
1

View File

@@ -803,11 +803,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
for alias in names {
if let Some("__future__") = module {
if checker.is_rule_enabled(Rule::FutureFeatureNotDefined) {
pyflakes::rules::future_feature_not_defined(checker, alias);
}
} else if &alias.name == "*" {
if module != Some("__future__") && &alias.name == "*" {
// F403
checker.report_diagnostic_if_enabled(
pyflakes::rules::UndefinedLocalWithImportStar {

View File

@@ -28,7 +28,7 @@ use itertools::Itertools;
use log::debug;
use rustc_hash::{FxHashMap, FxHashSet};
use ruff_db::diagnostic::{Annotation, Diagnostic, IntoDiagnosticMessage, Span};
use ruff_db::diagnostic::{Annotation, Diagnostic, DiagnosticTag, IntoDiagnosticMessage, Span};
use ruff_diagnostics::{Applicability, Fix, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::helpers::{collect_import_from_member, is_docstring_stmt, to_module_path};
@@ -696,6 +696,14 @@ impl SemanticSyntaxContext for Checker<'_> {
self.report_diagnostic(MultipleStarredExpressions, error.range);
}
}
SemanticSyntaxErrorKind::FutureFeatureNotDefined(name) => {
if self.is_rule_enabled(Rule::FutureFeatureNotDefined) {
self.report_diagnostic(
pyflakes::rules::FutureFeatureNotDefined { name },
error.range,
);
}
}
SemanticSyntaxErrorKind::ReboundComprehensionVariable
| SemanticSyntaxErrorKind::DuplicateTypeParameter
| SemanticSyntaxErrorKind::MultipleCaseAssignment(_)
@@ -3318,6 +3326,56 @@ impl DiagnosticGuard<'_, '_> {
pub(crate) fn defuse(mut self) {
self.diagnostic = None;
}
/// Set the message on the primary annotation for this diagnostic.
///
/// If a message already exists on the primary annotation, then this
/// overwrites the existing message.
///
/// This message is associated with the primary annotation created
/// for every `Diagnostic` that uses the `DiagnosticGuard` API.
/// Specifically, the annotation is derived from the `TextRange` given to
/// the `LintContext::report_diagnostic` API.
///
/// Callers can add additional primary or secondary annotations via the
/// `DerefMut` trait implementation to a `Diagnostic`.
pub(crate) fn set_primary_message(&mut self, message: impl IntoDiagnosticMessage) {
// N.B. It is normally bad juju to define `self` methods
// on types that implement `Deref`. Instead, it's idiomatic
// to do `fn foo(this: &mut LintDiagnosticGuard)`, which in
// turn forces callers to use
// `LintDiagnosticGuard(&mut guard, message)`. But this is
// supremely annoying for what is expected to be a common
// case.
//
// Moreover, most of the downside that comes from these sorts
// of methods is a semver hazard. Because the deref target type
// could also define a method by the same name, and that leads
// to confusion. But we own all the code involved here and
// there is no semver boundary. So... ¯\_(ツ)_/¯ ---AG
// OK because we know the diagnostic was constructed with a single
// primary annotation that will always come before any other annotation
// in the diagnostic. (This relies on the `Diagnostic` API not exposing
// any methods for removing annotations or re-ordering them, which is
// true as of 2025-04-11.)
let ann = self.primary_annotation_mut().unwrap();
ann.set_message(message);
}
/// Adds a tag on the primary annotation for this diagnostic.
///
/// This tag is associated with the primary annotation created
/// for every `Diagnostic` that uses the `DiagnosticGuard` API.
/// Specifically, the annotation is derived from the `TextRange` given to
/// the `LintContext::report_diagnostic` API.
///
/// Callers can add additional primary or secondary annotations via the
/// `DerefMut` trait implementation to a `Diagnostic`.
pub(crate) fn add_primary_tag(&mut self, tag: DiagnosticTag) {
let ann = self.primary_annotation_mut().unwrap();
ann.push_tag(tag);
}
}
impl DiagnosticGuard<'_, '_> {

View File

@@ -58,7 +58,9 @@ impl Violation for SuppressibleException {
fn fix_title(&self) -> Option<String> {
let SuppressibleException { exception } = self;
Some(format!("Replace with `contextlib.suppress({exception})`"))
Some(format!(
"Replace `try`-`except`-`pass` with `with contextlib.suppress({exception}): ...`"
))
}
}

View File

@@ -11,7 +11,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
9 | | pass
| |________^
|
help: Replace with `contextlib.suppress(ValueError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -40,7 +40,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError, OSError)` instead of `try`-`exce
17 |
18 | # SIM105
|
help: Replace with `contextlib.suppress(ValueError, OSError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError, OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -71,7 +71,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError, OSError)` instead of `try`-`exce
23 |
24 | # SIM105
|
help: Replace with `contextlib.suppress(ValueError, OSError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError, OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -102,7 +102,7 @@ SIM105 [*] Use `contextlib.suppress(BaseException)` instead of `try`-`except`-`p
29 |
30 | # SIM105
|
help: Replace with `contextlib.suppress(BaseException)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(BaseException): ...`
1 + import contextlib
2 + import builtins
3 | def foo():
@@ -134,7 +134,7 @@ SIM105 [*] Use `contextlib.suppress(a.Error, b.Error)` instead of `try`-`except`
35 |
36 | # OK
|
help: Replace with `contextlib.suppress(a.Error, b.Error)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(a.Error, b.Error): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -164,7 +164,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
88 | | ...
| |___________^
|
help: Replace with `contextlib.suppress(ValueError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -195,7 +195,7 @@ SIM105 Use `contextlib.suppress(ValueError, OSError)` instead of `try`-`except`-
104 |
105 | try:
|
help: Replace with `contextlib.suppress(ValueError, OSError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError, OSError): ...`
SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
--> SIM105_0.py:117:5
@@ -210,7 +210,7 @@ SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
121 |
122 | try: os.makedirs(model_dir);
|
help: Replace with `contextlib.suppress(OSError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -241,7 +241,7 @@ SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
125 |
126 | try: os.makedirs(model_dir);
|
help: Replace with `contextlib.suppress(OSError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -271,7 +271,7 @@ SIM105 [*] Use `contextlib.suppress(OSError)` instead of `try`-`except`-`pass`
129 | \
130 | #
|
help: Replace with `contextlib.suppress(OSError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(OSError): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -299,7 +299,7 @@ SIM105 [*] Use `contextlib.suppress()` instead of `try`-`except`-`pass`
136 | | pass
| |________^
|
help: Replace with `contextlib.suppress()`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(): ...`
1 + import contextlib
2 | def foo():
3 | pass
@@ -328,7 +328,7 @@ SIM105 [*] Use `contextlib.suppress(BaseException)` instead of `try`-`except`-`p
143 | | pass
| |________^
|
help: Replace with `contextlib.suppress(BaseException)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(BaseException): ...`
1 + import contextlib
2 | def foo():
3 | pass

View File

@@ -11,7 +11,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
8 | | pass
| |________^
|
help: Replace with `contextlib.suppress(ValueError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
1 | """Case: There's a random import, so it should add `contextlib` after it."""
2 | import math
3 + import contextlib

View File

@@ -11,7 +11,7 @@ SIM105 [*] Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass
13 | | pass
| |________^
|
help: Replace with `contextlib.suppress(ValueError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`
7 |
8 |
9 | # SIM105

View File

@@ -12,4 +12,4 @@ SIM105 Use `contextlib.suppress(ValueError)` instead of `try`-`except`-`pass`
13 | | pass
| |____________^
|
help: Replace with `contextlib.suppress(ValueError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ValueError): ...`

View File

@@ -10,7 +10,7 @@ SIM105 [*] Use `contextlib.suppress(ImportError)` instead of `try`-`except`-`pas
4 | | except ImportError: pass
| |___________________________^
|
help: Replace with `contextlib.suppress(ImportError)`
help: Replace `try`-`except`-`pass` with `with contextlib.suppress(ImportError): ...`
1 | #!/usr/bin/env python
- try:
2 + import contextlib

View File

@@ -991,6 +991,29 @@ mod tests {
Ok(())
}
#[test_case(Path::new("plr0402_skip.py"))]
fn plr0402_skips_required_imports(path: &Path) -> Result<()> {
let snapshot = format!("plr0402_skips_required_imports_{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("isort/required_imports").join(path).as_path(),
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter([NameImport::Import(
ModuleNameImport::alias(
"concurrent.futures".to_string(),
"futures".to_string(),
),
)]),
..super::settings::Settings::default()
},
..LinterSettings::for_rules([Rule::MissingRequiredImport, Rule::ManualFromImport])
},
)?;
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
#[test_case(Path::new("from_first.py"))]
fn from_first(path: &Path) -> Result<()> {
let snapshot = format!("from_first_{}", path.to_string_lossy());

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---

View File

@@ -1,11 +1,6 @@
use ruff_python_ast::Alias;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_stdlib::future::is_feature_name;
use ruff_text_size::Ranged;
use crate::Violation;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for `__future__` imports that are not defined in the current Python
@@ -19,7 +14,7 @@ use crate::checkers::ast::Checker;
/// - [Python documentation: `__future__`](https://docs.python.org/3/library/__future__.html)
#[derive(ViolationMetadata)]
pub(crate) struct FutureFeatureNotDefined {
name: String,
pub name: String,
}
impl Violation for FutureFeatureNotDefined {
@@ -29,17 +24,3 @@ impl Violation for FutureFeatureNotDefined {
format!("Future feature `{name}` is not defined")
}
}
/// F407
pub(crate) fn future_feature_not_defined(checker: &Checker, alias: &Alias) {
if is_feature_name(&alias.name) {
return;
}
checker.report_diagnostic(
FutureFeatureNotDefined {
name: alias.name.to_string(),
},
alias.range(),
);
}

View File

@@ -197,9 +197,7 @@ pub(crate) fn redefined_while_unused(checker: &Checker, scope_id: ScopeId, scope
shadowed,
);
if let Some(ann) = diagnostic.primary_annotation_mut() {
ann.set_message(format_args!("`{name}` redefined here"));
}
diagnostic.set_primary_message(format_args!("`{name}` redefined here"));
if let Some(range) = binding.parent_range(checker.semantic()) {
diagnostic.set_parent(range.start());

View File

@@ -435,6 +435,8 @@ pub(crate) fn unused_import(checker: &Checker, scope: &Scope) {
diagnostic.set_fix(fix.clone());
}
}
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
}
}
@@ -455,6 +457,8 @@ pub(crate) fn unused_import(checker: &Checker, scope: &Scope) {
if let Some(range) = binding.parent_range {
diagnostic.set_parent(range.start());
}
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
}
}

View File

@@ -272,4 +272,6 @@ pub(crate) fn unused_variable(checker: &Checker, name: &str, binding: &Binding)
if let Some(fix) = remove_unused_variable(binding, checker) {
diagnostic.set_fix(fix);
}
// Add Unnecessary tag for unused variables
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
}

View File

@@ -4,6 +4,7 @@ use ruff_text_size::{Ranged, TextRange};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use crate::checkers::ast::Checker;
use crate::rules::pyupgrade::rules::is_import_required_by_isort;
use crate::{Edit, Fix, FixAvailability, Violation};
/// ## What it does
@@ -58,6 +59,15 @@ pub(crate) fn manual_from_import(checker: &Checker, stmt: &Stmt, alias: &Alias,
return;
}
// Skip if this import is required by isort to prevent infinite loops with I002
if is_import_required_by_isort(
&checker.settings().isort.required_imports,
stmt.into(),
alias,
) {
return;
}
let mut diagnostic = checker.report_diagnostic(
ManualFromImport {
module: module.to_string(),

View File

@@ -96,7 +96,8 @@ impl fmt::Display for Settings {
self.max_branches,
self.max_statements,
self.max_public_methods,
self.max_locals
self.max_locals,
self.max_nested_blocks
]
}
Ok(())

View File

@@ -755,6 +755,7 @@ pub(crate) fn deprecated_import(checker: &Checker, import_from_stmt: &StmtImport
},
import_from_stmt.range(),
);
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Deprecated);
if let Some(content) = fix {
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
content,

View File

@@ -4,6 +4,7 @@ use itertools::{Itertools, chain};
use ruff_python_semantic::NodeId;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::name::{QualifiedName, QualifiedNameBuilder};
use ruff_python_ast::{self as ast, Alias, Stmt, StmtRef};
use ruff_python_semantic::{NameImport, Scope};
use ruff_text_size::Ranged;
@@ -95,20 +96,35 @@ pub(crate) fn is_import_required_by_isort(
stmt: StmtRef,
alias: &Alias,
) -> bool {
let segments: &[&str] = match stmt {
match stmt {
StmtRef::ImportFrom(ast::StmtImportFrom {
module: Some(module),
..
}) => &[module.as_str(), alias.name.as_str()],
StmtRef::ImportFrom(ast::StmtImportFrom { module: None, .. }) | StmtRef::Import(_) => {
&[alias.name.as_str()]
}
_ => return false,
};
}) => {
let mut builder = QualifiedNameBuilder::with_capacity(module.split('.').count() + 1);
builder.extend(module.split('.'));
builder.push(alias.name.as_str());
let qualified = builder.build();
required_imports
.iter()
.any(|required_import| required_import.qualified_name().segments() == segments)
required_imports
.iter()
.any(|required_import| required_import.qualified_name() == qualified)
}
StmtRef::ImportFrom(ast::StmtImportFrom { module: None, .. })
| StmtRef::Import(ast::StmtImport { .. }) => {
let name = alias.name.as_str();
let qualified = if name.contains('.') {
QualifiedName::from_dotted_name(name)
} else {
QualifiedName::user_defined(name)
};
required_imports
.iter()
.any(|required_import| required_import.qualified_name() == qualified)
}
_ => false,
}
}
/// UP010

View File

@@ -100,4 +100,6 @@ pub(crate) fn unused_unpacked_variable(checker: &Checker, name: &str, binding: &
if let Some(fix) = remove_unused_variable(binding, checker) {
diagnostic.set_fix(fix);
}
// Add Unnecessary tag for unused unpacked variables
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
}

View File

@@ -45,11 +45,7 @@ pub(super) fn generate_newtype_index(item: ItemStruct) -> syn::Result<proc_macro
// SAFETY:
// * The `value < u32::MAX` guarantees that the add doesn't overflow.
// * The `+ 1` guarantees that the index is not zero
//
// N.B. We have to use the unchecked variant here because we're
// in a const context and Option::unwrap isn't const yet.
#[expect(unsafe_code)]
Self(unsafe { std::num::NonZeroU32::new_unchecked((value as u32) + 1) })
Self(std::num::NonZeroU32::new((value as u32) + 1).unwrap())
}
#vis const fn from_u32(value: u32) -> Self {
@@ -58,11 +54,7 @@ pub(super) fn generate_newtype_index(item: ItemStruct) -> syn::Result<proc_macro
// SAFETY:
// * The `value < u32::MAX` guarantees that the add doesn't overflow.
// * The `+ 1` guarantees that the index is larger than zero.
//
// N.B. We have to use the unchecked variant here because we're
// in a const context and Option::unwrap isn't const yet.
#[expect(unsafe_code)]
Self(unsafe { std::num::NonZeroU32::new_unchecked(value + 1) })
Self(std::num::NonZeroU32::new(value + 1).unwrap())
}
/// Returns the index as a `u32` value

View File

@@ -0,0 +1,3 @@
from __future__ import invalid_feature
from __future__ import annotations, invalid_feature
from __future__ import invalid_feature_1, invalid_feature_2

View File

@@ -0,0 +1 @@
from __future__ import annotations

View File

@@ -65,8 +65,28 @@ impl SemanticSyntaxChecker {
names,
..
}) => {
if self.seen_futures_boundary && matches!(module.as_deref(), Some("__future__")) {
Self::add_error(ctx, SemanticSyntaxErrorKind::LateFutureImport, *range);
if matches!(module.as_deref(), Some("__future__")) {
for name in names {
if !is_known_future_feature(&name.name) {
// test_ok valid_future_feature
// from __future__ import annotations
// test_err invalid_future_feature
// from __future__ import invalid_feature
// from __future__ import annotations, invalid_feature
// from __future__ import invalid_feature_1, invalid_feature_2
Self::add_error(
ctx,
SemanticSyntaxErrorKind::FutureFeatureNotDefined(
name.name.to_string(),
),
name.range,
);
}
}
if self.seen_futures_boundary {
Self::add_error(ctx, SemanticSyntaxErrorKind::LateFutureImport, *range);
}
}
for alias in names {
if alias.name.as_str() == "*" && !ctx.in_module_scope() {
@@ -978,6 +998,22 @@ impl SemanticSyntaxChecker {
}
}
fn is_known_future_feature(name: &str) -> bool {
matches!(
name,
"nested_scopes"
| "generators"
| "division"
| "absolute_import"
| "with_statement"
| "print_function"
| "unicode_literals"
| "barry_as_FLUFL"
| "generator_stop"
| "annotations"
)
}
#[derive(Debug, Clone, PartialEq, Eq, Hash, get_size2::GetSize)]
pub struct SemanticSyntaxError {
pub kind: SemanticSyntaxErrorKind,
@@ -1086,6 +1122,9 @@ impl Display for SemanticSyntaxError {
SemanticSyntaxErrorKind::MultipleStarredExpressions => {
write!(f, "Two starred expressions in assignment")
}
SemanticSyntaxErrorKind::FutureFeatureNotDefined(name) => {
write!(f, "Future feature `{name}` is not defined")
}
}
}
}
@@ -1456,6 +1495,9 @@ pub enum SemanticSyntaxErrorKind {
/// left-hand side of an assignment. Using multiple starred expressions makes
/// the statement invalid and results in a `SyntaxError`.
MultipleStarredExpressions,
/// Represents the use of a `__future__` feature that is not defined.
FutureFeatureNotDefined(String),
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, get_size2::GetSize)]

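The hardcoded `is_known_future_feature` list above is intended to mirror Python's own `__future__.all_feature_names`; a quick check in Python confirms the two agree (this is an illustrative sanity check, not part of the change):

```python
import __future__

# The feature names hardcoded in `is_known_future_feature`.
known = {
    "nested_scopes", "generators", "division", "absolute_import",
    "with_statement", "print_function", "unicode_literals",
    "barry_as_FLUFL", "generator_stop", "annotations",
}

# `all_feature_names` is the canonical list of valid `__future__` imports.
assert set(__future__.all_feature_names) == known
print(sorted(known) == sorted(__future__.all_feature_names))
```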
View File

@@ -0,0 +1,146 @@
---
source: crates/ruff_python_parser/tests/fixtures.rs
input_file: crates/ruff_python_parser/resources/inline/err/invalid_future_feature.py
---
## AST
```
Module(
ModModule {
node_index: NodeIndex(None),
range: 0..151,
body: [
ImportFrom(
StmtImportFrom {
node_index: NodeIndex(None),
range: 0..38,
module: Some(
Identifier {
id: Name("__future__"),
range: 5..15,
node_index: NodeIndex(None),
},
),
names: [
Alias {
range: 23..38,
node_index: NodeIndex(None),
name: Identifier {
id: Name("invalid_feature"),
range: 23..38,
node_index: NodeIndex(None),
},
asname: None,
},
],
level: 0,
},
),
ImportFrom(
StmtImportFrom {
node_index: NodeIndex(None),
range: 39..90,
module: Some(
Identifier {
id: Name("__future__"),
range: 44..54,
node_index: NodeIndex(None),
},
),
names: [
Alias {
range: 62..73,
node_index: NodeIndex(None),
name: Identifier {
id: Name("annotations"),
range: 62..73,
node_index: NodeIndex(None),
},
asname: None,
},
Alias {
range: 75..90,
node_index: NodeIndex(None),
name: Identifier {
id: Name("invalid_feature"),
range: 75..90,
node_index: NodeIndex(None),
},
asname: None,
},
],
level: 0,
},
),
ImportFrom(
StmtImportFrom {
node_index: NodeIndex(None),
range: 91..150,
module: Some(
Identifier {
id: Name("__future__"),
range: 96..106,
node_index: NodeIndex(None),
},
),
names: [
Alias {
range: 114..131,
node_index: NodeIndex(None),
name: Identifier {
id: Name("invalid_feature_1"),
range: 114..131,
node_index: NodeIndex(None),
},
asname: None,
},
Alias {
range: 133..150,
node_index: NodeIndex(None),
name: Identifier {
id: Name("invalid_feature_2"),
range: 133..150,
node_index: NodeIndex(None),
},
asname: None,
},
],
level: 0,
},
),
],
},
)
```
## Semantic Syntax Errors
|
1 | from __future__ import invalid_feature
| ^^^^^^^^^^^^^^^ Syntax Error: Future feature `invalid_feature` is not defined
2 | from __future__ import annotations, invalid_feature
3 | from __future__ import invalid_feature_1, invalid_feature_2
|
|
1 | from __future__ import invalid_feature
2 | from __future__ import annotations, invalid_feature
| ^^^^^^^^^^^^^^^ Syntax Error: Future feature `invalid_feature` is not defined
3 | from __future__ import invalid_feature_1, invalid_feature_2
|
|
1 | from __future__ import invalid_feature
2 | from __future__ import annotations, invalid_feature
3 | from __future__ import invalid_feature_1, invalid_feature_2
| ^^^^^^^^^^^^^^^^^ Syntax Error: Future feature `invalid_feature_1` is not defined
|
|
1 | from __future__ import invalid_feature
2 | from __future__ import annotations, invalid_feature
3 | from __future__ import invalid_feature_1, invalid_feature_2
| ^^^^^^^^^^^^^^^^^ Syntax Error: Future feature `invalid_feature_2` is not defined
|

View File

@@ -0,0 +1,42 @@
---
source: crates/ruff_python_parser/tests/fixtures.rs
input_file: crates/ruff_python_parser/resources/inline/ok/valid_future_feature.py
---
## AST
```
Module(
ModModule {
node_index: NodeIndex(None),
range: 0..35,
body: [
ImportFrom(
StmtImportFrom {
node_index: NodeIndex(None),
range: 0..34,
module: Some(
Identifier {
id: Name("__future__"),
range: 5..15,
node_index: NodeIndex(None),
},
),
names: [
Alias {
range: 23..34,
node_index: NodeIndex(None),
name: Identifier {
id: Name("annotations"),
range: 23..34,
node_index: NodeIndex(None),
},
asname: None,
},
],
level: 0,
},
),
],
},
)
```

View File

@@ -1,17 +0,0 @@
/// Returns `true` if `name` is a valid `__future__` feature name, as defined by
/// `__future__.all_feature_names`.
pub fn is_feature_name(name: &str) -> bool {
matches!(
name,
"nested_scopes"
| "generators"
| "division"
| "absolute_import"
| "with_statement"
| "print_function"
| "unicode_literals"
| "barry_as_FLUFL"
| "generator_stop"
| "annotations"
)
}

View File

@@ -1,5 +1,4 @@
pub mod builtins;
pub mod future;
pub mod identifiers;
pub mod keyword;
pub mod logging;

View File

@@ -599,6 +599,16 @@ impl<'a> SimpleTokenizer<'a> {
| "rb"
| "rf"
| "u"
| "T"
| "TR"
| "Tr"
| "RT"
| "Rt"
| "t"
| "tR"
| "tr"
| "rT"
| "rt"
)
{
self.bogus = true;

View File

@@ -169,6 +169,22 @@ fn string_with_byte_kind() {
// note: not reversible: [other, bogus] vs [bogus, other]
}
#[test]
fn fstring() {
let source = "f'foo'";
let test_case = tokenize(source);
assert_debug_snapshot!(test_case.tokens());
}
#[test]
fn tstring() {
let source = "t'foo'";
let test_case = tokenize(source);
assert_debug_snapshot!(test_case.tokens());
}
#[test]
fn string_with_invalid_kind() {
let source = "abc'foo'";

View File

@@ -0,0 +1,14 @@
---
source: crates/ruff_python_trivia_integration_tests/tests/simple_tokenizer.rs
expression: test_case.tokens()
---
[
SimpleToken {
kind: Other,
range: 0..1,
},
SimpleToken {
kind: Bogus,
range: 1..6,
},
]

View File

@@ -0,0 +1,14 @@
---
source: crates/ruff_python_trivia_integration_tests/tests/simple_tokenizer.rs
expression: test_case.tokens()
---
[
SimpleToken {
kind: Other,
range: 0..1,
},
SimpleToken {
kind: Bogus,
range: 1..6,
},
]

View File

@@ -287,7 +287,7 @@ fn to_lsp_diagnostic(
let code = code.to_string();
(
Some(severity(&code)),
tags(&code),
tags(diagnostic),
Some(lsp_types::NumberOrString::String(code)),
)
} else {
@@ -338,12 +338,17 @@ fn severity(code: &str) -> lsp_types::DiagnosticSeverity {
}
}
fn tags(code: &str) -> Option<Vec<lsp_types::DiagnosticTag>> {
match code {
// F401: <module> imported but unused
// F841: local variable <name> is assigned to but never used
// RUF059: Unused unpacked variable
"F401" | "F841" | "RUF059" => Some(vec![lsp_types::DiagnosticTag::UNNECESSARY]),
_ => None,
}
fn tags(diagnostic: &Diagnostic) -> Option<Vec<lsp_types::DiagnosticTag>> {
diagnostic.primary_tags().map(|tags| {
tags.iter()
.map(|tag| match tag {
ruff_db::diagnostic::DiagnosticTag::Unnecessary => {
lsp_types::DiagnosticTag::UNNECESSARY
}
ruff_db::diagnostic::DiagnosticTag::Deprecated => {
lsp_types::DiagnosticTag::DEPRECATED
}
})
.collect()
})
}

View File

@@ -2302,6 +2302,9 @@ pub struct IsortOptions {
/// Order imports by type, which is determined by case, in addition to
/// alphabetically.
///
/// Note that this option takes precedence over the
/// [`case-sensitive`](#lint_isort_case-sensitive) setting when enabled.
#[option(
default = r#"true"#,
value_type = "bool",
@@ -2324,6 +2327,9 @@ pub struct IsortOptions {
pub force_sort_within_sections: Option<bool>,
/// Sort imports taking into account case sensitivity.
///
/// Note that the [`order-by-type`](#lint_isort_order-by-type) setting will
/// take precedence over this one when enabled.
#[option(
default = r#"false"#,
value_type = "bool",

View File

@@ -120,8 +120,10 @@ pub(crate) fn setup_tracing(
} else {
match level {
VerbosityLevel::Default => {
// Show warning traces
EnvFilter::default().add_directive(LevelFilter::WARN.into())
// Show warning traces for ty and ruff but not for other crates
EnvFilter::default()
.add_directive("ty=warn".parse().unwrap())
.add_directive("ruff=warn".parse().unwrap())
}
level => {
let level_filter = level.level_filter();

View File

@@ -159,6 +159,8 @@ def _(args: list[int]) -> None:
takes_zero(*args)
takes_one(*args)
takes_two(*args)
takes_two(*b"ab")
takes_two(*b"abc") # error: [too-many-positional-arguments]
takes_two_positional_only(*args)
takes_two_different(*args) # error: [invalid-argument-type]
takes_two_different_positional_only(*args) # error: [invalid-argument-type]

View File

@@ -931,6 +931,134 @@ def _(t: tuple[int, str] | tuple[int, str, int]) -> None:
f(*t) # error: [no-matching-overload]
```
## Filtering based on variadic arguments
This is step 4 of the overload call evaluation algorithm, which specifies that:
> If the argument list is compatible with two or more overloads, determine whether one or more of
> the overloads has a variadic parameter (either `*args` or `**kwargs`) that maps to a corresponding
> argument that supplies an indeterminate number of positional or keyword arguments. If so,
> eliminate overloads that do not have a variadic parameter.
This is only performed if the previous step resulted in more than one matching overload.
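The filtering logic can be sketched in a few lines of Python (hypothetical names; the actual implementation lives in ty's Rust call-binding code):

```python
def filter_variadic_overloads(matching, call_has_indeterminate_args):
    """Step 4: if the call supplies an indeterminate number of arguments
    (via *args or **kwargs) and some, but not all, of the matching
    overloads have a variadic parameter, keep only the variadic ones."""
    if not call_has_indeterminate_args:
        return matching
    variadic = [o for o in matching if o["has_variadic"]]
    # Only filter when this actually narrows the set.
    if variadic and len(variadic) < len(matching):
        return variadic
    return matching

overloads = [
    {"name": "f(x1)", "has_variadic": False},
    {"name": "f(x1, x2)", "has_variadic": False},
    {"name": "f(*args)", "has_variadic": True},
]
# A call like `f(*args)` supplies an indeterminate number of arguments,
# so only the variadic overload survives.
print([o["name"] for o in filter_variadic_overloads(overloads, True)])
```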
### Simple `*args`
`overloaded.pyi`:
```pyi
from typing import overload
@overload
def f(x1: int) -> tuple[int]: ...
@overload
def f(x1: int, x2: int) -> tuple[int, int]: ...
@overload
def f(*args: int) -> int: ...
```
```py
from overloaded import f
def _(x1: int, x2: int, args: list[int]):
reveal_type(f(x1)) # revealed: tuple[int]
reveal_type(f(x1, x2)) # revealed: tuple[int, int]
reveal_type(f(*(x1, x2))) # revealed: tuple[int, int]
# Step 4 should filter out all but the last overload.
reveal_type(f(*args)) # revealed: int
```
### Variable `*args`
```toml
[environment]
python-version = "3.11"
```
`overloaded.pyi`:
```pyi
from typing import overload
@overload
def f(x1: int) -> tuple[int]: ...
@overload
def f(x1: int, x2: int) -> tuple[int, int]: ...
@overload
def f(x1: int, *args: int) -> tuple[int, ...]: ...
```
```py
from overloaded import f
def _(x1: int, x2: int, args1: list[int], args2: tuple[int, *tuple[int, ...]]):
reveal_type(f(x1, x2)) # revealed: tuple[int, int]
reveal_type(f(*(x1, x2))) # revealed: tuple[int, int]
# Step 4 should filter out all but the last overload.
reveal_type(f(x1, *args1)) # revealed: tuple[int, ...]
reveal_type(f(*args2)) # revealed: tuple[int, ...]
```
### Simple `**kwargs`
`overloaded.pyi`:
```pyi
from typing import overload
@overload
def f(*, x1: int) -> int: ...
@overload
def f(*, x1: int, x2: int) -> tuple[int, int]: ...
@overload
def f(**kwargs: int) -> int: ...
```
```py
from overloaded import f
def _(x1: int, x2: int, kwargs: dict[str, int]):
reveal_type(f(x1=x1)) # revealed: int
reveal_type(f(x1=x1, x2=x2)) # revealed: tuple[int, int]
# Step 4 should filter out all but the last overload.
reveal_type(f(**{"x1": x1, "x2": x2})) # revealed: int
reveal_type(f(**kwargs)) # revealed: int
```
### `TypedDict`
The keys in a `TypedDict` are static, so there is no variable part to the argument list; step 4
shouldn't filter out any overloads.
`overloaded.pyi`:
```pyi
from typing import TypedDict, overload
@overload
def f(*, x: int) -> int: ...
@overload
def f(*, x: int, y: int) -> tuple[int, int]: ...
@overload
def f(**kwargs: int) -> tuple[int, ...]: ...
```
```py
from typing import TypedDict
from overloaded import f
class Foo(TypedDict):
x: int
y: int
def _(foo: Foo, kwargs: dict[str, int]):
reveal_type(f(**foo)) # revealed: tuple[int, int]
reveal_type(f(**kwargs)) # revealed: tuple[int, ...]
```
## Filtering based on `Any` / `Unknown`
This is step 5 of the overload call evaluation algorithm, which specifies that:

View File

@@ -312,12 +312,10 @@ def match_exhaustive(x: A[D] | B[E] | C[F]):
case C():
pass
case _:
# TODO: both of these are false positives (https://github.com/astral-sh/ty/issues/456)
no_diagnostic_here # error: [unresolved-reference]
assert_never(x) # error: [type-assertion-failure]
no_diagnostic_here
assert_never(x)
# TODO: false-positive diagnostic (https://github.com/astral-sh/ty/issues/456)
def match_exhaustive_no_assertion(x: A[D] | B[E] | C[F]) -> int: # error: [invalid-return-type]
def match_exhaustive_no_assertion(x: A[D] | B[E] | C[F]) -> int:
match x:
case A():
return 0

View File

@@ -28,7 +28,7 @@ get from the sequence is a valid `int`.
```py
from ty_extensions import is_assignable_to, is_equivalent_to, is_subtype_of, static_assert, Unknown
from typing import Any
from typing import Any, Never
class A: ...
class B(A): ...
@@ -60,6 +60,8 @@ static_assert(not is_subtype_of(C[A], C[Any]))
static_assert(not is_subtype_of(C[B], C[Any]))
static_assert(not is_subtype_of(C[Any], C[A]))
static_assert(not is_subtype_of(C[Any], C[B]))
static_assert(is_subtype_of(C[Any], C[object]))
static_assert(is_subtype_of(C[Never], C[Any]))
static_assert(is_subtype_of(D[B], C[A]))
static_assert(not is_subtype_of(D[A], C[B]))
@@ -104,7 +106,7 @@ that you pass into the consumer is a valid `int`.
```py
from ty_extensions import is_assignable_to, is_equivalent_to, is_subtype_of, static_assert, Unknown
from typing import Any
from typing import Any, Never
class A: ...
class B(A): ...
@@ -135,6 +137,8 @@ static_assert(not is_subtype_of(C[A], C[Any]))
static_assert(not is_subtype_of(C[B], C[Any]))
static_assert(not is_subtype_of(C[Any], C[A]))
static_assert(not is_subtype_of(C[Any], C[B]))
static_assert(is_subtype_of(C[object], C[Any]))
static_assert(is_subtype_of(C[Any], C[Never]))
static_assert(not is_subtype_of(D[B], C[A]))
static_assert(is_subtype_of(D[A], C[B]))
@@ -192,7 +196,7 @@ since we can't know in advance which of the allowed methods you'll want to use.
```py
from ty_extensions import is_assignable_to, is_equivalent_to, is_subtype_of, static_assert, Unknown
from typing import Any
from typing import Any, Never
class A: ...
class B(A): ...
@@ -225,6 +229,8 @@ static_assert(not is_subtype_of(C[A], C[Any]))
static_assert(not is_subtype_of(C[B], C[Any]))
static_assert(not is_subtype_of(C[Any], C[A]))
static_assert(not is_subtype_of(C[Any], C[B]))
static_assert(not is_subtype_of(C[object], C[Any]))
static_assert(not is_subtype_of(C[Any], C[Never]))
static_assert(not is_subtype_of(D[B], C[A]))
static_assert(not is_subtype_of(D[A], C[B]))
@@ -261,8 +267,8 @@ static_assert(not is_equivalent_to(D[Any], C[Unknown]))
## Bivariance
With a bivariant typevar, _all_ specializations of the generic class are assignable to (and in fact,
gradually equivalent to) each other, and all fully static specializations are subtypes of (and
equivalent to) each other.
gradually equivalent to) each other, and all specializations are subtypes of (and equivalent to)
each other.
This is a bit of a pathological case, which really only happens when the class doesn't use the typevar
at all. (If it did, it would have to be covariant, contravariant, or invariant, depending on _how_
@@ -270,7 +276,7 @@ the typevar was used.)
```py
from ty_extensions import is_assignable_to, is_equivalent_to, is_subtype_of, static_assert, Unknown
from typing import Any
from typing import Any, Never
class A: ...
class B(A): ...
@@ -298,18 +304,20 @@ static_assert(is_assignable_to(D[Any], C[B]))
static_assert(is_subtype_of(C[B], C[A]))
static_assert(is_subtype_of(C[A], C[B]))
static_assert(not is_subtype_of(C[A], C[Any]))
static_assert(not is_subtype_of(C[B], C[Any]))
static_assert(not is_subtype_of(C[Any], C[A]))
static_assert(not is_subtype_of(C[Any], C[B]))
static_assert(not is_subtype_of(C[Any], C[Any]))
static_assert(is_subtype_of(C[A], C[Any]))
static_assert(is_subtype_of(C[B], C[Any]))
static_assert(is_subtype_of(C[Any], C[A]))
static_assert(is_subtype_of(C[Any], C[B]))
static_assert(is_subtype_of(C[Any], C[Any]))
static_assert(is_subtype_of(C[object], C[Any]))
static_assert(is_subtype_of(C[Any], C[Never]))
static_assert(is_subtype_of(D[B], C[A]))
static_assert(is_subtype_of(D[A], C[B]))
static_assert(not is_subtype_of(D[A], C[Any]))
static_assert(not is_subtype_of(D[B], C[Any]))
static_assert(not is_subtype_of(D[Any], C[A]))
static_assert(not is_subtype_of(D[Any], C[B]))
static_assert(is_subtype_of(D[A], C[Any]))
static_assert(is_subtype_of(D[B], C[Any]))
static_assert(is_subtype_of(D[Any], C[A]))
static_assert(is_subtype_of(D[Any], C[B]))
static_assert(is_equivalent_to(C[A], C[A]))
static_assert(is_equivalent_to(C[B], C[B]))

View File

@@ -228,6 +228,48 @@ def _(flag: bool):
reveal_type(x) # revealed: Result1A | Result1B | Result2A | Result2B | Result3 | Result4
```
## Union type as iterable where `Iterator[]` is used as the return type of `__iter__`
This test differs from the above tests in that `Iterator` (an abstract type), rather than a concrete
type, is used as the return annotation of the `__iter__` methods.
```py
from typing import Iterator, Literal
class IntIterator:
def __iter__(self) -> Iterator[int]:
return iter(range(42))
class StrIterator:
def __iter__(self) -> Iterator[str]:
return iter("foo")
def f(x: IntIterator | StrIterator):
for a in x:
# TODO: this should be `int | str` (https://github.com/astral-sh/ty/issues/1089)
reveal_type(a) # revealed: int
```
Most real-world iterable types use `Iterator` as the return annotation of their `__iter__` methods:
```py
def g(
a: tuple[int, ...] | tuple[str, ...],
b: list[str] | list[int],
c: Literal["foo", b"bar"],
):
for x in a:
# TODO: should be `int | str` (https://github.com/astral-sh/ty/issues/1089)
reveal_type(x) # revealed: int
for y in b:
# TODO: should be `str | int` (https://github.com/astral-sh/ty/issues/1089)
reveal_type(y) # revealed: str
for z in c:
# TODO: should be `LiteralString | int` (https://github.com/astral-sh/ty/issues/1089)
reveal_type(z) # revealed: LiteralString
```
## Union type as iterable where one union element has no `__iter__` method
<!-- snapshot-diagnostics -->

View File

@@ -607,11 +607,22 @@ class HasXY(Protocol):
class Foo:
x: int
class IntSub(int): ...
class HasXIntSub(Protocol):
x: IntSub
static_assert(is_subtype_of(Foo, HasX))
static_assert(is_assignable_to(Foo, HasX))
static_assert(not is_subtype_of(Foo, HasXY))
static_assert(not is_assignable_to(Foo, HasXY))
# TODO: these should pass
static_assert(not is_subtype_of(HasXIntSub, HasX)) # error: [static-assert-error]
static_assert(not is_assignable_to(HasXIntSub, HasX)) # error: [static-assert-error]
static_assert(not is_subtype_of(HasX, HasXIntSub)) # error: [static-assert-error]
static_assert(not is_assignable_to(HasX, HasXIntSub)) # error: [static-assert-error]
class FooSub(Foo): ...
static_assert(is_subtype_of(FooSub, HasX))
@@ -1546,6 +1557,22 @@ static_assert(is_subtype_of(XImplicitFinal, HasXProperty))
static_assert(is_assignable_to(XImplicitFinal, HasXProperty))
```
But only if it has the correct type:
```py
class XAttrBad:
x: str
class HasStrXProperty(Protocol):
@property
def x(self) -> str: ...
# TODO: these should pass
static_assert(not is_assignable_to(XAttrBad, HasXProperty)) # error: [static-assert-error]
static_assert(not is_assignable_to(HasStrXProperty, HasXProperty)) # error: [static-assert-error]
static_assert(not is_assignable_to(HasXProperty, HasStrXProperty)) # error: [static-assert-error]
```
A read-only property on a protocol, unlike a mutable attribute, is covariant: `XSub` in the below
example satisfies the `HasXProperty` interface even though the type of the `x` attribute on `XSub`
is a subtype of `int` rather than being exactly `int`.
@@ -1558,6 +1585,13 @@ class XSub:
static_assert(is_subtype_of(XSub, HasXProperty))
static_assert(is_assignable_to(XSub, HasXProperty))
class XSubProto(Protocol):
@property
def x(self) -> XSub: ...
static_assert(is_subtype_of(XSubProto, HasXProperty))
static_assert(is_assignable_to(XSubProto, HasXProperty))
```
A read/write property on a protocol, where the getter returns the same type that the setter takes,
@@ -1582,8 +1616,8 @@ class XReadProperty:
return 42
# TODO: these should pass
static_assert(not is_subtype_of(XReadProperty, HasXProperty)) # error: [static-assert-error]
static_assert(not is_assignable_to(XReadProperty, HasXProperty)) # error: [static-assert-error]
static_assert(not is_subtype_of(XReadProperty, HasMutableXProperty)) # error: [static-assert-error]
static_assert(not is_assignable_to(XReadProperty, HasMutableXProperty)) # error: [static-assert-error]
class XReadWriteProperty:
@property
@@ -1593,18 +1627,19 @@ class XReadWriteProperty:
@x.setter
def x(self, val: int) -> None: ...
static_assert(is_subtype_of(XReadWriteProperty, HasXProperty))
static_assert(is_assignable_to(XReadWriteProperty, HasXProperty))
static_assert(is_subtype_of(XReadWriteProperty, HasMutableXProperty))
static_assert(is_assignable_to(XReadWriteProperty, HasMutableXProperty))
class XSub:
x: MyInt
static_assert(not is_subtype_of(XSub, XReadWriteProperty))
static_assert(not is_assignable_to(XSub, XReadWriteProperty))
# TODO: these should pass
static_assert(not is_subtype_of(XSub, HasMutableXProperty)) # error: [static-assert-error]
static_assert(not is_assignable_to(XSub, HasMutableXProperty)) # error: [static-assert-error]
```
A protocol with a read/write property `x` is exactly equivalent to a protocol with a mutable
attribute `x`. Both are subtypes of a protocol with a read-only prooperty `x`:
attribute `x`. Both are subtypes of a protocol with a read-only property `x`:
```py
from ty_extensions import is_equivalent_to
@@ -1618,8 +1653,22 @@ static_assert(is_equivalent_to(HasMutableXAttr, HasMutableXProperty)) # error:
static_assert(is_subtype_of(HasMutableXAttr, HasXProperty))
static_assert(is_assignable_to(HasMutableXAttr, HasXProperty))
static_assert(is_subtype_of(HasMutableXAttr, HasMutableXProperty))
static_assert(is_assignable_to(HasMutableXAttr, HasMutableXProperty))
static_assert(is_subtype_of(HasMutableXProperty, HasXProperty))
static_assert(is_assignable_to(HasMutableXProperty, HasXProperty))
static_assert(is_subtype_of(HasMutableXProperty, HasMutableXAttr))
static_assert(is_assignable_to(HasMutableXProperty, HasMutableXAttr))
class HasMutableXAttrWrongType(Protocol):
x: str
# TODO: these should pass
static_assert(not is_assignable_to(HasMutableXAttrWrongType, HasXProperty)) # error: [static-assert-error]
static_assert(not is_assignable_to(HasMutableXAttrWrongType, HasMutableXProperty)) # error: [static-assert-error]
static_assert(not is_assignable_to(HasMutableXProperty, HasMutableXAttrWrongType)) # error: [static-assert-error]
```
A read/write property on a protocol, where the setter accepts a subtype of the type returned by the
@@ -2212,6 +2261,129 @@ static_assert(is_equivalent_to(A | B | P1, P2 | B | A))
static_assert(is_equivalent_to(A | B | P3, P4 | B | A)) # error: [static-assert-error]
```
## Subtyping between two protocol types with method members
A protocol `PSub` with a method member can be considered a subtype of a protocol `PSuper` with a
method member if the signature of the member on `PSub` is a subtype of the signature of the member
on `PSuper`:
```py
from typing import Protocol
from ty_extensions import static_assert, is_subtype_of, is_assignable_to
class Super: ...
class Sub(Super): ...
class Unrelated: ...
class MethodPSuper(Protocol):
def f(self) -> Super: ...
class MethodPSub(Protocol):
def f(self) -> Sub: ...
class MethodPUnrelated(Protocol):
def f(self) -> Unrelated: ...
static_assert(is_subtype_of(MethodPSub, MethodPSuper))
# TODO: these should pass
static_assert(not is_assignable_to(MethodPUnrelated, MethodPSuper)) # error: [static-assert-error]
static_assert(not is_assignable_to(MethodPSuper, MethodPUnrelated)) # error: [static-assert-error]
static_assert(not is_assignable_to(MethodPSuper, MethodPSub)) # error: [static-assert-error]
```
## Subtyping between protocols with method members and protocols with non-method members
A protocol with a method member can be considered a subtype of a protocol with a read-only
`@property` member that returns a `Callable` type:
```py
from typing import Protocol, Callable
from ty_extensions import static_assert, is_subtype_of, is_assignable_to
class PropertyInt(Protocol):
@property
def f(self) -> Callable[[], int]: ...
class PropertyBool(Protocol):
@property
def f(self) -> Callable[[], bool]: ...
class PropertyNotReturningCallable(Protocol):
@property
def f(self) -> int: ...
class PropertyWithIncorrectSignature(Protocol):
@property
def f(self) -> Callable[[object], int]: ...
class Method(Protocol):
def f(self) -> bool: ...
static_assert(is_subtype_of(Method, PropertyInt))
static_assert(is_subtype_of(Method, PropertyBool))
# TODO: these should pass
static_assert(not is_assignable_to(Method, PropertyNotReturningCallable)) # error: [static-assert-error]
static_assert(not is_assignable_to(Method, PropertyWithIncorrectSignature)) # error: [static-assert-error]
```
However, a protocol with a method member can never be considered a subtype of a protocol with a
writable property member of the same name, as method members are covariant and immutable:
```py
class ReadWriteProperty(Protocol):
@property
def f(self) -> Callable[[], bool]: ...
@f.setter
def f(self, val: Callable[[], bool]): ...
# TODO: should pass
static_assert(not is_assignable_to(Method, ReadWriteProperty)) # error: [static-assert-error]
```
And for the same reason, they are never assignable to attribute members (which are also mutable):
```py
class Attribute(Protocol):
f: Callable[[], bool]
# TODO: should pass
static_assert(not is_assignable_to(Method, Attribute)) # error: [static-assert-error]
```
Protocols with attribute members, meanwhile, cannot be assigned to protocols with method members,
since a method member is guaranteed to exist on the meta-type as well as the instance type, whereas
this is not true for attribute members. The same principle also applies to protocols with property
members:
```py
# TODO: this should pass
static_assert(not is_assignable_to(PropertyBool, Method)) # error: [static-assert-error]
static_assert(not is_assignable_to(Attribute, Method)) # error: [static-assert-error]
```
But an exception to this rule is if an attribute member is marked as `ClassVar`, as this guarantees
that the member will be available on the meta-type as well as the instance type for inhabitants of
the protocol:
```py
from typing import ClassVar
class ClassVarAttribute(Protocol):
f: ClassVar[Callable[[], bool]]
static_assert(is_subtype_of(ClassVarAttribute, Method))
static_assert(is_assignable_to(ClassVarAttribute, Method))
class ClassVarAttributeBad(Protocol):
f: ClassVar[Callable[[], str]]
# TODO: these should pass:
static_assert(not is_subtype_of(ClassVarAttributeBad, Method)) # error: [static-assert-error]
static_assert(not is_assignable_to(ClassVarAttributeBad, Method)) # error: [static-assert-error]
```
## Narrowing of protocols
<!-- snapshot-diagnostics -->
@@ -2549,7 +2721,10 @@ class RecursiveOptionalParent(Protocol):
static_assert(is_assignable_to(RecursiveOptionalParent, RecursiveOptionalParent))
static_assert(is_assignable_to(RecursiveNonFullyStatic, RecursiveOptionalParent))
# Due to invariance of mutable attribute members, neither is assignable to the other
#
# TODO: should pass
static_assert(not is_assignable_to(RecursiveNonFullyStatic, RecursiveOptionalParent)) # error: [static-assert-error]
static_assert(not is_assignable_to(RecursiveOptionalParent, RecursiveNonFullyStatic))
class Other(Protocol):

View File

@@ -305,9 +305,9 @@ range.
```py
def _[T]() -> None:
# revealed: ty_extensions.ConstraintSet[((SubSub ≤ T@_ ≤ Base) ∧ ¬(Sub ≤ T@_ ≤ Base))]
# revealed: ty_extensions.ConstraintSet[(¬(Sub ≤ T@_ ≤ Base) ∧ (SubSub ≤ T@_ ≤ Base))]
reveal_type(range_constraint(SubSub, T, Base) & negated_range_constraint(Sub, T, Super))
# revealed: ty_extensions.ConstraintSet[((SubSub ≤ T@_ ≤ Super) ∧ ¬(Sub ≤ T@_ ≤ Base))]
# revealed: ty_extensions.ConstraintSet[(¬(Sub ≤ T@_ ≤ Base) ∧ (SubSub ≤ T@_ ≤ Super))]
reveal_type(range_constraint(SubSub, T, Super) & negated_range_constraint(Sub, T, Base))
```
@@ -339,9 +339,9 @@ Otherwise, the union cannot be simplified.
```py
def _[T]() -> None:
# revealed: ty_extensions.ConstraintSet[(¬(Sub ≤ T@_ ≤ Base) ∧ ¬(Base ≤ T@_ ≤ Super))]
# revealed: ty_extensions.ConstraintSet[(¬(Base ≤ T@_ ≤ Super) ∧ ¬(Sub ≤ T@_ ≤ Base))]
reveal_type(negated_range_constraint(Sub, T, Base) & negated_range_constraint(Base, T, Super))
# revealed: ty_extensions.ConstraintSet[(¬(SubSub ≤ T@_ ≤ Sub) ∧ ¬(Base ≤ T@_ ≤ Super))]
# revealed: ty_extensions.ConstraintSet[(¬(Base ≤ T@_ ≤ Super) ∧ ¬(SubSub ≤ T@_ ≤ Sub))]
reveal_type(negated_range_constraint(SubSub, T, Sub) & negated_range_constraint(Base, T, Super))
# revealed: ty_extensions.ConstraintSet[(¬(SubSub ≤ T@_ ≤ Sub) ∧ ¬(Unrelated ≤ T@_))]
reveal_type(negated_range_constraint(SubSub, T, Sub) & negated_range_constraint(Unrelated, T, object))
@@ -385,7 +385,7 @@ We cannot simplify the union of constraints that refer to different typevars.
def _[T, U]() -> None:
# revealed: ty_extensions.ConstraintSet[(Sub ≤ T@_ ≤ Base) (Sub ≤ U@_ ≤ Base)]
reveal_type(range_constraint(Sub, T, Base) | range_constraint(Sub, U, Base))
# revealed: ty_extensions.ConstraintSet[¬(Sub ≤ T@_ ≤ Base) ¬(Sub ≤ U@_ ≤ Base)]
# revealed: ty_extensions.ConstraintSet[¬(Sub ≤ U@_ ≤ Base) ¬(Sub ≤ T@_ ≤ Base)]
reveal_type(negated_range_constraint(Sub, T, Base) | negated_range_constraint(Sub, U, Base))
```
@@ -417,9 +417,9 @@ Otherwise, the union cannot be simplified.
```py
def _[T]() -> None:
# revealed: ty_extensions.ConstraintSet[(Sub ≤ T@_ ≤ Base) (Base ≤ T@_ ≤ Super)]
# revealed: ty_extensions.ConstraintSet[(Base ≤ T@_ ≤ Super) (Sub ≤ T@_ ≤ Base)]
reveal_type(range_constraint(Sub, T, Base) | range_constraint(Base, T, Super))
# revealed: ty_extensions.ConstraintSet[(SubSub ≤ T@_ ≤ Sub) (Base ≤ T@_ ≤ Super)]
# revealed: ty_extensions.ConstraintSet[(Base ≤ T@_ ≤ Super) (SubSub ≤ T@_ ≤ Sub)]
reveal_type(range_constraint(SubSub, T, Sub) | range_constraint(Base, T, Super))
# revealed: ty_extensions.ConstraintSet[(SubSub ≤ T@_ ≤ Sub) (Unrelated ≤ T@_)]
reveal_type(range_constraint(SubSub, T, Sub) | range_constraint(Unrelated, T, object))
@@ -488,9 +488,9 @@ range.
```py
def _[T]() -> None:
# revealed: ty_extensions.ConstraintSet[¬(SubSub ≤ T@_ ≤ Base) (Sub ≤ T@_ ≤ Base)]
# revealed: ty_extensions.ConstraintSet[(Sub ≤ T@_ ≤ Base) ¬(SubSub ≤ T@_ ≤ Base)]
reveal_type(negated_range_constraint(SubSub, T, Base) | range_constraint(Sub, T, Super))
# revealed: ty_extensions.ConstraintSet[¬(SubSub ≤ T@_ ≤ Super) (Sub ≤ T@_ ≤ Base)]
# revealed: ty_extensions.ConstraintSet[(Sub ≤ T@_ ≤ Base) ¬(SubSub ≤ T@_ ≤ Super)]
reveal_type(negated_range_constraint(SubSub, T, Super) | range_constraint(Sub, T, Base))
```
@@ -562,3 +562,42 @@ def _[T]() -> None:
# revealed: ty_extensions.ConstraintSet[always]
reveal_type(constraint | ~constraint)
```
### Negation of constraints involving two variables
```py
from typing import final, Never
from ty_extensions import range_constraint
class Base: ...
@final
class Unrelated: ...
def _[T, U]() -> None:
# revealed: ty_extensions.ConstraintSet[¬(U@_ ≤ Base) ¬(T@_ ≤ Base)]
reveal_type(~(range_constraint(Never, T, Base) & range_constraint(Never, U, Base)))
```
The union of a constraint and its negation should always be satisfiable.
```py
def _[T, U]() -> None:
c1 = range_constraint(Never, T, Base) & range_constraint(Never, U, Base)
# revealed: ty_extensions.ConstraintSet[always]
reveal_type(c1 | ~c1)
# revealed: ty_extensions.ConstraintSet[always]
reveal_type(~c1 | c1)
c2 = range_constraint(Unrelated, T, object) & range_constraint(Unrelated, U, object)
# revealed: ty_extensions.ConstraintSet[always]
reveal_type(c2 | ~c2)
# revealed: ty_extensions.ConstraintSet[always]
reveal_type(~c2 | c2)
union = c1 | c2
# revealed: ty_extensions.ConstraintSet[always]
reveal_type(union | ~union)
# revealed: ty_extensions.ConstraintSet[always]
reveal_type(~union | union)
```
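The property exercised here — the union of a constraint set and its negation is always satisfiable — is the tautology `c ∨ ¬c`. A brute-force truth-table check in Python illustrates the idea (a sketch of the property only, not of ty's BDD implementation; `c1` and `c2` are stand-ins for conjunctions of range constraints):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Check that `formula` evaluates to True under every assignment."""
    return all(formula(*bits) for bits in product([False, True], repeat=num_vars))

# Each lambda stands in for one conjunction of range constraints
# over two independent "variables" a and b.
c1 = lambda a, b: a and b
c2 = lambda a, b: a or not b
union = lambda a, b: c1(a, b) or c2(a, b)

# union | ~union is always satisfied, regardless of the assignment.
print(is_tautology(lambda a, b: union(a, b) or not union(a, b), 2))
```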

View File

@@ -1333,10 +1333,30 @@ impl<'db> CallableBinding<'db> {
}
MatchingOverloadIndex::Multiple(indexes) => {
// If two or more candidate overloads remain, proceed to step 4.
// TODO: Step 4
self.filter_overloads_containing_variadic(&indexes);
// Step 5
self.filter_overloads_using_any_or_unknown(db, argument_types.as_ref(), &indexes);
match self.matching_overload_index() {
MatchingOverloadIndex::None => {
// This shouldn't be possible because step 4 can only filter out overloads
// when there _is_ a matching variadic argument.
tracing::debug!("All overloads have been filtered out in step 4");
return None;
}
MatchingOverloadIndex::Single(index) => {
// If only one candidate overload remains, it is the winning match.
// Evaluate it as a regular (non-overloaded) call.
self.matching_overload_index = Some(index);
return None;
}
MatchingOverloadIndex::Multiple(indexes) => {
// If two or more candidate overloads remain, proceed to step 5.
self.filter_overloads_using_any_or_unknown(
db,
argument_types.as_ref(),
&indexes,
);
}
}
// This shouldn't lead to argument type expansion.
return None;
@@ -1446,15 +1466,28 @@ impl<'db> CallableBinding<'db> {
Some(self.overloads[index].return_type())
}
MatchingOverloadIndex::Multiple(matching_overload_indexes) => {
// TODO: Step 4
self.filter_overloads_containing_variadic(&matching_overload_indexes);
self.filter_overloads_using_any_or_unknown(
db,
expanded_arguments,
&matching_overload_indexes,
);
Some(self.return_type())
match self.matching_overload_index() {
MatchingOverloadIndex::None => {
tracing::debug!(
"All overloads have been filtered out in step 4 during argument type expansion"
);
None
}
MatchingOverloadIndex::Single(index) => {
self.matching_overload_index = Some(index);
Some(self.return_type())
}
MatchingOverloadIndex::Multiple(indexes) => {
self.filter_overloads_using_any_or_unknown(
db,
expanded_arguments,
&indexes,
);
Some(self.return_type())
}
}
}
};
@@ -1511,6 +1544,32 @@ impl<'db> CallableBinding<'db> {
None
}
/// Filter overloads based on variadic argument to variadic parameter match.
///
/// This is step 4 of the [overload call evaluation algorithm][1].
///
/// [1]: https://typing.python.org/en/latest/spec/overload.html#overload-call-evaluation
fn filter_overloads_containing_variadic(&mut self, matching_overload_indexes: &[usize]) {
let variadic_matching_overloads = matching_overload_indexes
.iter()
.filter(|&&overload_index| {
self.overloads[overload_index].variadic_argument_matched_to_variadic_parameter
})
.collect::<HashSet<_>>();
if variadic_matching_overloads.is_empty()
|| variadic_matching_overloads.len() == matching_overload_indexes.len()
{
return;
}
for overload_index in matching_overload_indexes {
if !variadic_matching_overloads.contains(overload_index) {
self.overloads[*overload_index].mark_as_unmatched_overload();
}
}
}
/// Filter overloads based on [`Any`] or [`Unknown`] argument types.
///
/// This is step 5 of the [overload call evaluation algorithm][1].
@@ -1984,17 +2043,23 @@ impl ArgumentForms {
}
}
#[derive(Default, Clone, Copy)]
struct ParameterInfo {
matched: bool,
suppress_missing_error: bool,
}
struct ArgumentMatcher<'a, 'db> {
parameters: &'a Parameters<'db>,
argument_forms: &'a mut ArgumentForms,
errors: &'a mut Vec<BindingError<'db>>,
argument_matches: Vec<MatchedArgument<'db>>,
parameter_matched: Vec<bool>,
suppress_missing_error: Vec<bool>,
parameter_info: Vec<ParameterInfo>,
next_positional: usize,
first_excess_positional: Option<usize>,
num_synthetic_args: usize,
variadic_argument_matched_to_variadic_parameter: bool,
}
impl<'a, 'db> ArgumentMatcher<'a, 'db> {
@@ -2009,11 +2074,11 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
argument_forms,
errors,
argument_matches: vec![MatchedArgument::default(); arguments.len()],
parameter_matched: vec![false; parameters.len()],
suppress_missing_error: vec![false; parameters.len()],
parameter_info: vec![ParameterInfo::default(); parameters.len()],
next_positional: 0,
first_excess_positional: None,
num_synthetic_args: 0,
variadic_argument_matched_to_variadic_parameter: false,
}
}
@@ -2029,6 +2094,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
}
}
#[expect(clippy::too_many_arguments)]
fn assign_argument(
&mut self,
argument_index: usize,
@@ -2037,6 +2103,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
parameter_index: usize,
parameter: &Parameter<'db>,
positional: bool,
variable_argument_length: bool,
) {
if !matches!(argument, Argument::Synthetic) {
let adjusted_argument_index = argument_index - self.num_synthetic_args;
@@ -2049,7 +2116,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
}
}
}
if self.parameter_matched[parameter_index] {
if self.parameter_info[parameter_index].matched {
if !parameter.is_variadic() && !parameter.is_keyword_variadic() {
self.errors.push(BindingError::ParameterAlreadyAssigned {
argument_index: self.get_argument_index(argument_index),
@@ -2057,11 +2124,20 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
});
}
}
if variable_argument_length
&& matches!(
(argument, parameter.kind()),
(Argument::Variadic, ParameterKind::Variadic { .. })
| (Argument::Keywords, ParameterKind::KeywordVariadic { .. })
)
{
self.variadic_argument_matched_to_variadic_parameter = true;
}
let matched_argument = &mut self.argument_matches[argument_index];
matched_argument.parameters.push(parameter_index);
matched_argument.types.push(argument_type);
matched_argument.matched = true;
self.parameter_matched[parameter_index] = true;
self.parameter_info[parameter_index].matched = true;
}
fn match_positional(
@@ -2069,6 +2145,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
argument_index: usize,
argument: Argument<'a>,
argument_type: Option<Type<'db>>,
variable_argument_length: bool,
) -> Result<(), ()> {
if matches!(argument, Argument::Synthetic) {
self.num_synthetic_args += 1;
@@ -2091,6 +2168,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
parameter_index,
parameter,
!parameter.is_variadic(),
variable_argument_length,
);
Ok(())
}
@@ -2115,7 +2193,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
argument_index: self.get_argument_index(argument_index),
parameter: ParameterContext::new(parameter, parameter_index, true),
});
self.suppress_missing_error[parameter_index] = true;
self.parameter_info[parameter_index].suppress_missing_error = true;
} else {
self.errors.push(BindingError::UnknownArgument {
argument_name: ast::name::Name::new(name),
@@ -2131,6 +2209,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
parameter_index,
parameter,
false,
false,
);
Ok(())
}
@@ -2157,6 +2236,8 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
),
};
let is_variable = length.is_variable();
// We must be able to match up the fixed-length portion of the argument with positional
// parameters, so we pass on any errors that occur.
for _ in 0..length.minimum() {
@@ -2164,12 +2245,13 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
argument_index,
argument,
argument_types.next().or(variable_element),
is_variable,
)?;
}
// If the tuple is variable-length, we assume that it will soak up all remaining positional
// parameters.
if length.is_variable() {
if is_variable {
while self
.parameters
.get_positional(self.next_positional)
@@ -2179,6 +2261,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
argument_index,
argument,
argument_types.next().or(variable_element),
is_variable,
)?;
}
}
@@ -2189,9 +2272,14 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
// raise a false positive as "too many arguments".
if self.parameters.variadic().is_some() {
if let Some(argument_type) = argument_types.next().or(variable_element) {
self.match_positional(argument_index, argument, Some(argument_type))?;
self.match_positional(argument_index, argument, Some(argument_type), is_variable)?;
for argument_type in argument_types {
self.match_positional(argument_index, argument, Some(argument_type))?;
self.match_positional(
argument_index,
argument,
Some(argument_type),
is_variable,
)?;
}
}
}
@@ -2232,7 +2320,8 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
};
for (parameter_index, parameter) in self.parameters.iter().enumerate() {
if self.parameter_matched[parameter_index] && !parameter.is_keyword_variadic() {
if self.parameter_info[parameter_index].matched && !parameter.is_keyword_variadic()
{
continue;
}
if matches!(
@@ -2248,6 +2337,7 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
parameter_index,
parameter,
false,
true,
);
}
}
@@ -2263,9 +2353,16 @@ impl<'a, 'db> ArgumentMatcher<'a, 'db> {
}
let mut missing = vec![];
for (index, matched) in self.parameter_matched.iter().copied().enumerate() {
for (
index,
ParameterInfo {
matched,
suppress_missing_error,
},
) in self.parameter_info.iter().copied().enumerate()
{
if !matched {
if self.suppress_missing_error[index] {
if suppress_missing_error {
continue;
}
let param = &self.parameters[index];
@@ -2670,6 +2767,10 @@ pub(crate) struct Binding<'db> {
/// order.
argument_matches: Box<[MatchedArgument<'db>]>,
/// Whether an argument that supplies an indeterminate number of positional or keyword
/// arguments is mapped to a variadic parameter (`*args` or `**kwargs`).
variadic_argument_matched_to_variadic_parameter: bool,
/// Bound types for parameters, in parameter source order, or `None` if no argument was matched
/// to that parameter.
parameter_tys: Box<[Option<Type<'db>>]>,
@@ -2688,6 +2789,7 @@ impl<'db> Binding<'db> {
specialization: None,
inherited_specialization: None,
argument_matches: Box::from([]),
variadic_argument_matched_to_variadic_parameter: false,
parameter_tys: Box::from([]),
errors: vec![],
}
@@ -2712,7 +2814,7 @@ impl<'db> Binding<'db> {
for (argument_index, (argument, argument_type)) in arguments.iter().enumerate() {
match argument {
Argument::Positional | Argument::Synthetic => {
let _ = matcher.match_positional(argument_index, argument, None);
let _ = matcher.match_positional(argument_index, argument, None, false);
}
Argument::Keyword(name) => {
let _ = matcher.match_keyword(argument_index, argument, None, name);
@@ -2730,6 +2832,8 @@ impl<'db> Binding<'db> {
}
self.return_ty = self.signature.return_ty.unwrap_or(Type::unknown());
self.parameter_tys = vec![None; parameters.len()].into_boxed_slice();
self.variadic_argument_matched_to_variadic_parameter =
matcher.variadic_argument_matched_to_variadic_parameter;
self.argument_matches = matcher.finish();
}
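The step-4 filter introduced above keeps every candidate overload when none (or all) of them matched a variadic argument to a variadic parameter, and otherwise drops the non-matching candidates. A hedged Python sketch of that selection rule (names are illustrative, not the ty implementation):

```python
def filter_variadic_overloads(indexes, matched_variadic):
    # indexes: remaining candidate overload indexes, in order.
    # matched_variadic: maps index -> whether a variadic argument
    # (*args/**kwargs) was matched to a variadic parameter.
    winners = {i for i in indexes if matched_variadic[i]}
    if not winners or len(winners) == len(indexes):
        # Zero or all candidates qualify: the filter is a no-op.
        return list(indexes)
    return [i for i in indexes if i in winners]
```

For example, `filter_variadic_overloads([0, 1, 2], {0: True, 1: False, 2: True})` keeps `[0, 2]`, while an all-`False` map leaves the candidates unchanged, which is why the `MatchingOverloadIndex::None` arm afterwards "shouldn't be possible".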

File diff suppressed because it is too large.


@@ -841,18 +841,6 @@ impl<'db> Specialization<'db> {
.zip(self.types(db))
.zip(other.types(db))
{
// As an optimization, we can return early if either type is dynamic, unless
// we're dealing with a top or bottom materialization.
if other_materialization_kind.is_none()
&& self_materialization_kind.is_none()
&& (self_type.is_dynamic() || other_type.is_dynamic())
{
match relation {
TypeRelation::Assignability => continue,
TypeRelation::Subtyping => return ConstraintSet::from(false),
}
}
// Subtyping/assignability of each type in the specialization depends on the variance
// of the corresponding typevar:
// - covariant: verify that self_type <: other_type
@@ -877,7 +865,7 @@ impl<'db> Specialization<'db> {
}
TypeVarVariance::Bivariant => ConstraintSet::from(true),
};
if result.intersect(db, &compatible).is_never_satisfied() {
if result.intersect(db, compatible).is_never_satisfied() {
return result;
}
}
@@ -918,7 +906,7 @@ impl<'db> Specialization<'db> {
}
TypeVarVariance::Bivariant => ConstraintSet::from(true),
};
if result.intersect(db, &compatible).is_never_satisfied() {
if result.intersect(db, compatible).is_never_satisfied() {
return result;
}
}
@@ -928,7 +916,7 @@ impl<'db> Specialization<'db> {
(None, None) => {}
(Some(self_tuple), Some(other_tuple)) => {
let compatible = self_tuple.is_equivalent_to_impl(db, other_tuple, visitor);
if result.intersect(db, &compatible).is_never_satisfied() {
if result.intersect(db, compatible).is_never_satisfied() {
return result;
}
}


@@ -6947,7 +6947,7 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
ast::UnaryOp::Invert,
Type::KnownInstance(KnownInstanceType::ConstraintSet(constraints)),
) => {
let constraints = constraints.constraints(self.db()).clone();
let constraints = constraints.constraints(self.db());
let result = constraints.negate(self.db());
Type::KnownInstance(KnownInstanceType::ConstraintSet(TrackedConstraintSet::new(
self.db(),
@@ -7311,9 +7311,9 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
Type::KnownInstance(KnownInstanceType::ConstraintSet(right)),
ast::Operator::BitAnd,
) => {
let left = left.constraints(self.db()).clone();
let right = right.constraints(self.db()).clone();
let result = left.and(self.db(), || right);
let left = left.constraints(self.db());
let right = right.constraints(self.db());
let result = left.and(self.db(), || *right);
Some(Type::KnownInstance(KnownInstanceType::ConstraintSet(
TrackedConstraintSet::new(self.db(), result),
)))
@@ -7324,9 +7324,9 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
Type::KnownInstance(KnownInstanceType::ConstraintSet(right)),
ast::Operator::BitOr,
) => {
let left = left.constraints(self.db()).clone();
let right = right.constraints(self.db()).clone();
let result = left.or(self.db(), || right);
let left = left.constraints(self.db());
let right = right.constraints(self.db());
let result = left.or(self.db(), || *right);
Some(Type::KnownInstance(KnownInstanceType::ConstraintSet(
TrackedConstraintSet::new(self.db(), result),
)))


@@ -551,10 +551,7 @@ impl<'db> Signature<'db> {
let self_type = self_type.unwrap_or(Type::unknown());
let other_type = other_type.unwrap_or(Type::unknown());
!result
.intersect(
db,
&self_type.is_equivalent_to_impl(db, other_type, visitor),
)
.intersect(db, self_type.is_equivalent_to_impl(db, other_type, visitor))
.is_never_satisfied()
};
@@ -699,10 +696,7 @@ impl<'db> Signature<'db> {
let type1 = type1.unwrap_or(Type::unknown());
let type2 = type2.unwrap_or(Type::unknown());
!result
.intersect(
db,
&type1.has_relation_to_impl(db, type2, relation, visitor),
)
.intersect(db, type1.has_relation_to_impl(db, type2, relation, visitor))
.is_never_satisfied()
};


@@ -44,7 +44,7 @@ impl TupleLength {
TupleLength::Variable(0, 0)
}
pub(crate) fn is_variable(self) -> bool {
pub(crate) const fn is_variable(self) -> bool {
matches!(self, TupleLength::Variable(_, _))
}
@@ -439,7 +439,7 @@ impl<'db> FixedLengthTuple<Type<'db>> {
let element_constraints =
self_ty.has_relation_to_impl(db, *other_ty, relation, visitor);
if result
.intersect(db, &element_constraints)
.intersect(db, element_constraints)
.is_never_satisfied()
{
return result;
@@ -452,7 +452,7 @@ impl<'db> FixedLengthTuple<Type<'db>> {
let element_constraints =
self_ty.has_relation_to_impl(db, *other_ty, relation, visitor);
if result
.intersect(db, &element_constraints)
.intersect(db, element_constraints)
.is_never_satisfied()
{
return result;
@@ -774,7 +774,7 @@ impl<'db> VariableLengthTuple<Type<'db>> {
let element_constraints =
self_ty.has_relation_to_impl(db, other_ty, relation, visitor);
if result
.intersect(db, &element_constraints)
.intersect(db, element_constraints)
.is_never_satisfied()
{
return result;
@@ -788,7 +788,7 @@ impl<'db> VariableLengthTuple<Type<'db>> {
let element_constraints =
self_ty.has_relation_to_impl(db, other_ty, relation, visitor);
if result
.intersect(db, &element_constraints)
.intersect(db, element_constraints)
.is_never_satisfied()
{
return result;
@@ -832,7 +832,7 @@ impl<'db> VariableLengthTuple<Type<'db>> {
return ConstraintSet::from(false);
}
};
if result.intersect(db, &pair_constraints).is_never_satisfied() {
if result.intersect(db, pair_constraints).is_never_satisfied() {
return result;
}
}
@@ -858,7 +858,7 @@ impl<'db> VariableLengthTuple<Type<'db>> {
return ConstraintSet::from(false);
}
};
if result.intersect(db, &pair_constraints).is_never_satisfied() {
if result.intersect(db, pair_constraints).is_never_satisfied() {
return result;
}
}


@@ -30,7 +30,7 @@ ty_python_semantic = { path = "../crates/ty_python_semantic" }
ty_vendored = { path = "../crates/ty_vendored" }
libfuzzer-sys = { git = "https://github.com/rust-fuzz/libfuzzer", default-features = false }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "3713cd7eb30821c0c086591832dd6f59f2af7fe7", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "29ab321b45d00daa4315fa2a06f7207759a8c87e", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",

ruff.schema.json generated

@@ -1655,7 +1655,7 @@
"type": "object",
"properties": {
"case-sensitive": {
"description": "Sort imports taking into account case sensitivity.",
"description": "Sort imports taking into account case sensitivity.\n\nNote that the [`order-by-type`](#lint_isort_order-by-type) setting will take precedence over this one when enabled.",
"type": [
"boolean",
"null"
@@ -1843,7 +1843,7 @@
]
},
"order-by-type": {
"description": "Order imports by type, which is determined by case, in addition to alphabetically.",
"description": "Order imports by type, which is determined by case, in addition to alphabetically.\n\nNote that this option takes precedence over the [`case-sensitive`](#lint_isort_case-sensitive) setting when enabled.",
"type": [
"boolean",
"null"
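The two updated descriptions codify a precedence rule between the isort settings. An illustrative ruff configuration exercising both (the option names are the real isort options; the comment paraphrases the documented precedence):

```toml
[lint.isort]
# When both are enabled, order-by-type takes precedence: imports are grouped
# by type (constants, classes, functions) first, and case-sensitive ordering
# applies within those groups.
order-by-type = true
case-sensitive = true
```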