Compare commits


53 Commits

Author SHA1 Message Date
Brent Westbrook
79d526cd91 attempting a fix 2025-11-11 16:33:59 -05:00
Brent Westbrook
376571eed1 add test files from #21385 2025-11-11 15:48:47 -05:00
Aria Desires
e4374f14ed [ty] Consider from thispackage import y to re-export y in __init__.pyi (#21387)
Fixes https://github.com/astral-sh/ty/issues/1487

This one is a true extension of non-standard semantics, and is therefore
a certified Hot Take we might conclude is simply a Bad Take (let's see
what ecosystem tests say...).
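
As a rough, hypothetical sketch of the pattern in the title (file and symbol names follow the title):

```py
# thispackage/__init__.pyi  (hypothetical layout)
from thispackage import y  # absolute self-import; with this change, treated as
                           # a re-export, roughly `from thispackage import y as y`
```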
2025-11-11 14:41:14 -05:00
Alex Waygood
03bd0619e9 [ty] Silence false-positive diagnostics when using typing.Dict or typing.Callable as the second argument to isinstance() (#21386) 2025-11-11 19:30:01 +00:00
Aria Desires
bd8812127d [ty] support absolute from imports introducing local submodules in __init__.py files (#21372)
By resolving `.` and the LHS of the from import during semantic
indexing, we can check if the LHS is a submodule of `.`, and handle
`from whatever.thispackage.x.y import z` exactly like we do `from .x.y
import z`.

Fixes https://github.com/astral-sh/ty/issues/1484
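
A small sketch of the equivalence described above, using the names from the message (layout is illustrative):

```py
# whatever/thispackage/__init__.py  (illustrative layout)

from .x.y import z                       # relative form, already handled
from whatever.thispackage.x.y import z   # absolute form, now handled the same way,
                                         # since the LHS resolves to a submodule of `.`
```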
2025-11-11 13:04:42 -05:00
Alex Waygood
44b0c9ebac [ty] Allow PEP-604 unions in stubs and TYPE_CHECKING blocks prior to 3.10 (#21379) 2025-11-11 14:33:43 +00:00
Micha Reiser
7b237d316f Add option to provide a reason to --add-noqa (#21294)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-11 14:03:46 +01:00
Micha Reiser
36cce347fd Reduce notebook memory footprint (#21319) 2025-11-11 10:43:37 +01:00
Douglas Creager
33b942c7ad [ty] Handle annotated self parameter in constructor of non-invariant generic classes (#21325)
This manifested as an error when inferring the type of a PEP-695 generic
class via its constructor parameters:

```py
class D[T, U]:
    @overload
    def __init__(self: "D[str, U]", u: U) -> None: ...
    @overload
    def __init__(self, t: T, u: U) -> None: ...
    def __init__(self, *args) -> None: ...

# revealed: D[Unknown, str]
# SHOULD BE: D[str, str]
reveal_type(D("string"))
```

This manifested because `D` is inferred to be bivariant in both `T` and
`U`. We weren't seeing this in the equivalent example for legacy
typevars, since those default to invariant. (This issue also showed up
for _covariant_ typevars, so this issue was not limited to bivariance.)

The underlying cause was a heuristic in our current constraint solver,
which attempts to handle situations like this:

```py
def f[T](t: T | None): ...
f(None)
```

Here, the `None` argument matches the non-typevar union element, so this
argument should not add any constraints on what `T` can specialize to.
Our previous heuristic would check for this by seeing if the argument
type is a subtype of the parameter annotation as a whole — even if it
isn't a union! That would cause us to erroneously ignore the `self`
parameter in our constructor call, since bivariant classes are
equivalent to each other, regardless of their specializations.

The quick fix is to move this heuristic "down a level", so that we only
apply it when the parameter annotation is a union. This heuristic should
go away completely 🤞 with the new constraint solver.
2025-11-10 19:46:49 -05:00
Aria Desires
9ce3230add [ty] Make implicit submodule imports only occur in global scope (#21370)
This loses any ability to have "per-function" implicit submodule
imports, to avoid the "ok but now we need per-scope imports" and "ok but
this should actually introduce a global that only exists during this
function" problems. A simple and clean implementation with no weird
corners.

Fixes https://github.com/astral-sh/ty/issues/1482
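
A hedged reading of the change as a sketch (module names are illustrative):

```py
# pkg/__init__.py  (illustrative)
import pkg.sub        # global scope: the implicit submodule binding is still modelled

def helper() -> None:
    import pkg.other  # function scope: with this change, ty no longer introduces an
                      # implicit submodule binding here; it stays confined to this function
```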
2025-11-10 18:59:48 -05:00
Aria Desires
2bc6c78e26 [ty] introduce local variables for from imports of submodules in __init__.py(i) (#21173)
This rips out the previous implementation in favour of a new
implementation with 3 rules:

- **froms are locals**: a `from..import` can only define locals; it does not have global side-effects. Specifically, any submodule attribute `a` that's implicitly introduced by either `from .a import b` or `from . import a as b` (in an `__init__.py(i)`) is a local and not a global. If you do such an import at the top of a file you won't notice this. However, if you do such an import in a function, it will only be function-scoped (so you'll need to do it in every function that wants to access it, making your code less sensitive to execution order).

- **first from, first serve**: only the *first* `from..import` in an `__init__.py(i)` that imports a particular direct submodule of the current package introduces that submodule as a local. Subsequent imports of the submodule will not introduce that local. This reflects the fact that in actual Python only the first import of a submodule (in the entire execution of the program) introduces it as an attribute of the package. By "first" we mean "the first time in this scope (or any parent scope)". This pairs well with the fact that we are specifically introducing a local (as long as you don't accidentally shadow or overwrite the local).

- **dot re-exports**: `from . import a` in an `__init__.pyi` is considered a re-export of `a` (equivalent to `from . import a as a`). This is required to properly handle many stubs in the wild. Currently it must be *exactly* `from . import ...`.

This implementation is intentionally limited/conservative (notably,
often requiring a from import to be relative). I'm going to file a ton
of followups for improvements so that their impact can be evaluated
separately.
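
A compact sketch of the three rules above, using an illustrative package `pkg`:

```py
# pkg/__init__.py  (illustrative)

# Rule 1 (froms are locals): this introduces `a` as a *local* in this scope,
# not as a global side-effect on the package.
from .a import b

# Rule 2 (first from, first serve): a later from-import of the same submodule
# does not introduce `a` again; the first one in this scope (or a parent) wins.
from .a import c

# pkg/__init__.pyi  (illustrative stub)
# Rule 3 (dot re-exports): in a stub, this is treated as `from . import a as a`,
# i.e. a re-export of the submodule `a`.
from . import a
```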


Fixes https://github.com/astral-sh/ty/issues/133
2025-11-10 23:04:56 +00:00
Dan Parizher
1fd852fb3f [ruff] Ignore str() when not used for simple conversion (RUF065) (#21330)
## Summary

Fixed RUF065 (`logging-eager-conversion`) to only flag `str()` calls
when they perform a simple conversion that can be safely removed. The
rule now ignores `str()` calls with no arguments, multiple arguments,
starred arguments, or keyword unpacking, preventing false positives.

Fixes #21315

## Problem Analysis

The RUF065 rule was incorrectly flagging all `str()` calls in logging
statements, even when `str()` was performing actual conversion work
beyond simple type coercion. Specifically, the rule flagged:

- `str()` with no arguments - which returns an empty string
- `str(b"data", "utf-8")` with multiple arguments - which performs
encoding conversion
- `str(*args)` with starred arguments - which unpacks arguments
- `str(**kwargs)` with keyword unpacking - which passes keyword
arguments

These cases cannot be safely removed because `str()` is doing meaningful
work (encoding conversion, argument unpacking, etc.), not just redundant
type conversion.

The root cause was that the rule only checked if the function was
`str()` without validating the call signature. It didn't distinguish
between simple `str(value)` conversions (which can be removed) and more
complex `str()` calls that perform actual work.

## Approach

The fix adds validation to the `str()` detection logic in
`logging_eager_conversion.rs`:

1. **Check argument count**: Only flag `str()` calls with exactly one
positional argument (`str_call_args.args.len() == 1`)
2. **Check for starred arguments**: Ensure the single argument is not
starred (`!str_call_args.args[0].is_starred_expr()`)
3. **Check for keyword arguments**: Ensure there are no keyword
arguments (`str_call_args.keywords.is_empty()`)

This ensures the rule only flags cases like `str(value)` where `str()`
is truly redundant and can be removed, while ignoring cases where
`str()` performs actual conversion work.

The fix maintains backward compatibility - all existing valid test cases
continue to be flagged correctly, while the new edge cases are properly
ignored.
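
For illustration, the cases described above as they might appear in logging calls (the flagged/ignored annotations follow the description; this is a sketch, not the rule's test suite):

```py
import logging

value = 42
args = (b"data", "utf-8")
kwargs = {"object": 1}

logging.info("got %s", str(value))             # still flagged: simple, removable conversion
logging.info("got %s", str())                  # ignored: no arguments
logging.info("got %s", str(b"data", "utf-8"))  # ignored: multiple arguments (decoding)
logging.info("got %s", str(*args))             # ignored: starred arguments
logging.info("got %s", str(**kwargs))          # ignored: keyword unpacking
```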

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-11-10 18:04:41 -05:00
Jack O'Connor
5f3e086ee4 [ty] implement typing.NewType by adding Type::NewTypeInstance 2025-11-10 14:55:47 -08:00
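
For context, a minimal `typing.NewType` example of the kind this commit adds support for:

```py
from typing import NewType

UserId = NewType("UserId", int)

def lookup(user_id: UserId) -> None: ...

lookup(UserId(42))  # accepted: a `UserId`
lookup(42)          # a checker that understands NewType rejects a plain int here
```
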
Aria Desires
039a69fa8c [ty] suppress inlay hints for +1 and -1 (#21368)
It's everyone's favourite language corner case!

Also having kicked the tires on it, I'm pretty happy to call this (in
conjunction with #21367):

Fixes https://github.com/astral-sh/ty/issues/494

There are cases where you can make noisy Literal hints appear, so we can
always iterate on it, but this handles like, 98% of the cases in the
wild, which is great.

---------

Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
2025-11-10 21:56:25 +00:00
Ibraheem Ahmed
3656b44877 [ty] Use type context for inference of generic constructors (#20933)
## Summary

Resolves https://github.com/astral-sh/ty/issues/1228.

This PR is stacked on https://github.com/astral-sh/ruff/pull/21210.
2025-11-10 21:49:48 +00:00
Ibraheem Ahmed
98869f0307 [ty] Improve generic call expression inference (#21210)
## Summary

Implements https://github.com/astral-sh/ty/issues/1356 and https://github.com/astral-sh/ty/issues/136#issuecomment-3413669994.
2025-11-10 21:29:05 +00:00
Aria Desires
d258302b08 [ty] suppress some trivial expr inlay hints (#21367)
I'm not 100% sold on this implementation, but it's a strict improvement
and it adds a ton of snapshot tests for future iteration.

Part of https://github.com/astral-sh/ty/issues/494
2025-11-10 19:51:14 +00:00
Dan Parizher
deeda56906 [configuration] Fix unclear error messages for line-length values exceeding u16::MAX (#21329)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-11-10 18:29:35 +00:00
justin
f63a9f2334 [ty] Fix incorrect inference of enum.auto() for enums with non-int mixins, and imprecise inference of enum.auto() for single-member enums (#20541)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-11-10 17:53:08 +00:00
Dan Parizher
e4dc406a3d [refurb] Detect empty f-strings (FURB105) (#21348)
## Summary

Fixes FURB105 (`print-empty-string`) to detect empty f-strings in
addition to regular empty strings. Previously, the rule only flagged
`print("")` but missed `print(f"")`. This fix ensures both cases are
detected and can be automatically fixed.

Fixes #21346

## Problem Analysis

The FURB105 rule checks for unnecessary empty strings passed to
`print()` calls. The `is_empty_string` helper function was only checking
for `Expr::StringLiteral` with empty values, but did not handle
`Expr::FString` (f-strings). As a result, `print(f"")` was not being
flagged as a violation, even though it's semantically equivalent to
`print("")` and should be simplified to `print()`.

The issue occurred because the function used a `matches!` macro that
only checked for string literals:

```rust
fn is_empty_string(expr: &Expr) -> bool {
    matches!(
        expr,
        Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) if value.is_empty()
    )
}
```

## Approach

1. **Import the helper function**: Added `is_empty_f_string` to the
imports from `ruff_python_ast::helpers`, which already provides logic to
detect empty f-strings.

2. **Update `is_empty_string` function**: Changed the implementation
from a `matches!` macro to a `match` expression that handles both string
literals and f-strings:

   ```rust
   fn is_empty_string(expr: &Expr) -> bool {
       match expr {
           Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value.is_empty(),
           Expr::FString(f_string) => is_empty_f_string(f_string),
           _ => false,
       }
   }
   ```

The fix leverages the existing `is_empty_f_string` helper function which
properly handles the complexity of f-strings, including nested f-strings
and interpolated expressions. This ensures the detection is accurate and
consistent with how empty strings are detected elsewhere in the
codebase.
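
For illustration, the newly covered case alongside the existing one:

```py
x = "hello"

print("")      # already flagged by FURB105; can be simplified to print()
print(f"")     # now also flagged: an empty f-string is equivalent to ""
print(f"{x}")  # not flagged: the f-string is not empty
```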
2025-11-10 12:41:44 -05:00
Matthew Mckee
1d188476b6 [ty] provide import completion when in from <name> <name> statement (#21291)

## Summary

Resolves https://github.com/astral-sh/ty/issues/1494

## Test Plan

Add a test showing that if we are in `from <name> <name> `, we provide the
keyword completion "import".
2025-11-10 11:59:45 -05:00
Aria Desires
4821c050ef [ty] elide redundant inlay hints for function args (#21365)
This elides the following inlay hints:

```py
foo([x=]x)
foo([x=]y.x)
foo([x=]x[0])
foo([x=]x(...))

# composes to complex situations
foo([x=]y.x(..)[0])
```

Fixes https://github.com/astral-sh/ty/issues/1514
2025-11-10 11:42:12 -05:00
Brent Westbrook
835e31b3ff Fix syntax error false positive on alternative match patterns (#21362)
Summary
--

Fixes #21360 by using the union of names instead of overwriting them, as
Micha suggested originally on #21104.

This avoids overwriting the `n` name bound in the `Subscript` with the
empty set of names visited in the nested OR pattern before visiting the
other arm of the outer OR pattern.
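
For background (a hedged sketch, not the exact case from the issue): every arm of an or-pattern must bind the same set of names, and the fix keeps the running set of names as a union so a nested or-pattern that binds nothing cannot wipe out names collected for another arm:

```py
def classify(point):
    match point:
        case (0, n) | ((1 | 2), n):  # both outer alternatives bind `n`; the nested
            return n                 # `1 | 2` or-pattern binds nothing, which must not
        case _:                      # erase the names collected for the other arm
            return None
```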

Test Plan
--

A new inline test case taken from the issue
2025-11-10 10:51:51 -05:00
Brent Westbrook
8d1efe964a Add a new "Opening a PR" section to the contribution guide (#21298)
Summary
--

This PR adds a new section to CONTRIBUTING.md describing the expected
contents of the PR summary and test plan, using the ecosystem report,
and communicating the status of a PR.

This seemed like a pretty good place to insert this in the document, at
the end of the advice on preparing actual code changes, but I'm
certainly open to other suggestions about both the content and
placement.

Test Plan
--

Future PRs :)

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2025-11-10 14:12:32 +00:00
Dan Parizher
04e7cecab3 [flake8-simplify] Fix SIM222 false positive for tuple(generator) or None (SIM222) (#21187)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-11-10 14:27:31 +01:00
Micha Reiser
84a810736d Rebuild ruff binary instead of sharing it across jobs (#21361) 2025-11-10 14:27:07 +01:00
Micha Reiser
f44598dc11 [ty] Fix --exclude and src.exclude merging (#21341) 2025-11-10 12:52:45 +01:00
David Peter
ab46c8de0f [ty] Add support for properties that return Self (#21335)
## Summary

Detect usages of implicit `self` in property getters, which allows us to
treat their signature as being generic.

closes https://github.com/astral-sh/ty/issues/1502
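
A minimal sketch of the supported pattern (class names are illustrative):

```py
from typing import Self, reveal_type

class Builder:
    @property
    def fluent(self) -> Self:
        # the getter's implicit `self` makes the signature generic over the
        # instance type, so the property narrows to the subclass below
        return self

class FancyBuilder(Builder): ...

reveal_type(FancyBuilder().fluent)  # expected: FancyBuilder, not Builder
```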

## Typing conformance

Two new type assertions are now succeeding.

## Ecosystem results

Mostly look good. There are a few new false positives related to a bug
with constrained typevars that is unrelated to the work here. I reported
this as https://github.com/astral-sh/ty/issues/1503.

## Test Plan

Added regression tests.
2025-11-10 11:13:36 +01:00
Henry Schreiner
a6f2dee33b Add upstream linter URL to ruff linter --output-format=json (#21316) 2025-11-10 10:42:09 +01:00
David Peter
238f151371 [ty] Add support for Optional and Annotated in implicit type aliases (#21321)
## Summary

Add support for `Optional` and `Annotated` in implicit type aliases

part of https://github.com/astral-sh/ty/issues/221
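
A minimal sketch of the kind of implicit aliases now supported (names are illustrative):

```py
from typing import Annotated, Optional

# implicit (unannotated) type aliases using these forms are now understood:
MaybeInt = Optional[int]
Meters = Annotated[float, "unit: metres"]

def distance(x: MaybeInt, d: Meters) -> None: ...

distance(None, 1.5)  # ok: `MaybeInt` includes `None`
```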

## Typing conformance changes

New expected diagnostics.

## Ecosystem

A lot of true positives, some known limitations unrelated to this PR.

## Test Plan

New Markdown tests
2025-11-10 10:24:38 +01:00
Micha Reiser
3fa609929f Save rust cache for CI jobs on main only (#21359) 2025-11-10 10:04:44 +01:00
Alex Waygood
73b1fce74a [ty] Add diagnostics for isinstance() and issubclass() calls that use invalid PEP-604 unions for their second argument (#21343)
## Summary

This PR adds extra validation for `isinstance()` and `issubclass()`
calls that use `UnionType` instances for their second argument.
According to typeshed's annotations, any `UnionType` is accepted for the
second argument, but this isn't true at runtime: all elements in the
`UnionType` must either be class objects or be `None` in order for the
`isinstance()` or `issubclass()` call to reliably succeed:

```pycon
% uvx python3.14                            
Python 3.14.0 (main, Oct 10 2025, 12:54:13) [Clang 20.1.4 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from typing import LiteralString
>>> import types
>>> type(LiteralString | int) is types.UnionType
True
>>> isinstance(42, LiteralString | int)
Traceback (most recent call last):
  File "<python-input-5>", line 1, in <module>
    isinstance(42, LiteralString | int)
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alexw/Library/Application Support/uv/python/cpython-3.14.0-macos-aarch64-none/lib/python3.14/typing.py", line 559, in __instancecheck__
    raise TypeError(f"{self} cannot be used with isinstance()")
TypeError: typing.LiteralString cannot be used with isinstance()
```

## Test Plan

Added mdtests/snapshots
2025-11-10 08:46:31 +00:00
renovate[bot]
52bd22003b Update Rust crate libcst to v1.8.6 (#21355)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-10 09:39:46 +01:00
renovate[bot]
0a6d6b6194 Update cargo-bins/cargo-binstall action to v1.15.11 (#21352)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[cargo-bins/cargo-binstall](https://redirect.github.com/cargo-bins/cargo-binstall)
| action | patch | `v1.15.10` -> `v1.15.11` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>cargo-bins/cargo-binstall (cargo-bins/cargo-binstall)</summary>

###
[`v1.15.11`](https://redirect.github.com/cargo-bins/cargo-binstall/releases/tag/v1.15.11)

[Compare
Source](https://redirect.github.com/cargo-bins/cargo-binstall/compare/v1.15.10...v1.15.11)

*Binstall is a tool to fetch and install Rust-based executables as binaries. It aims to be a drop-in replacement for `cargo install` in most cases. Install it today with `cargo install cargo-binstall`, from the binaries below, or if you already have it, upgrade with `cargo binstall cargo-binstall`.*

##### In this release:

- Fix binstalk-downloader cannot decode some zip files on macos (x64 and
arm64) platforms
([#&#8203;2049](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2049)
[#&#8203;2362](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2362))
- Fix grammer in `HELP.md` and `--help` output
([#&#8203;2357](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2357)
[#&#8203;2359](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2359))
- Update documentation link in Cargo.toml
([#&#8203;2355](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2355))

##### Other changes:

- Upgrade dependencies

</details>

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-10 09:17:37 +01:00
renovate[bot]
ca51feb319 Update Rust crate jiff to v0.2.16 (#21354)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [jiff](https://redirect.github.com/BurntSushi/jiff) |
workspace.dependencies | patch | `0.2.15` -> `0.2.16` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>BurntSushi/jiff (jiff)</summary>

###
[`v0.2.16`](https://redirect.github.com/BurntSushi/jiff/blob/HEAD/CHANGELOG.md#0216-2025-11-07)

[Compare
Source](https://redirect.github.com/BurntSushi/jiff/compare/0.2.15...0.2.16)

This release contains a number of enhancements and bug fixes that have accrued over the last few months. Most are small polishes. A couple of the bug fixes apply to panics that could occur when parsing invalid `TZ` strings or invalid `strptime` format strings.

Also, parsing into a `Span` should now be much faster (for both the ISO 8601 and "friendly" duration formats).
Enhancements:

- [#&#8203;298](https://redirect.github.com/BurntSushi/jiff/issues/298):
  Add Serde helpers for (de)serializing `std::time::Duration` values.
- [#&#8203;396](https://redirect.github.com/BurntSushi/jiff/issues/396):
Add `Sub` and `Add` trait implementations for `Zoned` (in addition to
the
  already existing trait implementations for `&Zoned`).
- [#&#8203;397](https://redirect.github.com/BurntSushi/jiff/pull/397):
Add `BrokenDownTime::set_meridiem` and ensure it overrides the hour when
  formatting.
- [#&#8203;409](https://redirect.github.com/BurntSushi/jiff/pull/409):
Switch dependency on `serde` to `serde_core`. This should help speed up
  compilation times in some cases.
- [#&#8203;430](https://redirect.github.com/BurntSushi/jiff/pull/430):
Add new `Zoned::series` API, making it consistent with the same API on
other
  datetime types.
- [#&#8203;432](https://redirect.github.com/BurntSushi/jiff/pull/432):
When `lenient` mode is enabled for `strftime`, Jiff will no longer error
when
  the formatting string contains invalid UTF-8.
- [#&#8203;432](https://redirect.github.com/BurntSushi/jiff/pull/432):
Formatting of `%y` and `%g` no longer fails based on the specific year
value.
- [#&#8203;432](https://redirect.github.com/BurntSushi/jiff/pull/432):
Parsing of `%s` is now a bit more consistent with other fields.
Moreover,
`BrokenDownTime::{to_timestamp,to_zoned}` will now prefer timestamps
parsed
  with `%s` over any other fields that have been parsed.
- [#&#8203;433](https://redirect.github.com/BurntSushi/jiff/pull/433):
Allow parsing just a `%s` into a `Zoned` via the `Etc/Unknown` time
zone.

Bug fixes:

- [#&#8203;386](https://redirect.github.com/BurntSushi/jiff/issues/386):
Fix a bug where `2087-12-31T23:00:00Z` in the `Africa/Casablanca` time
zone
could not be round-tripped (because its offset was calculated
incorrectly as
  a result of not handling "permanent DST" POSIX time zones).
- [#&#8203;407](https://redirect.github.com/BurntSushi/jiff/issues/407):
Fix a panic that occurred when parsing an empty string as a POSIX time
zone.
- [#&#8203;410](https://redirect.github.com/BurntSushi/jiff/issues/410):
  Fix a panic that could occur when parsing `%:` via `strptime` APIs.
- [#&#8203;414](https://redirect.github.com/BurntSushi/jiff/pull/414):
Update some parts of the documentation to indicate that
`TimeZone::unknown()`
is a fallback for `TimeZone::system()` (instead of the `jiff 0.1`
behavior of
  using `TimeZone::UTC`).
- [#&#8203;423](https://redirect.github.com/BurntSushi/jiff/issues/423):
  Fix a panicking bug when reading malformed TZif data.
- [#&#8203;426](https://redirect.github.com/BurntSushi/jiff/issues/426):
  Fix a panicking bug when parsing century (`%C`) via `strptime`.
- [#&#8203;445](https://redirect.github.com/BurntSushi/jiff/pull/445):
  Fixed bugs with parsing durations like `-9223372036854775808s`
  and `-PT9223372036854775808S`.

Performance:

- [#&#8203;445](https://redirect.github.com/BurntSushi/jiff/pull/445):
Parsing into `Span` or `SignedDuration` is now a fair bit faster in some
cases.

</details>

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-10 09:16:50 +01:00
renovate[bot]
e0a3cbb048 Update Rust crate quote to v1.0.42 (#21356)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [quote](https://redirect.github.com/dtolnay/quote) |
workspace.dependencies | patch | `1.0.41` -> `1.0.42` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>dtolnay/quote (quote)</summary>

###
[`v1.0.42`](https://redirect.github.com/dtolnay/quote/releases/tag/1.0.42)

[Compare
Source](https://redirect.github.com/dtolnay/quote/compare/1.0.41...1.0.42)

- Tweaks to improve build speed
([#&#8203;305](https://redirect.github.com/dtolnay/quote/issues/305),
[#&#8203;306](https://redirect.github.com/dtolnay/quote/issues/306),
[#&#8203;307](https://redirect.github.com/dtolnay/quote/issues/307),
[#&#8203;308](https://redirect.github.com/dtolnay/quote/issues/308),
thanks [@&#8203;dishmaker](https://redirect.github.com/dishmaker))

</details>

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-10 09:15:51 +01:00
renovate[bot]
f4f259395c Update taiki-e/install-action action to v2.62.49 (#21358)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[taiki-e/install-action](https://redirect.github.com/taiki-e/install-action)
| action | patch | `v2.62.45` -> `v2.62.49` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>taiki-e/install-action (taiki-e/install-action)</summary>

###
[`v2.62.49`](https://redirect.github.com/taiki-e/install-action/blob/HEAD/CHANGELOG.md#100---2021-12-30)

[Compare
Source](https://redirect.github.com/taiki-e/install-action/compare/v2.62.48...v2.62.49)

Initial release

https://redirect.github.com/taiki-e/install-action/compare/v2.44.19...v2.44.20

[2.44.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.18...v2.44.19

[2.44.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.17...v2.44.18

[2.44.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.16...v2.44.17

[2.44.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.15...v2.44.16

[2.44.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.14...v2.44.15

[2.44.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.13...v2.44.14

[2.44.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.12...v2.44.13

[2.44.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.11...v2.44.12

[2.44.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.10...v2.44.11

[2.44.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.9...v2.44.10

[2.44.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.8...v2.44.9

[2.44.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.7...v2.44.8

[2.44.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.6...v2.44.7

[2.44.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.5...v2.44.6

[2.44.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.4...v2.44.5

[2.44.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.3...v2.44.4

[2.44.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.2...v2.44.3

[2.44.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.1...v2.44.2

[2.44.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.0...v2.44.1

[2.44.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.7...v2.44.0

[2.43.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.6...v2.43.7

[2.43.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.5...v2.43.6

[2.43.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.4...v2.43.5

[2.43.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.3...v2.43.4

[2.43.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.2...v2.43.3

[2.43.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.1...v2.43.2

[2.43.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.0...v2.43.1

[2.43.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.42...v2.43.0

[2.42.42]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.41...v2.42.42

[2.42.41]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.40...v2.42.41

[2.42.40]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.39...v2.42.40

[2.42.39]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.38...v2.42.39

[2.42.38]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.37...v2.42.38

[2.42.37]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.36...v2.42.37

[2.42.36]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.35...v2.42.36

[2.42.35]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.34...v2.42.35

[2.42.34]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.33...v2.42.34

[2.42.33]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.32...v2.42.33

[2.42.32]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.31...v2.42.32

[2.42.31]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.30...v2.42.31

[2.42.30]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.29...v2.42.30

[2.42.29]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.28...v2.42.29

[2.42.28]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.27...v2.42.28

[2.42.27]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.26...v2.42.27

[2.42.26]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.25...v2.42.26

[2.42.25]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.24...v2.42.25

[2.42.24]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.23...v2.42.24

[2.42.23]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.22...v2.42.23

[2.42.22]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.21...v2.42.22

[2.42.21]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.20...v2.42.21

[2.42.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.19...v2.42.20

[2.42.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.18...v2.42.19

[2.42.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.17...v2.42.18

[2.42.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.16...v2.42.17

[2.42.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.15...v2.42.16

[2.42.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.14...v2.42.15

[2.42.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.13...v2.42.14

[2.42.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.12...v2.42.13

[2.42.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.11...v2.42.12

[2.42.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.10...v2.42.11

[2.42.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.9...v2.42.10

[2.42.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.8...v2.42.9

[2.42.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.7...v2.42.8

[2.42.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.6...v2.42.7

[2.42.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.5...v2.42.6

[2.42.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.4...v2.42.5

[2.42.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.3...v2.42.4

[2.42.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.2...v2.42.3

[2.42.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.1...v2.42.2

[2.42.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.0...v2.42.1

[2.42.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.18...v2.42.0

[2.41.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.17...v2.41.18

[2.41.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.16...v2.41.17

[2.41.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.15...v2.41.16

[2.41.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.14...v2.41.15

[2.41.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.13...v2.41.14

[2.41.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.12...v2.41.13

[2.41.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.11...v2.41.12

[2.41.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.10...v2.41.11

[2.41.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.9...v2.41.10

[2.41.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.8...v2.41.9

[2.41.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.7...v2.41.8

[2.41.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.6...v2.41.7

[2.41.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.5...v2.41.6

[2.41.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.4...v2.41.5

[2.41.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.3...v2.41.4

[2.41.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.2...v2.41.3

[2.41.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.1...v2.41.2

[2.41.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.0...v2.41.1

[2.41.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.40.2...v2.41.0

[2.40.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.40.1...v2.40.2

[2.40.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.40.0...v2.40.1

[2.40.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.39.2...v2.40.0

[2.39.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.39.1...v2.39.2

[2.39.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.39.0...v2.39.1

[2.39.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.7...v2.39.0

[2.38.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.6...v2.38.7

[2.38.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.5...v2.38.6

[2.38.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.4...v2.38.5

[2.38.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.3...v2.38.4

[2.38.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.2...v2.38.3

[2.38.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.1...v2.38.2

[2.38.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.0...v2.38.1

[2.38.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.37.0...v2.38.0

[2.37.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.36.0...v2.37.0

[2.36.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.35.0...v2.36.0

[2.35.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.3...v2.35.0

[2.34.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.2...v2.34.3

[2.34.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.1...v2.34.2

[2.34.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.0...v2.34.1

[2.34.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.36...v2.34.0

[2.33.36]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.35...v2.33.36

[2.33.35]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.34...v2.33.35

[2.33.34]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.33...v2.33.34

[2.33.33]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.32...v2.33.33

[2.33.32]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.31...v2.33.32

[2.33.31]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.30...v2.33.31

[2.33.30]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.29...v2.33.30

[2.33.29]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.28...v2.33.29

[2.33.28]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.27...v2.33.28

[2.33.27]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.26...v2.33.27

[2.33.26]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.25...v2.33.26

[2.33.25]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.24...v2.33.25

[2.33.24]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.23...v2.33.24

[2.33.23]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.22...v2.33.23

[2.33.22]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.21...v2.33.22

[2.33.21]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.20...v2.33.21

[2.33.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.19...v2.33.20

[2.33.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.18...v2.33.19

[2.33.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.17...v2.33.18

[2.33.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.16...v2.33.17

[2.33.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.15...v2.33.16

[2.33.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.14...v2.33.15

[2.33.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.13...v2.33.14

[2.33.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.12...v2.33.13

[2.33.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.11...v2.33.12

[2.33.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.10...v2.33.11

[2.33.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.9...v2.33.10

[2.33.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.8...v2.33.9

[2.33.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.7...v2.33.8

[2.33.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.6...v2.33.7

[2.33.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.5...v2.33.6

[2.33.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.4...v2.33.5

[2.33.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.3...v2.33.4

[2.33.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.2...v2.33.3

[2.33.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.1...v2.33.2

[2.33.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.0...v2.33.1

[2.33.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.20...v2.33.0

[2.32.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.19...v2.32.20

[2.32.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.18...v2.32.19

[2.32.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.17...v2.32.18

[2.32.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.16...v2.32.17

[2.32.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.15...v2.32.16

[2.32.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.14...v2.32.15

[2.32.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.13...v2.32.14

[2.32.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.12...v2.32.13

[2.32.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.11...v2.32.12

[2.32.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.10...v2.32.11

[2.32.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.9...v2.32.10

[2.32.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.8...v2.32.9

[2.32.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.7...v2.32.8

[2.32.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.6...v2.32.7

[2.32.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.5...v2.32.6

[2.32.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.4...v2.32.5

[2.32.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.3...v2.32.4

[2.32.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.2...v2.32.3

[2.32.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.1...v2.32.2

[2.32.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.0...v2.32.1

[2.32.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.3...v2.32.0

[2.31.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.2...v2.31.3

[2.31.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.1...v2.31.2

[2.31.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.0...v2.31.1

[2.31.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.30.0...v2.31.0

[2.30.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.8...v2.30.0

[2.29.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.7...v2.29.8

[2.29.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.6...v2.29.7

[2.29.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.5...v2.29.6

[2.29.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.4...v2.29.5

[2.29.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.3...v2.29.4

[2.29.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.2...v2.29.3

[2.29.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.1...v2.29.2

[2.29.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.0...v2.29.1

[2.29.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.16...v2.29.0

[2.28.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.15...v2.28.16

[2.28.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.14...v2.28.15

[2.28.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.13...v2.28.14

[2.28.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.12...v2.28.13

[2.28.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.11...v2.28.12

[2.28.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.10...v2.28.11

[2.28.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.9...v2.28.10

[2.28.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.8...v2.28.9

[2.28.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.7...v2.28.8

[2.28.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.6...v2.28.7

[2.28.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.5...v2.28.6

[2.28.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.4...v2.28.5

[2.28.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.3...v2.28.4

[2.28.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.2...v2.28.3

[2.28.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.1...v2.28.2

[2.28.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.0...v2.28.1

[2.28.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.15...v2.28.0

[2.27.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.14...v2.27.15

[2.27.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.13...v2.27.14

[2.27.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.12...v2.27.13

[2.27.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.11...v2.27.12

[2.27.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.10...v2.27.11

[2.27.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.9...v2.27.10

[2.27.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.8...v2.27.9

[2.27.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.7...v2.27.8

[2.27.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.6...v2.27.7

[2.27.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.5...v2.27.6

[2.27.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.4...v2.27.5

[2.27.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.3...v2.27.4

[2.27.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.2...v2.27.3

[2.27.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.1...v2.27.2

[2.27.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.0...v2.27.1

[2.27.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.20...v2.27.0

[2.26.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.19...v2.26.20

[2.26.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.18...v2.26.19

[2.26.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.17...v2.26.18

[2.26.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.16...v2.26.17

[2.26.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.15...v2.26.16

[2.26.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.14...v2.26.15

[2.26.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.13...v2.26.14

[2.26.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.12...v2.26.13

[2.26.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.11...v2.26.12

[2.26.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.10...v2.26.11

[2.26.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.9...v2.26.10

[2.26.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.8...v2.26.9

[2.26.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.7...v2.26.8

[2.26.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.6...v2.26.7

[2.26.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.5...v2.26.6

[2.26.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.4...v2.26.5

[2.26.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.3...v2.26.4

[2.26.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.2...v2.26.3

[2.26.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.1...v2.26.2

[2.26.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.0...v2.26.1

[2.26.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.11...v2.26.0

[2.25.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.10...v2.25.11

[2.25.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.9...v2.25.10

[2.25.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.8...v2.25.9

[2.25.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.7...v2.25.8

[2.25.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.6...v2.25.7

[2.25.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.5...v2.25.6

[2.25.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.4...v2.25.5

[2.25.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.3...v2.25.4

[2.25.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.2...v2.25.3

[2.25.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.1...v2.25.2

[2.25.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.0...v2.25.1

[2.25.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.4...v2.25.0

[2.24.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.3...v2.24.4

[2.24.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.2...v2.24.3

[2.24.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.1...v2.24.2

[2.24.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.0...v2.24.1

[2.24.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.9...v2.24.0

[2.23.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.8...v2.23.9

[2.23.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.7...v2.23.8

[2.23.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.6...v2.23.7

[2.23.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.5...v2.23.6

[2.23.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.4...v2.23.5

[2.23.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.3...v2.23.4

[2.23.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.2...v2.23.3

[2.23.2]: https://redirect.github.com/taiki-e/in

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6WyJpbnRlcm5hbCJdfQ==-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-10 09:15:35 +01:00
renovate[bot]
2a1d412f72 Update Rust crate syn to v2.0.110 (#21357)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [syn](https://redirect.github.com/dtolnay/syn) |
workspace.dependencies | patch | `2.0.108` -> `2.0.110` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
> Dashboard for more information.

---

### Release Notes

<details>
<summary>dtolnay/syn (syn)</summary>

###
[`v2.0.110`](https://redirect.github.com/dtolnay/syn/releases/tag/2.0.110)

[Compare
Source](https://redirect.github.com/dtolnay/syn/compare/2.0.109...2.0.110)

- Tweaks to improve build speed
([#&#8203;1939](https://redirect.github.com/dtolnay/syn/issues/1939),
thanks [@&#8203;dishmaker](https://redirect.github.com/dishmaker))
- Make `syn::ext::IdentExt::unraw` available without "parsing" feature
([#&#8203;1940](https://redirect.github.com/dtolnay/syn/issues/1940))
- Support parsing `syn::Meta` followed by `=>`
([#&#8203;1944](https://redirect.github.com/dtolnay/syn/issues/1944))

###
[`v2.0.109`](https://redirect.github.com/dtolnay/syn/releases/tag/2.0.109)

[Compare
Source](https://redirect.github.com/dtolnay/syn/compare/2.0.108...2.0.109)

- Tweaks to improve build speed
([#&#8203;1927](https://redirect.github.com/dtolnay/syn/issues/1927),
[#&#8203;1928](https://redirect.github.com/dtolnay/syn/issues/1928),
[#&#8203;1930](https://redirect.github.com/dtolnay/syn/issues/1930),
[#&#8203;1932](https://redirect.github.com/dtolnay/syn/issues/1932),
[#&#8203;1934](https://redirect.github.com/dtolnay/syn/issues/1934),
thanks [@&#8203;dishmaker](https://redirect.github.com/dishmaker))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6WyJpbnRlcm5hbCJdfQ==-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-10 09:14:48 +01:00
renovate[bot]
dd751e8d07 Update dependency ruff to v0.14.4 (#21353)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-10 01:35:42 +00:00
Alex Waygood
020ff1723b [ty] Add narrowing for isinstance() and issubclass() checks that use PEP-604 unions (#21334) 2025-11-08 18:20:46 +00:00
Hugo van Kemenade
09e6af16c8 [ruff+ty] Add colour to --help (#21337) 2025-11-08 17:17:14 +01:00
Micha Reiser
76efc8061d [ty] Make variance_of logging trace only (#21339) 2025-11-08 14:10:24 +00:00
Dan Parizher
16de4aa3cc [refurb] Auto-fix annotated assignments (FURB101) (#21278)
## Summary

Fixed FURB101 (`read-whole-file`) to handle annotated assignments.
Previously, the rule would detect violations in code like `contents: str
= f.read()` but fail to generate a fix. Now it correctly generates fixes
that preserve type annotations (e.g., `contents: str =
Path("file.txt").read_text(encoding="utf-8")`).

Fixes #21274

## Problem Analysis

The FURB101 rule was only checking for `Stmt::Assign` statements when
determining whether a fix could be applied. When encountering annotated
assignments (`Stmt::AnnAssign`) like `contents: str = f.read()`, the
rule would:

1. Correctly detect the violation (the diagnostic was reported)
2. Fail to generate a fix because:
   - The `visit_expr` method only matched `Stmt::Assign`, not
     `Stmt::AnnAssign`
   - The `generate_fix` function only accepted `Stmt::Assign` in its
     body validation
   - The replacement code generation didn't account for type annotations

This occurred because Python's AST represents annotated assignments as a
different node type (`StmtAnnAssign`) with separate fields for the
target, annotation, and value, unlike regular assignments, which use a
list of targets.
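
As a quick illustration, the same distinction shows up in CPython's own
`ast` module (Ruff's Rust AST mirrors it):

```py
import ast

annotated = ast.parse("contents: str = f.read()").body[0]
print(type(annotated).__name__)        # AnnAssign: separate target/annotation/value fields
print(ast.dump(annotated.annotation))  # Name(id='str', ctx=Load())

plain = ast.parse("contents = f.read()").body[0]
print(type(plain).__name__)            # Assign: a *list* of targets instead
print(len(plain.targets))              # 1
```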

## Approach

The fix extends the rule to handle both assignment types:

1. **Updated `visit_expr` method**: Now matches both `Stmt::Assign` and
`Stmt::AnnAssign`, extracting:
   - Variable name from the target expression
   - Type annotation code (when present) using the code generator

2. **Updated `generate_fix` function**:
   - Added an `annotation: Option<String>` parameter to accept the
     annotation code
   - Updated body validation to accept both `Stmt::Assign` and
     `Stmt::AnnAssign`
   - Modified replacement code generation to preserve annotations:
     `{var}: {annotation} = {binding}({filename_code}).{suggestion}`

3. **Added test case**: An annotated assignment test case verifies that
   the fix works correctly.

The implementation maintains backward compatibility with regular
assignments while adding support for annotated assignments, ensuring
type annotations are preserved in the generated fixes.
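
For concreteness, a minimal before/after sketch of the pattern the fix
now handles (the file name is illustrative):

```py
from pathlib import Path

Path("file.txt").write_text("hello", encoding="utf-8")

# Before: FURB101 flags this, but previously offered no fix for the
# annotated assignment.
with open("file.txt", encoding="utf-8") as f:
    contents: str = f.read()

# After: the generated fix preserves the annotation.
contents: str = Path("file.txt").read_text(encoding="utf-8")
```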

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-11-07 19:04:45 -05:00
Brent Westbrook
e06e108095 [flake8-annotations] Add link to allow-star-arg-any option (ANN401) (#21326)
Summary
--

Addresses
https://github.com/astral-sh/ruff/issues/19152#issuecomment-3501373508
by adding a link to the configuration option to the rule page.

Test Plan
--

Built the docs locally and made sure the link was present and working.
2025-11-07 18:45:53 -05:00
William Woodruff
b6add3ee6d chore: bump dist, remove old commenting workflows (#21302) 2025-11-07 17:09:29 -05:00
Alex Waygood
39c21d7c6c [ty] Generalize some infrastructure around type visitors (#21323)
We have lots of `TypeVisitor`s that end up having very similar
`visit_type` implementations. This PR consolidates some of the code for
these so that there's less repetition and duplication.
2025-11-07 16:26:30 -05:00
Micha Reiser
1617292e9f Update CodSpeedHQ/action action to v4.3.3 (#21254)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-07 19:39:52 +00:00
Douglas Creager
faae72b836 [ty] Clarify behavior of constraint sets for gradual upper bounds and constraints (#21287)
When checking whether a constraint set is satisfied, if a typevar has a
non-fully-static upper bound or constraint, we are free to choose any
materialization that makes the check succeed.

In non-inferable positions, we have to show that the constraint set is
satisfied for all valid specializations, so it's best to choose the most
restrictive materialization, since that minimizes the set of valid
specializations that have to pass.

In inferable positions, we only have to show that the constraint set is
satisfied for _some_ valid specializations, so it's best to choose the
most permissive materialization, since that maximizes our chances of
finding a specialization that passes.
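
A hypothetical sketch of the inferable case, using an illustrative
function whose typevar has a gradual (non-fully-static) upper bound:

```py
from typing import Any

def first_key[T: dict[str, Any]](mapping: T) -> str:
    return next(iter(mapping))

# Inferable position: at this call site the constraint set only needs to
# be satisfied for *some* valid specialization of `T`, so choosing the
# most permissive materialization of the gradual bound `dict[str, Any]`
# gives the best chance that `dict[str, int]` is accepted.
print(first_key({"a": 1}))
```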
2025-11-07 14:01:39 -05:00
Brent Westbrook
276f1d0d88 Remove duplicate preview tests for FURB101 and FURB103 (#21303)
Summary
--

These rules are themselves in preview, so we don't need the additional
preview checks on the fixes or the separate preview tests. This has
confused me in a couple of reviews of changes to the fixes.

Test Plan
--

Existing tests, with the fixes that were previously only shown in the
preview tests now appearing in the "non-preview" tests.
2025-11-07 12:47:21 -05:00
David Peter
ed18112cfa [ty] Add support for Literals in implicit type aliases (#21296)
## Summary

Add support for `Literal` types in implicit type aliases.

part of https://github.com/astral-sh/ty/issues/221

## Ecosystem analysis

This looks good to me; the changes are true positives and known problems.

## Test Plan

New Markdown tests.
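
A small illustration of the feature (names are made up): an implicit,
unannotated type alias whose value is a `Literal` type, used in an
annotation:

```py
from typing import Literal

Mode = Literal["r", "w", "a"]  # implicit type alias (no TypeAlias annotation)

def open_mode(mode: Mode) -> None:
    print(mode)

open_mode("r")
```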
2025-11-07 17:46:55 +01:00
Micha Reiser
8ba1cfebed [ty] Add missing heap_size to variance_of queries (#21318) 2025-11-07 16:18:28 +00:00
Dan Parizher
6185a2af9e [pyupgrade] Fix false positive on relative imports from local .builtins module (UP029) (#21309) 2025-11-07 16:01:52 +00:00
Micha Reiser
6cc3393ccd [ty] Make range/position conversions fallible (#21297)
Co-authored-by: Aria Desires <aria.desires@gmail.com>
2025-11-07 14:44:23 +00:00
148 changed files with 7693 additions and 2368 deletions

View File

@@ -231,6 +231,8 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: |
rustup component add clippy
@@ -251,16 +253,19 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
shared-key: ruff-linux-debug
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-insta
- name: "Install uv"
@@ -291,14 +296,6 @@ jobs:
env:
# Setting RUSTDOCFLAGS because `cargo doc --check` isn't yet implemented (https://github.com/rust-lang/cargo/issues/10025).
RUSTDOCFLAGS: "-D warnings"
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: ruff
path: target/debug/ruff
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: ty
path: target/debug/ty
cargo-test-linux-release:
name: "cargo test (linux, release)"
@@ -315,16 +312,18 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-insta
- name: "Install uv"
@@ -350,10 +349,12 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-nextest
- name: "Install uv"
@@ -376,6 +377,8 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
@@ -411,6 +414,8 @@ jobs:
file: "Cargo.toml"
field: "workspace.package.rust-version"
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
env:
MSRV: ${{ steps.msrv.outputs.value }}
@@ -435,12 +440,13 @@ jobs:
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
workspaces: "fuzz -> target"
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@b3f755e95653da9a2d25b99154edfdbd5b356d0a # v1.15.10
uses: cargo-bins/cargo-binstall@ae04fb5e853ae6cd3ad7de4a1d554a8b646d12aa # v1.15.11
- name: "Install cargo-fuzz"
# Download the latest version from quick install and not the github releases because github releases only has MUSL targets.
run: cargo binstall cargo-fuzz --force --disable-strategies crate-meta-data --no-confirm
@@ -449,9 +455,7 @@ jobs:
fuzz-parser:
name: "fuzz parser"
runs-on: ubuntu-latest
needs:
- cargo-test-linux
- determine_changes
needs: determine_changes
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && (needs.determine_changes.outputs.parser == 'true' || needs.determine_changes.outputs.py-fuzzer == 'true') }}
timeout-minutes: 20
env:
@@ -461,26 +465,23 @@ jobs:
with:
persist-credentials: false
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
name: Download Ruff binary to test
id: download-cached-binary
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
name: ruff
path: ruff-to-test
shared-key: ruff-linux-debug
save-if: false
- name: "Install Rust toolchain"
run: rustup show
- name: Build Ruff binary
run: cargo build --bin ruff
- name: Fuzz
env:
DOWNLOAD_PATH: ${{ steps.download-cached-binary.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x "${DOWNLOAD_PATH}/ruff"
(
uv run \
--python="${PYTHON_VERSION}" \
--project=./python/py-fuzzer \
--locked \
fuzz \
--test-executable="${DOWNLOAD_PATH}/ruff" \
--test-executable=target/debug/ruff \
--bin=ruff \
0-500
)
@@ -496,6 +497,8 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup component add rustfmt
@@ -520,9 +523,7 @@ jobs:
ecosystem:
name: "ecosystem"
runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-latest-8' || 'ubuntu-latest' }}
needs:
- cargo-test-linux
- determine_changes
needs: determine_changes
# Only runs on pull requests, since that is the only we way we can find the base version for comparison.
# Ecosystem check needs linter and/or formatter changes.
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && github.event_name == 'pull_request' && needs.determine_changes.outputs.code == 'true' }}
@@ -530,26 +531,37 @@ jobs:
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.base.ref }}
persist-credentials: false
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
python-version: ${{ env.PYTHON_VERSION }}
activate-environment: true
- uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
name: Download comparison Ruff binary
id: ruff-target
with:
name: ruff
path: target/debug
- name: "Install Rust toolchain"
run: rustup show
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download baseline Ruff binary
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
name: ruff
branch: ${{ github.event.pull_request.base.ref }}
workflow: "ci.yaml"
check_artifacts: true
shared-key: ruff-linux-debug
save-if: false
- name: Build baseline version
run: |
cargo build --bin ruff
mv target/debug/ruff target/debug/ruff-baseline
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
clean: false
- name: Build comparison version
run: cargo build --bin ruff
- name: Install ruff-ecosystem
run: |
@@ -557,16 +569,11 @@ jobs:
- name: Run `ruff check` stable ecosystem check
if: ${{ needs.determine_changes.outputs.linter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem check ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown | tee ecosystem-result-check-stable
ruff-ecosystem check ./target/debug/ruff-baseline ./target/debug/ruff --cache ./checkouts --output-format markdown | tee ecosystem-result-check-stable
cat ecosystem-result-check-stable > "$GITHUB_STEP_SUMMARY"
echo "### Linter (stable)" > ecosystem-result
@@ -575,16 +582,11 @@ jobs:
- name: Run `ruff check` preview ecosystem check
if: ${{ needs.determine_changes.outputs.linter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem check ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-check-preview
ruff-ecosystem check ./target/debug/ruff-baseline ./target/debug/ruff --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-check-preview
cat ecosystem-result-check-preview > "$GITHUB_STEP_SUMMARY"
echo "### Linter (preview)" >> ecosystem-result
@@ -593,16 +595,11 @@ jobs:
- name: Run `ruff format` stable ecosystem check
if: ${{ needs.determine_changes.outputs.formatter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem format ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown | tee ecosystem-result-format-stable
ruff-ecosystem format ./target/debug/ruff-baseline ./target/debug/ruff --cache ./checkouts --output-format markdown | tee ecosystem-result-format-stable
cat ecosystem-result-format-stable > "$GITHUB_STEP_SUMMARY"
echo "### Formatter (stable)" >> ecosystem-result
@@ -611,32 +608,19 @@ jobs:
- name: Run `ruff format` preview ecosystem check
if: ${{ needs.determine_changes.outputs.formatter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem format ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-format-preview
ruff-ecosystem format ./target/debug/ruff-baseline ./target/debug/ruff --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-format-preview
cat ecosystem-result-format-preview > "$GITHUB_STEP_SUMMARY"
echo "### Formatter (preview)" >> ecosystem-result
cat ecosystem-result-format-preview >> ecosystem-result
echo "" >> ecosystem-result
- name: Export pull request number
run: |
echo ${{ github.event.number }} > pr-number
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
name: Upload PR Number
with:
name: pr-number
path: pr-number
# NOTE: astral-sh-bot uses this artifact to post comments on PRs.
# Make sure to update the bot if you rename the artifact.
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
name: Upload Results
with:
@@ -658,6 +642,8 @@ jobs:
persist-credentials: false
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
@@ -700,7 +686,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: cargo-bins/cargo-binstall@b3f755e95653da9a2d25b99154edfdbd5b356d0a # v1.15.10
- uses: cargo-bins/cargo-binstall@ae04fb5e853ae6cd3ad7de4a1d554a8b646d12aa # v1.15.11
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear
@@ -715,12 +701,14 @@ jobs:
persist-credentials: false
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Run ty completion evaluation"
run: cargo run --release --package ty_completion_eval -- all --threshold 0.4 --tasks /tmp/completion-evaluation-tasks.csv
run: cargo run --profile profiling --package ty_completion_eval -- all --threshold 0.4 --tasks /tmp/completion-evaluation-tasks.csv
- name: "Ensure there are no changes"
run: diff ./crates/ty_completion_eval/completion-evaluation-tasks.csv /tmp/completion-evaluation-tasks.csv
@@ -738,6 +726,8 @@ jobs:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
@@ -762,6 +752,8 @@ jobs:
persist-credentials: false
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
@@ -792,6 +784,8 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
@@ -834,6 +828,8 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain"
run: rustup show
- name: "Run checks"
@@ -847,9 +843,7 @@ jobs:
name: "test ruff-lsp"
runs-on: ubuntu-latest
timeout-minutes: 5
needs:
- cargo-test-linux
- determine_changes
needs: determine_changes
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && (needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main') }}
steps:
- uses: extractions/setup-just@e33e0265a09d6d736e2ee1e0eb685ef1de4669ff # v3.0.0
@@ -857,37 +851,46 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
name: "Download ruff-lsp source"
name: "Checkout ruff source"
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
shared-key: ruff-linux-debug
save-if: false
- name: "Install Rust toolchain"
run: rustup show
- name: Build Ruff binary
run: cargo build -p ruff --bin ruff
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
name: "Checkout ruff-lsp source"
with:
persist-credentials: false
repository: "astral-sh/ruff-lsp"
path: ruff-lsp
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
# installation fails on 3.13 and newer
python-version: "3.12"
- uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
name: Download development ruff binary
id: ruff-target
with:
name: ruff
path: target/debug
- name: Install ruff-lsp dependencies
run: |
cd ruff-lsp
just install
- name: Run ruff-lsp tests
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Setup development binary
pip uninstall --yes ruff
chmod +x "${DOWNLOAD_PATH}/ruff"
export PATH="${DOWNLOAD_PATH}:${PATH}"
export PATH="${PWD}/target/debug:${PATH}"
ruff version
cd ruff-lsp
just test
check-playground:
@@ -904,6 +907,8 @@ jobs:
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
@@ -942,13 +947,15 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-codspeed
@@ -956,7 +963,7 @@ jobs:
run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench formatter --bench lexer --bench linter --bench parser
- name: "Run benchmarks"
uses: CodSpeedHQ/action@6b43a0cd438f6ca5ad26f9ed03ed159ed2df7da9 # v4.1.1
uses: CodSpeedHQ/action@bb005fe1c1eea036d3894f02c049cb6b154a1c27 # v4.3.3
with:
mode: instrumentation
run: cargo codspeed run
@@ -980,13 +987,15 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-codspeed
@@ -994,7 +1003,7 @@ jobs:
run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench ty
- name: "Run benchmarks"
uses: CodSpeedHQ/action@6b43a0cd438f6ca5ad26f9ed03ed159ed2df7da9 # v4.1.1
uses: CodSpeedHQ/action@bb005fe1c1eea036d3894f02c049cb6b154a1c27 # v4.3.3
with:
mode: instrumentation
run: cargo codspeed run
@@ -1018,13 +1027,15 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-codspeed
@@ -1032,7 +1043,7 @@ jobs:
run: cargo codspeed build --features "codspeed,walltime" --profile profiling --no-default-features -p ruff_benchmark
- name: "Run benchmarks"
uses: CodSpeedHQ/action@6b43a0cd438f6ca5ad26f9ed03ed159ed2df7da9 # v4.1.1
uses: CodSpeedHQ/action@bb005fe1c1eea036d3894f02c049cb6b154a1c27 # v4.3.3
env:
# enabling walltime flamegraphs adds ~6 minutes to the CI time, and they don't
# appear to provide much useful insight for our walltime benchmarks right now

View File

@@ -59,20 +59,15 @@ jobs:
run: |
cd ruff
scripts/mypy_primer.sh
echo ${{ github.event.number }} > ../pr-number
# NOTE: astral-sh-bot uses this artifact to post comments on PRs.
# Make sure to update the bot if you rename the artifact.
- name: Upload diff
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: mypy_primer_diff
path: mypy_primer.diff
- name: Upload pr-number
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: pr-number
path: pr-number
memory_usage:
name: Run memory statistics
runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-22.04-32' || 'ubuntu-latest' }}

View File

@@ -1,122 +0,0 @@
name: PR comment (mypy_primer)
on: # zizmor: ignore[dangerous-triggers]
workflow_run:
workflows: [Run mypy_primer]
types: [completed]
workflow_dispatch:
inputs:
workflow_run_id:
description: The mypy_primer workflow that triggers the workflow run
required: true
jobs:
comment:
runs-on: ubuntu-24.04
permissions:
pull-requests: write
steps:
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download PR number
with:
name: pr-number
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- name: Parse pull request number
id: pr-number
run: |
if [[ -f pr-number ]]
then
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download mypy_primer results"
id: download-mypy_primer_diff
if: steps.pr-number.outputs.pr-number
with:
name: mypy_primer_diff
workflow: mypy_primer.yaml
pr: ${{ steps.pr-number.outputs.pr-number }}
path: pr/mypy_primer_diff
workflow_conclusion: completed
if_no_artifact_found: ignore
allow_forks: true
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download mypy_primer memory results"
id: download-mypy_primer_memory_diff
if: steps.pr-number.outputs.pr-number
with:
name: mypy_primer_memory_diff
workflow: mypy_primer.yaml
pr: ${{ steps.pr-number.outputs.pr-number }}
path: pr/mypy_primer_memory_diff
workflow_conclusion: completed
if_no_artifact_found: ignore
allow_forks: true
- name: Generate comment content
id: generate-comment
if: ${{ steps.download-mypy_primer_diff.outputs.found_artifact == 'true' && steps.download-mypy_primer_memory_diff.outputs.found_artifact == 'true' }}
run: |
# Guard against malicious mypy_primer results that symlink to a secret
# file on this runner
if [[ -L pr/mypy_primer_diff/mypy_primer.diff ]] || [[ -L pr/mypy_primer_memory_diff/mypy_primer_memory.diff ]]
then
echo "Error: mypy_primer.diff and mypy_primer_memory.diff cannot be a symlink"
exit 1
fi
# Note this identifier is used to find the comment to update on
# subsequent runs
echo '<!-- generated-comment mypy_primer -->' >> comment.txt
echo '## `mypy_primer` results' >> comment.txt
if [ -s "pr/mypy_primer_diff/mypy_primer.diff" ]; then
echo '<details>' >> comment.txt
echo '<summary>Changes were detected when running on open source projects</summary>' >> comment.txt
echo '' >> comment.txt
echo '```diff' >> comment.txt
cat pr/mypy_primer_diff/mypy_primer.diff >> comment.txt
echo '```' >> comment.txt
echo '</details>' >> comment.txt
else
echo 'No ecosystem changes detected ✅' >> comment.txt
fi
if [ -s "pr/mypy_primer_memory_diff/mypy_primer_memory.diff" ]; then
echo '<details>' >> comment.txt
echo '<summary>Memory usage changes were detected when running on open source projects</summary>' >> comment.txt
echo '' >> comment.txt
echo '```diff' >> comment.txt
cat pr/mypy_primer_memory_diff/mypy_primer_memory.diff >> comment.txt
echo '```' >> comment.txt
echo '</details>' >> comment.txt
else
echo 'No memory usage changes detected ✅' >> comment.txt
fi
echo 'comment<<EOF' >> "$GITHUB_OUTPUT"
cat comment.txt >> "$GITHUB_OUTPUT"
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:
issue-number: ${{ steps.pr-number.outputs.pr-number }}
comment-author: "github-actions[bot]"
body-includes: "<!-- generated-comment mypy_primer -->"
- name: Create or update comment
if: steps.find-comment.outcome == 'success'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.pr-number.outputs.pr-number }}
body-path: comment.txt
edit-mode: replace

View File

@@ -1,88 +0,0 @@
name: Ecosystem check comment
on:
workflow_run:
workflows: [CI]
types: [completed]
workflow_dispatch:
inputs:
workflow_run_id:
description: The ecosystem workflow that triggers the workflow run
required: true
jobs:
comment:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download pull request number
with:
name: pr-number
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- name: Parse pull request number
id: pr-number
run: |
if [[ -f pr-number ]]
then
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download ecosystem results"
id: download-ecosystem-result
if: steps.pr-number.outputs.pr-number
with:
name: ecosystem-result
workflow: ci.yaml
pr: ${{ steps.pr-number.outputs.pr-number }}
path: pr/ecosystem
workflow_conclusion: completed
if_no_artifact_found: ignore
allow_forks: true
- name: Generate comment content
id: generate-comment
if: steps.download-ecosystem-result.outputs.found_artifact == 'true'
run: |
# Guard against malicious ecosystem results that symlink to a secret
# file on this runner
if [[ -L pr/ecosystem/ecosystem-result ]]
then
echo "Error: ecosystem-result cannot be a symlink"
exit 1
fi
# Note this identifier is used to find the comment to update on
# subsequent runs
echo '<!-- generated-comment ecosystem -->' >> comment.txt
echo '## `ruff-ecosystem` results' >> comment.txt
cat pr/ecosystem/ecosystem-result >> comment.txt
echo "" >> comment.txt
echo 'comment<<EOF' >> "$GITHUB_OUTPUT"
cat comment.txt >> "$GITHUB_OUTPUT"
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:
issue-number: ${{ steps.pr-number.outputs.pr-number }}
comment-author: "github-actions[bot]"
body-includes: "<!-- generated-comment ecosystem -->"
- name: Create or update comment
if: steps.find-comment.outcome == 'success'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.pr-number.outputs.pr-number }}
body-path: comment.txt
edit-mode: replace

View File

@@ -68,7 +68,7 @@ jobs:
# we specify bash to get pipefail; it guards against the `curl` command
# failing. otherwise `sh` won't catch that `curl` returned non-0
shell: bash
run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/axodotdev/cargo-dist/releases/download/v0.30.0/cargo-dist-installer.sh | sh"
run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/axodotdev/cargo-dist/releases/download/v0.30.2/cargo-dist-installer.sh | sh"
- name: Cache dist
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
@@ -166,8 +166,8 @@ jobs:
- custom-build-binaries
- custom-build-docker
- build-global-artifacts
# Only run if we're "publishing", and only if local and global didn't fail (skipped is fine)
if: ${{ always() && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.custom-build-binaries.result == 'skipped' || needs.custom-build-binaries.result == 'success') && (needs.custom-build-docker.result == 'skipped' || needs.custom-build-docker.result == 'success') }}
# Only run if we're "publishing", and only if plan, local and global didn't fail (skipped is fine)
if: ${{ always() && needs.plan.result == 'success' && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.custom-build-binaries.result == 'skipped' || needs.custom-build-binaries.result == 'success') && (needs.custom-build-docker.result == 'skipped' || needs.custom-build-docker.result == 'success') }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
runs-on: "depot-ubuntu-latest-4"

View File

@@ -207,12 +207,12 @@ jobs:
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
if: ${{ success() }}
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-nextest
- name: "Install cargo insta"
if: ${{ success() }}
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-insta
- name: Update snapshots

View File

@@ -112,8 +112,6 @@ jobs:
cat diff-statistics.md >> "$GITHUB_STEP_SUMMARY"
echo ${{ github.event.number }} > pr-number
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
id: deploy
@@ -131,18 +129,14 @@ jobs:
echo >> comment.md
echo "**[Full report with detailed diff]($DEPLOYMENT_URL/diff)** ([timing results]($DEPLOYMENT_URL/timing))" >> comment.md
# NOTE: astral-sh-bot uses this artifact to post comments on PRs.
# Make sure to update the bot if you rename the artifact.
- name: Upload comment
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: comment.md
path: comment.md
- name: Upload pr-number
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: pr-number
path: pr-number
- name: Upload diagnostics diff
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:

View File

@@ -1,85 +0,0 @@
name: PR comment (ty ecosystem-analyzer)
on: # zizmor: ignore[dangerous-triggers]
workflow_run:
workflows: [ty ecosystem-analyzer]
types: [completed]
workflow_dispatch:
inputs:
workflow_run_id:
description: The ty ecosystem-analyzer workflow that triggers the workflow run
required: true
jobs:
comment:
runs-on: ubuntu-24.04
permissions:
pull-requests: write
steps:
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download PR number
with:
name: pr-number
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- name: Parse pull request number
id: pr-number
run: |
if [[ -f pr-number ]]
then
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download comment.md"
id: download-comment
if: steps.pr-number.outputs.pr-number
with:
name: comment.md
workflow: ty-ecosystem-analyzer.yaml
pr: ${{ steps.pr-number.outputs.pr-number }}
path: pr/comment
workflow_conclusion: completed
if_no_artifact_found: ignore
allow_forks: true
- name: Generate comment content
id: generate-comment
if: ${{ steps.download-comment.outputs.found_artifact == 'true' }}
run: |
# Guard against malicious ty ecosystem-analyzer results that symlink to a secret
# file on this runner
if [[ -L pr/comment/comment.md ]]
then
echo "Error: comment.md cannot be a symlink"
exit 1
fi
# Note: this identifier is used to find the comment to update on subsequent runs
echo '<!-- generated-comment ty ecosystem-analyzer -->' > comment.md
echo >> comment.md
cat pr/comment/comment.md >> comment.md
echo 'comment<<EOF' >> "$GITHUB_OUTPUT"
cat comment.md >> "$GITHUB_OUTPUT"
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:
issue-number: ${{ steps.pr-number.outputs.pr-number }}
comment-author: "github-actions[bot]"
body-includes: "<!-- generated-comment ty ecosystem-analyzer -->"
- name: Create or update comment
if: steps.find-comment.outcome == 'success'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.pr-number.outputs.pr-number }}
body-path: comment.md
edit-mode: replace

View File

@@ -94,21 +94,18 @@ jobs:
touch typing_conformance_diagnostics.diff
fi
echo ${{ github.event.number }} > pr-number
echo "${CONFORMANCE_SUITE_COMMIT}" > conformance-suite-commit
# NOTE: astral-sh-bot uses this artifact to post comments on PRs.
# Make sure to update the bot if you rename the artifact.
- name: Upload diff
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: typing_conformance_diagnostics_diff
path: typing_conformance_diagnostics.diff
- name: Upload pr-number
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: pr-number
path: pr-number
# NOTE: astral-sh-bot uses this artifact to post comments on PRs.
# Make sure to update the bot if you rename the artifact.
- name: Upload conformance suite commit
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:

View File

@@ -1,112 +0,0 @@
name: PR comment (typing_conformance)
on: # zizmor: ignore[dangerous-triggers]
workflow_run:
workflows: [Run typing conformance]
types: [completed]
workflow_dispatch:
inputs:
workflow_run_id:
description: The typing_conformance workflow that triggers the workflow run
required: true
jobs:
comment:
runs-on: ubuntu-24.04
permissions:
pull-requests: write
steps:
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download PR number
with:
name: pr-number
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- name: Parse pull request number
id: pr-number
run: |
if [[ -f pr-number ]]
then
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download typing conformance suite commit
with:
name: conformance-suite-commit
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download typing_conformance results"
id: download-typing_conformance_diff
if: steps.pr-number.outputs.pr-number
with:
name: typing_conformance_diagnostics_diff
workflow: typing_conformance.yaml
pr: ${{ steps.pr-number.outputs.pr-number }}
path: pr/typing_conformance_diagnostics_diff
workflow_conclusion: completed
if_no_artifact_found: ignore
allow_forks: true
- name: Generate comment content
id: generate-comment
if: ${{ steps.download-typing_conformance_diff.outputs.found_artifact == 'true' }}
run: |
# Guard against malicious typing_conformance results that symlink to a secret
# file on this runner
if [[ -L pr/typing_conformance_diagnostics_diff/typing_conformance_diagnostics.diff ]]
then
echo "Error: typing_conformance_diagnostics.diff cannot be a symlink"
exit 1
fi
# Note this identifier is used to find the comment to update on
# subsequent runs
echo '<!-- generated-comment typing_conformance_diagnostics_diff -->' >> comment.txt
if [[ -f conformance-suite-commit ]]
then
echo "## Diagnostic diff on [typing conformance tests](https://github.com/python/typing/tree/$(<conformance-suite-commit)/conformance)" >> comment.txt
else
echo "conformance-suite-commit file not found"
echo "## Diagnostic diff on typing conformance tests" >> comment.txt
fi
if [ -s "pr/typing_conformance_diagnostics_diff/typing_conformance_diagnostics.diff" ]; then
echo '<details>' >> comment.txt
echo '<summary>Changes were detected when running ty on typing conformance tests</summary>' >> comment.txt
echo '' >> comment.txt
echo '```diff' >> comment.txt
cat pr/typing_conformance_diagnostics_diff/typing_conformance_diagnostics.diff >> comment.txt
echo '```' >> comment.txt
echo '</details>' >> comment.txt
else
echo 'No changes detected when running ty on typing conformance tests ✅' >> comment.txt
fi
echo 'comment<<EOF' >> "$GITHUB_OUTPUT"
cat comment.txt >> "$GITHUB_OUTPUT"
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:
issue-number: ${{ steps.pr-number.outputs.pr-number }}
comment-author: "github-actions[bot]"
body-includes: "<!-- generated-comment typing_conformance_diagnostics_diff -->"
- name: Create or update comment
if: steps.find-comment.outcome == 'success'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.pr-number.outputs.pr-number }}
body-path: comment.txt
edit-mode: replace

.github/zizmor.yml (vendored), 3 lines changed
View File

@@ -3,9 +3,6 @@
#
# TODO: can we remove the ignores here so that our workflows are more secure?
rules:
dangerous-triggers:
ignore:
- pr-comment.yaml
cache-poisoning:
ignore:
- build-docker.yml

View File

@@ -280,6 +280,55 @@ Note that plugin-specific configuration options are defined in their own modules
Finally, regenerate the documentation and generated code with `cargo dev generate-all`.
### Opening a PR
After you finish your changes, the next step is to open a PR. By default, the PR
body will be pre-filled with two sections: the summary and the test plan.
#### The summary
The summary is intended to give us, as maintainers, information about your PR.
This should typically include a link to the relevant issue(s) you're addressing
in your PR, as well as a summary of the issue and your approach to fixing it. If
you have any questions about your approach or design, or if you considered
alternative approaches, that can also be helpful to include.
AI can be helpful in generating both the code and summary of your PR, but a
successful contribution should still be carefully reviewed by you and the
summary editorialized before submitting a PR. A great summary is thorough but
also succinct and gives us the context we need to review your PR.
You can find examples of excellent issues and PRs by searching for the
[`great writeup`](https://github.com/astral-sh/ruff/issues?q=label%3A%22great%20writeup%22)
label.
#### The test plan
The test plan is likely to be shorter than the summary and can be as simple as
"Added new snapshot tests for `RUF123`," at least for rule bugs. For LSP or some
types of CLI changes, in particular, it can also be helpful to include
screenshots or recordings of your change in action.
#### Ecosystem report
After opening the PR, an ecosystem report will be run as part of CI. This shows
a diff of linter and formatter behavior before and after the changes in your PR.
Going through these changes and reporting your findings in the PR summary or an
additional comment helps us review your PR more efficiently. It's also a great
way to find new test cases to incorporate into your PR if you identify any
issues.
#### PR status
To help us know when your PR is ready for review again, please either move your
PR back to a draft while working on it (marking it ready for review afterwards
will ping the previous reviewers) or explicitly re-request a review. This helps
us to avoid re-reviewing a PR while you're still working on it and also to
prioritize PRs that are definitely ready for review.
You can also give a thumbs-up to any comments we leave, or mark them as resolved, to
let us know you've addressed them.
## MkDocs
> [!NOTE]

Cargo.lock (generated), 36 lines changed
View File

@@ -642,7 +642,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "117725a109d387c937a1533ce01b450cbde6b88abceea8473c4d7a85853cda3c"
dependencies = [
"lazy_static",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -651,7 +651,7 @@ version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fde0e0ec90c9dfb3b4b1a0891a7dcd0e2bffde2f7efed5fe7c9bb00e5bfb915e"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -1016,7 +1016,7 @@ dependencies = [
"libc",
"option-ext",
"redox_users",
"windows-sys 0.59.0",
"windows-sys 0.60.2",
]
[[package]]
@@ -1698,7 +1698,7 @@ checksum = "e04d7f318608d35d4b61ddd75cbdaee86b023ebe2bd5a66ee0915f0bf93095a9"
dependencies = [
"hermit-abi",
"libc",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -1752,24 +1752,24 @@ checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
[[package]]
name = "jiff"
version = "0.2.15"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be1f93b8b1eb69c77f24bbb0afdf66f54b632ee39af40ca21c4365a1d7347e49"
checksum = "49cce2b81f2098e7e3efc35bc2e0a6b7abec9d34128283d7a26fa8f32a6dbb35"
dependencies = [
"jiff-static",
"jiff-tzdb-platform",
"log",
"portable-atomic",
"portable-atomic-util",
"serde",
"windows-sys 0.59.0",
"serde_core",
"windows-sys 0.52.0",
]
[[package]]
name = "jiff-static"
version = "0.2.15"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03343451ff899767262ec32146f6d559dd759fdadf42ff0e227c7c48f72594b4"
checksum = "980af8b43c3ad5d8d349ace167ec8170839f753a42d233ba19e08afe1850fa69"
dependencies = [
"proc-macro2",
"quote",
@@ -1851,9 +1851,9 @@ checksum = "2874a2af47a2325c2001a6e6fad9b16a53b802102b528163885171cf92b15976"
[[package]]
name = "libcst"
version = "1.8.5"
version = "1.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d56bcd52d9b5e5f43e7fba20eb1f423ccb18c84cdf1cb506b8c1b95776b0b49"
checksum = "6aea7143e4a0ed59b87a1ee71e198500889f8b005311136be15e84c97a6fcd8d"
dependencies = [
"annotate-snippets",
"libcst_derive",
@@ -1866,9 +1866,9 @@ dependencies = [
[[package]]
name = "libcst_derive"
version = "1.8.5"
version = "1.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fcf5a725c4db703660124fe0edb98285f1605d0b87b7ee8684b699764a4f01a"
checksum = "0903173ea316c34a44d0497161e04d9210af44f5f5e89bf2f55d9a254c9a0e8d"
dependencies = [
"quote",
"syn",
@@ -2650,9 +2650,9 @@ dependencies = [
[[package]]
name = "quote"
version = "1.0.41"
version = "1.0.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce25767e7b499d1b604768e7cde645d14cc8584231ea6b295e9c9eb22c02e1d1"
checksum = "a338cc41d27e6cc6dce6cefc13a0729dfbb81c262b1f519331575dd80ef3067f"
dependencies = [
"proc-macro2",
]
@@ -3927,9 +3927,9 @@ dependencies = [
[[package]]
name = "syn"
version = "2.0.108"
version = "2.0.110"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da58917d35242480a05c2897064da0a80589a2a0476c9a3f2fdc83b53502e917"
checksum = "a99801b5bd34ede4cf3fc688c5919368fea4e4814a4664359503e6015b280aea"
dependencies = [
"proc-macro2",
"quote",

View File

@@ -7,6 +7,8 @@ use std::sync::Arc;
use crate::commands::completions::config::{OptionString, OptionStringParser};
use anyhow::bail;
use clap::builder::Styles;
use clap::builder::styling::{AnsiColor, Effects};
use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{Parser, Subcommand, command};
use colored::Colorize;
@@ -78,6 +80,13 @@ impl GlobalConfigArgs {
}
}
// Configures Clap v3-style help menu colors
const STYLES: Styles = Styles::styled()
.header(AnsiColor::Green.on_default().effects(Effects::BOLD))
.usage(AnsiColor::Green.on_default().effects(Effects::BOLD))
.literal(AnsiColor::Cyan.on_default().effects(Effects::BOLD))
.placeholder(AnsiColor::Cyan.on_default());
#[derive(Debug, Parser)]
#[command(
author,
@@ -86,6 +95,7 @@ impl GlobalConfigArgs {
after_help = "For help with a specific command, see: `ruff help <command>`."
)]
#[command(version)]
#[command(styles = STYLES)]
pub struct Args {
#[command(subcommand)]
pub(crate) command: Command,
@@ -405,8 +415,13 @@ pub struct CheckCommand {
)]
pub statistics: bool,
/// Enable automatic additions of `noqa` directives to failing lines.
/// Optionally provide a reason to append after the codes.
#[arg(
long,
value_name = "REASON",
default_missing_value = "",
num_args = 0..=1,
require_equals = true,
// conflicts_with = "add_noqa",
conflicts_with = "show_files",
conflicts_with = "show_settings",
@@ -418,7 +433,7 @@ pub struct CheckCommand {
conflicts_with = "fix",
conflicts_with = "diff",
)]
pub add_noqa: bool,
pub add_noqa: Option<String>,
/// See the files Ruff will be run against with the current settings.
#[arg(
long,
@@ -1047,7 +1062,7 @@ Possible choices:
/// etc.).
#[expect(clippy::struct_excessive_bools)]
pub struct CheckArguments {
pub add_noqa: bool,
pub add_noqa: Option<String>,
pub diff: bool,
pub exit_non_zero_on_fix: bool,
pub exit_zero: bool,
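
For context, a minimal usage sketch of the extended flag (the file and reason text are illustrative; the behavior matches the CLI test added below): running `ruff check --add-noqa="TODO: fix" --select F401,F841 test.py` against a file containing an unused `import os` and an unused local `x = 1` rewrites those lines to `import os  # noqa: F401 TODO: fix` and `x = 1  # noqa: F841 TODO: fix`, i.e. the reason is appended after the codes on each generated directive. Passing `--add-noqa` with no value keeps the previous behavior (no reason appended), and a reason containing newline characters is rejected with an error.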

View File

@@ -21,6 +21,7 @@ pub(crate) fn add_noqa(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
reason: Option<&str>,
) -> Result<usize> {
// Collect all the files to check.
let start = Instant::now();
@@ -76,7 +77,14 @@ pub(crate) fn add_noqa(
return None;
}
};
match add_noqa_to_path(path, package, &source_kind, source_type, &settings.linter) {
match add_noqa_to_path(
path,
package,
&source_kind,
source_type,
&settings.linter,
reason,
) {
Ok(count) => Some(count),
Err(e) => {
error!("Failed to add noqa to {}: {e}", path.display());

View File

@@ -16,6 +16,8 @@ struct LinterInfo {
prefix: &'static str,
name: &'static str,
#[serde(skip_serializing_if = "Option::is_none")]
url: Option<&'static str>,
#[serde(skip_serializing_if = "Option::is_none")]
categories: Option<Vec<LinterCategoryInfo>>,
}
@@ -50,6 +52,7 @@ pub(crate) fn linter(format: HelpFormat) -> Result<()> {
.map(|linter_info| LinterInfo {
prefix: linter_info.common_prefix(),
name: linter_info.name(),
url: linter_info.url(),
categories: linter_info.upstream_categories().map(|cats| {
cats.iter()
.map(|c| LinterCategoryInfo {

View File

@@ -319,12 +319,20 @@ pub fn check(args: CheckCommand, global_options: GlobalConfigArgs) -> Result<Exi
warn_user!("Detected debug build without --no-cache.");
}
if cli.add_noqa {
if let Some(reason) = &cli.add_noqa {
if !fix_mode.is_generate() {
warn_user!("--fix is incompatible with --add-noqa.");
}
if reason.contains(['\n', '\r']) {
return Err(anyhow::anyhow!(
"--add-noqa <reason> cannot contain newline characters"
));
}
let reason_opt = (!reason.is_empty()).then_some(reason.as_str());
let modifications =
commands::add_noqa::add_noqa(&files, &pyproject_config, &config_arguments)?;
commands::add_noqa::add_noqa(&files, &pyproject_config, &config_arguments, reason_opt)?;
if modifications > 0 && config_arguments.log_level >= LogLevel::Default {
let s = if modifications == 1 { "" } else { "s" };
#[expect(clippy::print_stderr)]

View File

@@ -1760,6 +1760,64 @@ from foo import ( # noqa: F401
Ok(())
}
#[test]
fn add_noqa_with_reason() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"test.py",
r#"import os
def foo():
x = 1
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.arg("--add-noqa=TODO: fix")
.arg("--select=F401,F841")
.arg("test.py"), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
Added 2 noqa directives.
");
let content = fs::read_to_string(fixture.root().join("test.py"))?;
insta::assert_snapshot!(content, @r"
import os # noqa: F401 TODO: fix
def foo():
x = 1 # noqa: F841 TODO: fix
");
Ok(())
}
#[test]
fn add_noqa_with_newline_in_reason() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file("test.py", "import os\n")?;
assert_cmd_snapshot!(fixture
.check_command()
.arg("--add-noqa=line1\nline2")
.arg("--select=F401")
.arg("test.py"), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: --add-noqa <reason> cannot contain newline characters
"###);
Ok(())
}
/// Infer `3.11` from `requires-python` in `pyproject.toml`.
#[test]
fn requires_python() -> Result<()> {

View File

@@ -112,16 +112,16 @@ impl std::fmt::Display for Diff<'_> {
// `None`, indicating a regular script file, all the lines will be in one "cell" under the
// `None` key.
let cells = if let Some(notebook_index) = &self.notebook_index {
let mut last_cell = OneIndexed::MIN;
let mut last_cell_index = OneIndexed::MIN;
let mut cells: Vec<(Option<OneIndexed>, TextSize)> = Vec::new();
for (row, cell) in notebook_index.iter() {
if cell != last_cell {
let offset = source_code.line_start(row);
cells.push((Some(last_cell), offset));
last_cell = cell;
for cell in notebook_index.iter() {
if cell.cell_index() != last_cell_index {
let offset = source_code.line_start(cell.start_row());
cells.push((Some(last_cell_index), offset));
last_cell_index = cell.cell_index();
}
}
cells.push((Some(last_cell), source_text.text_len()));
cells.push((Some(last_cell_index), source_text.text_len()));
cells
} else {
vec![(None, source_text.text_len())]

View File

@@ -475,6 +475,12 @@ impl File {
self.path(db).as_str().ends_with("__init__.pyi")
}
/// Returns `true` if the file is an `__init__.pyi` or `__init__.py` file
pub fn is_package(self, db: &dyn Db) -> bool {
let path = self.path(db).as_str();
path.ends_with("__init__.pyi") || path.ends_with("__init__.py")
}
pub fn source_type(self, db: &dyn Db) -> PySourceType {
match self.path(db) {
FilePath::System(path) => path

View File

@@ -204,3 +204,15 @@ x = 1
print(f"{x=}" or "bar") # SIM222
(lambda: 1) or True # SIM222
(i for i in range(1)) or "bar" # SIM222
# https://github.com/astral-sh/ruff/issues/21136
def get_items():
return tuple(item for item in Item.objects.all()) or None # OK
def get_items_list():
return tuple([item for item in items]) or None # OK
def get_items_set():
return tuple({item for item in items}) or None # OK

View File

@@ -0,0 +1,5 @@
from .builtins import next
from ..builtins import str
from ...builtins import int
from .builtins import next as _next

View File

@@ -125,3 +125,18 @@ with open(*filename, mode="r") as f:
# `buffering`.
with open(*filename, file="file.txt", mode="r") as f:
x = f.read()
# FURB101
with open("file.txt", encoding="utf-8") as f:
contents: str = f.read()
# FURB101 but no fix because it would remove the assignment to `x`
with open("file.txt", encoding="utf-8") as f:
contents, x = f.read(), 2
# FURB101 but no fix because it would remove the `process_contents` call
with open("file.txt", encoding="utf-8") as f:
contents = process_contents(f.read())
with open("file.txt", encoding="utf-8") as f:
contents: str = process_contents(f.read())

View File

@@ -19,6 +19,9 @@ print("", *args, sep="")
print("", **kwargs)
print(sep="\t")
print(sep=print(1))
print(f"")
print(f"", sep=",")
print(f"", end="bar")
# OK.
@@ -33,3 +36,4 @@ print("foo", "", sep=",")
print("foo", "", "bar", "", sep=",")
print("", "", **kwargs)
print(*args, sep=",")
print(f"foo")

View File

@@ -0,0 +1,18 @@
import logging
# Test cases for str() that should NOT be flagged (issue #21315)
# str() with no arguments - should not be flagged
logging.warning("%s", str())
# str() with multiple arguments - should not be flagged
logging.warning("%s", str(b"\xe2\x9a\xa0", "utf-8"))
# str() with starred arguments - should not be flagged
logging.warning("%s", str(*(b"\xf0\x9f\x9a\xa7", "utf-8")))
# str() with keyword unpacking - should not be flagged
logging.warning("%s", str(**{"object": b"\xf0\x9f\x9a\xa8", "encoding": "utf-8"}))
# str() with single keyword argument - should be flagged (equivalent to str("!"))
logging.warning("%s", str(object="!"))

View File

@@ -717,7 +717,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
if checker.is_rule_enabled(Rule::UnnecessaryBuiltinImport) {
if let Some(module) = module {
pyupgrade::rules::unnecessary_builtin_import(checker, stmt, module, names);
pyupgrade::rules::unnecessary_builtin_import(
checker, stmt, module, names, level,
);
}
}
if checker.any_rule_enabled(&[

View File

@@ -51,13 +51,17 @@ impl<'de> serde::Deserialize<'de> for LineLength {
where
D: serde::Deserializer<'de>,
{
let value = u16::deserialize(deserializer)?;
Self::try_from(value).map_err(|_| {
serde::de::Error::custom(format!(
"line-length must be between 1 and {} (got {value})",
Self::MAX,
))
})
let value = i64::deserialize(deserializer)?;
u16::try_from(value)
.ok()
.and_then(|u16_value| Self::try_from(u16_value).ok())
.ok_or_else(|| {
serde::de::Error::custom(format!(
"line-length must be between 1 and {} (got {value})",
Self::MAX,
))
})
}
}
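
A short note on the change above: deserializing into an `i64` first (rather than directly into a `u16`) presumably ensures that out-of-range config values, such as a negative number or one larger than `u16::MAX`, still reach the custom "line-length must be between 1 and ..." message with the offending value included, instead of failing earlier inside the integer deserializer with a generic out-of-range error.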

View File

@@ -377,6 +377,7 @@ pub fn add_noqa_to_path(
source_kind: &SourceKind,
source_type: PySourceType,
settings: &LinterSettings,
reason: Option<&str>,
) -> Result<usize> {
// Parse once.
let target_version = settings.resolve_target_version(path);
@@ -425,6 +426,7 @@ pub fn add_noqa_to_path(
&settings.external,
&directives.noqa_line_for,
stylist.line_ending(),
reason,
)
}

View File

@@ -39,7 +39,7 @@ pub fn generate_noqa_edits(
let exemption = FileExemption::from(&file_directives);
let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
build_noqa_edits_by_diagnostic(comments, locator, line_ending)
build_noqa_edits_by_diagnostic(comments, locator, line_ending, None)
}
/// A directive to ignore a set of rules either for a given line of Python source code or an entire file (e.g.,
@@ -715,6 +715,7 @@ impl Display for LexicalError {
impl Error for LexicalError {}
/// Adds noqa comments to suppress all messages of a file.
#[expect(clippy::too_many_arguments)]
pub(crate) fn add_noqa(
path: &Path,
diagnostics: &[Diagnostic],
@@ -723,6 +724,7 @@ pub(crate) fn add_noqa(
external: &[String],
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
) -> Result<usize> {
let (count, output) = add_noqa_inner(
path,
@@ -732,12 +734,14 @@ pub(crate) fn add_noqa(
external,
noqa_line_for,
line_ending,
reason,
);
fs::write(path, output)?;
Ok(count)
}
#[expect(clippy::too_many_arguments)]
fn add_noqa_inner(
path: &Path,
diagnostics: &[Diagnostic],
@@ -746,6 +750,7 @@ fn add_noqa_inner(
external: &[String],
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
) -> (usize, String) {
let mut count = 0;
@@ -757,7 +762,7 @@ fn add_noqa_inner(
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
let edits = build_noqa_edits_by_line(comments, locator, line_ending);
let edits = build_noqa_edits_by_line(comments, locator, line_ending, reason);
let contents = locator.contents();
@@ -783,6 +788,7 @@ fn build_noqa_edits_by_diagnostic(
comments: Vec<Option<NoqaComment>>,
locator: &Locator,
line_ending: LineEnding,
reason: Option<&str>,
) -> Vec<Option<Edit>> {
let mut edits = Vec::default();
for comment in comments {
@@ -794,6 +800,7 @@ fn build_noqa_edits_by_diagnostic(
FxHashSet::from_iter([comment.code]),
locator,
line_ending,
reason,
) {
edits.push(Some(noqa_edit.into_edit()));
}
@@ -808,6 +815,7 @@ fn build_noqa_edits_by_line<'a>(
comments: Vec<Option<NoqaComment<'a>>>,
locator: &Locator,
line_ending: LineEnding,
reason: Option<&'a str>,
) -> BTreeMap<TextSize, NoqaEdit<'a>> {
let mut comments_by_line = BTreeMap::default();
for comment in comments.into_iter().flatten() {
@@ -831,6 +839,7 @@ fn build_noqa_edits_by_line<'a>(
.collect(),
locator,
line_ending,
reason,
) {
edits.insert(offset, edit);
}
@@ -927,6 +936,7 @@ struct NoqaEdit<'a> {
noqa_codes: FxHashSet<&'a SecondaryCode>,
codes: Option<&'a Codes<'a>>,
line_ending: LineEnding,
reason: Option<&'a str>,
}
impl NoqaEdit<'_> {
@@ -954,6 +964,9 @@ impl NoqaEdit<'_> {
push_codes(writer, self.noqa_codes.iter().sorted_unstable());
}
}
if let Some(reason) = self.reason {
write!(writer, " {reason}").unwrap();
}
write!(writer, "{}", self.line_ending.as_str()).unwrap();
}
}
@@ -970,6 +983,7 @@ fn generate_noqa_edit<'a>(
noqa_codes: FxHashSet<&'a SecondaryCode>,
locator: &Locator,
line_ending: LineEnding,
reason: Option<&'a str>,
) -> Option<NoqaEdit<'a>> {
let line_range = locator.full_line_range(offset);
@@ -999,6 +1013,7 @@ fn generate_noqa_edit<'a>(
noqa_codes,
codes,
line_ending,
reason,
})
}
@@ -2832,6 +2847,7 @@ mod tests {
&[],
&noqa_line_for,
LineEnding::Lf,
None,
);
assert_eq!(count, 0);
assert_eq!(output, format!("{contents}"));
@@ -2855,6 +2871,7 @@ mod tests {
&[],
&noqa_line_for,
LineEnding::Lf,
None,
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: F841\n");
@@ -2885,6 +2902,7 @@ mod tests {
&[],
&noqa_line_for,
LineEnding::Lf,
None,
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: E741, F841\n");
@@ -2915,6 +2933,7 @@ mod tests {
&[],
&noqa_line_for,
LineEnding::Lf,
None,
);
assert_eq!(count, 0);
assert_eq!(output, "x = 1 # noqa");

View File

@@ -261,16 +261,6 @@ pub(crate) const fn is_b006_unsafe_fix_preserve_assignment_expr_enabled(
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/20520
pub(crate) const fn is_fix_read_whole_file_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/20520
pub(crate) const fn is_fix_write_whole_file_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
pub(crate) const fn is_typing_extensions_str_alias_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}

View File

@@ -513,6 +513,9 @@ impl Violation for MissingReturnTypeClassMethod {
/// def foo(x: MyAny): ...
/// ```
///
/// ## Options
/// - `lint.flake8-annotations.allow-star-arg-any`
///
/// ## References
/// - [Typing spec: `Any`](https://typing.python.org/en/latest/spec/special-types.html#any)
/// - [Python documentation: `typing.Any`](https://docs.python.org/3/library/typing.html#typing.Any)

View File

@@ -1101,6 +1101,7 @@ help: Replace with `f"{x=}"`
204 + print(f"{x=}") # SIM222
205 | (lambda: 1) or True # SIM222
206 | (i for i in range(1)) or "bar" # SIM222
207 |
note: This is an unsafe fix and may change runtime behavior
SIM222 [*] Use `lambda: 1` instead of `lambda: 1 or ...`
@@ -1119,6 +1120,8 @@ help: Replace with `lambda: 1`
- (lambda: 1) or True # SIM222
205 + lambda: 1 # SIM222
206 | (i for i in range(1)) or "bar" # SIM222
207 |
208 | # https://github.com/astral-sh/ruff/issues/21136
note: This is an unsafe fix and may change runtime behavior
SIM222 [*] Use `(i for i in range(1))` instead of `(i for i in range(1)) or ...`
@@ -1128,6 +1131,8 @@ SIM222 [*] Use `(i for i in range(1))` instead of `(i for i in range(1)) or ...`
205 | (lambda: 1) or True # SIM222
206 | (i for i in range(1)) or "bar" # SIM222
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
207 |
208 | # https://github.com/astral-sh/ruff/issues/21136
|
help: Replace with `(i for i in range(1))`
203 | x = 1
@@ -1135,4 +1140,7 @@ help: Replace with `(i for i in range(1))`
205 | (lambda: 1) or True # SIM222
- (i for i in range(1)) or "bar" # SIM222
206 + (i for i in range(1)) # SIM222
207 |
208 | # https://github.com/astral-sh/ruff/issues/21136
209 | def get_items():
note: This is an unsafe fix and may change runtime behavior

View File

@@ -99,6 +99,7 @@ mod tests {
#[test_case(Rule::UTF8EncodingDeclaration, Path::new("UP009_many_empty_lines.py"))]
#[test_case(Rule::UnicodeKindPrefix, Path::new("UP025.py"))]
#[test_case(Rule::UnnecessaryBuiltinImport, Path::new("UP029_0.py"))]
#[test_case(Rule::UnnecessaryBuiltinImport, Path::new("UP029_2.py"))]
#[test_case(Rule::UnnecessaryClassParentheses, Path::new("UP039.py"))]
#[test_case(Rule::UnnecessaryDefaultTypeArgs, Path::new("UP043.py"))]
#[test_case(Rule::UnnecessaryEncodeUTF8, Path::new("UP012.py"))]

View File

@@ -75,7 +75,13 @@ pub(crate) fn unnecessary_builtin_import(
stmt: &Stmt,
module: &str,
names: &[Alias],
level: u32,
) {
// Ignore relative imports (they're importing from local modules, not Python's builtins).
if level > 0 {
return;
}
// Ignore irrelevant modules.
if !matches!(
module,
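
A minimal Python illustration of the new `level` check (module layout is hypothetical; compare the `UP029_2.py` fixture above, which is expected to produce no diagnostics):

    # mypkg/module.py
    from builtins import next      # would be flagged: absolute import of a builtin name
    from .builtins import next     # OK: relative import targets a local `builtins` module
    from ..builtins import str     # OK: also relative, so the rule ignores it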

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pyupgrade/mod.rs
---

View File

@@ -12,7 +12,6 @@ mod tests {
use test_case::test_case;
use crate::registry::Rule;
use crate::settings::types::PreviewMode;
use crate::test::test_path;
use crate::{assert_diagnostics, settings};
@@ -63,25 +62,6 @@ mod tests {
Ok(())
}
#[test_case(Rule::ReadWholeFile, Path::new("FURB101.py"))]
#[test_case(Rule::WriteWholeFile, Path::new("FURB103.py"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview_{}_{}",
rule_code.noqa_code(),
path.to_string_lossy()
);
let diagnostics = test_path(
Path::new("refurb").join(path).as_path(),
&settings::LinterSettings {
preview: PreviewMode::Enabled,
..settings::LinterSettings::for_rule(rule_code)
},
)?;
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
#[test]
fn write_whole_file_python_39() -> Result<()> {
let diagnostics = test_path(

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::contains_effect;
use ruff_python_ast::helpers::{contains_effect, is_empty_f_string};
use ruff_python_ast::{self as ast, Expr};
use ruff_python_codegen::Generator;
use ruff_python_semantic::SemanticModel;
@@ -194,13 +194,11 @@ pub(crate) fn print_empty_string(checker: &Checker, call: &ast::ExprCall) {
/// Check if an expression is a constant empty string.
fn is_empty_string(expr: &Expr) -> bool {
matches!(
expr,
Expr::StringLiteral(ast::ExprStringLiteral {
value,
..
}) if value.is_empty()
)
match expr {
Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value.is_empty(),
Expr::FString(f_string) => is_empty_f_string(f_string),
_ => false,
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
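
In Python terms (cases taken from the updated FURB105 fixture), an empty f-string now counts as an unnecessary empty string:

    print(f"")              # FURB105: equivalent to print(""), fixed to print()
    print(f"", sep=",")     # FURB105: empty string and separator are both unnecessary
    print(f"foo")           # OK: the f-string is not empty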

View File

@@ -125,24 +125,8 @@ impl<'a> Visitor<'a> for ReadMatcher<'a, '_> {
open.item.range(),
);
if !crate::preview::is_fix_read_whole_file_enabled(self.checker.settings()) {
return;
}
let target = match self.with_stmt.body.first() {
Some(Stmt::Assign(assign))
if assign.value.range().contains_range(expr.range()) =>
{
match assign.targets.first() {
Some(Expr::Name(name)) => Some(name.id.as_str()),
_ => None,
}
}
_ => None,
};
if let Some(fix) =
generate_fix(self.checker, &open, target, self.with_stmt, &suggestion)
generate_fix(self.checker, &open, expr, self.with_stmt, &suggestion)
{
diagnostic.set_fix(fix);
}
@@ -194,15 +178,16 @@ fn make_suggestion(open: &FileOpen<'_>, generator: Generator) -> String {
fn generate_fix(
checker: &Checker,
open: &FileOpen,
target: Option<&str>,
expr: &Expr,
with_stmt: &ast::StmtWith,
suggestion: &str,
) -> Option<Fix> {
if !(with_stmt.items.len() == 1 && matches!(with_stmt.body.as_slice(), [Stmt::Assign(_)])) {
if with_stmt.items.len() != 1 {
return None;
}
let locator = checker.locator();
let filename_code = locator.slice(open.filename.range());
let (import_edit, binding) = checker
@@ -214,9 +199,39 @@ fn generate_fix(
)
.ok()?;
let replacement = match target {
Some(var) => format!("{var} = {binding}({filename_code}).{suggestion}"),
None => format!("{binding}({filename_code}).{suggestion}"),
// Only replace context managers with a single assignment or annotated assignment in the body.
// The assignment's RHS must also be the same as the `read` call in `expr`, otherwise this fix
// would remove the rest of the expression.
let replacement = match with_stmt.body.as_slice() {
[Stmt::Assign(ast::StmtAssign { targets, value, .. })] if value.range() == expr.range() => {
match targets.as_slice() {
[Expr::Name(name)] => {
format!(
"{name} = {binding}({filename_code}).{suggestion}",
name = name.id
)
}
_ => return None,
}
}
[
Stmt::AnnAssign(ast::StmtAnnAssign {
target,
annotation,
value: Some(value),
..
}),
] if value.range() == expr.range() => match target.as_ref() {
Expr::Name(name) => {
format!(
"{var}: {ann} = {binding}({filename_code}).{suggestion}",
var = name.id,
ann = locator.slice(annotation.range())
)
}
_ => return None,
},
_ => return None,
};
let applicability = if checker.comment_ranges().intersects(with_stmt.range()) {
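
Sketching the effect in Python (cases taken from the FURB101 fixture added above; `process_contents` is just a placeholder name from that fixture): the fix now also applies when the body is a single annotated assignment whose right-hand side is exactly the `read()` call, while anything more involved keeps the diagnostic but is left unfixed.

    # Fixable: single annotated assignment, RHS is exactly f.read()
    with open("file.txt", encoding="utf-8") as f:
        contents: str = f.read()
    # -> contents: str = pathlib.Path("file.txt").read_text(encoding="utf-8")

    # Diagnostic only, no fix: rewriting would drop the surrounding call
    with open("file.txt", encoding="utf-8") as f:
        contents = process_contents(f.read())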

View File

@@ -141,10 +141,6 @@ impl<'a> Visitor<'a> for WriteMatcher<'a, '_> {
open.item.range(),
);
if !crate::preview::is_fix_write_whole_file_enabled(self.checker.settings()) {
return;
}
if let Some(fix) =
generate_fix(self.checker, &open, self.with_stmt, &suggestion)
{

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:12:6
|
11 | # FURB101
@@ -10,8 +10,22 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
13 | x = f.read()
|
help: Replace with `Path("file.txt").read_text()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
10 | # Errors.
11 |
12 | # FURB101
- with open("file.txt") as f:
- x = f.read()
13 + x = pathlib.Path("file.txt").read_text()
14 |
15 | # FURB101
16 | with open("file.txt", "rb") as f:
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:16:6
|
15 | # FURB101
@@ -20,8 +34,22 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
17 | x = f.read()
|
help: Replace with `Path("file.txt").read_bytes()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
14 | x = f.read()
15 |
16 | # FURB101
- with open("file.txt", "rb") as f:
- x = f.read()
17 + x = pathlib.Path("file.txt").read_bytes()
18 |
19 | # FURB101
20 | with open("file.txt", mode="rb") as f:
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:20:6
|
19 | # FURB101
@@ -30,8 +58,22 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
21 | x = f.read()
|
help: Replace with `Path("file.txt").read_bytes()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
18 | x = f.read()
19 |
20 | # FURB101
- with open("file.txt", mode="rb") as f:
- x = f.read()
21 + x = pathlib.Path("file.txt").read_bytes()
22 |
23 | # FURB101
24 | with open("file.txt", encoding="utf8") as f:
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf8")`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf8")`
--> FURB101.py:24:6
|
23 | # FURB101
@@ -40,8 +82,22 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(enco
25 | x = f.read()
|
help: Replace with `Path("file.txt").read_text(encoding="utf8")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
22 | x = f.read()
23 |
24 | # FURB101
- with open("file.txt", encoding="utf8") as f:
- x = f.read()
25 + x = pathlib.Path("file.txt").read_text(encoding="utf8")
26 |
27 | # FURB101
28 | with open("file.txt", errors="ignore") as f:
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(errors="ignore")`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(errors="ignore")`
--> FURB101.py:28:6
|
27 | # FURB101
@@ -50,8 +106,22 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(erro
29 | x = f.read()
|
help: Replace with `Path("file.txt").read_text(errors="ignore")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
26 | x = f.read()
27 |
28 | # FURB101
- with open("file.txt", errors="ignore") as f:
- x = f.read()
29 + x = pathlib.Path("file.txt").read_text(errors="ignore")
30 |
31 | # FURB101
32 | with open("file.txt", mode="r") as f: # noqa: FURB120
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:32:6
|
31 | # FURB101
@@ -60,6 +130,21 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
33 | x = f.read()
|
help: Replace with `Path("file.txt").read_text()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
30 | x = f.read()
31 |
32 | # FURB101
- with open("file.txt", mode="r") as f: # noqa: FURB120
- x = f.read()
33 + x = pathlib.Path("file.txt").read_text()
34 |
35 | # FURB101
36 | with open(foo(), "rb") as f:
note: This is an unsafe fix and may change runtime behavior
FURB101 `open` and `read` should be replaced by `Path(foo()).read_bytes()`
--> FURB101.py:36:6
@@ -104,3 +189,58 @@ FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
51 | # the user reads the whole file and that bit they can replace.
|
help: Replace with `Path("file.txt").read_text()`
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:130:6
|
129 | # FURB101
130 | with open("file.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
131 | contents: str = f.read()
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
128 | x = f.read()
129 |
130 | # FURB101
- with open("file.txt", encoding="utf-8") as f:
- contents: str = f.read()
131 + contents: str = pathlib.Path("file.txt").read_text(encoding="utf-8")
132 |
133 | # FURB101 but no fix because it would remove the assignment to `x`
134 | with open("file.txt", encoding="utf-8") as f:
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:134:6
|
133 | # FURB101 but no fix because it would remove the assignment to `x`
134 | with open("file.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
135 | contents, x = f.read(), 2
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:138:6
|
137 | # FURB101 but no fix because it would remove the `process_contents` call
138 | with open("file.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
139 | contents = process_contents(f.read())
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf-8")`
--> FURB101.py:141:6
|
139 | contents = process_contents(f.read())
140 |
141 | with open("file.txt", encoding="utf-8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
142 | contents: str = process_contents(f.read())
|
help: Replace with `Path("file.txt").read_text(encoding="utf-8")`

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
--> FURB103.py:12:6
|
11 | # FURB103
@@ -10,8 +10,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text("t
13 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
10 | # Errors.
11 |
12 | # FURB103
- with open("file.txt", "w") as f:
- f.write("test")
13 + pathlib.Path("file.txt").write_text("test")
14 |
15 | # FURB103
16 | with open("file.txt", "wb") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
--> FURB103.py:16:6
|
15 | # FURB103
@@ -20,8 +34,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(f
17 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_bytes(foobar)`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
14 | f.write("test")
15 |
16 | # FURB103
- with open("file.txt", "wb") as f:
- f.write(foobar)
17 + pathlib.Path("file.txt").write_bytes(foobar)
18 |
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
--> FURB103.py:20:6
|
19 | # FURB103
@@ -30,8 +58,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(b
21 | f.write(b"abc")
|
help: Replace with `Path("file.txt").write_bytes(b"abc")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
18 | f.write(foobar)
19 |
20 | # FURB103
- with open("file.txt", mode="wb") as f:
- f.write(b"abc")
21 + pathlib.Path("file.txt").write_bytes(b"abc")
22 |
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
--> FURB103.py:24:6
|
23 | # FURB103
@@ -40,8 +82,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
25 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, encoding="utf8")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
22 | f.write(b"abc")
23 |
24 | # FURB103
- with open("file.txt", "w", encoding="utf8") as f:
- f.write(foobar)
25 + pathlib.Path("file.txt").write_text(foobar, encoding="utf8")
26 |
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
--> FURB103.py:28:6
|
27 | # FURB103
@@ -50,8 +106,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
29 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, errors="ignore")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
26 | f.write(foobar)
27 |
28 | # FURB103
- with open("file.txt", "w", errors="ignore") as f:
- f.write(foobar)
29 + pathlib.Path("file.txt").write_text(foobar, errors="ignore")
30 |
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
--> FURB103.py:32:6
|
31 | # FURB103
@@ -60,6 +130,20 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
33 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar)`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
30 | f.write(foobar)
31 |
32 | # FURB103
- with open("file.txt", mode="w") as f:
- f.write(foobar)
33 + pathlib.Path("file.txt").write_text(foobar)
34 |
35 | # FURB103
36 | with open(foo(), "wb") as f:
FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())`
--> FURB103.py:36:6
@@ -105,7 +189,7 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(ba
|
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:58:6
|
57 | # FURB103
@@ -114,8 +198,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
59 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
56 |
57 |
58 | # FURB103
- with open("file.txt", "w", newline="\r\n") as f:
- f.write(foobar)
59 + pathlib.Path("file.txt").write_text(foobar, newline="\r\n")
60 |
61 |
62 | import builtins
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:66:6
|
65 | # FURB103
@@ -124,8 +222,21 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
67 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
60 |
61 |
62 | import builtins
63 + import pathlib
64 |
65 |
66 | # FURB103
- with builtins.open("file.txt", "w", newline="\r\n") as f:
- f.write(foobar)
67 + pathlib.Path("file.txt").write_text(foobar, newline="\r\n")
68 |
69 |
70 | from builtins import open as o
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:74:6
|
73 | # FURB103
@@ -134,8 +245,21 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
75 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
68 |
69 |
70 | from builtins import open as o
71 + import pathlib
72 |
73 |
74 | # FURB103
- with o("file.txt", "w", newline="\r\n") as f:
- f.write(foobar)
75 + pathlib.Path("file.txt").write_text(foobar, newline="\r\n")
76 |
77 | # Non-errors.
78 |
FURB103 `open` and `write` should be replaced by `Path("test.json")....`
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
|
152 | data = {"price": 100}
@@ -145,3 +269,13 @@ FURB103 `open` and `write` should be replaced by `Path("test.json")....`
155 | f.write(json.dumps(data, indent=4).encode("utf-8"))
|
help: Replace with `Path("test.json")....`
148 |
149 | # See: https://github.com/astral-sh/ruff/issues/20785
150 | import json
151 + import pathlib
152 |
153 | data = {"price": 100}
154 |
- with open("test.json", "wb") as f:
- f.write(json.dumps(data, indent=4).encode("utf-8"))
155 + pathlib.Path("test.json").write_bytes(json.dumps(data, indent=4).encode("utf-8"))

View File

@@ -317,7 +317,7 @@ help: Remove empty string
19 + print(**kwargs)
20 | print(sep="\t")
21 | print(sep=print(1))
22 |
22 | print(f"")
FURB105 [*] Unnecessary separator passed to `print`
--> FURB105.py:20:1
@@ -327,6 +327,7 @@ FURB105 [*] Unnecessary separator passed to `print`
20 | print(sep="\t")
| ^^^^^^^^^^^^^^^
21 | print(sep=print(1))
22 | print(f"")
|
help: Remove separator
17 | print("", *args)
@@ -335,8 +336,8 @@ help: Remove separator
- print(sep="\t")
20 + print()
21 | print(sep=print(1))
22 |
23 | # OK.
22 | print(f"")
23 | print(f"", sep=",")
FURB105 [*] Unnecessary separator passed to `print`
--> FURB105.py:21:1
@@ -345,8 +346,8 @@ FURB105 [*] Unnecessary separator passed to `print`
20 | print(sep="\t")
21 | print(sep=print(1))
| ^^^^^^^^^^^^^^^^^^^
22 |
23 | # OK.
22 | print(f"")
23 | print(f"", sep=",")
|
help: Remove separator
18 | print("", *args, sep="")
@@ -354,7 +355,66 @@ help: Remove separator
20 | print(sep="\t")
- print(sep=print(1))
21 + print()
22 |
23 | # OK.
24 |
22 | print(f"")
23 | print(f"", sep=",")
24 | print(f"", end="bar")
note: This is an unsafe fix and may change runtime behavior
FURB105 [*] Unnecessary empty string passed to `print`
--> FURB105.py:22:1
|
20 | print(sep="\t")
21 | print(sep=print(1))
22 | print(f"")
| ^^^^^^^^^^
23 | print(f"", sep=",")
24 | print(f"", end="bar")
|
help: Remove empty string
19 | print("", **kwargs)
20 | print(sep="\t")
21 | print(sep=print(1))
- print(f"")
22 + print()
23 | print(f"", sep=",")
24 | print(f"", end="bar")
25 |
FURB105 [*] Unnecessary empty string and separator passed to `print`
--> FURB105.py:23:1
|
21 | print(sep=print(1))
22 | print(f"")
23 | print(f"", sep=",")
| ^^^^^^^^^^^^^^^^^^^
24 | print(f"", end="bar")
|
help: Remove empty string and separator
20 | print(sep="\t")
21 | print(sep=print(1))
22 | print(f"")
- print(f"", sep=",")
23 + print()
24 | print(f"", end="bar")
25 |
26 | # OK.
FURB105 [*] Unnecessary empty string passed to `print`
--> FURB105.py:24:1
|
22 | print(f"")
23 | print(f"", sep=",")
24 | print(f"", end="bar")
| ^^^^^^^^^^^^^^^^^^^^^
25 |
26 | # OK.
|
help: Remove empty string
21 | print(sep=print(1))
22 | print(f"")
23 | print(f"", sep=",")
- print(f"", end="bar")
24 + print(end="bar")
25 |
26 | # OK.
27 |

View File

@@ -1,191 +0,0 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:12:6
|
11 | # FURB101
12 | with open("file.txt") as f:
| ^^^^^^^^^^^^^^^^^^^^^
13 | x = f.read()
|
help: Replace with `Path("file.txt").read_text()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
10 | # Errors.
11 |
12 | # FURB101
- with open("file.txt") as f:
- x = f.read()
13 + x = pathlib.Path("file.txt").read_text()
14 |
15 | # FURB101
16 | with open("file.txt", "rb") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:16:6
|
15 | # FURB101
16 | with open("file.txt", "rb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
17 | x = f.read()
|
help: Replace with `Path("file.txt").read_bytes()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
14 | x = f.read()
15 |
16 | # FURB101
- with open("file.txt", "rb") as f:
- x = f.read()
17 + x = pathlib.Path("file.txt").read_bytes()
18 |
19 | # FURB101
20 | with open("file.txt", mode="rb") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_bytes()`
--> FURB101.py:20:6
|
19 | # FURB101
20 | with open("file.txt", mode="rb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21 | x = f.read()
|
help: Replace with `Path("file.txt").read_bytes()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
18 | x = f.read()
19 |
20 | # FURB101
- with open("file.txt", mode="rb") as f:
- x = f.read()
21 + x = pathlib.Path("file.txt").read_bytes()
22 |
23 | # FURB101
24 | with open("file.txt", encoding="utf8") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(encoding="utf8")`
--> FURB101.py:24:6
|
23 | # FURB101
24 | with open("file.txt", encoding="utf8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
25 | x = f.read()
|
help: Replace with `Path("file.txt").read_text(encoding="utf8")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
22 | x = f.read()
23 |
24 | # FURB101
- with open("file.txt", encoding="utf8") as f:
- x = f.read()
25 + x = pathlib.Path("file.txt").read_text(encoding="utf8")
26 |
27 | # FURB101
28 | with open("file.txt", errors="ignore") as f:
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text(errors="ignore")`
--> FURB101.py:28:6
|
27 | # FURB101
28 | with open("file.txt", errors="ignore") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
29 | x = f.read()
|
help: Replace with `Path("file.txt").read_text(errors="ignore")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
26 | x = f.read()
27 |
28 | # FURB101
- with open("file.txt", errors="ignore") as f:
- x = f.read()
29 + x = pathlib.Path("file.txt").read_text(errors="ignore")
30 |
31 | # FURB101
32 | with open("file.txt", mode="r") as f: # noqa: FURB120
FURB101 [*] `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:32:6
|
31 | # FURB101
32 | with open("file.txt", mode="r") as f: # noqa: FURB120
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
33 | x = f.read()
|
help: Replace with `Path("file.txt").read_text()`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
30 | x = f.read()
31 |
32 | # FURB101
- with open("file.txt", mode="r") as f: # noqa: FURB120
- x = f.read()
33 + x = pathlib.Path("file.txt").read_text()
34 |
35 | # FURB101
36 | with open(foo(), "rb") as f:
note: This is an unsafe fix and may change runtime behavior
FURB101 `open` and `read` should be replaced by `Path(foo()).read_bytes()`
--> FURB101.py:36:6
|
35 | # FURB101
36 | with open(foo(), "rb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^
37 | # The body of `with` is non-trivial, but the recommendation holds.
38 | bar("pre")
|
help: Replace with `Path(foo()).read_bytes()`
FURB101 `open` and `read` should be replaced by `Path("a.txt").read_text()`
--> FURB101.py:44:6
|
43 | # FURB101
44 | with open("a.txt") as a, open("b.txt", "rb") as b:
| ^^^^^^^^^^^^^^^^^^
45 | x = a.read()
46 | y = b.read()
|
help: Replace with `Path("a.txt").read_text()`
FURB101 `open` and `read` should be replaced by `Path("b.txt").read_bytes()`
--> FURB101.py:44:26
|
43 | # FURB101
44 | with open("a.txt") as a, open("b.txt", "rb") as b:
| ^^^^^^^^^^^^^^^^^^^^^^^^
45 | x = a.read()
46 | y = b.read()
|
help: Replace with `Path("b.txt").read_bytes()`
FURB101 `open` and `read` should be replaced by `Path("file.txt").read_text()`
--> FURB101.py:49:18
|
48 | # FURB101
49 | with foo() as a, open("file.txt") as b, foo() as c:
| ^^^^^^^^^^^^^^^^^^^^^
50 | # We have other things in here, multiple with items, but
51 | # the user reads the whole file and that bit they can replace.
|
help: Replace with `Path("file.txt").read_text()`

View File

@@ -1,281 +0,0 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
--> FURB103.py:12:6
|
11 | # FURB103
12 | with open("file.txt", "w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
13 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
10 | # Errors.
11 |
12 | # FURB103
- with open("file.txt", "w") as f:
- f.write("test")
13 + pathlib.Path("file.txt").write_text("test")
14 |
15 | # FURB103
16 | with open("file.txt", "wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
--> FURB103.py:16:6
|
15 | # FURB103
16 | with open("file.txt", "wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
17 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_bytes(foobar)`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
14 | f.write("test")
15 |
16 | # FURB103
- with open("file.txt", "wb") as f:
- f.write(foobar)
17 + pathlib.Path("file.txt").write_bytes(foobar)
18 |
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
--> FURB103.py:20:6
|
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21 | f.write(b"abc")
|
help: Replace with `Path("file.txt").write_bytes(b"abc")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
18 | f.write(foobar)
19 |
20 | # FURB103
- with open("file.txt", mode="wb") as f:
- f.write(b"abc")
21 + pathlib.Path("file.txt").write_bytes(b"abc")
22 |
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
--> FURB103.py:24:6
|
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
25 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, encoding="utf8")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
22 | f.write(b"abc")
23 |
24 | # FURB103
- with open("file.txt", "w", encoding="utf8") as f:
- f.write(foobar)
25 + pathlib.Path("file.txt").write_text(foobar, encoding="utf8")
26 |
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
--> FURB103.py:28:6
|
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
29 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, errors="ignore")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
26 | f.write(foobar)
27 |
28 | # FURB103
- with open("file.txt", "w", errors="ignore") as f:
- f.write(foobar)
29 + pathlib.Path("file.txt").write_text(foobar, errors="ignore")
30 |
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
--> FURB103.py:32:6
|
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
33 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar)`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
30 | f.write(foobar)
31 |
32 | # FURB103
- with open("file.txt", mode="w") as f:
- f.write(foobar)
33 + pathlib.Path("file.txt").write_text(foobar)
34 |
35 | # FURB103
36 | with open(foo(), "wb") as f:
FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())`
--> FURB103.py:36:6
|
35 | # FURB103
36 | with open(foo(), "wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^
37 | # The body of `with` is non-trivial, but the recommendation holds.
38 | bar("pre")
|
help: Replace with `Path(foo()).write_bytes(bar())`
FURB103 `open` and `write` should be replaced by `Path("a.txt").write_text(x)`
--> FURB103.py:44:6
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
| ^^^^^^^^^^^^^^^^^^^^^^^
45 | a.write(x)
46 | b.write(y)
|
help: Replace with `Path("a.txt").write_text(x)`
FURB103 `open` and `write` should be replaced by `Path("b.txt").write_bytes(y)`
--> FURB103.py:44:31
|
43 | # FURB103
44 | with open("a.txt", "w") as a, open("b.txt", "wb") as b:
| ^^^^^^^^^^^^^^^^^^^^^^^^
45 | a.write(x)
46 | b.write(y)
|
help: Replace with `Path("b.txt").write_bytes(y)`
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(bar(bar(a + x)))`
--> FURB103.py:49:18
|
48 | # FURB103
49 | with foo() as a, open("file.txt", "w") as b, foo() as c:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
50 | # We have other things in here, multiple with items, but the user
51 | # writes a single time to file and that bit they can replace.
|
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:58:6
|
57 | # FURB103
58 | with open("file.txt", "w", newline="\r\n") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
59 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
56 |
57 |
58 | # FURB103
- with open("file.txt", "w", newline="\r\n") as f:
- f.write(foobar)
59 + pathlib.Path("file.txt").write_text(foobar, newline="\r\n")
60 |
61 |
62 | import builtins
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:66:6
|
65 | # FURB103
66 | with builtins.open("file.txt", "w", newline="\r\n") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
67 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
60 |
61 |
62 | import builtins
63 + import pathlib
64 |
65 |
66 | # FURB103
- with builtins.open("file.txt", "w", newline="\r\n") as f:
- f.write(foobar)
67 + pathlib.Path("file.txt").write_text(foobar, newline="\r\n")
68 |
69 |
70 | from builtins import open as o
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, newline="\r\n")`
--> FURB103.py:74:6
|
73 | # FURB103
74 | with o("file.txt", "w", newline="\r\n") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
75 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
68 |
69 |
70 | from builtins import open as o
71 + import pathlib
72 |
73 |
74 | # FURB103
- with o("file.txt", "w", newline="\r\n") as f:
- f.write(foobar)
75 + pathlib.Path("file.txt").write_text(foobar, newline="\r\n")
76 |
77 | # Non-errors.
78 |
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
|
152 | data = {"price": 100}
153 |
154 | with open("test.json", "wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
155 | f.write(json.dumps(data, indent=4).encode("utf-8"))
|
help: Replace with `Path("test.json")....`
148 |
149 | # See: https://github.com/astral-sh/ruff/issues/20785
150 | import json
151 + import pathlib
152 |
153 | data = {"price": 100}
154 |
- with open("test.json", "wb") as f:
- f.write(json.dumps(data, indent=4).encode("utf-8"))
155 + pathlib.Path("test.json").write_bytes(json.dumps(data, indent=4).encode("utf-8"))

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text("test")`
--> FURB103.py:12:6
|
11 | # FURB103
@@ -10,8 +10,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text("t
13 | f.write("test")
|
help: Replace with `Path("file.txt").write_text("test")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
10 | # Errors.
11 |
12 | # FURB103
- with open("file.txt", "w") as f:
- f.write("test")
13 + pathlib.Path("file.txt").write_text("test")
14 |
15 | # FURB103
16 | with open("file.txt", "wb") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(foobar)`
--> FURB103.py:16:6
|
15 | # FURB103
@@ -20,8 +34,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(f
17 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_bytes(foobar)`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
14 | f.write("test")
15 |
16 | # FURB103
- with open("file.txt", "wb") as f:
- f.write(foobar)
17 + pathlib.Path("file.txt").write_bytes(foobar)
18 |
19 | # FURB103
20 | with open("file.txt", mode="wb") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_bytes(b"abc")`
--> FURB103.py:20:6
|
19 | # FURB103
@@ -30,8 +58,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_bytes(b
21 | f.write(b"abc")
|
help: Replace with `Path("file.txt").write_bytes(b"abc")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
18 | f.write(foobar)
19 |
20 | # FURB103
- with open("file.txt", mode="wb") as f:
- f.write(b"abc")
21 + pathlib.Path("file.txt").write_bytes(b"abc")
22 |
23 | # FURB103
24 | with open("file.txt", "w", encoding="utf8") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, encoding="utf8")`
--> FURB103.py:24:6
|
23 | # FURB103
@@ -40,8 +82,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
25 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, encoding="utf8")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
22 | f.write(b"abc")
23 |
24 | # FURB103
- with open("file.txt", "w", encoding="utf8") as f:
- f.write(foobar)
25 + pathlib.Path("file.txt").write_text(foobar, encoding="utf8")
26 |
27 | # FURB103
28 | with open("file.txt", "w", errors="ignore") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar, errors="ignore")`
--> FURB103.py:28:6
|
27 | # FURB103
@@ -50,8 +106,22 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
29 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, errors="ignore")`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
26 | f.write(foobar)
27 |
28 | # FURB103
- with open("file.txt", "w", errors="ignore") as f:
- f.write(foobar)
29 + pathlib.Path("file.txt").write_text(foobar, errors="ignore")
30 |
31 | # FURB103
32 | with open("file.txt", mode="w") as f:
FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
FURB103 [*] `open` and `write` should be replaced by `Path("file.txt").write_text(foobar)`
--> FURB103.py:32:6
|
31 | # FURB103
@@ -60,6 +130,20 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
33 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar)`
1 + import pathlib
2 | def foo():
3 | ...
4 |
--------------------------------------------------------------------------------
30 | f.write(foobar)
31 |
32 | # FURB103
- with open("file.txt", mode="w") as f:
- f.write(foobar)
33 + pathlib.Path("file.txt").write_text(foobar)
34 |
35 | # FURB103
36 | with open(foo(), "wb") as f:
FURB103 `open` and `write` should be replaced by `Path(foo()).write_bytes(bar())`
--> FURB103.py:36:6
@@ -105,7 +189,7 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(ba
|
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 `open` and `write` should be replaced by `Path("test.json")....`
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
|
152 | data = {"price": 100}
@@ -115,3 +199,13 @@ FURB103 `open` and `write` should be replaced by `Path("test.json")....`
155 | f.write(json.dumps(data, indent=4).encode("utf-8"))
|
help: Replace with `Path("test.json")....`
148 |
149 | # See: https://github.com/astral-sh/ruff/issues/20785
150 | import json
151 + import pathlib
152 |
153 | data = {"price": 100}
154 |
- with open("test.json", "wb") as f:
- f.write(json.dumps(data, indent=4).encode("utf-8"))
155 + pathlib.Path("test.json").write_bytes(json.dumps(data, indent=4).encode("utf-8"))

View File

@@ -112,7 +112,8 @@ mod tests {
#[test_case(Rule::LegacyFormPytestRaises, Path::new("RUF061_warns.py"))]
#[test_case(Rule::LegacyFormPytestRaises, Path::new("RUF061_deprecated_call.py"))]
#[test_case(Rule::NonOctalPermissions, Path::new("RUF064.py"))]
#[test_case(Rule::LoggingEagerConversion, Path::new("RUF065.py"))]
#[test_case(Rule::LoggingEagerConversion, Path::new("RUF065_0.py"))]
#[test_case(Rule::LoggingEagerConversion, Path::new("RUF065_1.py"))]
#[test_case(Rule::RedirectedNOQA, Path::new("RUF101_0.py"))]
#[test_case(Rule::RedirectedNOQA, Path::new("RUF101_1.py"))]
#[test_case(Rule::InvalidRuleCode, Path::new("RUF102.py"))]

View File

@@ -138,7 +138,12 @@ pub(crate) fn logging_eager_conversion(checker: &Checker, call: &ast::ExprCall)
.zip(call.arguments.args.iter().skip(msg_pos + 1))
{
// Check if the argument is a call to eagerly format a value
if let Expr::Call(ast::ExprCall { func, .. }) = arg {
if let Expr::Call(ast::ExprCall {
func,
arguments: str_call_args,
..
}) = arg
{
let CFormatType::String(format_conversion) = spec.format_type else {
continue;
};
@@ -146,8 +151,13 @@ pub(crate) fn logging_eager_conversion(checker: &Checker, call: &ast::ExprCall)
// Check for various eager conversion patterns
match format_conversion {
// %s with str() - remove str() call
// Only flag if str() has exactly one argument (positional or keyword) that is not unpacked
FormatConversion::Str
if checker.semantic().match_builtin_expr(func.as_ref(), "str") =>
if checker.semantic().match_builtin_expr(func.as_ref(), "str")
&& str_call_args.len() == 1
&& str_call_args
.find_argument("object", 0)
.is_some_and(|arg| !arg.is_variadic()) =>
{
checker.report_diagnostic(
LoggingEagerConversion {

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF065 Unnecessary `str()` conversion when formatting with `%s`
--> RUF065.py:4:26
--> RUF065_0.py:4:26
|
3 | # %s + str()
4 | logging.info("Hello %s", str("World!"))
@@ -11,7 +11,7 @@ RUF065 Unnecessary `str()` conversion when formatting with `%s`
|
RUF065 Unnecessary `str()` conversion when formatting with `%s`
--> RUF065.py:5:39
--> RUF065_0.py:5:39
|
3 | # %s + str()
4 | logging.info("Hello %s", str("World!"))
@@ -22,7 +22,7 @@ RUF065 Unnecessary `str()` conversion when formatting with `%s`
|
RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` instead of `%s`
--> RUF065.py:8:26
--> RUF065_0.py:8:26
|
7 | # %s + repr()
8 | logging.info("Hello %s", repr("World!"))
@@ -31,7 +31,7 @@ RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` inste
|
RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` instead of `%s`
--> RUF065.py:9:39
--> RUF065_0.py:9:39
|
7 | # %s + repr()
8 | logging.info("Hello %s", repr("World!"))
@@ -42,7 +42,7 @@ RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` inste
|
RUF065 Unnecessary `str()` conversion when formatting with `%s`
--> RUF065.py:22:18
--> RUF065_0.py:22:18
|
21 | # %s + str()
22 | info("Hello %s", str("World!"))
@@ -51,7 +51,7 @@ RUF065 Unnecessary `str()` conversion when formatting with `%s`
|
RUF065 Unnecessary `str()` conversion when formatting with `%s`
--> RUF065.py:23:31
--> RUF065_0.py:23:31
|
21 | # %s + str()
22 | info("Hello %s", str("World!"))
@@ -62,7 +62,7 @@ RUF065 Unnecessary `str()` conversion when formatting with `%s`
|
RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` instead of `%s`
--> RUF065.py:26:18
--> RUF065_0.py:26:18
|
25 | # %s + repr()
26 | info("Hello %s", repr("World!"))
@@ -71,7 +71,7 @@ RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` inste
|
RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` instead of `%s`
--> RUF065.py:27:31
--> RUF065_0.py:27:31
|
25 | # %s + repr()
26 | info("Hello %s", repr("World!"))
@@ -82,7 +82,7 @@ RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` inste
|
RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` instead of `%s`
--> RUF065.py:44:32
--> RUF065_0.py:44:32
|
42 | logging.warning("Value: %r", repr(42))
43 | logging.error("Error: %r", repr([1, 2, 3]))
@@ -92,7 +92,7 @@ RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` inste
|
RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` instead of `%s`
--> RUF065.py:45:30
--> RUF065_0.py:45:30
|
43 | logging.error("Error: %r", repr([1, 2, 3]))
44 | logging.info("Debug info: %s", repr("test\nstring"))
@@ -103,7 +103,7 @@ RUF065 Unnecessary `repr()` conversion when formatting with `%s`. Use `%r` inste
|
RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` instead of `%s`
--> RUF065.py:48:27
--> RUF065_0.py:48:27
|
47 | # %s + ascii()
48 | logging.info("ASCII: %s", ascii("Hello\nWorld"))
@@ -112,7 +112,7 @@ RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` inst
|
RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` instead of `%s`
--> RUF065.py:49:30
--> RUF065_0.py:49:30
|
47 | # %s + ascii()
48 | logging.info("ASCII: %s", ascii("Hello\nWorld"))
@@ -123,7 +123,7 @@ RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` inst
|
RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` instead of `%s`
--> RUF065.py:52:27
--> RUF065_0.py:52:27
|
51 | # %s + oct()
52 | logging.info("Octal: %s", oct(42))
@@ -132,7 +132,7 @@ RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` inste
|
RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` instead of `%s`
--> RUF065.py:53:30
--> RUF065_0.py:53:30
|
51 | # %s + oct()
52 | logging.info("Octal: %s", oct(42))
@@ -143,7 +143,7 @@ RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` inste
|
RUF065 Unnecessary `hex()` conversion when formatting with `%s`. Use `%#x` instead of `%s`
--> RUF065.py:56:25
--> RUF065_0.py:56:25
|
55 | # %s + hex()
56 | logging.info("Hex: %s", hex(42))
@@ -152,7 +152,7 @@ RUF065 Unnecessary `hex()` conversion when formatting with `%s`. Use `%#x` inste
|
RUF065 Unnecessary `hex()` conversion when formatting with `%s`. Use `%#x` instead of `%s`
--> RUF065.py:57:28
--> RUF065_0.py:57:28
|
55 | # %s + hex()
56 | logging.info("Hex: %s", hex(42))
@@ -161,7 +161,7 @@ RUF065 Unnecessary `hex()` conversion when formatting with `%s`. Use `%#x` inste
|
RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` instead of `%s`
--> RUF065.py:63:19
--> RUF065_0.py:63:19
|
61 | from logging import info, log
62 |
@@ -171,7 +171,7 @@ RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` inst
|
RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` instead of `%s`
--> RUF065.py:64:32
--> RUF065_0.py:64:32
|
63 | info("ASCII: %s", ascii("Hello\nWorld"))
64 | log(logging.INFO, "ASCII: %s", ascii("test"))
@@ -181,7 +181,7 @@ RUF065 Unnecessary `ascii()` conversion when formatting with `%s`. Use `%a` inst
|
RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` instead of `%s`
--> RUF065.py:66:19
--> RUF065_0.py:66:19
|
64 | log(logging.INFO, "ASCII: %s", ascii("test"))
65 |
@@ -191,7 +191,7 @@ RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` inste
|
RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` instead of `%s`
--> RUF065.py:67:32
--> RUF065_0.py:67:32
|
66 | info("Octal: %s", oct(42))
67 | log(logging.INFO, "Octal: %s", oct(255))
@@ -201,7 +201,7 @@ RUF065 Unnecessary `oct()` conversion when formatting with `%s`. Use `%#o` inste
|
RUF065 Unnecessary `hex()` conversion when formatting with `%s`. Use `%#x` instead of `%s`
--> RUF065.py:69:17
--> RUF065_0.py:69:17
|
67 | log(logging.INFO, "Octal: %s", oct(255))
68 |
@@ -211,7 +211,7 @@ RUF065 Unnecessary `hex()` conversion when formatting with `%s`. Use `%#x` inste
|
RUF065 Unnecessary `hex()` conversion when formatting with `%s`. Use `%#x` instead of `%s`
--> RUF065.py:70:30
--> RUF065_0.py:70:30
|
69 | info("Hex: %s", hex(42))
70 | log(logging.INFO, "Hex: %s", hex(255))

View File

@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF065 Unnecessary `str()` conversion when formatting with `%s`
--> RUF065_1.py:17:23
|
16 | # str() with single keyword argument - should be flagged (equivalent to str("!"))
17 | logging.warning("%s", str(object="!"))
| ^^^^^^^^^^^^^^^
|
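As a minimal sketch (hypothetical input, not one of the fixture files above) of what the tightened RUF065 check distinguishes: a `str()` call is only flagged when it has exactly one non-unpacked argument, whether positional or passed as `object=`.

```python
import logging

value = "World!"
args = ("World!",)

logging.info("Hello %s", str(value))          # flagged: single positional argument
logging.warning("%s", str(object=value))      # flagged: single `object=` keyword argument
logging.info("Hello %s", str())               # not flagged: nothing to remove
logging.info("Hello %s", str(b"x", "utf-8"))  # not flagged: two arguments (decoding, not a plain conversion)
logging.info("Hello %s", str(*args))          # not flagged: unpacked (variadic) argument
```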

View File

@@ -8,37 +8,40 @@ use ruff_source_file::{LineColumn, OneIndexed, SourceLocation};
/// [`ruff_text_size::TextSize`] to jupyter notebook cell/row/column.
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
pub struct NotebookIndex {
/// Enter a row (1-based), get back the cell (1-based)
pub(super) row_to_cell: Vec<OneIndexed>,
/// Enter a row (1-based), get back the row in cell (1-based)
pub(super) row_to_row_in_cell: Vec<OneIndexed>,
/// Stores the starting row and the absolute cell index for every Python (valid) cell.
///
/// The index in this vector corresponds to the Python cell index (valid cell index).
pub(super) cell_starts: Vec<CellStart>,
}
impl NotebookIndex {
pub fn new(row_to_cell: Vec<OneIndexed>, row_to_row_in_cell: Vec<OneIndexed>) -> Self {
Self {
row_to_cell,
row_to_row_in_cell,
fn find_cell(&self, row: OneIndexed) -> Option<CellStart> {
match self
.cell_starts
.binary_search_by_key(&row, |start| start.start_row)
{
Ok(cell_index) => Some(self.cell_starts[cell_index]),
Err(insertion_point) => Some(self.cell_starts[insertion_point.checked_sub(1)?]),
}
}
/// Returns the cell number (1-based) for the given row (1-based).
/// Returns the (raw) cell number (1-based) for the given row (1-based).
pub fn cell(&self, row: OneIndexed) -> Option<OneIndexed> {
self.row_to_cell.get(row.to_zero_indexed()).copied()
self.find_cell(row).map(|start| start.raw_cell_index)
}
/// Returns the row number (1-based) in the cell (1-based) for the
/// given row (1-based).
pub fn cell_row(&self, row: OneIndexed) -> Option<OneIndexed> {
self.row_to_row_in_cell.get(row.to_zero_indexed()).copied()
self.find_cell(row)
.map(|start| OneIndexed::from_zero_indexed(row.get() - start.start_row.get()))
}
/// Returns an iterator over the row:cell-number pairs (both 1-based).
pub fn iter(&self) -> impl Iterator<Item = (OneIndexed, OneIndexed)> {
self.row_to_cell
.iter()
.enumerate()
.map(|(row, cell)| (OneIndexed::from_zero_indexed(row), *cell))
/// Returns an iterator over the starting rows of each cell (1-based).
///
/// This yields one entry per Python cell (skipping over Markdown cells).
pub fn iter(&self) -> impl Iterator<Item = CellStart> + '_ {
self.cell_starts.iter().copied()
}
/// Translates the given [`LineColumn`] based on the indexing table.
@@ -67,3 +70,23 @@ impl NotebookIndex {
}
}
}
#[derive(Debug, Copy, Clone, Eq, PartialEq, Serialize, Deserialize)]
pub struct CellStart {
/// The row in the concatenated notebook source code at which
/// this cell starts.
pub(super) start_row: OneIndexed,
/// The absolute index of this cell in the notebook.
pub(super) raw_cell_index: OneIndexed,
}
impl CellStart {
pub fn start_row(&self) -> OneIndexed {
self.start_row
}
pub fn cell_index(&self) -> OneIndexed {
self.raw_cell_index
}
}
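For illustration, a small Python sketch of the row-to-cell lookup scheme above (an illustrative re-statement, not the Rust implementation): binary-search the recorded cell start rows for the greatest start at or before the requested row, then derive the row within that cell by subtraction. The sample `cell_starts` values mirror the 1-based equivalents of the entries asserted in the `build_index` test further down.

```python
import bisect

# 1-based (start_row, raw_cell_index) pairs, one per valid code cell.
cell_starts = [(1, 1), (7, 3), (12, 5), (13, 7), (15, 8)]
start_rows = [start for start, _ in cell_starts]

def find_cell(row):
    """Return (cell_index, row_in_cell) for a 1-based row, or None before the first cell."""
    i = bisect.bisect_right(start_rows, row) - 1
    if i < 0:
        return None
    start_row, cell_index = cell_starts[i]
    return cell_index, row - start_row + 1

print(find_cell(8))  # (3, 2): row 8 is the second row of the cell starting at row 7
print(find_cell(1))  # (1, 1): the first row of the first cell
```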

View File

@@ -18,7 +18,7 @@ use ruff_text_size::TextSize;
use crate::cell::CellOffsets;
use crate::index::NotebookIndex;
use crate::schema::{Cell, RawNotebook, SortAlphabetically, SourceValue};
use crate::{CellMetadata, RawNotebookMetadata, schema};
use crate::{CellMetadata, CellStart, RawNotebookMetadata, schema};
/// Run round-trip source code generation on a given Jupyter notebook file path.
pub fn round_trip(path: &Path) -> anyhow::Result<String> {
@@ -320,11 +320,19 @@ impl Notebook {
/// The index building is expensive as it needs to go through the content of
/// every valid code cell.
fn build_index(&self) -> NotebookIndex {
let mut row_to_cell = Vec::new();
let mut row_to_row_in_cell = Vec::new();
let mut cell_starts = Vec::with_capacity(self.valid_code_cells.len());
let mut current_row = OneIndexed::MIN;
for &cell_index in &self.valid_code_cells {
let line_count = match &self.raw.cells[cell_index as usize].source() {
let raw_cell_index = cell_index as usize;
// Record the starting row of this cell
cell_starts.push(CellStart {
start_row: current_row,
raw_cell_index: OneIndexed::from_zero_indexed(raw_cell_index),
});
let line_count = match &self.raw.cells[raw_cell_index].source() {
SourceValue::String(string) => {
if string.is_empty() {
1
@@ -342,17 +350,11 @@ impl Notebook {
}
}
};
row_to_cell.extend(std::iter::repeat_n(
OneIndexed::from_zero_indexed(cell_index as usize),
line_count,
));
row_to_row_in_cell.extend((0..line_count).map(OneIndexed::from_zero_indexed));
current_row = current_row.saturating_add(line_count);
}
NotebookIndex {
row_to_cell,
row_to_row_in_cell,
}
NotebookIndex { cell_starts }
}
/// Return the notebook content.
@@ -456,7 +458,7 @@ mod tests {
use ruff_source_file::OneIndexed;
use crate::{Cell, Notebook, NotebookError, NotebookIndex};
use crate::{Cell, CellStart, Notebook, NotebookError, NotebookIndex};
/// Construct a path to a Jupyter notebook in the `resources/test/fixtures/jupyter` directory.
fn notebook_path(path: impl AsRef<Path>) -> std::path::PathBuf {
@@ -548,39 +550,27 @@ print("after empty cells")
assert_eq!(
notebook.index(),
&NotebookIndex {
row_to_cell: vec![
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(4),
OneIndexed::from_zero_indexed(6),
OneIndexed::from_zero_indexed(6),
OneIndexed::from_zero_indexed(7)
],
row_to_row_in_cell: vec![
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(3),
OneIndexed::from_zero_indexed(4),
OneIndexed::from_zero_indexed(5),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(2),
OneIndexed::from_zero_indexed(3),
OneIndexed::from_zero_indexed(4),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(0),
OneIndexed::from_zero_indexed(1),
OneIndexed::from_zero_indexed(0)
cell_starts: vec![
CellStart {
start_row: OneIndexed::MIN,
raw_cell_index: OneIndexed::MIN
},
CellStart {
start_row: OneIndexed::from_zero_indexed(6),
raw_cell_index: OneIndexed::from_zero_indexed(2)
},
CellStart {
start_row: OneIndexed::from_zero_indexed(11),
raw_cell_index: OneIndexed::from_zero_indexed(4)
},
CellStart {
start_row: OneIndexed::from_zero_indexed(12),
raw_cell_index: OneIndexed::from_zero_indexed(6)
},
CellStart {
start_row: OneIndexed::from_zero_indexed(14),
raw_cell_index: OneIndexed::from_zero_indexed(7)
}
],
}
);

View File

@@ -1318,9 +1318,19 @@ impl Truthiness {
if arguments.is_empty() {
// Ex) `list()`
Self::Falsey
} else if arguments.args.len() == 1 && arguments.keywords.is_empty() {
} else if let [argument] = &*arguments.args
&& arguments.keywords.is_empty()
{
// Ex) `list([1, 2, 3])`
Self::from_expr(&arguments.args[0], is_builtin)
// For tuple(generator), we can't determine statically if the result will
// be empty or not, so return Unknown. The generator itself is truthy, but
// tuple(empty_generator) is falsy. ListComp and SetComp are handled by
// recursing into Self::from_expr below, which returns Unknown for them.
if argument.is_generator_expr() {
Self::Unknown
} else {
Self::from_expr(argument, is_builtin)
}
} else {
Self::Unknown
}
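A quick Python demonstration (not part of the diff) of why the new branch returns `Unknown` for `tuple(generator)`: the generator object itself is always truthy, but the tuple built from it is falsy exactly when the generator yields nothing, so the result cannot be known statically.

```python
items = []

gen = (x for x in items)
print(bool(gen))                        # True: a generator object is always truthy
print(bool(tuple(x for x in items)))    # False here: the generator yielded nothing
print(bool(tuple(x for x in [1, 2])))   # True: the generator yielded elements
```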

View File

@@ -3269,6 +3269,13 @@ impl<'a> ArgOrKeyword<'a> {
ArgOrKeyword::Keyword(keyword) => &keyword.value,
}
}
pub const fn is_variadic(self) -> bool {
match self {
ArgOrKeyword::Arg(expr) => expr.is_starred_expr(),
ArgOrKeyword::Keyword(keyword) => keyword.arg.is_none(),
}
}
}
impl<'a> From<&'a Expr> for ArgOrKeyword<'a> {

View File

@@ -0,0 +1,8 @@
[
{
"preview": "disabled"
},
{
"preview": "enabled"
}
]

View File

@@ -125,6 +125,13 @@ lambda a, /, c: a
*x: x
)
(
lambda
# comment
*x,
**y: x
)
(
lambda
# comment 1
@@ -135,6 +142,17 @@ lambda a, /, c: a
x
)
(
lambda
# comment 1
*
# comment 2
x,
**y:
# comment 3
x
)
(
lambda # comment 1
* # comment 2
@@ -142,6 +160,14 @@ lambda a, /, c: a
x
)
(
lambda # comment 1
* # comment 2
x,
y: # comment 3
x
)
lambda *x\
:x
@@ -196,6 +222,17 @@ lambda: ( # comment
x
)
(
lambda # 1
# 2
x, # 3
# 4
y
: # 5
# 6
x
)
(
lambda
x,
@@ -204,6 +241,71 @@ lambda: ( # comment
z
)
# Leading
lambda x: (
lambda y: lambda z: x
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ z # Trailing
) # Trailing
# Leading
lambda x: lambda y: lambda z: [
x,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
z
] # Trailing
# Trailing
lambda self, araa, kkkwargs=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs), e=1, f=2, g=2: d
# Regression tests for https://github.com/astral-sh/ruff/issues/8179

View File

@@ -4,6 +4,7 @@ use ruff_python_ast::ExprLambda;
use ruff_text_size::Ranged;
use crate::comments::dangling_comments;
use crate::comments::leading_comments;
use crate::expression::parentheses::{NeedsParentheses, OptionalParentheses};
use crate::other::parameters::ParametersParentheses;
use crate::prelude::*;
@@ -33,24 +34,45 @@ impl FormatNodeRule<ExprLambda> for FormatExprLambda {
if dangling_before_parameters.is_empty() {
write!(f, [space()])?;
} else {
write!(f, [dangling_comments(dangling_before_parameters)])?;
}
write!(
f,
[parameters
.format()
.with_options(ParametersParentheses::Never)]
)?;
group(&format_with(|f: &mut PyFormatter| {
if f.context().node_level().is_parenthesized()
&& (parameters.len() > 1 || !dangling_before_parameters.is_empty())
{
let end_of_line_start = dangling_before_parameters
.partition_point(|comment| comment.line_position().is_end_of_line());
let (same_line_comments, own_line_comments) =
dangling_before_parameters.split_at(end_of_line_start);
write!(f, [token(":")])?;
dangling_comments(same_line_comments).fmt(f)?;
if dangling_after_parameters.is_empty() {
write!(f, [space()])?;
} else {
write!(f, [dangling_comments(dangling_after_parameters)])?;
}
write![
f,
[
soft_line_break(),
leading_comments(own_line_comments),
parameters
.format()
.with_options(ParametersParentheses::Never),
]
]
} else {
parameters
.format()
.with_options(ParametersParentheses::Never)
.fmt(f)
}?;
write!(f, [token(":")])?;
if dangling_after_parameters.is_empty() {
write!(f, [space()])
} else {
write!(f, [dangling_comments(dangling_after_parameters)])
}
}))
.fmt(f)?;
} else {
write!(f, [token(":")])?;

View File

@@ -241,7 +241,7 @@ impl FormatNodeRule<Parameters> for FormatParameters {
let num_parameters = item.len();
if self.parentheses == ParametersParentheses::Never {
write!(f, [group(&format_inner), dangling_comments(dangling)])
write!(f, [format_inner, dangling_comments(dangling)])
} else if num_parameters == 0 {
let mut f = WithNodeLevel::new(NodeLevel::ParenthesizedExpression, f);
// No parameters, format any dangling comments between `()`

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/lambda.py
snapshot_kind: text
---
## Input
```python
@@ -132,6 +131,13 @@ lambda a, /, c: a
*x: x
)
(
lambda
# comment
*x,
**y: x
)
(
lambda
# comment 1
@@ -142,6 +148,17 @@ lambda a, /, c: a
x
)
(
lambda
# comment 1
*
# comment 2
x,
**y:
# comment 3
x
)
(
lambda # comment 1
* # comment 2
@@ -149,6 +166,14 @@ lambda a, /, c: a
x
)
(
lambda # comment 1
* # comment 2
x,
y: # comment 3
x
)
lambda *x\
:x
@@ -203,6 +228,17 @@ lambda: ( # comment
x
)
(
lambda # 1
# 2
x, # 3
# 4
y
: # 5
# 6
x
)
(
lambda
x,
@@ -211,6 +247,71 @@ lambda: ( # comment
z
)
# Leading
lambda x: (
lambda y: lambda z: x
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ z # Trailing
) # Trailing
# Leading
lambda x: lambda y: lambda z: [
x,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
z
] # Trailing
# Trailing
lambda self, araa, kkkwargs=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs), e=1, f=2, g=2: d
# Regression tests for https://github.com/astral-sh/ruff/issues/8179
@@ -237,7 +338,22 @@ def a():
```
## Output
## Outputs
### Output 1
```
indent-style = space
line-width = 88
indent-width = 4
quote-style = Double
line-ending = LineFeed
magic-trailing-comma = Respect
docstring-code = Disabled
docstring-code-line-width = "dynamic"
preview = Disabled
target_version = 3.10
source_type = Python
```
```python
# Leading
lambda x: x # Trailing
@@ -301,7 +417,8 @@ a = (
)
a = (
lambda x, # Dangling
lambda
x, # Dangling
y: 1
)
@@ -367,6 +484,13 @@ lambda a, /, c: a
*x: x
)
(
lambda
# comment
*x,
**y: x
)
(
lambda
# comment 1
@@ -376,6 +500,16 @@ lambda a, /, c: a
x
)
(
lambda
# comment 1
# comment 2
*x,
**y:
# comment 3
x
)
(
lambda # comment 1
# comment 2
@@ -383,6 +517,14 @@ lambda a, /, c: a
x
)
(
lambda # comment 1
# comment 2
*x,
y: # comment 3
x
)
lambda *x: x
(
@@ -435,11 +577,87 @@ lambda: ( # comment
)
(
lambda x,
lambda # 1
# 2
x, # 3
# 4
y: # 5
# 6
x
)
(
lambda
x,
# comment
y: z
)
# Leading
lambda x: (
lambda y: lambda z: x
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ z # Trailing
) # Trailing
# Leading
lambda x: lambda y: lambda z: [
x,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
z,
] # Trailing
# Trailing
lambda self, araa, kkkwargs=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
*args, **kwargs
), e=1, f=2, g=2: d
@@ -451,7 +669,8 @@ def a():
c,
d,
e,
f=lambda self,
f=lambda
self,
*args,
**kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs),
)
@@ -462,7 +681,365 @@ def a():
c,
d,
e,
f=lambda self,
f=lambda
self,
araa,
kkkwargs,
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,
args,
kwargs,
e=1,
f=2,
g=2: d,
g=10,
)
```
### Output 2
```
indent-style = space
line-width = 88
indent-width = 4
quote-style = Double
line-ending = LineFeed
magic-trailing-comma = Respect
docstring-code = Disabled
docstring-code-line-width = "dynamic"
preview = Enabled
target_version = 3.10
source_type = Python
```
```python
# Leading
lambda x: x # Trailing
# Trailing
# Leading
lambda x, y: x # Trailing
# Trailing
# Leading
lambda x, y: x, y # Trailing
# Trailing
# Leading
lambda x, /, y: x # Trailing
# Trailing
# Leading
lambda x: lambda y: lambda z: x # Trailing
# Trailing
# Leading
lambda x: lambda y: lambda z: (x, y, z) # Trailing
# Trailing
# Leading
lambda x: lambda y: lambda z: (x, y, z) # Trailing
# Trailing
# Leading
lambda x: lambda y: lambda z: (
x,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
z,
) # Trailing
# Trailing
a = (
lambda: # Dangling
1
)
a = (
lambda
x, # Dangling
y: 1
)
# Regression test: lambda empty arguments ranges were too long, leading to unstable
# formatting
(
lambda: ( #
),
)
# lambda arguments don't have parentheses, so we never add a magic trailing comma ...
def f(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa: bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb = lambda x: y,
):
pass
# ...but we do preserve a trailing comma after the arguments
a = lambda b,: 0
lambda a,: 0
lambda *args,: 0
lambda **kwds,: 0
lambda a, *args,: 0
lambda a, **kwds,: 0
lambda *args, b,: 0
lambda *, b,: 0
lambda *args, **kwds,: 0
lambda a, *args, b,: 0
lambda a, *, b,: 0
lambda a, *args, **kwds,: 0
lambda *args, b, **kwds,: 0
lambda *, b, **kwds,: 0
lambda a, *args, b, **kwds,: 0
lambda a, *, b, **kwds,: 0
lambda a, /: a
lambda a, /, c: a
# Dangling comments without parameters.
(
lambda: # 3
None
)
(
lambda:
# 3
None
)
(
lambda: # 1
# 2
# 3
# 4
None # 5
)
(
lambda
# comment
*x: x
)
(
lambda
# comment
*x,
**y: x
)
(
lambda
# comment 1
# comment 2
*x:
# comment 3
x
)
(
lambda
# comment 1
# comment 2
*x,
**y:
# comment 3
x
)
(
lambda # comment 1
# comment 2
*x: # comment 3
x
)
(
lambda # comment 1
# comment 2
*x,
y: # comment 3
x
)
lambda *x: x
(
lambda
# comment
*x: x
)
lambda: ( # comment
x
)
(
lambda: # comment
x
)
(
lambda:
# comment
x
)
(
lambda: # comment
x
)
(
lambda:
# comment
x
)
(
lambda: # comment
( # comment
x
)
)
(
lambda # 1
# 2
x: # 3
# 4
# 5
# 6
x
)
(
lambda # 1
# 2
x, # 3
# 4
y: # 5
# 6
x
)
(
lambda
x,
# comment
y: z
)
# Leading
lambda x: (
lambda y: lambda z: x
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ y
+ z # Trailing
) # Trailing
# Leading
lambda x: lambda y: lambda z: [
x,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
y,
z,
] # Trailing
# Trailing
lambda self, araa, kkkwargs=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
*args, **kwargs
), e=1, f=2, g=2: d
# Regression tests for https://github.com/astral-sh/ruff/issues/8179
def a():
return b(
c,
d,
e,
f=lambda
self,
*args,
**kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs),
)
def a():
return b(
c,
d,
e,
f=lambda
self,
araa,
kkkwargs,
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,

View File

@@ -5,3 +5,5 @@ match 42:
case [[x] | [x]] | x: ...
match 42:
case [[x | x] | [x]] | x: ...
match 42:
case ast.Subscript(n, ast.Constant() | ast.Slice()) | ast.Attribute(n): ...

View File

@@ -1868,6 +1868,8 @@ impl<'a, Ctx: SemanticSyntaxContext> MatchPatternVisitor<'a, Ctx> {
// case [[x] | [x]] | x: ...
// match 42:
// case [[x | x] | [x]] | x: ...
// match 42:
// case ast.Subscript(n, ast.Constant() | ast.Slice()) | ast.Attribute(n): ...
SemanticSyntaxChecker::add_error(
self.ctx,
SemanticSyntaxErrorKind::DifferentMatchPatternBindings,
@@ -1875,7 +1877,7 @@ impl<'a, Ctx: SemanticSyntaxContext> MatchPatternVisitor<'a, Ctx> {
);
break;
}
self.names = visitor.names;
self.names.extend(visitor.names);
}
}
}

View File

@@ -8,7 +8,7 @@ input_file: crates/ruff_python_parser/resources/inline/ok/nested_alternative_pat
Module(
ModModule {
node_index: NodeIndex(None),
range: 0..181,
range: 0..271,
body: [
Match(
StmtMatch {
@@ -489,6 +489,216 @@ Module(
],
},
),
Match(
StmtMatch {
node_index: NodeIndex(None),
range: 181..270,
subject: NumberLiteral(
ExprNumberLiteral {
node_index: NodeIndex(None),
range: 187..189,
value: Int(
42,
),
},
),
cases: [
MatchCase {
range: 195..270,
node_index: NodeIndex(None),
pattern: MatchOr(
PatternMatchOr {
node_index: NodeIndex(None),
range: 200..265,
patterns: [
MatchClass(
PatternMatchClass {
node_index: NodeIndex(None),
range: 200..246,
cls: Attribute(
ExprAttribute {
node_index: NodeIndex(None),
range: 200..213,
value: Name(
ExprName {
node_index: NodeIndex(None),
range: 200..203,
id: Name("ast"),
ctx: Load,
},
),
attr: Identifier {
id: Name("Subscript"),
range: 204..213,
node_index: NodeIndex(None),
},
ctx: Load,
},
),
arguments: PatternArguments {
range: 213..246,
node_index: NodeIndex(None),
patterns: [
MatchAs(
PatternMatchAs {
node_index: NodeIndex(None),
range: 214..215,
pattern: None,
name: Some(
Identifier {
id: Name("n"),
range: 214..215,
node_index: NodeIndex(None),
},
),
},
),
MatchOr(
PatternMatchOr {
node_index: NodeIndex(None),
range: 217..245,
patterns: [
MatchClass(
PatternMatchClass {
node_index: NodeIndex(None),
range: 217..231,
cls: Attribute(
ExprAttribute {
node_index: NodeIndex(None),
range: 217..229,
value: Name(
ExprName {
node_index: NodeIndex(None),
range: 217..220,
id: Name("ast"),
ctx: Load,
},
),
attr: Identifier {
id: Name("Constant"),
range: 221..229,
node_index: NodeIndex(None),
},
ctx: Load,
},
),
arguments: PatternArguments {
range: 229..231,
node_index: NodeIndex(None),
patterns: [],
keywords: [],
},
},
),
MatchClass(
PatternMatchClass {
node_index: NodeIndex(None),
range: 234..245,
cls: Attribute(
ExprAttribute {
node_index: NodeIndex(None),
range: 234..243,
value: Name(
ExprName {
node_index: NodeIndex(None),
range: 234..237,
id: Name("ast"),
ctx: Load,
},
),
attr: Identifier {
id: Name("Slice"),
range: 238..243,
node_index: NodeIndex(None),
},
ctx: Load,
},
),
arguments: PatternArguments {
range: 243..245,
node_index: NodeIndex(None),
patterns: [],
keywords: [],
},
},
),
],
},
),
],
keywords: [],
},
},
),
MatchClass(
PatternMatchClass {
node_index: NodeIndex(None),
range: 249..265,
cls: Attribute(
ExprAttribute {
node_index: NodeIndex(None),
range: 249..262,
value: Name(
ExprName {
node_index: NodeIndex(None),
range: 249..252,
id: Name("ast"),
ctx: Load,
},
),
attr: Identifier {
id: Name("Attribute"),
range: 253..262,
node_index: NodeIndex(None),
},
ctx: Load,
},
),
arguments: PatternArguments {
range: 262..265,
node_index: NodeIndex(None),
patterns: [
MatchAs(
PatternMatchAs {
node_index: NodeIndex(None),
range: 263..264,
pattern: None,
name: Some(
Identifier {
id: Name("n"),
range: 263..264,
node_index: NodeIndex(None),
},
),
},
),
],
keywords: [],
},
},
),
],
},
),
guard: None,
body: [
Expr(
StmtExpr {
node_index: NodeIndex(None),
range: 267..270,
value: EllipsisLiteral(
ExprEllipsisLiteral {
node_index: NodeIndex(None),
range: 267..270,
},
),
},
),
],
},
],
},
),
],
},
)

View File

@@ -468,6 +468,62 @@ line-length = 500
"line-length must be between 1 and 320 (got 500)"
);
// Test value at u16::MAX boundary (65535) - should show range error
let invalid_line_length_65535 = toml::from_str::<Pyproject>(
r"
[tool.ruff]
line-length = 65535
",
)
.expect_err("Deserialization should have failed for line-length at u16::MAX");
assert_eq!(
invalid_line_length_65535.message(),
"line-length must be between 1 and 320 (got 65535)"
);
// Test value exceeding u16::MAX (65536) - should show clear error
let invalid_line_length_65536 = toml::from_str::<Pyproject>(
r"
[tool.ruff]
line-length = 65536
",
)
.expect_err("Deserialization should have failed for line-length exceeding u16::MAX");
assert_eq!(
invalid_line_length_65536.message(),
"line-length must be between 1 and 320 (got 65536)"
);
// Test value far exceeding u16::MAX (99_999) - should show clear error
let invalid_line_length_99999 = toml::from_str::<Pyproject>(
r"
[tool.ruff]
line-length = 99_999
",
)
.expect_err("Deserialization should have failed for line-length far exceeding u16::MAX");
assert_eq!(
invalid_line_length_99999.message(),
"line-length must be between 1 and 320 (got 99999)"
);
// Test negative value - should show clear error
let invalid_line_length_negative = toml::from_str::<Pyproject>(
r"
[tool.ruff]
line-length = -5
",
)
.expect_err("Deserialization should have failed for negative line-length");
assert_eq!(
invalid_line_length_negative.message(),
"line-length must be between 1 and 320 (got -5)"
);
Ok(())
}

crates/ty/docs/rules.md (generated)
View File

@@ -39,7 +39,7 @@ def test(): -> "int":
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20call-non-callable" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L118" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L120" target="_blank">View source</a>
</small>
@@ -63,7 +63,7 @@ Calling a non-callable object will raise a `TypeError` at runtime.
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20conflicting-argument-forms" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L162" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L164" target="_blank">View source</a>
</small>
@@ -95,7 +95,7 @@ f(int) # error
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20conflicting-declarations" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L188" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L190" target="_blank">View source</a>
</small>
@@ -126,7 +126,7 @@ a = 1
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20conflicting-metaclass" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L213" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L215" target="_blank">View source</a>
</small>
@@ -158,7 +158,7 @@ class C(A, B): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20cyclic-class-definition" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L239" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L241" target="_blank">View source</a>
</small>
@@ -190,7 +190,7 @@ class B(A): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20duplicate-base" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L304" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L306" target="_blank">View source</a>
</small>
@@ -217,7 +217,7 @@ class B(A, A): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.12">0.0.1-alpha.12</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20duplicate-kw-only" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L325" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L327" target="_blank">View source</a>
</small>
@@ -329,7 +329,7 @@ def test(): -> "Literal[5]":
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20inconsistent-mro" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L529" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L531" target="_blank">View source</a>
</small>
@@ -359,7 +359,7 @@ class C(A, B): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20index-out-of-bounds" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L553" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L555" target="_blank">View source</a>
</small>
@@ -385,7 +385,7 @@ t[3] # IndexError: tuple index out of range
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.12">0.0.1-alpha.12</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20instance-layout-conflict" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L357" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L359" target="_blank">View source</a>
</small>
@@ -474,7 +474,7 @@ an atypical memory layout.
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-argument-type" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L607" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L609" target="_blank">View source</a>
</small>
@@ -501,7 +501,7 @@ func("foo") # error: [invalid-argument-type]
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-assignment" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L647" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L649" target="_blank">View source</a>
</small>
@@ -529,7 +529,7 @@ a: int = ''
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-attribute-access" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1782" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1808" target="_blank">View source</a>
</small>
@@ -563,7 +563,7 @@ C.instance_var = 3 # error: Cannot assign to instance variable
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.19">0.0.1-alpha.19</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-await" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L669" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L671" target="_blank">View source</a>
</small>
@@ -599,7 +599,7 @@ asyncio.run(main())
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-base" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L699" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L701" target="_blank">View source</a>
</small>
@@ -623,7 +623,7 @@ class A(42): ... # error: [invalid-base]
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-context-manager" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L750" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L752" target="_blank">View source</a>
</small>
@@ -650,7 +650,7 @@ with 1:
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-declaration" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L771" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L773" target="_blank">View source</a>
</small>
@@ -679,7 +679,7 @@ a: str
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-exception-caught" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L794" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L796" target="_blank">View source</a>
</small>
@@ -723,7 +723,7 @@ except ZeroDivisionError:
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-generic-class" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L830" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L832" target="_blank">View source</a>
</small>
@@ -756,7 +756,7 @@ class C[U](Generic[T]): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.17">0.0.1-alpha.17</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-key" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L574" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L576" target="_blank">View source</a>
</small>
@@ -795,7 +795,7 @@ carol = Person(name="Carol", age=25) # typo!
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-legacy-type-variable" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L856" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L858" target="_blank">View source</a>
</small>
@@ -830,7 +830,7 @@ def f(t: TypeVar("U")): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-metaclass" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L929" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L955" target="_blank">View source</a>
</small>
@@ -864,7 +864,7 @@ class B(metaclass=f): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.19">0.0.1-alpha.19</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-named-tuple" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L503" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L505" target="_blank">View source</a>
</small>
@@ -890,13 +890,43 @@ in a class's bases list.
TypeError: can only inherit from a NamedTuple type and Generic
```
## `invalid-newtype`
<small>
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Preview (since <a href="https://github.com/astral-sh/ty/releases/tag/1.0.0">1.0.0</a>) ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-newtype" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L931" target="_blank">View source</a>
</small>
**What it does**
Checks for the creation of invalid `NewType`s.
**Why is this bad?**
There are several requirements that must be followed when creating a `NewType`. For example, the first argument must be a string literal, and the second argument must be a valid base type; a union, for instance, is not accepted as the base.
**Examples**
```python
from typing import NewType
def get_name() -> str: ...
Foo = NewType("Foo", int) # okay
Bar = NewType(get_name(), int) # error: The first argument to `NewType` must be a string literal
Baz = NewType("Baz", int | str) # error: invalid base for `typing.NewType`
```
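As a minimal sketch of why a well-formed `NewType` is useful (the `UserId` and `get_user` names here are purely illustrative):
```python
from typing import NewType

UserId = NewType("UserId", int)

def get_user(user_id: UserId) -> None: ...

get_user(UserId(42))  # okay: the underlying value is wrapped explicitly
get_user(42)          # error: a plain `int` is not a `UserId`
```
At runtime, `UserId(42)` is simply the underlying `int`; the distinction exists only for the type checker.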
## `invalid-overload`
<small>
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-overload" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L956" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L982" target="_blank">View source</a>
</small>
@@ -946,7 +976,7 @@ def foo(x: int) -> int: ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-parameter-default" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1055" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1081" target="_blank">View source</a>
</small>
@@ -972,7 +1002,7 @@ def f(a: int = ''): ...
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-paramspec" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L884" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L886" target="_blank">View source</a>
</small>
@@ -1003,7 +1033,7 @@ P2 = ParamSpec("S2") # error: ParamSpec name must match the variable it's assig
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-protocol" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L439" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L441" target="_blank">View source</a>
</small>
@@ -1037,7 +1067,7 @@ TypeError: Protocols can only inherit from other protocols, got <class 'int'>
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-raise" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1075" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1101" target="_blank">View source</a>
</small>
@@ -1086,7 +1116,7 @@ def g():
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-return-type" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L628" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L630" target="_blank">View source</a>
</small>
@@ -1111,7 +1141,7 @@ def func() -> int:
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-super-argument" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1118" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1144" target="_blank">View source</a>
</small>
@@ -1169,7 +1199,7 @@ TODO #14889
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.6">0.0.1-alpha.6</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-alias-type" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L908" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L910" target="_blank">View source</a>
</small>
@@ -1196,7 +1226,7 @@ NewAlias = TypeAliasType(get_name(), int) # error: TypeAliasType name mus
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-checking-constant" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1157" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1183" target="_blank">View source</a>
</small>
@@ -1226,7 +1256,7 @@ TYPE_CHECKING = ''
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-form" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1181" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1207" target="_blank">View source</a>
</small>
@@ -1256,7 +1286,7 @@ b: Annotated[int] # `Annotated` expects at least two arguments
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.11">0.0.1-alpha.11</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-guard-call" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1233" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1259" target="_blank">View source</a>
</small>
@@ -1290,7 +1320,7 @@ f(10) # Error
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.11">0.0.1-alpha.11</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-guard-definition" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1205" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1231" target="_blank">View source</a>
</small>
@@ -1324,7 +1354,7 @@ class C:
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-variable-constraints" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1261" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1287" target="_blank">View source</a>
</small>
@@ -1359,7 +1389,7 @@ T = TypeVar('T', bound=str) # valid bound TypeVar
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20missing-argument" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1290" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1316" target="_blank">View source</a>
</small>
@@ -1384,7 +1414,7 @@ func() # TypeError: func() missing 1 required positional argument: 'x'
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.20">0.0.1-alpha.20</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20missing-typed-dict-key" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1883" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1909" target="_blank">View source</a>
</small>
@@ -1417,7 +1447,7 @@ alice["age"] # KeyError
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20no-matching-overload" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1309" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1335" target="_blank">View source</a>
</small>
@@ -1446,7 +1476,7 @@ func("string") # error: [no-matching-overload]
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20non-subscriptable" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1332" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1358" target="_blank">View source</a>
</small>
@@ -1470,7 +1500,7 @@ Subscripting an object that does not support it will raise a `TypeError` at runt
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20not-iterable" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1350" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1376" target="_blank">View source</a>
</small>
@@ -1496,7 +1526,7 @@ for i in 34: # TypeError: 'int' object is not iterable
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20parameter-already-assigned" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1401" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1427" target="_blank">View source</a>
</small>
@@ -1523,7 +1553,7 @@ f(1, x=2) # Error raised here
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.22">0.0.1-alpha.22</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20positional-only-parameter-as-kwarg" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1636" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1662" target="_blank">View source</a>
</small>
@@ -1581,7 +1611,7 @@ def test(): -> "int":
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20static-assert-error" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1758" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1784" target="_blank">View source</a>
</small>
@@ -1611,7 +1641,7 @@ static_assert(int(2.0 * 3.0) == 6) # error: does not have a statically known tr
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20subclass-of-final-class" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1492" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1518" target="_blank">View source</a>
</small>
@@ -1640,7 +1670,7 @@ class B(A): ... # Error raised here
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20too-many-positional-arguments" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1537" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1563" target="_blank">View source</a>
</small>
@@ -1667,7 +1697,7 @@ f("foo") # Error raised here
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20type-assertion-failure" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1515" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1541" target="_blank">View source</a>
</small>
@@ -1695,7 +1725,7 @@ def _(x: int):
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unavailable-implicit-super-arguments" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1558" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1584" target="_blank">View source</a>
</small>
@@ -1741,7 +1771,7 @@ class A:
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unknown-argument" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1615" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1641" target="_blank">View source</a>
</small>
@@ -1768,7 +1798,7 @@ f(x=1, y=2) # Error raised here
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unresolved-attribute" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1657" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1683" target="_blank">View source</a>
</small>
@@ -1796,7 +1826,7 @@ A().foo # AttributeError: 'A' object has no attribute 'foo'
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unresolved-import" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1679" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1705" target="_blank">View source</a>
</small>
@@ -1821,7 +1851,7 @@ import foo # ModuleNotFoundError: No module named 'foo'
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unresolved-reference" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1698" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1724" target="_blank">View source</a>
</small>
@@ -1846,7 +1876,7 @@ print(x) # NameError: name 'x' is not defined
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unsupported-bool-conversion" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1370" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1396" target="_blank">View source</a>
</small>
@@ -1883,7 +1913,7 @@ b1 < b2 < b1 # exception raised here
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unsupported-operator" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1717" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1743" target="_blank">View source</a>
</small>
@@ -1911,7 +1941,7 @@ A() + A() # TypeError: unsupported operand type(s) for +: 'A' and 'A'
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'error'."><code>error</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20zero-stepsize-in-slice" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1739" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1765" target="_blank">View source</a>
</small>
@@ -1936,7 +1966,7 @@ l[1:10:0] # ValueError: slice step cannot be zero
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.20">0.0.1-alpha.20</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20ambiguous-protocol-member" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L468" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L470" target="_blank">View source</a>
</small>
@@ -1977,7 +2007,7 @@ class SubProto(BaseProto, Protocol):
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.16">0.0.1-alpha.16</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20deprecated" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L283" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L285" target="_blank">View source</a>
</small>
@@ -2065,7 +2095,7 @@ a = 20 / 0 # type: ignore
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.22">0.0.1-alpha.22</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-missing-attribute" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1422" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1448" target="_blank">View source</a>
</small>
@@ -2093,7 +2123,7 @@ A.c # AttributeError: type object 'A' has no attribute 'c'
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.22">0.0.1-alpha.22</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-missing-implicit-call" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L136" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L138" target="_blank">View source</a>
</small>
@@ -2125,7 +2155,7 @@ A()[0] # TypeError: 'A' object is not subscriptable
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.22">0.0.1-alpha.22</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-missing-import" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1444" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1470" target="_blank">View source</a>
</small>
@@ -2157,7 +2187,7 @@ from module import a # ImportError: cannot import name 'a' from 'module'
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20redundant-cast" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1810" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1836" target="_blank">View source</a>
</small>
@@ -2184,7 +2214,7 @@ cast(int, f()) # Redundant
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20undefined-reveal" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1597" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1623" target="_blank">View source</a>
</small>
@@ -2208,7 +2238,7 @@ reveal_type(1) # NameError: name 'reveal_type' is not defined
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.15">0.0.1-alpha.15</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unresolved-global" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1831" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1857" target="_blank">View source</a>
</small>
@@ -2266,7 +2296,7 @@ def g():
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.7">0.0.1-alpha.7</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unsupported-base" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L717" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L719" target="_blank">View source</a>
</small>
@@ -2305,7 +2335,7 @@ class D(C): ... # error: [unsupported-base]
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.22">0.0.1-alpha.22</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20useless-overload-body" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L999" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1025" target="_blank">View source</a>
</small>
@@ -2368,7 +2398,7 @@ def foo(x: int | str) -> int | str:
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'ignore'."><code>ignore</code></a> ·
Preview (since <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a>) ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20division-by-zero" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L265" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L267" target="_blank">View source</a>
</small>
@@ -2392,7 +2422,7 @@ Dividing by zero raises a `ZeroDivisionError` at runtime.
Default level: <a href="../rules.md#rule-levels" title="This lint has a default level of 'ignore'."><code>ignore</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-unresolved-reference" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1470" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1496" target="_blank">View source</a>
</small>

View File

@@ -1,5 +1,7 @@
use crate::logging::Verbosity;
use crate::python_version::PythonVersion;
use clap::builder::Styles;
use clap::builder::styling::{AnsiColor, Effects};
use clap::error::ErrorKind;
use clap::{ArgAction, ArgMatches, Error, Parser};
use ruff_db::system::SystemPathBuf;
@@ -8,9 +10,17 @@ use ty_project::metadata::options::{EnvironmentOptions, Options, SrcOptions, Ter
use ty_project::metadata::value::{RangedValue, RelativeGlobPattern, RelativePathBuf, ValueSource};
use ty_python_semantic::lint;
// Configures Clap v3-style help menu colors
const STYLES: Styles = Styles::styled()
.header(AnsiColor::Green.on_default().effects(Effects::BOLD))
.usage(AnsiColor::Green.on_default().effects(Effects::BOLD))
.literal(AnsiColor::Cyan.on_default().effects(Effects::BOLD))
.placeholder(AnsiColor::Cyan.on_default());
#[derive(Debug, Parser)]
#[command(author, name = "ty", about = "An extremely fast Python type checker.")]
#[command(long_version = crate::version::version())]
#[command(styles = STYLES)]
pub struct Cli {
#[command(subcommand)]
pub(crate) command: Command,

View File

@@ -589,6 +589,81 @@ fn explicit_path_overrides_exclude() -> anyhow::Result<()> {
Ok(())
}
#[test]
fn cli_and_configuration_exclude() -> anyhow::Result<()> {
let case = CliTest::with_files([
(
"src/main.py",
r#"
print(undefined_var) # error: unresolved-reference
"#,
),
(
"tests/generated.py",
r#"
print(dist_undefined_var) # error: unresolved-reference
"#,
),
(
"my_dist/other.py",
r#"
print(other_undefined_var) # error: unresolved-reference
"#,
),
(
"ty.toml",
r#"
[src]
exclude = ["tests/"]
"#,
),
])?;
assert_cmd_snapshot!(case.command(), @r"
success: false
exit_code: 1
----- stdout -----
error[unresolved-reference]: Name `other_undefined_var` used when not defined
--> my_dist/other.py:2:7
|
2 | print(other_undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
error[unresolved-reference]: Name `undefined_var` used when not defined
--> src/main.py:2:7
|
2 | print(undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
Found 2 diagnostics
----- stderr -----
");
assert_cmd_snapshot!(case.command().arg("--exclude").arg("my_dist/"), @r"
success: false
exit_code: 1
----- stdout -----
error[unresolved-reference]: Name `undefined_var` used when not defined
--> src/main.py:2:7
|
2 | print(undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
Found 1 diagnostic
----- stderr -----
");
Ok(())
}
#[test]
fn invalid_include_pattern() -> anyhow::Result<()> {
let case = CliTest::with_files([

View File

@@ -127,7 +127,8 @@ impl<'db> Completion<'db> {
Type::NominalInstance(_)
| Type::PropertyInstance(_)
| Type::BoundSuper(_)
| Type::TypedDict(_) => CompletionKind::Struct,
| Type::TypedDict(_)
| Type::NewTypeInstance(_) => CompletionKind::Struct,
Type::IntLiteral(_)
| Type::BooleanLiteral(_)
| Type::TypeIs(_)
@@ -160,6 +161,20 @@ impl<'db> Completion<'db> {
.and_then(|ty| imp(db, ty, &CompletionKindVisitor::default()))
})
}
fn keyword(name: &str) -> Self {
Completion {
name: name.into(),
insert: None,
ty: None,
kind: Some(CompletionKind::Keyword),
module_name: None,
import: None,
builtin: false,
is_type_check_only: false,
documentation: None,
}
}
}
/// The "kind" of a completion.
@@ -212,14 +227,16 @@ pub fn completion<'db>(
offset: TextSize,
) -> Vec<Completion<'db>> {
let parsed = parsed_module(db, file).load(db);
let tokens = tokens_start_before(parsed.tokens(), offset);
let typed = find_typed_text(db, file, &parsed, offset);
if is_in_comment(tokens) || is_in_string(tokens) || is_in_definition_place(db, tokens, file) {
if is_in_no_completions_place(db, tokens, file) {
return vec![];
}
if let Some(completions) = only_keyword_completion(tokens, typed.as_deref()) {
return vec![completions];
}
let typed = find_typed_text(db, file, &parsed, offset);
let typed_query = typed
.as_deref()
.map(QueryPattern::new)
@@ -309,6 +326,17 @@ fn add_keyword_value_completions<'db>(
}
}
/// When the tokens indicate that the last token should be precisely one
/// possible keyword, we provide a single completion for it.
///
/// `typed` should be the text that we think the user has typed so far.
fn only_keyword_completion<'db>(tokens: &[Token], typed: Option<&str>) -> Option<Completion<'db>> {
if is_import_from_incomplete(tokens, typed) {
return Some(Completion::keyword("import"));
}
None
}
/// Adds completions not in scope.
///
/// `scoped` should be information about the identified scope
@@ -801,6 +829,67 @@ fn import_tokens(tokens: &[Token]) -> Option<(&Token, &Token)> {
None
}
/// Looks for the start of a `from module <CURSOR>` statement.
///
/// If found, `true` is returned.
///
/// `typed` should be the text that we think the user has typed so far.
fn is_import_from_incomplete(tokens: &[Token], typed: Option<&str>) -> bool {
// N.B. The implementation here is very similar to
// `from_import_tokens`. The main difference is that
// we're just looking for whether we should suggest
// the `import` keyword. So this is a little simpler.
use TokenKind as TK;
const LIMIT: usize = 1_000;
/// A state used to "parse" the tokens preceding the user's cursor,
/// in reverse, to detect a "from import" statement.
enum S {
Start,
ImportKeyword,
ModulePossiblyDotted,
ModuleOnlyDotted,
}
let mut state = S::Start;
if typed.is_none() {
state = S::ImportKeyword;
}
// Move backward through the tokens until we get to
// the `from` token.
for token in tokens.iter().rev().take(LIMIT) {
state = match (state, token.kind()) {
// Match an incomplete `import` keyword.
//
// It's okay to pop off a newline token here initially,
// since it may occur before the user starts typing
// `import` but after the module name.
(S::Start, TK::Newline | TK::Name | TK::Import) => S::ImportKeyword,
// We are a bit more careful with how we parse the module
// here than in `from_import_tokens`. In particular, we
// want to make sure we don't incorrectly suggest `import`
// for `from os.i<CURSOR>`. If we aren't careful, then
// `i` could be considered an incomplete `import` keyword
// and `os.` is the module. But of course, ending with a
// `.` (unless the entire module is dots) is invalid.
(S::ImportKeyword, TK::Dot | TK::Ellipsis) => S::ModuleOnlyDotted,
(S::ImportKeyword, TK::Name | TK::Case | TK::Match | TK::Type | TK::Unknown) => {
S::ModulePossiblyDotted
}
(S::ModuleOnlyDotted, TK::Dot | TK::Ellipsis) => S::ModuleOnlyDotted,
(
S::ModulePossiblyDotted,
TK::Name | TK::Dot | TK::Ellipsis | TK::Case | TK::Match | TK::Type | TK::Unknown,
) => S::ModulePossiblyDotted,
(S::ModulePossiblyDotted | S::ModuleOnlyDotted, TK::From) => return true,
_ => return false,
};
}
false
}
/// Looks for the text typed immediately before the cursor offset
/// given.
///
@@ -815,7 +904,10 @@ fn find_typed_text(
let source = source_text(db, file);
let tokens = tokens_start_before(parsed.tokens(), offset);
let last = tokens.last()?;
if !matches!(last.kind(), TokenKind::Name) {
// It's odd to include `TokenKind::Import` here, but it
// indicates that the user has typed `import`. This is
// useful to know in some contexts.
if !matches!(last.kind(), TokenKind::Name | TokenKind::Import) {
return None;
}
// This one's weird, but if the cursor is beyond
@@ -830,6 +922,11 @@ fn find_typed_text(
Some(source[last.range()].to_string())
}
/// Whether the last token is in a place where we should not provide completions.
fn is_in_no_completions_place(db: &dyn Db, tokens: &[Token], file: File) -> bool {
is_in_comment(tokens) || is_in_string(tokens) || is_in_definition_place(db, tokens, file)
}
/// Whether the last token is within a comment or not.
fn is_in_comment(tokens: &[Token]) -> bool {
tokens.last().is_some_and(|t| t.kind().is_comment())
@@ -4216,6 +4313,138 @@ type <CURSOR>
");
}
#[test]
fn from_import_i_suggests_import() {
let builder = completion_test_builder("from typing i<CURSOR>");
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_import_suggests_import() {
let builder = completion_test_builder("from typing import<CURSOR>");
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_importt_suggests_import() {
let builder = completion_test_builder("from typing importt<CURSOR>");
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_space_suggests_import() {
let builder = completion_test_builder("from typing <CURSOR>");
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_no_space_not_suggests_import() {
let builder = completion_test_builder("from typing<CURSOR>");
assert_snapshot!(builder.build().snapshot(), @r"
typing
typing_extensions
");
}
#[test]
fn from_import_two_imports_suggests_import() {
let builder = completion_test_builder(
"from collections.abc import Sequence
from typing i<CURSOR>",
);
assert_snapshot!(builder.build().snapshot(), @"import");
}
/// The following behaviour may not be reflected in editors, since LSP
/// clients may do their own filtering of completion suggestions.
#[test]
fn from_import_random_name_suggests_import() {
let builder = completion_test_builder("from typing aa<CURSOR>");
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_dotted_name_suggests_import() {
let builder = completion_test_builder("from collections.abc i<CURSOR>");
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_relative_import_suggests_import() {
let builder = CursorTest::builder()
.source("main.py", "from .foo i<CURSOR>")
.source("foo.py", "")
.completion_test_builder();
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_dotted_name_relative_import_suggests_import() {
let builder = CursorTest::builder()
.source("main.py", "from .foo.bar i<CURSOR>")
.source("foo/bar.py", "")
.completion_test_builder();
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_nested_dotted_name_relative_import_suggests_import() {
let builder = CursorTest::builder()
.source("src/main.py", "from ..foo i<CURSOR>")
.source("foo.py", "")
.completion_test_builder();
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_nested_very_dotted_name_relative_import_suggests_import() {
let builder = CursorTest::builder()
// N.B. the `...` tokenizes as `TokenKind::Ellipsis`
.source("src/main.py", "from ...foo i<CURSOR>")
.source("foo.py", "")
.completion_test_builder();
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_only_dot() {
let builder = CursorTest::builder()
.source(
"main.py",
"
import_zqzqzq = 1
from .<CURSOR>
",
)
.completion_test_builder();
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_only_dot_incomplete() {
let builder = CursorTest::builder()
.source(
"main.py",
"
import_zqzqzq = 1
from .imp<CURSOR>
",
)
.completion_test_builder();
assert_snapshot!(builder.build().snapshot(), @"import");
}
#[test]
fn from_import_incomplete() {
let builder = completion_test_builder(
"from collections.abc i
ZQZQZQ = 1
ZQ<CURSOR>",
);
assert_snapshot!(builder.build().snapshot(), @"ZQZQZQ");
}
/// A way to create a simple single-file (named `main.py`) completion test
/// builder.
///

View File

@@ -209,16 +209,11 @@ impl<'db> DefinitionsOrTargets<'db> {
ty_python_semantic::types::TypeDefinition::Module(module) => {
ResolvedDefinition::Module(module.file(db)?)
}
ty_python_semantic::types::TypeDefinition::Class(definition) => {
ResolvedDefinition::Definition(definition)
}
ty_python_semantic::types::TypeDefinition::Function(definition) => {
ResolvedDefinition::Definition(definition)
}
ty_python_semantic::types::TypeDefinition::TypeVar(definition) => {
ResolvedDefinition::Definition(definition)
}
ty_python_semantic::types::TypeDefinition::TypeAlias(definition) => {
ty_python_semantic::types::TypeDefinition::Class(definition)
| ty_python_semantic::types::TypeDefinition::Function(definition)
| ty_python_semantic::types::TypeDefinition::TypeVar(definition)
| ty_python_semantic::types::TypeDefinition::TypeAlias(definition)
| ty_python_semantic::types::TypeDefinition::NewType(definition) => {
ResolvedDefinition::Definition(definition)
}
};

View File

@@ -4,7 +4,7 @@ use crate::Db;
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_python_ast::visitor::source_order::{self, SourceOrderVisitor, TraversalSignal};
use ruff_python_ast::{AnyNodeRef, Expr, Stmt};
use ruff_python_ast::{AnyNodeRef, ArgOrKeyword, Expr, ExprUnaryOp, Stmt, UnaryOp};
use ruff_text_size::{Ranged, TextRange, TextSize};
use ty_python_semantic::types::Type;
use ty_python_semantic::types::ide_support::inlay_hint_function_argument_details;
@@ -231,7 +231,7 @@ impl SourceOrderVisitor<'_> for InlayHintVisitor<'_, '_> {
match stmt {
Stmt::Assign(assign) => {
self.in_assignment = true;
self.in_assignment = !type_hint_is_excessive_for_expr(&assign.value);
for target in &assign.targets {
self.visit_expr(target);
}
@@ -283,7 +283,9 @@ impl SourceOrderVisitor<'_> for InlayHintVisitor<'_, '_> {
self.visit_expr(&call.func);
for (index, arg_or_keyword) in call.arguments.arguments_source_order().enumerate() {
if let Some(name) = argument_names.get(&index) {
if let Some(name) = argument_names.get(&index)
&& !arg_matches_name(&arg_or_keyword, name)
{
self.add_call_argument_name(arg_or_keyword.range().start(), name);
}
self.visit_expr(arg_or_keyword.value());
@@ -296,6 +298,61 @@ impl SourceOrderVisitor<'_> for InlayHintVisitor<'_, '_> {
}
}
/// Given a positional argument, check if the expression is the "same name"
/// as the function argument itself.
///
/// This allows us to filter out repetitive inlay hints like `x=x`, `x=y.x`, etc.
fn arg_matches_name(arg_or_keyword: &ArgOrKeyword, name: &str) -> bool {
// Only care about positional args
let ArgOrKeyword::Arg(arg) = arg_or_keyword else {
return false;
};
let mut expr = *arg;
loop {
match expr {
// `x=x(1, 2)` counts as a match, recurse for it
Expr::Call(expr_call) => expr = &expr_call.func,
// `x=x[0]` is a match, recurse for it
Expr::Subscript(expr_subscript) => expr = &expr_subscript.value,
// `x=x` is a match
Expr::Name(expr_name) => return expr_name.id.as_str() == name,
// `x=y.x` is a match
Expr::Attribute(expr_attribute) => return expr_attribute.attr.as_str() == name,
_ => return false,
}
}
}
/// Given an expression that's the RHS of an assignment, would it be excessive to
/// emit an inlay type hint for the variable assigned to it?
///
/// This is used to suppress inlay hints for things like `x = 1`, `x, y = (1, 2)`, etc.
fn type_hint_is_excessive_for_expr(expr: &Expr) -> bool {
match expr {
// A tuple of all literals is excessive to typehint
Expr::Tuple(expr_tuple) => expr_tuple.elts.iter().all(type_hint_is_excessive_for_expr),
// Various Literal[...] types which are always excessive to hint
| Expr::BytesLiteral(_)
| Expr::NumberLiteral(_)
| Expr::BooleanLiteral(_)
| Expr::StringLiteral(_)
// `None` isn't terribly verbose, but still redundant
| Expr::NoneLiteral(_)
// This one expands to `str` which isn't verbose but is redundant
| Expr::FString(_)
// This one expands to `Template` which isn't verbose but is redundant
| Expr::TString(_) => true,
// You too, `+1` and `-1`, get back here
Expr::UnaryOp(ExprUnaryOp { op: UnaryOp::UAdd | UnaryOp::USub, operand, .. }) => matches!(**operand, Expr::NumberLiteral(_)),
// Everything else is reasonable
_ => false,
}
}
#[cfg(test)]
mod tests {
use super::*;
@@ -387,47 +444,183 @@ mod tests {
#[test]
fn test_assign_statement() {
let test = inlay_hint_test("x = 1");
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
x = 1
y = x
z = i(1)
w = z
",
);
assert_snapshot!(test.inlay_hints(), @r"
x[: Literal[1]] = 1
def i(x: int, /) -> int:
return x
x = 1
y[: Literal[1]] = x
z[: int] = i(1)
w[: int] = z
");
}
#[test]
fn test_tuple_assignment() {
let test = inlay_hint_test("x, y = (1, 'abc')");
fn test_unpacked_tuple_assignment() {
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x1, y1 = (1, 'abc')
x2, y2 = (x1, y1)
x3, y3 = (i(1), s('abc'))
x4, y4 = (x3, y3)
",
);
assert_snapshot!(test.inlay_hints(), @r#"
x[: Literal[1]], y[: Literal["abc"]] = (1, 'abc')
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x1, y1 = (1, 'abc')
x2[: Literal[1]], y2[: Literal["abc"]] = (x1, y1)
x3[: int], y3[: str] = (i(1), s('abc'))
x4[: int], y4[: str] = (x3, y3)
"#);
}
#[test]
fn test_multiple_assignment() {
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x1, y1 = 1, 'abc'
x2, y2 = x1, y1
x3, y3 = i(1), s('abc')
x4, y4 = x3, y3
",
);
assert_snapshot!(test.inlay_hints(), @r#"
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x1, y1 = 1, 'abc'
x2[: Literal[1]], y2[: Literal["abc"]] = x1, y1
x3[: int], y3[: str] = i(1), s('abc')
x4[: int], y4[: str] = x3, y3
"#);
}
#[test]
fn test_tuple_assignment() {
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x = (1, 'abc')
y = x
z = (i(1), s('abc'))
w = z
",
);
assert_snapshot!(test.inlay_hints(), @r#"
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x = (1, 'abc')
y[: tuple[Literal[1], Literal["abc"]]] = x
z[: tuple[int, str]] = (i(1), s('abc'))
w[: tuple[int, str]] = z
"#);
}
#[test]
fn test_nested_tuple_assignment() {
let test = inlay_hint_test("x, (y, z) = (1, ('abc', 2))");
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x1, (y1, z1) = (1, ('abc', 2))
x2, (y2, z2) = (x1, (y1, z1))
x3, (y3, z3) = (i(1), (s('abc'), i(2)))
x4, (y4, z4) = (x3, (y3, z3))",
);
assert_snapshot!(test.inlay_hints(), @r#"
x[: Literal[1]], (y[: Literal["abc"]], z[: Literal[2]]) = (1, ('abc', 2))
def i(x: int, /) -> int:
return x
def s(x: str, /) -> str:
return x
x1, (y1, z1) = (1, ('abc', 2))
x2[: Literal[1]], (y2[: Literal["abc"]], z2[: Literal[2]]) = (x1, (y1, z1))
x3[: int], (y3[: str], z3[: int]) = (i(1), (s('abc'), i(2)))
x4[: int], (y4[: str], z4[: int]) = (x3, (y3, z3))
"#);
}
#[test]
fn test_assign_statement_with_type_annotation() {
let test = inlay_hint_test("x: int = 1");
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
x: int = 1
y = x
z: int = i(1)
w = z",
);
assert_snapshot!(test.inlay_hints(), @r"
def i(x: int, /) -> int:
return x
x: int = 1
y[: Literal[1]] = x
z: int = i(1)
w[: int] = z
");
}
#[test]
fn test_assign_statement_out_of_range() {
let test = inlay_hint_test("<START>x = 1<END>\ny = 2");
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
<START>x = i(1)<END>
z = x",
);
assert_snapshot!(test.inlay_hints(), @r"
x[: Literal[1]] = 1
y = 2
def i(x: int, /) -> int:
return x
x[: int] = i(1)
z = x
");
}
@@ -437,28 +630,256 @@ mod tests {
"
class A:
def __init__(self, y):
self.x = 1
self.x = int(1)
self.y = y
a = A(2)
a.y = 3
a.y = int(3)
",
);
assert_snapshot!(test.inlay_hints(), @r"
class A:
def __init__(self, y):
self.x[: Literal[1]] = 1
self.x[: int] = int(1)
self.y[: Unknown] = y
a[: A] = A([y=]2)
a.y[: Literal[3]] = 3
a.y[: int] = int(3)
");
}
#[test]
fn test_many_literals() {
let test = inlay_hint_test(
r#"
a = 1
b = 1.0
c = True
d = None
e = "hello"
f = 'there'
g = f"{e} {f}"
h = t"wow %d"
i = b'\x00'
j = +1
k = -1.0
"#,
);
assert_snapshot!(test.inlay_hints(), @r#"
a = 1
b = 1.0
c = True
d = None
e = "hello"
f = 'there'
g = f"{e} {f}"
h = t"wow %d"
i = b'\x00'
j = +1
k = -1.0
"#);
}
#[test]
fn test_many_literals_tuple() {
let test = inlay_hint_test(
r#"
a = (1, 2)
b = (1.0, 2.0)
c = (True, False)
d = (None, None)
e = ("hel", "lo")
f = ('the', 're')
g = (f"{ft}", f"{ft}")
h = (t"wow %d", t"wow %d")
i = (b'\x01', b'\x02')
j = (+1, +2.0)
k = (-1, -2.0)
"#,
);
assert_snapshot!(test.inlay_hints(), @r#"
a = (1, 2)
b = (1.0, 2.0)
c = (True, False)
d = (None, None)
e = ("hel", "lo")
f = ('the', 're')
g = (f"{ft}", f"{ft}")
h = (t"wow %d", t"wow %d")
i = (b'\x01', b'\x02')
j = (+1, +2.0)
k = (-1, -2.0)
"#);
}
#[test]
fn test_many_literals_unpacked_tuple() {
let test = inlay_hint_test(
r#"
a1, a2 = (1, 2)
b1, b2 = (1.0, 2.0)
c1, c2 = (True, False)
d1, d2 = (None, None)
e1, e2 = ("hel", "lo")
f1, f2 = ('the', 're')
g1, g2 = (f"{ft}", f"{ft}")
h1, h2 = (t"wow %d", t"wow %d")
i1, i2 = (b'\x01', b'\x02')
j1, j2 = (+1, +2.0)
k1, k2 = (-1, -2.0)
"#,
);
assert_snapshot!(test.inlay_hints(), @r#"
a1, a2 = (1, 2)
b1, b2 = (1.0, 2.0)
c1, c2 = (True, False)
d1, d2 = (None, None)
e1, e2 = ("hel", "lo")
f1, f2 = ('the', 're')
g1, g2 = (f"{ft}", f"{ft}")
h1, h2 = (t"wow %d", t"wow %d")
i1, i2 = (b'\x01', b'\x02')
j1, j2 = (+1, +2.0)
k1, k2 = (-1, -2.0)
"#);
}
#[test]
fn test_many_literals_multiple() {
let test = inlay_hint_test(
r#"
a1, a2 = 1, 2
b1, b2 = 1.0, 2.0
c1, c2 = True, False
d1, d2 = None, None
e1, e2 = "hel", "lo"
f1, f2 = 'the', 're'
g1, g2 = f"{ft}", f"{ft}"
h1, h2 = t"wow %d", t"wow %d"
i1, i2 = b'\x01', b'\x02'
j1, j2 = +1, +2.0
k1, k2 = -1, -2.0
"#,
);
assert_snapshot!(test.inlay_hints(), @r#"
a1, a2 = 1, 2
b1, b2 = 1.0, 2.0
c1, c2 = True, False
d1, d2 = None, None
e1, e2 = "hel", "lo"
f1, f2 = 'the', 're'
g1, g2 = f"{ft}", f"{ft}"
h1, h2 = t"wow %d", t"wow %d"
i1, i2 = b'\x01', b'\x02'
j1, j2 = +1, +2.0
k1, k2 = -1, -2.0
"#);
}
#[test]
fn test_many_literals_list() {
let test = inlay_hint_test(
r#"
a = [1, 2]
b = [1.0, 2.0]
c = [True, False]
d = [None, None]
e = ["hel", "lo"]
f = ['the', 're']
g = [f"{ft}", f"{ft}"]
h = [t"wow %d", t"wow %d"]
i = [b'\x01', b'\x02']
j = [+1, +2.0]
k = [-1, -2.0]
"#,
);
assert_snapshot!(test.inlay_hints(), @r#"
a[: list[Unknown | int]] = [1, 2]
b[: list[Unknown | float]] = [1.0, 2.0]
c[: list[Unknown | bool]] = [True, False]
d[: list[Unknown | None]] = [None, None]
e[: list[Unknown | str]] = ["hel", "lo"]
f[: list[Unknown | str]] = ['the', 're']
g[: list[Unknown | str]] = [f"{ft}", f"{ft}"]
h[: list[Unknown | Template]] = [t"wow %d", t"wow %d"]
i[: list[Unknown | bytes]] = [b'\x01', b'\x02']
j[: list[Unknown | int | float]] = [+1, +2.0]
k[: list[Unknown | int | float]] = [-1, -2.0]
"#);
}
#[test]
fn test_simple_init_call() {
let test = inlay_hint_test(
r#"
class MyClass:
def __init__(self):
self.x: int = 1
x = MyClass()
y = (MyClass(), MyClass())
a, b = MyClass(), MyClass()
c, d = (MyClass(), MyClass())
"#,
);
assert_snapshot!(test.inlay_hints(), @r"
class MyClass:
def __init__(self):
self.x: int = 1
x[: MyClass] = MyClass()
y[: tuple[MyClass, MyClass]] = (MyClass(), MyClass())
a[: MyClass], b[: MyClass] = MyClass(), MyClass()
c[: MyClass], d[: MyClass] = (MyClass(), MyClass())
");
}
#[test]
fn test_generic_init_call() {
let test = inlay_hint_test(
r#"
class MyClass[T, U]:
def __init__(self, x: list[T], y: tuple[U, U]):
self.x = x
self.y = y
x = MyClass([42], ("a", "b"))
y = (MyClass([42], ("a", "b")), MyClass([42], ("a", "b")))
a, b = MyClass([42], ("a", "b")), MyClass([42], ("a", "b"))
c, d = (MyClass([42], ("a", "b")), MyClass([42], ("a", "b")))
"#,
);
assert_snapshot!(test.inlay_hints(), @r#"
class MyClass[T, U]:
def __init__(self, x: list[T], y: tuple[U, U]):
self.x[: list[T@MyClass]] = x
self.y[: tuple[U@MyClass, U@MyClass]] = y
x[: MyClass[Unknown | int, str]] = MyClass([x=][42], [y=]("a", "b"))
y[: tuple[MyClass[Unknown | int, str], MyClass[Unknown | int, str]]] = (MyClass([x=][42], [y=]("a", "b")), MyClass([x=][42], [y=]("a", "b")))
a[: MyClass[Unknown | int, str]], b[: MyClass[Unknown | int, str]] = MyClass([x=][42], [y=]("a", "b")), MyClass([x=][42], [y=]("a", "b"))
c[: MyClass[Unknown | int, str]], d[: MyClass[Unknown | int, str]] = (MyClass([x=][42], [y=]("a", "b")), MyClass([x=][42], [y=]("a", "b")))
"#);
}
#[test]
fn test_disabled_variable_types() {
let test = inlay_hint_test("x = 1");
let test = inlay_hint_test(
"
def i(x: int, /) -> int:
return x
x = i(1)
",
);
assert_snapshot!(
test.inlay_hints_with_settings(&InlayHintSettings {
@@ -466,7 +887,10 @@ mod tests {
..Default::default()
}),
@r"
x = 1
def i(x: int, /) -> int:
return x
x = i(1)
"
);
}
@@ -485,6 +909,173 @@ mod tests {
");
}
#[test]
fn test_function_call_with_positional_or_keyword_parameter_redundant_name() {
let test = inlay_hint_test(
"
def foo(x: int): pass
x = 1
y = 2
foo(x)
foo(y)",
);
assert_snapshot!(test.inlay_hints(), @r"
def foo(x: int): pass
x = 1
y = 2
foo(x)
foo([x=]y)
");
}
#[test]
fn test_function_call_with_positional_or_keyword_parameter_redundant_attribute() {
let test = inlay_hint_test(
"
def foo(x: int): pass
class MyClass:
def __init__(self):
self.x: int = 1
self.y: int = 2
val = MyClass()
foo(val.x)
foo(val.y)",
);
assert_snapshot!(test.inlay_hints(), @r"
def foo(x: int): pass
class MyClass:
def __init__(self):
self.x: int = 1
self.y: int = 2
val[: MyClass] = MyClass()
foo(val.x)
foo([x=]val.y)
");
}
#[test]
fn test_function_call_with_positional_or_keyword_parameter_redundant_attribute_not() {
// This one checks that we don't elide `x=` for `x.y`
let test = inlay_hint_test(
"
def foo(x: int): pass
class MyClass:
def __init__(self):
self.x: int = 1
self.y: int = 2
x = MyClass()
foo(x.x)
foo(x.y)",
);
assert_snapshot!(test.inlay_hints(), @r"
def foo(x: int): pass
class MyClass:
def __init__(self):
self.x: int = 1
self.y: int = 2
x[: MyClass] = MyClass()
foo(x.x)
foo([x=]x.y)
");
}
#[test]
fn test_function_call_with_positional_or_keyword_parameter_redundant_call() {
let test = inlay_hint_test(
"
def foo(x: int): pass
class MyClass:
def __init__(self):
def x() -> int:
return 1
def y() -> int:
return 2
val = MyClass()
foo(val.x())
foo(val.y())",
);
assert_snapshot!(test.inlay_hints(), @r"
def foo(x: int): pass
class MyClass:
def __init__(self):
def x() -> int:
return 1
def y() -> int:
return 2
val[: MyClass] = MyClass()
foo(val.x())
foo([x=]val.y())
");
}
#[test]
fn test_function_call_with_positional_or_keyword_parameter_redundant_complex() {
let test = inlay_hint_test(
"
from typing import List
def foo(x: int): pass
class MyClass:
def __init__(self):
def x() -> List[int]:
return 1
def y() -> List[int]:
return 2
val = MyClass()
foo(val.x()[0])
foo(val.y()[1])",
);
assert_snapshot!(test.inlay_hints(), @r"
from typing import List
def foo(x: int): pass
class MyClass:
def __init__(self):
def x() -> List[int]:
return 1
def y() -> List[int]:
return 2
val[: MyClass] = MyClass()
foo(val.x()[0])
foo([x=]val.y()[1])
");
}
#[test]
fn test_function_call_with_positional_or_keyword_parameter_redundant_subscript() {
let test = inlay_hint_test(
"
def foo(x: int): pass
x = [1]
y = [2]
foo(x[0])
foo(y[0])",
);
assert_snapshot!(test.inlay_hints(), @r"
def foo(x: int): pass
x[: list[Unknown | int]] = [1]
y[: list[Unknown | int]] = [2]
foo(x[0])
foo([x=]y[0])
");
}
#[test]
fn test_function_call_with_positional_only_parameter() {
let test = inlay_hint_test(

View File

@@ -1186,6 +1186,16 @@ impl From<OutputFormat> for DiagnosticFormat {
}
}
impl Combine for OutputFormat {
#[inline(always)]
fn combine_with(&mut self, _other: Self) {}
#[inline]
fn combine(self, _other: Self) -> Self {
self
}
}
#[derive(
Debug,
Default,

View File

@@ -179,14 +179,13 @@ impl<T> RangedValue<T> {
}
}
impl<T> Combine for RangedValue<T> {
fn combine(self, _other: Self) -> Self
where
Self: Sized,
{
self
impl<T> Combine for RangedValue<T>
where
T: Combine,
{
fn combine_with(&mut self, other: Self) {
self.value.combine_with(other.value);
}
fn combine_with(&mut self, _other: Self) {}
}
impl<T> IntoIterator for RangedValue<T>

View File

@@ -76,8 +76,7 @@ from ty_extensions import reveal_mro
class C(Annotated[int, "foo"]): ...
# TODO: Should be `(<class 'C'>, <class 'int'>, <class 'object'>)`
reveal_mro(C) # revealed: (<class 'C'>, @Todo(Inference of subscript on special form), <class 'object'>)
reveal_mro(C) # revealed: (<class 'C'>, <class 'int'>, <class 'object'>)
```
### Not parameterized

View File

@@ -181,30 +181,20 @@ def _(
bool2: Literal[Bool2],
multiple: Literal[SingleInt, SingleStr, SingleEnum],
):
# TODO should be `Literal[1]`
reveal_type(single_int) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal["foo"]`
reveal_type(single_str) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[b"bar"]`
reveal_type(single_bytes) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[True]`
reveal_type(single_bool) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `None`
reveal_type(single_none) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[E.A]`
reveal_type(single_enum) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[1, "foo", b"bar", True, E.A] | None`
reveal_type(union_literals) # revealed: @Todo(Inference of subscript on special form)
reveal_type(single_int) # revealed: Literal[1]
reveal_type(single_str) # revealed: Literal["foo"]
reveal_type(single_bytes) # revealed: Literal[b"bar"]
reveal_type(single_bool) # revealed: Literal[True]
reveal_type(single_none) # revealed: None
reveal_type(single_enum) # revealed: Literal[E.A]
reveal_type(union_literals) # revealed: Literal[1, "foo", b"bar", True, E.A] | None
# Could also be `E`
reveal_type(an_enum1) # revealed: Unknown
# TODO should be `E`
reveal_type(an_enum2) # revealed: @Todo(Inference of subscript on special form)
reveal_type(an_enum2) # revealed: E
# Could also be `bool`
reveal_type(bool1) # revealed: Unknown
# TODO should be `bool`
reveal_type(bool2) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[1, "foo", E.A]`
reveal_type(multiple) # revealed: @Todo(Inference of subscript on special form)
reveal_type(bool2) # revealed: bool
reveal_type(multiple) # revealed: Literal[1, "foo", E.A]
```
### Implicit type alias
@@ -246,28 +236,18 @@ def _(
bool2: Literal[Bool2],
multiple: Literal[SingleInt, SingleStr, SingleEnum],
):
# TODO should be `Literal[1]`
reveal_type(single_int) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal["foo"]`
reveal_type(single_str) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[b"bar"]`
reveal_type(single_bytes) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[True]`
reveal_type(single_bool) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `None`
reveal_type(single_none) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[E.A]`
reveal_type(single_enum) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[1, "foo", b"bar", True, E.A] | None`
reveal_type(union_literals) # revealed: @Todo(Inference of subscript on special form)
reveal_type(single_int) # revealed: Literal[1]
reveal_type(single_str) # revealed: Literal["foo"]
reveal_type(single_bytes) # revealed: Literal[b"bar"]
reveal_type(single_bool) # revealed: Literal[True]
reveal_type(single_none) # revealed: None
reveal_type(single_enum) # revealed: Literal[E.A]
reveal_type(union_literals) # revealed: Literal[1, "foo", b"bar", True, E.A] | None
reveal_type(an_enum1) # revealed: Unknown
# TODO should be `E`
reveal_type(an_enum2) # revealed: @Todo(Inference of subscript on special form)
reveal_type(an_enum2) # revealed: E
reveal_type(bool1) # revealed: Unknown
# TODO should be `bool`
reveal_type(bool2) # revealed: @Todo(Inference of subscript on special form)
# TODO should be `Literal[1, "foo", E.A]`
reveal_type(multiple) # revealed: @Todo(Inference of subscript on special form)
reveal_type(bool2) # revealed: bool
reveal_type(multiple) # revealed: Literal[1, "foo", E.A]
```
## Shortening unions of literals

View File

@@ -1,7 +1,5 @@
# NewType
Currently, ty doesn't support `typing.NewType` in type annotations.
## Valid forms
```py
@@ -12,13 +10,389 @@ X = GenericAlias(type, ())
A = NewType("A", int)
# TODO: typeshed for `typing.GenericAlias` uses `type` for the first argument. `NewType` should be special-cased
# to be compatible with `type`
# error: [invalid-argument-type] "Argument to function `__new__` is incorrect: Expected `type`, found `NewType`"
# error: [invalid-argument-type] "Argument to function `__new__` is incorrect: Expected `type`, found `<NewType pseudo-class 'A'>`"
B = GenericAlias(A, ())
def _(
a: A,
b: B,
):
reveal_type(a) # revealed: @Todo(Support for `typing.NewType` instances in type expressions)
reveal_type(a) # revealed: A
reveal_type(b) # revealed: @Todo(Support for `typing.GenericAlias` instances in type expressions)
```
## Subtyping
The basic purpose of `NewType` is that it acts like a subtype of its base, but not the exact same
type (i.e. not an alias).
```py
from typing_extensions import NewType
from ty_extensions import static_assert, is_subtype_of, is_equivalent_to
Foo = NewType("Foo", int)
Bar = NewType("Bar", Foo)
static_assert(is_subtype_of(Foo, int))
static_assert(not is_equivalent_to(Foo, int))
static_assert(is_subtype_of(Bar, Foo))
static_assert(is_subtype_of(Bar, int))
static_assert(not is_equivalent_to(Bar, Foo))
Foo(42)
Foo(Foo(42)) # allowed: `Foo` is a subtype of `int`.
Foo(Bar(Foo(42))) # allowed: `Bar` is a subtype of `int`.
Foo(True) # allowed: `bool` is a subtype of `int`.
Foo("forty-two") # error: [invalid-argument-type] "Argument is incorrect: Expected `int`, found `Literal["forty-two"]`"
def f(_: int): ...
def g(_: Foo): ...
def h(_: Bar): ...
f(42)
f(Foo(42))
f(Bar(Foo(42)))
g(42) # error: [invalid-argument-type] "Argument to function `g` is incorrect: Expected `Foo`, found `Literal[42]`"
g(Foo(42))
g(Bar(Foo(42)))
h(42) # error: [invalid-argument-type] "Argument to function `h` is incorrect: Expected `Bar`, found `Literal[42]`"
h(Foo(42)) # error: [invalid-argument-type] "Argument to function `h` is incorrect: Expected `Bar`, found `Foo`"
h(Bar(Foo(42)))
```
## Member and method lookup work
```py
from typing_extensions import NewType
class Foo:
foo_member: str = "hello"
def foo_method(self) -> int:
return 42
Bar = NewType("Bar", Foo)
Baz = NewType("Baz", Bar)
baz = Baz(Bar(Foo()))
reveal_type(baz.foo_member) # revealed: str
reveal_type(baz.foo_method()) # revealed: int
```
We also infer member access on the `NewType` pseudo-type itself correctly:
```py
reveal_type(Bar.__supertype__) # revealed: type | NewType
reveal_type(Baz.__supertype__) # revealed: type | NewType
```
## `NewType` wrapper functions are `Callable`
```py
from collections.abc import Callable
from typing_extensions import NewType
from ty_extensions import CallableTypeOf
Foo = NewType("Foo", int)
def _(obj: CallableTypeOf[Foo]):
reveal_type(obj) # revealed: (int, /) -> Foo
def f(_: Callable[[int], Foo]): ...
f(Foo)
map(Foo, [1, 2, 3])
def g(_: Callable[[str], Foo]): ...
g(Foo) # error: [invalid-argument-type]
```
## `NewType` instances are `Callable` if the base type is
```py
from typing import NewType, Callable, Any
from ty_extensions import CallableTypeOf
N = NewType("N", int)
i = N(42)
y: Callable[..., Any] = i # error: [invalid-assignment] "Object of type `N` is not assignable to `(...) -> Any`"
# error: [invalid-type-form] "Expected the first argument to `ty_extensions.CallableTypeOf` to be a callable object, but got an object of type `N`"
def f(x: CallableTypeOf[i]):
reveal_type(x) # revealed: Unknown
class SomethingCallable:
def __call__(self, a: str) -> bytes:
raise NotImplementedError
N2 = NewType("N2", SomethingCallable)
j = N2(SomethingCallable())
z: Callable[[str], bytes] = j # fine
def g(x: CallableTypeOf[j]):
reveal_type(x) # revealed: (a: str) -> bytes
```
## The name must be a string literal
```py
from typing_extensions import NewType
def _(name: str) -> None:
_ = NewType(name, int) # error: [invalid-newtype] "The first argument to `NewType` must be a string literal"
```
However, the literal doesn't necessarily need to be inline, as long as we infer it:
```py
name = "Foo"
Foo = NewType(name, int)
reveal_type(Foo) # revealed: <NewType pseudo-class 'Foo'>
```
## The second argument must be a class type or another newtype
Other typing constructs like `Union` are not allowed.
```py
from typing_extensions import NewType
# error: [invalid-newtype] "invalid base for `typing.NewType`"
Foo = NewType("Foo", int | str)
```
We don't emit the "invalid base" diagnostic for `Unknown`, because that typically results from other
errors that already have a diagnostic, and there's no need to pile on. For example, this mistake
gives you an "Int literals are not allowed" error, and we'd rather not see an "invalid base" error
on top of that:
```py
# error: [invalid-type-form] "Int literals are not allowed in this context in a type expression"
Foo = NewType("Foo", 42)
```
## A `NewType` definition must be a simple variable assignment
```py
from typing import NewType
N: NewType = NewType("N", int) # error: [invalid-newtype] "A `NewType` definition must be a simple variable assignment"
```
## Newtypes can be cyclic in various ways
Cyclic newtypes are kind of silly, but it's possible for the user to express them, and it's
important that we don't go into infinite recursive loops and crash with a stack overflow. In fact,
this is *why* base type evaluation is deferred; otherwise Salsa itself would crash.
```py
from typing_extensions import NewType, reveal_type, cast
# Define a directly cyclic newtype.
A = NewType("A", "A")
reveal_type(A) # revealed: <NewType pseudo-class 'A'>
# Typechecking still works. We can't construct an `A` "honestly", but we can `cast` into one.
a: A
a = 42 # error: [invalid-assignment] "Object of type `Literal[42]` is not assignable to `A`"
a = A(42) # error: [invalid-argument-type] "Argument is incorrect: Expected `A`, found `Literal[42]`"
a = cast(A, 42)
reveal_type(a) # revealed: A
# A newtype cycle might involve more than one step.
B = NewType("B", "C")
C = NewType("C", "B")
reveal_type(B) # revealed: <NewType pseudo-class 'B'>
reveal_type(C) # revealed: <NewType pseudo-class 'C'>
b: B = cast(B, 42)
c: C = C(b)
reveal_type(b) # revealed: B
reveal_type(c) # revealed: C
# Cyclic types behave in surprising ways. These assignments are legal, even though B and C aren't
# the same type, because each of them is a subtype of the other.
b = c
c = b
# Another newtype could inherit from a cyclic one.
D = NewType("D", C)
reveal_type(D) # revealed: <NewType pseudo-class 'D'>
d: D
d = D(42) # error: [invalid-argument-type] "Argument is incorrect: Expected `C`, found `Literal[42]`"
d = D(c)
d = D(b) # Allowed, the same surprise as above. B and C are subtypes of each other.
reveal_type(d) # revealed: D
```
Normal classes can't inherit from newtypes, but generic classes can be parametrized with them, so we
also need to detect "ordinary" type cycles that happen to involve a newtype.
```py
E = NewType("E", list["E"])
reveal_type(E) # revealed: <NewType pseudo-class 'E'>
e: E = E([])
reveal_type(e) # revealed: E
reveal_type(E(E(E(E(E([])))))) # revealed: E
reveal_type(E([E([E([]), E([E([])])]), E([])])) # revealed: E
E(["foo"]) # error: [invalid-argument-type]
E(E(E(["foo"]))) # error: [invalid-argument-type]
```
## `NewType` wrapping preserves singleton-ness and single-valued-ness
```py
from typing_extensions import NewType
from ty_extensions import is_singleton, is_single_valued, static_assert
from types import EllipsisType
A = NewType("A", EllipsisType)
static_assert(is_singleton(A))
static_assert(is_single_valued(A))
reveal_type(type(A(...)) is EllipsisType) # revealed: Literal[True]
# TODO: This should be `Literal[True]` also.
reveal_type(A(...) is ...) # revealed: bool
B = NewType("B", int)
static_assert(not is_singleton(B))
static_assert(not is_single_valued(B))
```
## `NewType`s of tuples can be iterated/unpacked
```py
from typing import NewType
N = NewType("N", tuple[int, str])
a, b = N((1, "foo"))
reveal_type(a) # revealed: int
reveal_type(b) # revealed: str
```
## `isinstance` of a `NewType` instance and its base class is inferred as `Literal[True]`
```py
from typing import NewType
N = NewType("N", int)
def f(x: N):
reveal_type(isinstance(x, int)) # revealed: Literal[True]
```
However, a `NewType` isn't a real class, so it isn't a valid second argument to `isinstance`:
```py
def f(x: N):
# error: [invalid-argument-type] "Argument to function `isinstance` is incorrect"
reveal_type(isinstance(x, N)) # revealed: bool
```
Because of that, we don't generate any narrowing constraints for it:
```py
def f(x: N | str):
if isinstance(x, N): # error: [invalid-argument-type]
reveal_type(x) # revealed: N | str
else:
reveal_type(x) # revealed: N | str
```
## Trying to subclass a `NewType` produces an error matching CPython
<!-- snapshot-diagnostics -->
```py
from typing import NewType
X = NewType("X", int)
class Foo(X): ... # error: [invalid-base]
```
## Don't narrow `NewType`-wrapped `Enum`s inside of match arms
`Literal[Foo.X]` is actually disjoint from `N` here:
```py
from enum import Enum
from typing import NewType
class Foo(Enum):
X = 0
Y = 1
N = NewType("N", Foo)
def f(x: N):
match x:
case Foo.X:
reveal_type(x) # revealed: N
case Foo.Y:
reveal_type(x) # revealed: N
case _:
reveal_type(x) # revealed: N
```
## We don't support `NewType` on Python 3.9
We implement `typing.NewType` as a `KnownClass`, but in Python 3.9 it's actually a function, so all
we get is the `Any` annotations from typeshed. However, `typing_extensions.NewType` is always a
class. This could be improved in the future, but Python 3.9 is now end-of-life, so it's not
high-priority.
```toml
[environment]
python-version = "3.9"
```
```py
from typing import NewType
Foo = NewType("Foo", int)
reveal_type(Foo) # revealed: Any
reveal_type(Foo(42)) # revealed: Any
from typing_extensions import NewType
Bar = NewType("Bar", int)
reveal_type(Bar) # revealed: <NewType pseudo-class 'Bar'>
reveal_type(Bar(42)) # revealed: Bar
```
## The base of a `NewType` can't be a protocol class or a `TypedDict`
<!-- snapshot-diagnostics -->
```py
from typing import NewType, Protocol, TypedDict
class Id(Protocol):
code: int
UserId = NewType("UserId", Id) # error: [invalid-newtype]
class Foo(TypedDict):
a: int
Bar = NewType("Bar", Foo) # error: [invalid-newtype]
```
## TODO: A `NewType` cannot be generic
```py
from typing import Any, NewType, TypeVar
# All of these are allowed.
A = NewType("A", list)
B = NewType("B", list[int])
B = NewType("B", list[Any])
# But a free typevar is not allowed.
T = TypeVar("T")
C = NewType("C", list[T]) # TODO: should be "error: [invalid-newtype]"
```

View File

@@ -139,7 +139,7 @@ The first parameter of instance methods always has type `Self`, if it is not exp
The name `self` is not special in any way.
```py
def some_decorator(f: Callable) -> Callable:
def some_decorator[**P, R](f: Callable[P, R]) -> Callable[P, R]:
return f
class B:
@@ -188,10 +188,10 @@ class B:
reveal_type(B().name_does_not_matter()) # revealed: B
reveal_type(B().positional_only(1)) # revealed: B
reveal_type(B().keyword_only(x=1)) # revealed: B
# TODO: This should ideally be `B`
reveal_type(B().decorated_method()) # revealed: Unknown
# TODO: this should be B
reveal_type(B().a_property) # revealed: Unknown
reveal_type(B().a_property) # revealed: B
async def _():
reveal_type(await B().async_method()) # revealed: B

View File

@@ -310,6 +310,65 @@ reveal_type(s) # revealed: list[Literal[1]]
reveal_type(s) # revealed: list[Literal[1]]
```
## Generic constructor annotations are understood
```toml
[environment]
python-version = "3.12"
```
```py
from typing import Any
class X[T]:
def __init__(self, value: T):
self.value = value
a: X[int] = X(1)
reveal_type(a) # revealed: X[int]
b: X[int | None] = X(1)
reveal_type(b) # revealed: X[int | None]
c: X[int | None] | None = X(1)
reveal_type(c) # revealed: X[int | None]
def _[T](a: X[T]):
b: X[T | int] = X(a.value)
reveal_type(b) # revealed: X[T@_ | int]
d: X[Any] = X(1)
reveal_type(d) # revealed: X[Any]
def _(flag: bool):
# TODO: Handle unions correctly.
# error: [invalid-assignment] "Object of type `X[int]` is not assignable to `X[int | None]`"
a: X[int | None] = X(1) if flag else X(2)
reveal_type(a) # revealed: X[int | None]
```
```py
from dataclasses import dataclass
@dataclass
class Y[T]:
value: T
y1: Y[Any] = Y(value=1)
# TODO: This should reveal `Y[Any]`.
reveal_type(y1) # revealed: Y[int]
```
```py
class Z[T]:
def __new__(cls, value: T):
return super().__new__(cls)
z1: Z[Any] = Z(1)
# TODO: This should reveal `Z[Any]`.
reveal_type(z1) # revealed: Z[int]
```
## PEP-604 annotations are supported
```py
@@ -417,6 +476,8 @@ reveal_type(x) # revealed: Literal[1]
python-version = "3.12"
```
`generic_list.py`:
```py
from typing import Literal
@@ -427,14 +488,13 @@ a = f("a")
reveal_type(a) # revealed: list[Literal["a"]]
b: list[int | Literal["a"]] = f("a")
reveal_type(b) # revealed: list[Literal["a"] | int]
reveal_type(b) # revealed: list[int | Literal["a"]]
c: list[int | str] = f("a")
reveal_type(c) # revealed: list[str | int]
reveal_type(c) # revealed: list[int | str]
d: list[int | tuple[int, int]] = f((1, 2))
# TODO: We could avoid reordering the union elements here.
reveal_type(d) # revealed: list[tuple[int, int] | int]
reveal_type(d) # revealed: list[int | tuple[int, int]]
e: list[int] = f(True)
reveal_type(e) # revealed: list[int]
@@ -455,10 +515,217 @@ j: int | str = f2(True)
reveal_type(j) # revealed: Literal[True]
```
Types are not widened unnecessarily:
A function's arguments are also inferred using the type context:
`typed_dict.py`:
```py
def id[T](x: T) -> T:
from typing import TypedDict
class TD(TypedDict):
x: int
def f[T](x: list[T]) -> T:
return x[0]
a: TD = f([{"x": 0}, {"x": 1}])
reveal_type(a) # revealed: TD
b: TD | None = f([{"x": 0}, {"x": 1}])
reveal_type(b) # revealed: TD
# error: [missing-typed-dict-key] "Missing required key 'x' in TypedDict `TD` constructor"
# error: [invalid-key] "Invalid key for TypedDict `TD`: Unknown key "y""
# error: [invalid-assignment] "Object of type `Unknown | dict[Unknown | str, Unknown | int]` is not assignable to `TD`"
c: TD = f([{"y": 0}, {"x": 1}])
# error: [missing-typed-dict-key] "Missing required key 'x' in TypedDict `TD` constructor"
# error: [invalid-key] "Invalid key for TypedDict `TD`: Unknown key "y""
# error: [invalid-assignment] "Object of type `Unknown | dict[Unknown | str, Unknown | int]` is not assignable to `TD | None`"
c: TD | None = f([{"y": 0}, {"x": 1}])
```
But not in a way that leads to assignability errors:
`dict_any.py`:
```py
from typing import TypedDict, Any
class TD(TypedDict, total=False):
x: str
class TD2(TypedDict):
x: str
def f(self, dt: dict[str, Any], key: str):
# TODO: This should not error once typed dict assignability is implemented.
# error: [invalid-assignment]
x1: TD = dt.get(key, {})
reveal_type(x1) # revealed: TD
x2: TD = dt.get(key, {"x": 0})
reveal_type(x2) # revealed: Any
x3: TD | None = dt.get(key, {})
# TODO: This should reveal `Any` once typed dict assignability is implemented.
reveal_type(x3) # revealed: Any | None
x4: TD | None = dt.get(key, {"x": 0})
reveal_type(x4) # revealed: Any
x5: TD2 = dt.get(key, {})
reveal_type(x5) # revealed: Any
x6: TD2 = dt.get(key, {"x": 0})
reveal_type(x6) # revealed: Any
x7: TD2 | None = dt.get(key, {})
reveal_type(x7) # revealed: Any
x8: TD2 | None = dt.get(key, {"x": 0})
reveal_type(x8) # revealed: Any
```
## Prefer the declared type of generic classes
```toml
[environment]
python-version = "3.12"
```
```py
from typing import Any
def f[T](x: T) -> list[T]:
return [x]
def f2[T](x: T) -> list[T] | None:
return [x]
def f3[T](x: T) -> list[T] | dict[T, T]:
return [x]
a = f(1)
reveal_type(a) # revealed: list[Literal[1]]
b: list[Any] = f(1)
reveal_type(b) # revealed: list[Any]
c: list[Any] = [1]
reveal_type(c) # revealed: list[Any]
d: list[Any] | None = f(1)
reveal_type(d) # revealed: list[Any]
e: list[Any] | None = [1]
reveal_type(e) # revealed: list[Any]
f: list[Any] | None = f2(1)
# TODO: Better constraint solver.
reveal_type(f) # revealed: list[Literal[1]] | None
g: list[Any] | dict[Any, Any] = f3(1)
# TODO: Better constraint solver.
reveal_type(g) # revealed: list[Literal[1]] | dict[Literal[1], Literal[1]]
```
We currently prefer the generic declared type regardless of its variance:
```py
class Bivariant[T]:
pass
class Covariant[T]:
def pop(self) -> T:
raise NotImplementedError
class Contravariant[T]:
def push(self, value: T) -> None:
pass
class Invariant[T]:
x: T
def bivariant[T](x: T) -> Bivariant[T]:
return Bivariant()
def covariant[T](x: T) -> Covariant[T]:
return Covariant()
def contravariant[T](x: T) -> Contravariant[T]:
return Contravariant()
def invariant[T](x: T) -> Invariant[T]:
return Invariant()
x1 = bivariant(1)
x2 = covariant(1)
x3 = contravariant(1)
x4 = invariant(1)
reveal_type(x1) # revealed: Bivariant[Literal[1]]
reveal_type(x2) # revealed: Covariant[Literal[1]]
reveal_type(x3) # revealed: Contravariant[Literal[1]]
reveal_type(x4) # revealed: Invariant[Literal[1]]
x5: Bivariant[Any] = bivariant(1)
x6: Covariant[Any] = covariant(1)
x7: Contravariant[Any] = contravariant(1)
x8: Invariant[Any] = invariant(1)
reveal_type(x5) # revealed: Bivariant[Any]
reveal_type(x6) # revealed: Covariant[Any]
reveal_type(x7) # revealed: Contravariant[Any]
reveal_type(x8) # revealed: Invariant[Any]
```
## Narrow generic unions
```toml
[environment]
python-version = "3.12"
```
```py
from typing import reveal_type, TypedDict
def identity[T](x: T) -> T:
return x
def _(narrow: dict[str, str], target: list[str] | dict[str, str] | None):
target = identity(narrow)
reveal_type(target) # revealed: dict[str, str]
def _(narrow: list[str], target: list[str] | dict[str, str] | None):
target = identity(narrow)
reveal_type(target) # revealed: list[str]
def _(narrow: list[str] | dict[str, str], target: list[str] | dict[str, str] | None):
target = identity(narrow)
reveal_type(target) # revealed: list[str] | dict[str, str]
class TD(TypedDict):
x: int
def _(target: list[TD] | dict[str, TD] | None):
target = identity([{"x": 1}])
reveal_type(target) # revealed: list[TD]
def _(target: list[TD] | dict[str, TD] | None):
target = identity({"x": {"x": 1}})
reveal_type(target) # revealed: dict[str, TD]
```
## Prefer the inferred type of non-generic classes
```toml
[environment]
python-version = "3.12"
```
```py
def identity[T](x: T) -> T:
return x
def lst[T](x: T) -> list[T]:
@@ -466,20 +733,18 @@ def lst[T](x: T) -> list[T]:
def _(i: int):
a: int | None = i
b: int | None = id(i)
c: int | str | None = id(i)
b: int | None = identity(i)
c: int | str | None = identity(i)
reveal_type(a) # revealed: int
reveal_type(b) # revealed: int
reveal_type(c) # revealed: int
a: list[int | None] | None = [i]
b: list[int | None] | None = id([i])
c: list[int | None] | int | None = id([i])
b: list[int | None] | None = identity([i])
c: list[int | None] | int | None = identity([i])
reveal_type(a) # revealed: list[int | None]
# TODO: these should reveal `list[int | None]`
# we currently do not use the call expression annotation as type context for argument inference
reveal_type(b) # revealed: list[Unknown | int]
reveal_type(c) # revealed: list[Unknown | int]
reveal_type(b) # revealed: list[int | None]
reveal_type(c) # revealed: list[int | None]
a: list[int | None] | None = [i]
b: list[int | None] | None = lst(i)
@@ -489,9 +754,44 @@ def _(i: int):
reveal_type(c) # revealed: list[int | None]
a: list | None = []
b: list | None = id([])
c: list | int | None = id([])
b: list | None = identity([])
c: list | int | None = identity([])
reveal_type(a) # revealed: list[Unknown]
reveal_type(b) # revealed: list[Unknown]
reveal_type(c) # revealed: list[Unknown]
def f[T](x: list[T]) -> T:
return x[0]
def _(a: int, b: str, c: int | str):
x1: int = f(lst(a))
reveal_type(x1) # revealed: int
x2: int | str = f(lst(a))
reveal_type(x2) # revealed: int
x3: int | None = f(lst(a))
reveal_type(x3) # revealed: int
x4: str = f(lst(b))
reveal_type(x4) # revealed: str
x5: int | str = f(lst(b))
reveal_type(x5) # revealed: str
x6: str | None = f(lst(b))
reveal_type(x6) # revealed: str
x7: int | str = f(lst(c))
reveal_type(x7) # revealed: int | str
x8: int | str = f(lst(c))
reveal_type(x8) # revealed: int | str
# TODO: Ideally this would reveal `int | str`. This is a known limitation of our
# call inference solver, and would require an extra inference attempt without type
# context, or with type context of subsets of the union, both of which are impractical
# for performance reasons.
x9: int | str | None = f(lst(c))
reveal_type(x9) # revealed: int | str | None
```

View File

@@ -50,8 +50,8 @@ def _(l: list[int] | None = None):
def f[T](x: T, cond: bool) -> T | list[T]:
return x if cond else [x]
# TODO: no error
# error: [invalid-assignment] "Object of type `Literal[1] | list[Literal[1]]` is not assignable to `int | list[int]`"
# TODO: Better constraint solver.
# error: [invalid-assignment]
l5: int | list[int] = f(1, True)
```

View File

@@ -162,3 +162,38 @@ def _(x: A | B, y: list[int]):
reveal_type(x) # revealed: B & ~A
reveal_type(isinstance(x, B)) # revealed: Literal[True]
```
Certain special forms in the typing module are not instances of `type`, so are strictly-speaking
disallowed as the second argument to `isinstance()` according to typeshed's annotations. However, at
runtime they work fine as the second argument, and we implement that special case in ty:
```py
import typing as t
# no errors emitted for any of these:
isinstance("", t.Dict)
isinstance("", t.List)
isinstance("", t.Set)
isinstance("", t.FrozenSet)
isinstance("", t.Tuple)
isinstance("", t.ChainMap)
isinstance("", t.Counter)
isinstance("", t.Deque)
isinstance("", t.OrderedDict)
isinstance("", t.Callable)
isinstance("", t.Type)
isinstance("", t.Callable | t.Deque)
# `Any` is valid in `issubclass()` calls but not `isinstance()` calls
issubclass(list, t.Any)
issubclass(list, t.Any | t.Dict)
```
But for other special forms that are not permitted as the second argument, we still emit an error:
```py
isinstance("", t.TypeGuard) # error: [invalid-argument-type]
isinstance("", t.ClassVar) # error: [invalid-argument-type]
isinstance("", t.Final) # error: [invalid-argument-type]
isinstance("", t.Any) # error: [invalid-argument-type]
```

View File

@@ -66,7 +66,7 @@ synthesized `Protocol`s that cannot be upcast to, or interpreted as, a non-`obje
```py
import types
from typing_extensions import Callable, TypeIs, Literal, TypedDict
from typing_extensions import Callable, TypeIs, Literal, NewType, TypedDict
def f(): ...
@@ -81,6 +81,8 @@ class SomeTypedDict(TypedDict):
x: int
y: bytes
N = NewType("N", int)
# revealed: <super: <class 'object'>, FunctionType>
reveal_type(super(object, f))
# revealed: <super: <class 'object'>, WrapperDescriptorType>
@@ -95,6 +97,8 @@ reveal_type(super(object, Alias))
reveal_type(super(object, Foo().method))
# revealed: <super: <class 'object'>, property>
reveal_type(super(object, Foo.some_property))
# revealed: <super: <class 'object'>, int>
reveal_type(super(object, N(42)))
def g(x: object) -> TypeIs[list[object]]:
return isinstance(x, list)

View File

@@ -37,7 +37,7 @@ class Data:
content: list[int] = field(default_factory=list)
timestamp: datetime = field(default_factory=datetime.now, init=False)
# revealed: (self: Data, content: list[int] = Unknown) -> None
# revealed: (self: Data, content: list[int] = list[int]) -> None
reveal_type(Data.__init__)
data = Data([1, 2, 3])
@@ -63,7 +63,6 @@ class Person:
age: int | None = field(default=None, kw_only=True)
role: str = field(default="user", kw_only=True)
# TODO: this would ideally show a default value of `None` for `age`
# revealed: (self: Person, name: str, *, age: int | None = None, role: str = Literal["user"]) -> None
reveal_type(Person.__init__)

View File

@@ -320,6 +320,11 @@ reveal_type(enum_members(Answer))
reveal_type(Answer.YES.value) # revealed: Literal[1]
reveal_type(Answer.NO.value) # revealed: Literal[2]
class SingleMember(Enum):
SINGLE = auto()
reveal_type(SingleMember.SINGLE.value) # revealed: Literal[1]
```
Usages of `auto()` can be combined with manual value assignments:
@@ -348,6 +353,11 @@ class Answer(StrEnum):
reveal_type(Answer.YES.value) # revealed: Literal["yes"]
reveal_type(Answer.NO.value) # revealed: Literal["no"]
class SingleMember(StrEnum):
SINGLE = auto()
reveal_type(SingleMember.SINGLE.value) # revealed: Literal["single"]
```
Using `auto()` with `IntEnum` also works as expected:
@@ -363,6 +373,52 @@ reveal_type(Answer.YES.value) # revealed: Literal[1]
reveal_type(Answer.NO.value) # revealed: Literal[2]
```
As does using `auto()` for other enums that use `int` as a mixin:
```py
from enum import Enum, auto
class Answer(int, Enum):
YES = auto()
NO = auto()
reveal_type(Answer.YES.value) # revealed: Literal[1]
reveal_type(Answer.NO.value) # revealed: Literal[2]
```
It's [hard to predict](https://github.com/astral-sh/ruff/pull/20541#discussion_r2381878613) what the
effect of using `auto()` will be for an arbitrary non-integer mixin, so for anything that isn't a
`StrEnum` and has a non-`int` mixin, we simply fall back to typeshed's annotation of `Any` for the
`value` property:
```python
from enum import Enum, auto
class A(str, Enum):
X = auto()
Y = auto()
reveal_type(A.X.value) # revealed: Any
class B(bytes, Enum):
X = auto()
Y = auto()
reveal_type(B.X.value) # revealed: Any
class C(tuple, Enum):
X = auto()
Y = auto()
reveal_type(C.X.value) # revealed: Any
class D(float, Enum):
X = auto()
Y = auto()
reveal_type(D.X.value) # revealed: Any
```
Combining aliases with `auto()`:
```py

View File

@@ -436,9 +436,7 @@ def test_seq(x: Sequence[T]) -> Sequence[T]:
def func8(t1: tuple[complex, list[int]], t2: tuple[int, *tuple[str, ...]], t3: tuple[()]):
reveal_type(test_seq(t1)) # revealed: Sequence[int | float | complex | list[int]]
reveal_type(test_seq(t2)) # revealed: Sequence[int | str]
# TODO: this should be `Sequence[Never]`
reveal_type(test_seq(t3)) # revealed: Sequence[Unknown]
reveal_type(test_seq(t3)) # revealed: Sequence[Never]
```
### `__init__` is itself generic
@@ -466,6 +464,7 @@ wrong_innards: C[int] = C("five", 1)
from typing_extensions import overload, Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class C(Generic[T]):
@overload
@@ -497,6 +496,17 @@ C[int](12)
C[None]("string") # error: [no-matching-overload]
C[None](b"bytes") # error: [no-matching-overload]
C[None](12)
class D(Generic[T, U]):
@overload
def __init__(self: "D[str, U]", u: U) -> None: ...
@overload
def __init__(self, t: T, u: U) -> None: ...
def __init__(self, *args) -> None: ...
reveal_type(D("string")) # revealed: D[str, str]
reveal_type(D(1)) # revealed: D[str, int]
reveal_type(D(1, "string")) # revealed: D[int, str]
```
### Synthesized methods with dataclasses

View File

@@ -375,9 +375,7 @@ def test_seq[T](x: Sequence[T]) -> Sequence[T]:
def func8(t1: tuple[complex, list[int]], t2: tuple[int, *tuple[str, ...]], t3: tuple[()]):
reveal_type(test_seq(t1)) # revealed: Sequence[int | float | complex | list[int]]
reveal_type(test_seq(t2)) # revealed: Sequence[int | str]
# TODO: this should be `Sequence[Never]`
reveal_type(test_seq(t3)) # revealed: Sequence[Unknown]
reveal_type(test_seq(t3)) # revealed: Sequence[Never]
```
### `__init__` is itself generic
@@ -436,6 +434,17 @@ C[int](12)
C[None]("string") # error: [no-matching-overload]
C[None](b"bytes") # error: [no-matching-overload]
C[None](12)
class D[T, U]:
@overload
def __init__(self: "D[str, U]", u: U) -> None: ...
@overload
def __init__(self, t: T, u: U) -> None: ...
def __init__(self, *args) -> None: ...
reveal_type(D("string")) # revealed: D[str, str]
reveal_type(D(1)) # revealed: D[str, int]
reveal_type(D(1, "string")) # revealed: D[int, str]
```
### Synthesized methods with dataclasses

View File

@@ -33,7 +33,7 @@ g(None)
We also support unions in type aliases:
```py
from typing_extensions import Any, Never
from typing_extensions import Any, Never, Literal, LiteralString, Tuple, Annotated, Optional
from ty_extensions import Unknown
IntOrStr = int | str
@@ -54,6 +54,16 @@ NeverOrAny = Never | Any
AnyOrNever = Any | Never
UnknownOrInt = Unknown | int
IntOrUnknown = int | Unknown
StrOrZero = str | Literal[0]
ZeroOrStr = Literal[0] | str
LiteralStringOrInt = LiteralString | int
IntOrLiteralString = int | LiteralString
NoneOrTuple = None | Tuple[int, str]
TupleOrNone = Tuple[int, str] | None
IntOrAnnotated = int | Annotated[str, "meta"]
AnnotatedOrInt = Annotated[str, "meta"] | int
IntOrOptional = int | Optional[str]
OptionalOrInt = Optional[str] | int
reveal_type(IntOrStr) # revealed: types.UnionType
reveal_type(IntOrStrOrBytes1) # revealed: types.UnionType
@@ -73,6 +83,16 @@ reveal_type(NeverOrAny) # revealed: types.UnionType
reveal_type(AnyOrNever) # revealed: types.UnionType
reveal_type(UnknownOrInt) # revealed: types.UnionType
reveal_type(IntOrUnknown) # revealed: types.UnionType
reveal_type(StrOrZero) # revealed: types.UnionType
reveal_type(ZeroOrStr) # revealed: types.UnionType
reveal_type(IntOrLiteralString) # revealed: types.UnionType
reveal_type(LiteralStringOrInt) # revealed: types.UnionType
reveal_type(NoneOrTuple) # revealed: types.UnionType
reveal_type(TupleOrNone) # revealed: types.UnionType
reveal_type(IntOrAnnotated) # revealed: types.UnionType
reveal_type(AnnotatedOrInt) # revealed: types.UnionType
reveal_type(IntOrOptional) # revealed: types.UnionType
reveal_type(OptionalOrInt) # revealed: types.UnionType
def _(
int_or_str: IntOrStr,
@@ -93,6 +113,16 @@ def _(
any_or_never: AnyOrNever,
unknown_or_int: UnknownOrInt,
int_or_unknown: IntOrUnknown,
str_or_zero: StrOrZero,
zero_or_str: ZeroOrStr,
literal_string_or_int: LiteralStringOrInt,
int_or_literal_string: IntOrLiteralString,
none_or_tuple: NoneOrTuple,
tuple_or_none: TupleOrNone,
int_or_annotated: IntOrAnnotated,
annotated_or_int: AnnotatedOrInt,
int_or_optional: IntOrOptional,
optional_or_int: OptionalOrInt,
):
reveal_type(int_or_str) # revealed: int | str
reveal_type(int_or_str_or_bytes1) # revealed: int | str | bytes
@@ -112,6 +142,16 @@ def _(
reveal_type(any_or_never) # revealed: Any
reveal_type(unknown_or_int) # revealed: Unknown | int
reveal_type(int_or_unknown) # revealed: int | Unknown
reveal_type(str_or_zero) # revealed: str | Literal[0]
reveal_type(zero_or_str) # revealed: Literal[0] | str
reveal_type(literal_string_or_int) # revealed: LiteralString | int
reveal_type(int_or_literal_string) # revealed: int | LiteralString
reveal_type(none_or_tuple) # revealed: None | tuple[int, str]
reveal_type(tuple_or_none) # revealed: tuple[int, str] | None
reveal_type(int_or_annotated) # revealed: int | str
reveal_type(annotated_or_int) # revealed: str | int
reveal_type(int_or_optional) # revealed: int | str | None
reveal_type(optional_or_int) # revealed: str | None | int
```
If a type is unioned with itself in a value expression, the result is just that type. No
@@ -232,6 +272,54 @@ def g(
): ...
```
## `|` unions in stubs and `TYPE_CHECKING` blocks
In runtime contexts, `|` unions are only permitted on Python 3.10+. But in suites of code that are
never executed at runtime (stub files, `if TYPE_CHECKING` blocks, and stringified annotations), they
are permitted even if the target version is set to Python 3.9 or earlier.
```toml
[environment]
python-version = "3.9"
```
`bar.pyi`:
```pyi
Z = int | str
GLOBAL_CONSTANT: Z
```
`foo.py`:
```py
from typing import TYPE_CHECKING
from bar import GLOBAL_CONSTANT
reveal_type(GLOBAL_CONSTANT) # revealed: int | str
if TYPE_CHECKING:
class ItsQuiteCloudyInManchester:
X = int | str
def f(obj: X):
reveal_type(obj) # revealed: int | str
# TODO: we currently only understand code as being inside a `TYPE_CHECKING` block
# if a whole *scope* is inside the `if TYPE_CHECKING` block
# (like the `ItsQuiteCloudyInManchester` class above); this is a false-positive
Y = int | str # error: [unsupported-operator]
def g(obj: Y):
# TODO: should be `int | str`
reveal_type(obj) # revealed: Unknown
Y = list["int | str"]
def g(obj: Y):
reveal_type(obj) # revealed: list[int | str]
```
## Generic types
Implicit type aliases can also refer to generic types:
@@ -255,6 +343,177 @@ def _(list_or_tuple: ListOrTuple[int]):
reveal_type(list_or_tuple) # revealed: @Todo(Generic specialization of types.UnionType)
```
## `Literal`s
We also support `typing.Literal` in implicit type aliases.
```py
from typing import Literal
from enum import Enum
IntLiteral1 = Literal[26]
IntLiteral2 = Literal[0x1A]
IntLiterals = Literal[-1, 0, 1]
NestedLiteral = Literal[Literal[1]]
StringLiteral = Literal["a"]
BytesLiteral = Literal[b"b"]
BoolLiteral = Literal[True]
MixedLiterals = Literal[1, "a", True, None]
class Color(Enum):
RED = 0
GREEN = 1
BLUE = 2
EnumLiteral = Literal[Color.RED]
def _(
int_literal1: IntLiteral1,
int_literal2: IntLiteral2,
int_literals: IntLiterals,
nested_literal: NestedLiteral,
string_literal: StringLiteral,
bytes_literal: BytesLiteral,
bool_literal: BoolLiteral,
mixed_literals: MixedLiterals,
enum_literal: EnumLiteral,
):
reveal_type(int_literal1) # revealed: Literal[26]
reveal_type(int_literal2) # revealed: Literal[26]
reveal_type(int_literals) # revealed: Literal[-1, 0, 1]
reveal_type(nested_literal) # revealed: Literal[1]
reveal_type(string_literal) # revealed: Literal["a"]
reveal_type(bytes_literal) # revealed: Literal[b"b"]
reveal_type(bool_literal) # revealed: Literal[True]
reveal_type(mixed_literals) # revealed: Literal[1, "a", True] | None
reveal_type(enum_literal) # revealed: Literal[Color.RED]
```
We reject invalid uses:
```py
# error: [invalid-type-form] "Type arguments for `Literal` must be `None`, a literal value (int, bool, str, or bytes), or an enum member"
LiteralInt = Literal[int]
reveal_type(LiteralInt) # revealed: Unknown
def _(weird: LiteralInt):
reveal_type(weird) # revealed: Unknown
# error: [invalid-type-form] "`Literal[26]` is not a generic class"
def _(weird: IntLiteral1[int]):
reveal_type(weird) # revealed: Unknown
```
## `Annotated`
Basic usage:
```py
from typing import Annotated
MyAnnotatedInt = Annotated[int, "some metadata", 1, 2, 3]
def _(annotated_int: MyAnnotatedInt):
reveal_type(annotated_int) # revealed: int
```
Usage with generics:
```py
from typing import TypeVar
T = TypeVar("T")
Deprecated = Annotated[T, "deprecated attribute"]
class C:
old: Deprecated[int]
# TODO: Should be `int`
reveal_type(C().old) # revealed: @Todo(Generic specialization of typing.Annotated)
```
If the metadata argument is missing, we emit an error (because this code fails at runtime), but
still use the first element as the type, when used in annotations:
```py
# error: [invalid-type-form] "Special form `typing.Annotated` expected at least 2 arguments (one type and at least one metadata element)"
WronglyAnnotatedInt = Annotated[int]
def _(wrongly_annotated_int: WronglyAnnotatedInt):
reveal_type(wrongly_annotated_int) # revealed: int
```
## `Optional`
Starting with Python 3.14, `Optional[int]` creates an instance of `typing.Union`, which is an alias
for `types.UnionType`. We only support this new behavior and do not attempt to model the details of
the pre-3.14 behavior:
```py
from typing import Optional
MyOptionalInt = Optional[int]
reveal_type(MyOptionalInt) # revealed: types.UnionType
def _(optional_int: MyOptionalInt):
reveal_type(optional_int) # revealed: int | None
```
A special case is `Optional[None]`, which is equivalent to `None`:
```py
JustNone = Optional[None]
reveal_type(JustNone) # revealed: None
def _(just_none: JustNone):
    reveal_type(just_none) # revealed: None
```
Invalid uses:
```py
# error: [invalid-type-form] "`typing.Optional` requires exactly one argument"
Optional[int, str]
```
## `LiteralString`, `NoReturn`, `Never`
```py
from typing_extensions import LiteralString, NoReturn, Never
MyLiteralString = LiteralString
MyNoReturn = NoReturn
MyNever = Never
reveal_type(MyLiteralString) # revealed: typing.LiteralString
reveal_type(MyNoReturn) # revealed: typing.NoReturn
reveal_type(MyNever) # revealed: typing.Never
def _(
    ls: MyLiteralString,
    nr: MyNoReturn,
    nv: MyNever,
):
    reveal_type(ls) # revealed: LiteralString
    reveal_type(nr) # revealed: Never
    reveal_type(nv) # revealed: Never
```
## `Tuple`
```py
from typing import Tuple
IntAndStr = Tuple[int, str]
def _(int_and_str: IntAndStr):
    reveal_type(int_and_str) # revealed: tuple[int, str]
```
## Stringified annotations?
From the [typing spec on type aliases](https://typing.python.org/en/latest/spec/aliases.html):


@@ -1,39 +1,40 @@
# Nonstandard Import Conventions
This document covers ty-specific extensions to the
[standard import conventions](https://typing.python.org/en/latest/spec/distributing.html#import-conventions).
[standard import conventions](https://typing.python.org/en/latest/spec/distributing.html#import-conventions),
and other intentional deviations from actual Python semantics.
It's a common idiom for a package's `__init__.py(i)` to include several imports like
`from . import mysubmodule`, with the intent that the `mypackage.mysubmodule` attribute should work
for anyone who only imports `mypackage`.
This file currently covers the following details:
In the context of a `.py` file we handle this well through our general attempts to faithfully
implement import side-effects. However, for `.pyi` files we are expected to apply
[a more strict set of rules](https://typing.python.org/en/latest/spec/distributing.html#import-conventions)
to encourage intentional API design. Although `.pyi` files are explicitly designed to work with
typecheckers, which ostensibly should all enforce these strict rules, every typechecker has its own
de facto "extensions" to them, and so a few idioms like `from . import mysubmodule` have found their
way into `.pyi` files too.
- **froms are locals**: a `from..import` can only define locals; it does not have global
side-effects. Specifically, any submodule attribute `a` that's implicitly introduced by either
`from .a import b` or `from . import a as b` (in an `__init__.py(i)`) is a local and not a
global. However, we only introduce this symbol if the `from..import` is in global scope. This
means imports at the start of a file work as you'd expect, while imports in a function don't
introduce submodule attributes.
Thus for the sake of compatibility, we need to define our own "extensions". Any extensions we define
here have several competing concerns:
- **first from first serve**: only the *first* `from..import` in an `__init__.py(i)` that imports a
particular direct submodule of the current package introduces that submodule as a local.
Subsequent imports of the submodule will not introduce that local. This reflects the fact that
in actual Python only the first import of a submodule (in the entire execution of the program)
introduces it as an attribute of the package. By "first" we mean "the first time in global
scope".
- Extensions should ideally be kept narrow to continue to encourage explicit API design
- Extensions should be easy to explain, document, and understand
- Extensions should ideally still be a subset of runtime behaviour (if it works in a stub, it works
at runtime)
- Extensions should ideally not make `.pyi` files more permissive than `.py` files (if it works in a
stub, it works in an impl)
- **dot re-exports**: `from . import a` in an `__init__.pyi` is considered a re-export of `a`
(equivalent to `from . import a as a`). This is required to properly handle many stubs in the
wild. Equivalent imports like `from whatever.thispackage import a` also introduce a re-export
(this has essentially zero ecosystem impact, we just felt it was more consistent). The only way
to opt out of this is to rename the import to something else (`from . import a as b`).
`from .a import b` and its equivalents do *not* introduce a re-export. A condensed sketch of all
three rules appears below, after this introduction.
To that end we define the following extension:
> If an `__init__.pyi` for `mypackage` contains a `from...import` targeting a direct submodule of
> `mypackage`, then that submodule should be available as an attribute of `mypackage`.
Note: almost all tests in here have a stub and non-stub version, because we're interested in both
defining symbols *at all* and re-exporting them.
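To make the rules above concrete, here is a rough, condensed sketch (the submodule names `a`, `b`,
and `c` are hypothetical; the sections below exercise each rule in detail):

```pyi
# mypackage/__init__.pyi (hypothetical)
from . import a           # re-exports the submodule as `mypackage.a` ("dot re-exports")
from . import a           # a second import of the same submodule adds nothing ("first from first serve")
from .b import something  # introduces `b` only as a local, not a re-export ("froms are locals")

def f():
    from . import c  # function-scoped: introduces no submodule attribute at all
```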
## Relative `from` Import of Direct Submodule in `__init__`
The `from . import submodule` idiom in an `__init__.pyi` is fairly explicit and we should definitely
support it.
We consider the `from . import submodule` idiom in an `__init__.pyi` an explicit re-export.
### In Stub
`mypackage/__init__.pyi`:
@@ -63,7 +64,7 @@ reveal_type(mypackage.imported.X) # revealed: int
reveal_type(mypackage.fails.Y) # revealed: Unknown
```
## Relative `from` Import of Direct Submodule in `__init__` (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -95,8 +96,10 @@ reveal_type(mypackage.fails.Y) # revealed: Unknown
## Absolute `from` Import of Direct Submodule in `__init__`
If an absolute `from...import` happens to import a submodule, it works just as well as a relative
one.
If an absolute `from...import` happens to import a submodule (i.e. it's equivalent to
`from . import y`) we also treat it as a re-export.
### In Stub
`mypackage/__init__.pyi`:
@@ -126,7 +129,7 @@ reveal_type(mypackage.imported.X) # revealed: int
reveal_type(mypackage.fails.Y) # revealed: Unknown
```
## Absolute `from` Import of Direct Submodule in `__init__` (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -159,7 +162,9 @@ reveal_type(mypackage.fails.Y) # revealed: Unknown
## Import of Direct Submodule in `__init__`
An `import` that happens to import a submodule does not expose the submodule as an attribute. (This
is an arbitrary decision and can be changed easily!)
is an arbitrary decision and can be changed!)
### In Stub
`mypackage/__init__.pyi`:
@@ -178,12 +183,12 @@ X: int = 42
```py
import mypackage
# TODO: this is probably safe to allow, as it's an unambiguous import of a submodule
# TODO: this could work and would be nice to have?
# error: "has no member `imported`"
reveal_type(mypackage.imported.X) # revealed: Unknown
```
## Import of Direct Submodule in `__init__` (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -202,15 +207,17 @@ X: int = 42
```py
import mypackage
# TODO: this is probably safe to allow, as it's an unambiguous import of a submodule
# TODO: this could work and would be nice to have
# error: "has no member `imported`"
reveal_type(mypackage.imported.X) # revealed: Unknown
```
## Relative `from` Import of Nested Submodule in `__init__`
`from .submodule import nested` in an `__init__.pyi` is currently not supported as a way to expose
`mypackage.submodule` or `mypackage.submodule.nested` but it could be.
`from .submodule import nested` in an `__init__.pyi` does not re-export `mypackage.submodule`,
`mypackage.submodule.nested`, or `nested`.
### In Stub
`mypackage/__init__.pyi`:
@@ -234,16 +241,21 @@ X: int = 42
```py
import mypackage
# TODO: this would be nice to allow
# error: "has no member `submodule`"
reveal_type(mypackage.submodule) # revealed: Unknown
# error: "has no member `submodule`"
reveal_type(mypackage.submodule.nested) # revealed: Unknown
# error: "has no member `submodule`"
reveal_type(mypackage.submodule.nested.X) # revealed: Unknown
# error: "has no member `nested`"
reveal_type(mypackage.nested) # revealed: Unknown
# error: "has no member `nested`"
reveal_type(mypackage.nested.X) # revealed: Unknown
```
## Relative `from` Import of Nested Submodule in `__init__` (Non-Stub Check)
### In Non-Stub
`from .submodule import nested` in an `__init__.py` exposes `mypackage.submodule` and `nested`.
`mypackage/__init__.py`:
@@ -267,19 +279,22 @@ X: int = 42
```py
import mypackage
reveal_type(mypackage.submodule) # revealed: <module 'mypackage.submodule'>
# TODO: this would be nice to support
# error: "has no member `submodule`"
reveal_type(mypackage.submodule) # revealed: Unknown
# error: "has no member `submodule`"
# error: "has no member `nested`"
reveal_type(mypackage.submodule.nested) # revealed: Unknown
# error: "has no member `submodule`"
# error: "has no member `nested`"
reveal_type(mypackage.submodule.nested.X) # revealed: Unknown
reveal_type(mypackage.nested) # revealed: <module 'mypackage.submodule.nested'>
reveal_type(mypackage.nested.X) # revealed: int
```
## Absolute `from` Import of Nested Submodule in `__init__`
`from mypackage.submodule import nested` in an `__init__.pyi` is currently not supported as a way to
expose `mypackage.submodule` or `mypackage.submodule.nested` but it could be.
`from mypackage.submodule import nested` in an `__init__.pyi` does not re-export
`mypackage.submodule`, `mypackage.submodule.nested`, or `nested`.
### In Stub
`mypackage/__init__.pyi`:
@@ -303,16 +318,22 @@ X: int = 42
```py
import mypackage
# TODO: this would be nice to support
# TODO: this could work and would be nice to have
# error: "has no member `submodule`"
reveal_type(mypackage.submodule) # revealed: Unknown
# error: "has no member `submodule`"
reveal_type(mypackage.submodule.nested) # revealed: Unknown
# error: "has no member `submodule`"
reveal_type(mypackage.submodule.nested.X) # revealed: Unknown
# error: "has no member `nested`"
reveal_type(mypackage.nested) # revealed: Unknown
# error: "has no member `nested`"
reveal_type(mypackage.nested.X) # revealed: Unknown
```
## Absolute `from` Import of Nested Submodule in `__init__` (Non-Stub Check)
### In Non-Stub
`from mypackage.submodule import nested` in an `__init__.py` creates both `submodule` and `nested`.
`mypackage/__init__.py`:
@@ -336,19 +357,22 @@ X: int = 42
```py
import mypackage
reveal_type(mypackage.submodule) # revealed: <module 'mypackage.submodule'>
# TODO: this would be nice to support
# error: "has no member `submodule`"
reveal_type(mypackage.submodule) # revealed: Unknown
# error: "has no member `submodule`"
# error: "has no member `nested`"
reveal_type(mypackage.submodule.nested) # revealed: Unknown
# error: "has no member `submodule`"
# error: "has no member `nested`"
reveal_type(mypackage.submodule.nested.X) # revealed: Unknown
reveal_type(mypackage.nested) # revealed: <module 'mypackage.submodule.nested'>
reveal_type(mypackage.nested.X) # revealed: int
```
## Import of Nested Submodule in `__init__`
`import mypackage.submodule.nested` in an `__init__.pyi` is currently not supported as a way to
expose `mypackage.submodule` or `mypackage.submodule.nested` but it could be.
`import mypackage.submodule.nested` in an `__init__.pyi` does not re-export `mypackage.submodule` or
`mypackage.submodule.nested`.
### In Stub
`mypackage/__init__.pyi`:
@@ -372,7 +396,6 @@ X: int = 42
```py
import mypackage
# TODO: this would be nice to support, and is probably safe to do as it's unambiguous
# error: "has no member `submodule`"
reveal_type(mypackage.submodule) # revealed: Unknown
# error: "has no member `submodule`"
@@ -381,7 +404,10 @@ reveal_type(mypackage.submodule.nested) # revealed: Unknown
reveal_type(mypackage.submodule.nested.X) # revealed: Unknown
```
## Import of Nested Submodule in `__init__` (Non-Stub Check)
### In Non-Stub
`import mypackage.submodule.nested` in an `__init__.py` does not define `mypackage.submodule` or
`mypackage.submodule.nested` outside the package.
`mypackage/__init__.py`:
@@ -405,7 +431,7 @@ X: int = 42
```py
import mypackage
# TODO: this would be nice to support, and is probably safe to do as it's unambiguous
# TODO: this would be nice to support
# error: "has no member `submodule`"
reveal_type(mypackage.submodule) # revealed: Unknown
# error: "has no member `submodule`"
@@ -418,6 +444,8 @@ reveal_type(mypackage.submodule.nested.X) # revealed: Unknown
Renaming the submodule to something else disables the `__init__.pyi` idiom.
### In Stub
`mypackage/__init__.pyi`:
```pyi
@@ -441,7 +469,7 @@ reveal_type(mypackage.imported.X) # revealed: Unknown
reveal_type(mypackage.imported_m.X) # revealed: Unknown
```
## Relative `from` Import of Direct Submodule in `__init__`, Mismatched Alias (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -471,6 +499,8 @@ reveal_type(mypackage.imported_m.X) # revealed: int
The `__init__.pyi` idiom should definitely always work if the submodule is renamed to itself, as
this is the re-export idiom.
### In Stub
`mypackage/__init__.pyi`:
```pyi
@@ -491,7 +521,7 @@ import mypackage
reveal_type(mypackage.imported.X) # revealed: int
```
## Relative `from` Import of Direct Submodule in `__init__`, Matched Alias (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -518,6 +548,8 @@ reveal_type(mypackage.imported.X) # revealed: int
Even if the `__init__` idiom is in effect, star imports do not pick it up. (This is an arbitrary
decision that mostly fell out of the implementation details and can be changed!)
### In Stub
`mypackage/__init__.pyi`:
```pyi
@@ -536,13 +568,13 @@ X: int = 42
```py
from mypackage import *
# TODO: this would be nice to support (available_submodule_attributes isn't visible to `*` imports)
# TODO: this would be nice to support
# error: "`imported` used when not defined"
reveal_type(imported.X) # revealed: Unknown
reveal_type(Z) # revealed: int
```
## Star Import Unaffected (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -569,9 +601,10 @@ reveal_type(Z) # revealed: int
## `from` Import of Non-Submodule
A from import that terminates in a non-submodule should not expose the intermediate submodules as
attributes. This is an arbitrary decision but on balance probably safe and correct, as otherwise it
would be hard for a stub author to be intentional about the submodules being exposed as attributes.
A `from` import that imports a non-submodule isn't currently a special case here (various
proposed/tested approaches did treat this specially).
### In Stub
`mypackage/__init__.pyi`:
@@ -590,11 +623,11 @@ X: int = 42
```py
import mypackage
# error: "has no member `imported`"
# error: "no member `imported`"
reveal_type(mypackage.imported.X) # revealed: Unknown
```
## `from` Import of Non-Submodule (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -613,9 +646,7 @@ X: int = 42
```py
import mypackage
# TODO: this would be nice to support, as it works at runtime
# error: "has no member `imported`"
reveal_type(mypackage.imported.X) # revealed: Unknown
reveal_type(mypackage.imported.X) # revealed: int
```
## `from` Import of Other Package's Submodule
@@ -623,6 +654,8 @@ reveal_type(mypackage.imported.X) # revealed: Unknown
`from mypackage import submodule` from outside the package is not modeled as a side-effect on
`mypackage`, even in the importing file (this could be changed!).
### In Stub
`mypackage/__init__.pyi`:
```pyi
@@ -641,12 +674,13 @@ import mypackage
from mypackage import imported
# TODO: this would be nice to support, but it's dangerous with available_submodule_attributes
# for details, see: https://github.com/astral-sh/ty/issues/1488
reveal_type(imported.X) # revealed: int
# error: "has no member `imported`"
reveal_type(mypackage.imported.X) # revealed: Unknown
```
## `from` Import of Other Package's Submodule (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -676,6 +710,8 @@ reveal_type(mypackage.imported.X) # revealed: Unknown
`from . import submodule` from a sibling module is not modeled as a side-effect on `mypackage` or a
re-export from `submodule`.
### In Stub
`mypackage/__init__.pyi`:
```pyi
@@ -707,7 +743,7 @@ reveal_type(imported.fails.Y) # revealed: Unknown
reveal_type(mypackage.fails.Y) # revealed: Unknown
```
## `from` Import of Sibling Module (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -752,9 +788,11 @@ Can easily result in the typechecker getting "confused" and thinking imports of
top-level package are referring to the subpackage and not the function/class. This issue can be
found with the `lobpcg` function in `scipy.sparse.linalg`.
This kind of failure mode is why the rule is restricted to *direct* submodule imports, as anything
more powerful than that in the current implementation strategy quickly gets the functions and
submodules mixed up.
We avoid this by ensuring that the imported name (the right-hand `funcmod` in
`from .funcmod import funcmod`) overwrites the submodule attribute (the left-hand `funcmod`), as it
does at runtime. A condensed sketch of the problematic layout is shown below.
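Concretely, the problematic shape is a package where a submodule and a function share a name. A
hypothetical, condensed sketch (the real tests follow below):

```py
# mypackage/funcmod.py (hypothetical layout)
def funcmod(x: int) -> int:
    return x

# mypackage/__init__.py
from .funcmod import funcmod  # the RHS function must overwrite the LHS submodule attribute

# main.py
from mypackage import funcmod

x = funcmod(1)  # if the submodule "won", this would look like calling a module object
```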
### In Stub
`mypackage/__init__.pyi`:
@@ -788,7 +826,7 @@ from mypackage import funcmod
x = funcmod(1)
```
## Fractal Re-export Nameclash Problems (Non-Stub Check)
### In Non-Stub
`mypackage/__init__.py`:
@@ -822,3 +860,311 @@ from mypackage import funcmod
x = funcmod(1)
```
## Re-export Nameclash Problems In Functions
`from` imports in an `__init__.py` at file scope should be visible to functions defined in the file:
`mypackage/__init__.py`:
```py
from .funcmod import funcmod
funcmod(1)
def run():
    funcmod(2)
```
`mypackage/funcmod.py`:
```py
def funcmod(x: int) -> int:
    return x
```
## Re-export Nameclash Problems In Try-Blocks
`from` imports in an `__init__.py` at file scope in a `try` block should be visible to functions
defined in the `try` block (regression test for a bug):
`mypackage/__init__.py`:
```py
try:
    from .funcmod import funcmod
    funcmod(1)

    def run():
        # TODO: this is a bug in how we analyze try-blocks
        # error: [call-non-callable]
        funcmod(2)
finally:
    x = 1
```
`mypackage/funcmod.py`:
```py
def funcmod(x: int) -> int:
    return x
```
## RHS `from` Imports In Functions
If a `from` import occurs in a function, the RHS symbols should only be visible in that function.
`mypackage/__init__.py`:
```py
def run1():
    from .funcmod import funcmod
    funcmod(1)

def run2():
    from .funcmod import funcmod
    funcmod(2)

def run3():
    # error: [unresolved-reference]
    funcmod(3)

# error: [unresolved-reference]
funcmod(4)
```
`mypackage/funcmod.py`:
```py
def funcmod(x: int) -> int:
    return x
```
## LHS `from` Imports In Functions
If a `from` import occurs in a function, we simply ignore its LHS effects, to avoid modeling
execution-order-specific behaviour (and to discourage people from writing code that relies on it).
`mypackage/__init__.py`:
```py
def run1():
    from .funcmod import other
    # TODO: this would be nice to support
    # error: [unresolved-reference]
    funcmod.funcmod(1)

def run2():
    from .funcmod import other
    # TODO: this would be nice to support
    # error: [unresolved-reference]
    funcmod.funcmod(2)

def run3():
    # error: [unresolved-reference]
    funcmod.funcmod(3)

# error: [unresolved-reference]
funcmod.funcmod(4)
```
`mypackage/funcmod.py`:
```py
other: int = 1
def funcmod(x: int) -> int:
    return x
```
## LHS `from` Imports Overwrite Locals
The LHS of a `from..import` introduces a local symbol that overwrites any local with the same name.
This reflects actual runtime behaviour, although it implicitly assumes the submodule hasn't already
been imported.
`mypackage/__init__.py`:
```py
funcmod = 0
from .funcmod import funcmod
funcmod(1)
```
`mypackage/funcmod.py`:
```py
def funcmod(x: int) -> int:
    return x
```
## LHS `from` Imports Overwritten By Local Function
The LHS of a `from..import` introduces a local symbol that can be overwritten by defining a function
(or class) with the same name.
### In Stub
`mypackage/__init__.pyi`:
```pyi
from .funcmod import other
def funcmod(x: int) -> int: ...
```
`mypackage/funcmod/__init__.pyi`:
```pyi
def other(x: int) -> int: ...
```
`main.py`:
```py
from mypackage import funcmod
x = funcmod(1)
```
### In Non-Stub
`mypackage/__init__.py`:
```py
from .funcmod import other
def funcmod(x: int) -> int:
    return x
```
`mypackage/funcmod/__init__.py`:
```py
def other(x: int) -> int:
    return x
```
`main.py`:
```py
from mypackage import funcmod
x = funcmod(1)
```
## LHS `from` Imports Overwritten By Local Assignment
The LHS of a `from..import` introduces a local symbol that can be overwritten by assigning to it.
### In Stub
`mypackage/__init__.pyi`:
```pyi
from .funcmod import other
funcmod = other
```
`mypackage/funcmod/__init__.pyi`:
```pyi
def other(x: int) -> int: ...
```
`main.py`:
```py
from mypackage import funcmod
x = funcmod(1)
```
### In Non-Stub
`mypackage/__init__.py`:
```py
from .funcmod import other
funcmod = other
```
`mypackage/funcmod/__init__.py`:
```py
def other(x: int) -> int:
    return x
```
`main.py`:
```py
from mypackage import funcmod
x = funcmod(1)
```
## LHS `from` Imports Only Apply The First Time
The LHS of a `from..import` introduces a local symbol only the first time a given direct submodule
is imported; subsequent imports of that submodule do nothing.
### In Stub
`mypackage/__init__.pyi`:
```pyi
from .funcmod import funcmod as funcmod
from .funcmod import other
```
`mypackage/funcmod/__init__.pyi`:
```pyi
def other(x: int) -> int: ...
def funcmod(x: int) -> int: ...
```
`main.py`:
```py
from mypackage import funcmod
x = funcmod(1)
```
### In Non-Stub
`mypackage/__init__.py`:
```py
from .funcmod import funcmod
from .funcmod import other
```
`mypackage/funcmod/__init__.py`:
```py
def other(x: int) -> int:
    return x

def funcmod(x: int) -> int:
    return x
```
`main.py`:
```py
from mypackage import funcmod
x = funcmod(1)
```


@@ -206,7 +206,7 @@ dd: defaultdict[int, int] = defaultdict(int)
dd[0] = 0
cm: ChainMap[int, int] = ChainMap({1: 1}, {0: 0})
cm[0] = 0
reveal_type(cm) # revealed: ChainMap[Unknown | int, Unknown | int]
reveal_type(cm) # revealed: ChainMap[int | Unknown, int | Unknown]
reveal_type(l[0]) # revealed: Literal[0]
reveal_type(d[0]) # revealed: Literal[0]


@@ -70,6 +70,83 @@ def _(flag: bool):
reveal_type(x) # revealed: Literal["a"]
```
## `classinfo` is a PEP-604 union of types
```toml
[environment]
python-version = "3.10"
```
```py
def _(x: int | str | bytes | memoryview | range):
    if isinstance(x, int | str):
        reveal_type(x) # revealed: int | str
    elif isinstance(x, bytes | memoryview):
        reveal_type(x) # revealed: bytes | memoryview[Unknown]
    else:
        reveal_type(x) # revealed: range
```
Although `isinstance()` usually only works if all elements in the `UnionType` are class objects, at
runtime a special exception is made for `None` so that `isinstance(x, int | None)` can work:
```py
def _(x: int | str | bytes | range | None):
    if isinstance(x, int | str | None):
        reveal_type(x) # revealed: int | str | None
    else:
        reveal_type(x) # revealed: bytes | range
```
## `classinfo` is an invalid PEP-604 union of types
Except for the `None` special case mentioned above, narrowing can only take place if all elements in
the PEP-604 union are class literals. If any elements are generic aliases or other types, the
`isinstance()` call may fail at runtime, so no narrowing can take place:
<!-- snapshot-diagnostics -->
```toml
[environment]
python-version = "3.10"
```
```py
from typing import Any, Literal, NamedTuple

def _(x: int | list[int] | bytes):
    # error: [invalid-argument-type]
    if isinstance(x, list[int] | int):
        reveal_type(x) # revealed: int | list[int] | bytes
    # error: [invalid-argument-type]
    elif isinstance(x, Literal[42] | list[int] | bytes):
        reveal_type(x) # revealed: int | list[int] | bytes
    # error: [invalid-argument-type]
    elif isinstance(x, Any | NamedTuple | list[int]):
        reveal_type(x) # revealed: int | list[int] | bytes
    else:
        reveal_type(x) # revealed: int | list[int] | bytes
```
## PEP-604 unions on Python \<3.10
PEP-604 unions were added in Python 3.10, so attempting to use them on Python 3.9 does not lead to
any type narrowing.
```toml
[environment]
python-version = "3.9"
```
```py
def _(x: int | str | bytes):
    # error: [unsupported-operator]
    if isinstance(x, int | str):
        reveal_type(x) # revealed: (int & Unknown) | (str & Unknown) | (bytes & Unknown)
    else:
        reveal_type(x) # revealed: (int & Unknown) | (str & Unknown) | (bytes & Unknown)
```
## Class types
```py


@@ -131,6 +131,75 @@ def _(flag1: bool, flag2: bool):
reveal_type(t) # revealed: <class 'str'>
```
## `classinfo` is a PEP-604 union of types
```toml
[environment]
python-version = "3.10"
```
```py
def f(x: type[int | str | bytes | range]):
    if issubclass(x, int | str):
        reveal_type(x) # revealed: type[int] | type[str]
    elif issubclass(x, bytes | memoryview):
        reveal_type(x) # revealed: type[bytes]
    else:
        reveal_type(x) # revealed: <class 'range'>
```
Although `issubclass()` usually only works if all elements in the `UnionType` are class objects, at
runtime a special exception is made for `None` so that `issubclass(x, int | None)` can work:
```py
def _(x: type):
    if issubclass(x, int | str | None):
        reveal_type(x) # revealed: type[int] | type[str] | <class 'NoneType'>
    else:
        reveal_type(x) # revealed: type & ~type[int] & ~type[str] & ~<class 'NoneType'>
```
## `classinfo` is an invalid PEP-604 union of types
Except for the `None` special case mentioned above, narrowing can only take place if all elements in
the PEP-604 union are class literals. If any elements are generic aliases or other types, the
`issubclass()` call may fail at runtime, so no narrowing can take place:
<!-- snapshot-diagnostics -->
```toml
[environment]
python-version = "3.10"
```
```py
def _(x: type[int | list | bytes]):
    # error: [invalid-argument-type]
    if issubclass(x, int | list[int]):
        reveal_type(x) # revealed: type[int] | type[list[Unknown]] | type[bytes]
    else:
        reveal_type(x) # revealed: type[int] | type[list[Unknown]] | type[bytes]
```
## PEP-604 unions on Python \<3.10
PEP-604 unions were added in Python 3.10, so attempting to use them on Python 3.9 does not lead to
any type narrowing.
```toml
[environment]
python-version = "3.9"
```
```py
def _(x: type[int | str | bytes]):
    # error: [unsupported-operator]
    if issubclass(x, int | str):
        reveal_type(x) # revealed: (type[int] & Unknown) | (type[str] & Unknown) | (type[bytes] & Unknown)
    else:
        reveal_type(x) # revealed: (type[int] & Unknown) | (type[str] & Unknown) | (type[bytes] & Unknown)
```
## Special cases
### Emit a diagnostic if the first argument is of wrong type


@@ -49,6 +49,40 @@ c.my_property = 2
c.my_property = "a"
```
## Properties returning `Self`
A property that returns `Self` refers to an instance of the class:
```py
from typing_extensions import Self

class Path:
    @property
    def parent(self) -> Self:
        raise NotImplementedError

reveal_type(Path().parent) # revealed: Path
```
This also works when a setter is defined:
```py
class Node:
    @property
    def parent(self) -> Self:
        raise NotImplementedError

    @parent.setter
    def parent(self, value: Self) -> None:
        pass

root = Node()
child = Node()
child.parent = root
reveal_type(child.parent) # revealed: Node
```
## `property.getter`
`property.getter` can be used to overwrite the getter method of a property. This does not overwrite


@@ -0,0 +1,88 @@
---
source: crates/ty_test/src/lib.rs
expression: snapshot
---
---
mdtest name: isinstance.md - Narrowing for `isinstance` checks - `classinfo` is an invalid PEP-604 union of types
mdtest path: crates/ty_python_semantic/resources/mdtest/narrow/isinstance.md
---
# Python source files
## mdtest_snippet.py
```
1 | from typing import Any, Literal, NamedTuple
2 |
3 | def _(x: int | list[int] | bytes):
4 | # error: [invalid-argument-type]
5 | if isinstance(x, list[int] | int):
6 | reveal_type(x) # revealed: int | list[int] | bytes
7 | # error: [invalid-argument-type]
8 | elif isinstance(x, Literal[42] | list[int] | bytes):
9 | reveal_type(x) # revealed: int | list[int] | bytes
10 | # error: [invalid-argument-type]
11 | elif isinstance(x, Any | NamedTuple | list[int]):
12 | reveal_type(x) # revealed: int | list[int] | bytes
13 | else:
14 | reveal_type(x) # revealed: int | list[int] | bytes
```
# Diagnostics
```
error[invalid-argument-type]: Invalid second argument to `isinstance`
--> src/mdtest_snippet.py:5:8
|
3 | def _(x: int | list[int] | bytes):
4 | # error: [invalid-argument-type]
5 | if isinstance(x, list[int] | int):
| ^^^^^^^^^^^^^^---------------^
| |
| This `UnionType` instance contains non-class elements
6 | reveal_type(x) # revealed: int | list[int] | bytes
7 | # error: [invalid-argument-type]
|
info: A `UnionType` instance can only be used as the second argument to `isinstance` if all elements are class objects
info: Element `<class 'list[int]'>` in the union is not a class object
info: rule `invalid-argument-type` is enabled by default
```
```
error[invalid-argument-type]: Invalid second argument to `isinstance`
--> src/mdtest_snippet.py:8:10
|
6 | reveal_type(x) # revealed: int | list[int] | bytes
7 | # error: [invalid-argument-type]
8 | elif isinstance(x, Literal[42] | list[int] | bytes):
| ^^^^^^^^^^^^^^-------------------------------^
| |
| This `UnionType` instance contains non-class elements
9 | reveal_type(x) # revealed: int | list[int] | bytes
10 | # error: [invalid-argument-type]
|
info: A `UnionType` instance can only be used as the second argument to `isinstance` if all elements are class objects
info: Elements `<typing.Literal special form>` and `<class 'list[int]'>` in the union are not class objects
info: rule `invalid-argument-type` is enabled by default
```
```
error[invalid-argument-type]: Invalid second argument to `isinstance`
--> src/mdtest_snippet.py:11:10
|
9 | reveal_type(x) # revealed: int | list[int] | bytes
10 | # error: [invalid-argument-type]
11 | elif isinstance(x, Any | NamedTuple | list[int]):
| ^^^^^^^^^^^^^^----------------------------^
| |
| This `UnionType` instance contains non-class elements
12 | reveal_type(x) # revealed: int | list[int] | bytes
13 | else:
|
info: A `UnionType` instance can only be used as the second argument to `isinstance` if all elements are class objects
info: Element `typing.Any` in the union, and 2 more elements, are not class objects
info: rule `invalid-argument-type` is enabled by default
```


@@ -0,0 +1,42 @@
---
source: crates/ty_test/src/lib.rs
expression: snapshot
---
---
mdtest name: issubclass.md - Narrowing for `issubclass` checks - `classinfo` is an invalid PEP-604 union of types
mdtest path: crates/ty_python_semantic/resources/mdtest/narrow/issubclass.md
---
# Python source files
## mdtest_snippet.py
```
1 | def _(x: type[int | list | bytes]):
2 | # error: [invalid-argument-type]
3 | if issubclass(x, int | list[int]):
4 | reveal_type(x) # revealed: type[int] | type[list[Unknown]] | type[bytes]
5 | else:
6 | reveal_type(x) # revealed: type[int] | type[list[Unknown]] | type[bytes]
```
# Diagnostics
```
error[invalid-argument-type]: Invalid second argument to `issubclass`
--> src/mdtest_snippet.py:3:8
|
1 | def _(x: type[int | list | bytes]):
2 | # error: [invalid-argument-type]
3 | if issubclass(x, int | list[int]):
| ^^^^^^^^^^^^^^---------------^
| |
| This `UnionType` instance contains non-class elements
4 | reveal_type(x) # revealed: type[int] | type[list[Unknown]] | type[bytes]
5 | else:
|
info: A `UnionType` instance can only be used as the second argument to `issubclass` if all elements are class objects
info: Element `<class 'list[int]'>` in the union is not a class object
info: rule `invalid-argument-type` is enabled by default
```


@@ -0,0 +1,58 @@
---
source: crates/ty_test/src/lib.rs
expression: snapshot
---
---
mdtest name: new_types.md - NewType - The base of a `NewType` can't be a protocol class or a `TypedDict`
mdtest path: crates/ty_python_semantic/resources/mdtest/annotations/new_types.md
---
# Python source files
## mdtest_snippet.py
```
1 | from typing import NewType, Protocol, TypedDict
2 |
3 | class Id(Protocol):
4 | code: int
5 |
6 | UserId = NewType("UserId", Id) # error: [invalid-newtype]
7 |
8 | class Foo(TypedDict):
9 | a: int
10 |
11 | Bar = NewType("Bar", Foo) # error: [invalid-newtype]
```
# Diagnostics
```
error[invalid-newtype]: invalid base for `typing.NewType`
--> src/mdtest_snippet.py:6:28
|
4 | code: int
5 |
6 | UserId = NewType("UserId", Id) # error: [invalid-newtype]
| ^^ type `Id`
7 |
8 | class Foo(TypedDict):
|
info: The base of a `NewType` is not allowed to be a protocol class.
info: rule `invalid-newtype` is enabled by default
```
```
error[invalid-newtype]: invalid base for `typing.NewType`
--> src/mdtest_snippet.py:11:22
|
9 | a: int
10 |
11 | Bar = NewType("Bar", Foo) # error: [invalid-newtype]
| ^^^ type `Foo`
|
info: The base of a `NewType` is not allowed to be a `TypedDict`.
info: rule `invalid-newtype` is enabled by default
```


@@ -0,0 +1,37 @@
---
source: crates/ty_test/src/lib.rs
expression: snapshot
---
---
mdtest name: new_types.md - NewType - Trying to subclass a `NewType` produces an error matching CPython
mdtest path: crates/ty_python_semantic/resources/mdtest/annotations/new_types.md
---
# Python source files
## mdtest_snippet.py
```
1 | from typing import NewType
2 |
3 | X = NewType("X", int)
4 |
5 | class Foo(X): ... # error: [invalid-base]
```
# Diagnostics
```
error[invalid-base]: Cannot subclass an instance of NewType
--> src/mdtest_snippet.py:5:11
|
3 | X = NewType("X", int)
4 |
5 | class Foo(X): ... # error: [invalid-base]
| ^
|
info: Perhaps you were looking for: `Foo = NewType('Foo', X)`
info: Definition of class `Foo` will raise `TypeError` at runtime
info: rule `invalid-base` is enabled by default
```


@@ -46,7 +46,7 @@ mdtest path: crates/ty_python_semantic/resources/mdtest/class/super.md
32 | reveal_type(super(C, C()).aa) # revealed: int
33 | reveal_type(super(C, C()).bb) # revealed: int
34 | import types
35 | from typing_extensions import Callable, TypeIs, Literal, TypedDict
35 | from typing_extensions import Callable, TypeIs, Literal, NewType, TypedDict
36 |
37 | def f(): ...
38 |
@@ -61,59 +61,63 @@ mdtest path: crates/ty_python_semantic/resources/mdtest/class/super.md
47 | x: int
48 | y: bytes
49 |
50 | # revealed: <super: <class 'object'>, FunctionType>
51 | reveal_type(super(object, f))
52 | # revealed: <super: <class 'object'>, WrapperDescriptorType>
53 | reveal_type(super(object, types.FunctionType.__get__))
54 | # revealed: <super: <class 'object'>, GenericAlias>
55 | reveal_type(super(object, Foo[int]))
56 | # revealed: <super: <class 'object'>, _SpecialForm>
57 | reveal_type(super(object, Literal))
58 | # revealed: <super: <class 'object'>, TypeAliasType>
59 | reveal_type(super(object, Alias))
60 | # revealed: <super: <class 'object'>, MethodType>
61 | reveal_type(super(object, Foo().method))
62 | # revealed: <super: <class 'object'>, property>
63 | reveal_type(super(object, Foo.some_property))
64 |
65 | def g(x: object) -> TypeIs[list[object]]:
66 | return isinstance(x, list)
67 |
68 | def _(x: object, y: SomeTypedDict, z: Callable[[int, str], bool]):
69 | if hasattr(x, "bar"):
70 | # revealed: <Protocol with members 'bar'>
71 | reveal_type(x)
72 | # error: [invalid-super-argument]
73 | # revealed: Unknown
74 | reveal_type(super(object, x))
75 |
76 | # error: [invalid-super-argument]
77 | # revealed: Unknown
78 | reveal_type(super(object, z))
50 | N = NewType("N", int)
51 |
52 | # revealed: <super: <class 'object'>, FunctionType>
53 | reveal_type(super(object, f))
54 | # revealed: <super: <class 'object'>, WrapperDescriptorType>
55 | reveal_type(super(object, types.FunctionType.__get__))
56 | # revealed: <super: <class 'object'>, GenericAlias>
57 | reveal_type(super(object, Foo[int]))
58 | # revealed: <super: <class 'object'>, _SpecialForm>
59 | reveal_type(super(object, Literal))
60 | # revealed: <super: <class 'object'>, TypeAliasType>
61 | reveal_type(super(object, Alias))
62 | # revealed: <super: <class 'object'>, MethodType>
63 | reveal_type(super(object, Foo().method))
64 | # revealed: <super: <class 'object'>, property>
65 | reveal_type(super(object, Foo.some_property))
66 | # revealed: <super: <class 'object'>, int>
67 | reveal_type(super(object, N(42)))
68 |
69 | def g(x: object) -> TypeIs[list[object]]:
70 | return isinstance(x, list)
71 |
72 | def _(x: object, y: SomeTypedDict, z: Callable[[int, str], bool]):
73 | if hasattr(x, "bar"):
74 | # revealed: <Protocol with members 'bar'>
75 | reveal_type(x)
76 | # error: [invalid-super-argument]
77 | # revealed: Unknown
78 | reveal_type(super(object, x))
79 |
80 | is_list = g(x)
81 | # revealed: TypeIs[list[object] @ x]
82 | reveal_type(is_list)
83 | # revealed: <super: <class 'object'>, bool>
84 | reveal_type(super(object, is_list))
85 |
86 | # revealed: <super: <class 'object'>, dict[Literal["x", "y"], int | bytes]>
87 | reveal_type(super(object, y))
88 |
89 | # The first argument to `super()` must be an actual class object;
90 | # instances of `GenericAlias` are not accepted at runtime:
91 | #
92 | # error: [invalid-super-argument]
93 | # revealed: Unknown
94 | reveal_type(super(list[int], []))
95 | class Super:
96 | def method(self) -> int:
97 | return 42
98 |
99 | class Sub(Super):
100 | def method(self: Sub) -> int:
101 | # revealed: <super: <class 'Sub'>, Sub>
102 | return reveal_type(super(self.__class__, self)).method()
80 | # error: [invalid-super-argument]
81 | # revealed: Unknown
82 | reveal_type(super(object, z))
83 |
84 | is_list = g(x)
85 | # revealed: TypeIs[list[object] @ x]
86 | reveal_type(is_list)
87 | # revealed: <super: <class 'object'>, bool>
88 | reveal_type(super(object, is_list))
89 |
90 | # revealed: <super: <class 'object'>, dict[Literal["x", "y"], int | bytes]>
91 | reveal_type(super(object, y))
92 |
93 | # The first argument to `super()` must be an actual class object;
94 | # instances of `GenericAlias` are not accepted at runtime:
95 | #
96 | # error: [invalid-super-argument]
97 | # revealed: Unknown
98 | reveal_type(super(list[int], []))
99 | class Super:
100 | def method(self) -> int:
101 | return 42
102 |
103 | class Sub(Super):
104 | def method(self: Sub) -> int:
105 | # revealed: <super: <class 'Sub'>, Sub>
106 | return reveal_type(super(self.__class__, self)).method()
```
# Diagnostics
@@ -206,14 +210,14 @@ info: rule `unresolved-attribute` is enabled by default
```
error[invalid-super-argument]: `<Protocol with members 'bar'>` is an abstract/structural type in `super(<class 'object'>, <Protocol with members 'bar'>)` call
--> src/mdtest_snippet.py:74:21
--> src/mdtest_snippet.py:78:21
|
72 | # error: [invalid-super-argument]
73 | # revealed: Unknown
74 | reveal_type(super(object, x))
76 | # error: [invalid-super-argument]
77 | # revealed: Unknown
78 | reveal_type(super(object, x))
| ^^^^^^^^^^^^^^^^
75 |
76 | # error: [invalid-super-argument]
79 |
80 | # error: [invalid-super-argument]
|
info: rule `invalid-super-argument` is enabled by default
@@ -221,14 +225,14 @@ info: rule `invalid-super-argument` is enabled by default
```
error[invalid-super-argument]: `(int, str, /) -> bool` is an abstract/structural type in `super(<class 'object'>, (int, str, /) -> bool)` call
--> src/mdtest_snippet.py:78:17
--> src/mdtest_snippet.py:82:17
|
76 | # error: [invalid-super-argument]
77 | # revealed: Unknown
78 | reveal_type(super(object, z))
80 | # error: [invalid-super-argument]
81 | # revealed: Unknown
82 | reveal_type(super(object, z))
| ^^^^^^^^^^^^^^^^
79 |
80 | is_list = g(x)
83 |
84 | is_list = g(x)
|
info: rule `invalid-super-argument` is enabled by default
@@ -236,15 +240,15 @@ info: rule `invalid-super-argument` is enabled by default
```
error[invalid-super-argument]: `types.GenericAlias` instance `list[int]` is not a valid class
--> src/mdtest_snippet.py:94:13
|
92 | # error: [invalid-super-argument]
93 | # revealed: Unknown
94 | reveal_type(super(list[int], []))
| ^^^^^^^^^^^^^^^^^^^^
95 | class Super:
96 | def method(self) -> int:
|
--> src/mdtest_snippet.py:98:13
|
96 | # error: [invalid-super-argument]
97 | # revealed: Unknown
98 | reveal_type(super(list[int], []))
| ^^^^^^^^^^^^^^^^^^^^
99 | class Super:
100 | def method(self) -> int:
|
info: rule `invalid-super-argument` is enabled by default
```


@@ -607,23 +607,33 @@ module:
`module2.py`:
```py
import importlib
import importlib.abc
import imported
import imported.abc
```
`imported/__init__.pyi`:
```pyi
```
`imported/abc.pyi`:
```pyi
```
`main2.py`:
```py
import importlib
from module2 import importlib as other_importlib
import imported
from module2 import imported as other_imported
from ty_extensions import TypeOf, static_assert, is_equivalent_to
# error: [unresolved-attribute] "Module `importlib` has no member `abc`"
reveal_type(importlib.abc) # revealed: Unknown
# error: [unresolved-attribute] "Module `imported` has no member `abc`"
reveal_type(imported.abc) # revealed: Unknown
reveal_type(other_importlib.abc) # revealed: <module 'importlib.abc'>
reveal_type(other_imported.abc) # revealed: <module 'imported.abc'>
static_assert(not is_equivalent_to(TypeOf[importlib], TypeOf[other_importlib]))
static_assert(not is_equivalent_to(TypeOf[imported], TypeOf[other_imported]))
```
[materializations]: https://typing.python.org/en/latest/spec/glossary.html#term-materialize


@@ -141,6 +141,97 @@ def bounded[T: Base]():
static_assert(not constraints.satisfied_by_all_typevars())
```
If the upper bound is a gradual type, we are free to choose any materialization of the upper bound
that makes the test succeed. In non-inferable positions, it is most helpful to choose the bottom
materialization as the upper bound. That is the most restrictive possible choice, which minimizes
the number of valid specializations that must satisfy the constraint set. In inferable positions,
the opposite is true: it is most helpful to choose the top materialization. That is the most
permissive possible choice, which maximizes the number of valid specializations that might satisfy
the constraint set.
```py
from typing import Any
def bounded_by_gradual[T: Any]():
static_assert(ConstraintSet.always().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(ConstraintSet.always().satisfied_by_all_typevars())
static_assert(not ConstraintSet.never().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(not ConstraintSet.never().satisfied_by_all_typevars())
# If we choose Base as the materialization for the upper bound, then (T = Base) is a valid
# specialization, which satisfies (T ≤ Base).
static_assert(ConstraintSet.range(Never, T, Base).satisfied_by_all_typevars(inferable=tuple[T]))
# We are free to choose any materialization of the upper bound, and only have to show that the
# constraint set holds for that one materialization. Having chosen one materialization, we then
# have to show that the constraint set holds for all valid specializations of that
# materialization. If we choose Never as the materialization, then all valid specializations
# must satisfy (T ≤ Never). That means there is only one valid specialization, (T = Never),
# which satisfies (T ≤ Base).
static_assert(ConstraintSet.range(Never, T, Base).satisfied_by_all_typevars())
# If we choose Unrelated as the materialization, then (T = Unrelated) is a valid specialization,
# which satisfies (T ≤ Unrelated).
constraints = ConstraintSet.range(Never, T, Unrelated)
static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose Never as the materialization, then (T = Never) is the only valid specialization,
# which satisfies (T ≤ Unrelated).
static_assert(constraints.satisfied_by_all_typevars())
# If we choose Unrelated as the materialization, then (T = Unrelated) is a valid specialization,
# which satisfies (T ≤ Unrelated ∧ T ≠ Never).
constraints = constraints & ~ConstraintSet.range(Never, T, Never)
static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
# There is no upper bound that we can choose to satisfy this constraint set in non-inferable
# position. (T = Never) will be a valid assignment no matter what, and that does not satisfy
# (T ≤ Unrelated ∧ T ≠ Never).
static_assert(not constraints.satisfied_by_all_typevars())
```
When the upper bound is a more complex gradual type, we are still free to choose any materialization
that causes the check to succeed, and we will still choose the bottom materialization in
non-inferable position, and the top materialization in inferable position. The variance of the
typevar does not affect whether there is a materialization we can choose. Below, we test the most
restrictive variance (i.e., invariance), but we get the same results for other variances as well.
```py
def bounded_by_gradual[T: list[Any]]():
static_assert(ConstraintSet.always().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(ConstraintSet.always().satisfied_by_all_typevars())
static_assert(not ConstraintSet.never().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(not ConstraintSet.never().satisfied_by_all_typevars())
# If we choose list[Base] as the materialization of the upper bound, then (T = list[Base]) is a
# valid specialization, which satisfies (T ≤ list[Base]).
static_assert(ConstraintSet.range(Never, T, list[Base]).satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose Base as the materialization, then all valid specializations must satisfy
# (T ≤ list[Base]).
# We are free to choose any materialization of the upper bound, and only have to show that the
# constraint set holds for that one materialization. Having chosen one materialization, we then
# have to show that the constraint set holds for all valid specializations of that
# materialization. If we choose list[Base] as the materialization, then all valid specializations
# must satisfy (T ≤ list[Base]), which is exactly the constraint set that we need to satisfy.
static_assert(ConstraintSet.range(Never, T, list[Base]).satisfied_by_all_typevars())
# If we choose Unrelated as the materialization, then (T = list[Unrelated]) is a valid
# specialization, which satisfies (T ≤ list[Unrelated]).
constraints = ConstraintSet.range(Never, T, list[Unrelated])
static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose Unrelated as the materialization, then all valid specializations must satisfy
# (T ≤ list[Unrelated]).
static_assert(constraints.satisfied_by_all_typevars())
# If we choose Unrelated as the materialization, then (T = list[Unrelated]) is a valid
# specialization, which satisfies (T ≤ list[Unrelated] ∧ T ≠ Never).
constraints = constraints & ~ConstraintSet.range(Never, T, Never)
static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
# There is no upper bound that we can choose to satisfy this constraint set in non-inferable
# position. (T = Never) will be a valid assignment no matter what, and that does not satisfy
# (T ≤ list[Unrelated] ∧ T ≠ Never).
static_assert(not constraints.satisfied_by_all_typevars())
```
## Constrained typevar
If a typevar has constraints, then it must specialize to one of those specific types. (Not to a
@@ -218,3 +309,174 @@ def constrained[T: (Base, Unrelated)]():
# (T = Base) is a valid specialization, which does not satisfy (T = Sub ∨ T = Unrelated).
static_assert(not constraints.satisfied_by_all_typevars())
```
If any of the constraints is a gradual type, we are free to choose any materialization of that
constraint that makes the test succeed. In non-inferable positions, it is most helpful to choose the
bottom materialization as the constraint. That is the most restrictive possible choice, which
minimizes the number of valid specializations that must satisfy the constraint set. In inferable
positions, the opposite is true: it is most helpful to choose the top materialization. That is the
most permissive possible choice, which maximizes the number of valid specializations that might
satisfy the constraint set.
```py
from typing import Any
def constrained_by_gradual[T: (Base, Any)]():
static_assert(ConstraintSet.always().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(ConstraintSet.always().satisfied_by_all_typevars())
static_assert(not ConstraintSet.never().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(not ConstraintSet.never().satisfied_by_all_typevars())
# If we choose Unrelated as the materialization of the gradual constraint, then (T = Unrelated)
# is a valid specialization, which satisfies (T ≤ Unrelated).
static_assert(ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars(inferable=tuple[T]))
# No matter which materialization we choose, (T = Base) is a valid specialization, which does
# not satisfy (T ≤ Unrelated).
static_assert(not ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars())
# If we choose Super as the materialization, then (T = Super) is a valid specialization, which
# satisfies (T ≤ Super).
static_assert(ConstraintSet.range(Never, T, Super).satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose Never as the materialization, then (T = Base) and (T = Never) are the only valid
# specializations, both of which satisfy (T ≤ Super).
static_assert(ConstraintSet.range(Never, T, Super).satisfied_by_all_typevars())
# If we choose Base as the materialization, then (T = Base) is a valid specialization, which
# satisfies (T ≤ Base).
static_assert(ConstraintSet.range(Never, T, Base).satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose Never as the materialization, then (T = Base) and (T = Never) are the only valid
# specializations, both of which satisfy (T ≤ Base).
static_assert(ConstraintSet.range(Never, T, Base).satisfied_by_all_typevars())
def constrained_by_two_gradual[T: (Any, Any)]():
static_assert(ConstraintSet.always().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(ConstraintSet.always().satisfied_by_all_typevars())
static_assert(not ConstraintSet.never().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(not ConstraintSet.never().satisfied_by_all_typevars())
# If we choose Unrelated as the materialization of either constraint, then (T = Unrelated) is a
# valid specialization, which satisfies (T ≤ Unrelated).
static_assert(ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose Unrelated as the materialization of both constraints, then (T = Unrelated) is the
# only valid specialization, which satisfies (T ≤ Unrelated).
static_assert(ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars())
# If we choose Base as the materialization of either constraint, then (T = Base) is a valid
# specialization, which satisfies (T ≤ Base).
static_assert(ConstraintSet.range(Never, T, Base).satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose Never as the materialization of both constraints, then (T = Never) is the only
# valid specialization, which satisfies (T ≤ Base).
static_assert(ConstraintSet.range(Never, T, Base).satisfied_by_all_typevars())
```
When a constraint is a more complex gradual type, we are still free to choose any materialization
that causes the check to succeed, and we will still choose the bottom materialization in
non-inferable position, and the top materialization in inferable position. The variance of the
typevar does not affect whether there is a materialization we can choose. Below, we test the most
restrictive variance (i.e., invariance), but we get the same results for other variances as well.
```py
def constrained_by_gradual[T: (list[Base], list[Any])]():
static_assert(ConstraintSet.always().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(ConstraintSet.always().satisfied_by_all_typevars())
static_assert(not ConstraintSet.never().satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(not ConstraintSet.never().satisfied_by_all_typevars())
# No matter which materialization we choose, every valid specialization will be of the form
# (T = list[X]). Because Unrelated is final, it is disjoint from all lists. There is therefore
# no materialization or specialization that satisfies (T ≤ Unrelated).
static_assert(not ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars(inferable=tuple[T]))
static_assert(not ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars())
# If we choose list[Super] as the materialization, then (T = list[Super]) is a valid
# specialization, which satisfies (T ≤ list[Super]).
static_assert(ConstraintSet.range(Never, T, list[Super]).satisfied_by_all_typevars(inferable=tuple[T]))
# No matter which materialization we choose, (T = list[Base]) is a valid specialization, which
# does not satisfy (T ≤ list[Super]).
static_assert(not ConstraintSet.range(Never, T, list[Super]).satisfied_by_all_typevars())
# If we choose list[Base] as the materialization, then (T = list[Base]) is a valid
# specialization, which satisfies (T ≤ list[Base]).
static_assert(ConstraintSet.range(Never, T, list[Base]).satisfied_by_all_typevars(inferable=tuple[T]))
# If we choose list[Base] as the materialization, then all valid specializations must satisfy
# (T ≤ list[Base]).
static_assert(ConstraintSet.range(Never, T, list[Base]).satisfied_by_all_typevars())
# If we choose list[Sub] as the materialization, then (T = list[Sub]) is a valid specialization,
# which satisfies (T ≤ list[Sub]).
static_assert(ConstraintSet.range(Never, T, list[Sub]).satisfied_by_all_typevars(inferable=tuple[T]))
# No matter which materialization we choose, (T = list[Base]) is a valid specialization, which
# does not satisfy (T ≤ list[Sub]).
static_assert(not ConstraintSet.range(Never, T, list[Sub]).satisfied_by_all_typevars())
# If we choose list[Unrelated] as the materialization, then (T = list[Unrelated]) is a valid
# specialization, which satisfies (T ≤ list[Unrelated]).
constraints = ConstraintSet.range(Never, T, list[Unrelated])
static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
# No matter which materialization we choose, (T = list[Base]) is a valid specialization, which
# does not satisfy (T ≤ list[Unrelated]).
static_assert(not constraints.satisfied_by_all_typevars())
# If we choose list[Unrelated] as the materialization, then (T = list[Unrelated]) is a valid
# specialization, which satisfies (T ≤ list[Unrelated] ∧ T ≠ Never).
constraints = constraints & ~ConstraintSet.range(Never, T, Never)
static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
# There is no materialization that we can choose to satisfy this constraint set in non-inferable
# position. (T = Never) will be a valid assignment no matter what, and that does not satisfy
# (T ≤ list[Unrelated] ∧ T ≠ Never).
static_assert(not constraints.satisfied_by_all_typevars())

def constrained_by_two_gradual[T: (list[Any], list[Any])]():
    static_assert(ConstraintSet.always().satisfied_by_all_typevars(inferable=tuple[T]))
    static_assert(ConstraintSet.always().satisfied_by_all_typevars())
    static_assert(not ConstraintSet.never().satisfied_by_all_typevars(inferable=tuple[T]))
    static_assert(not ConstraintSet.never().satisfied_by_all_typevars())

    # No matter which materialization we choose, every valid specialization will be of the form
    # (T = list[X]). Because Unrelated is final, it is disjoint from all lists. There is therefore
    # no materialization or specialization that satisfies (T ≤ Unrelated).
    static_assert(not ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars(inferable=tuple[T]))
    static_assert(not ConstraintSet.range(Never, T, Unrelated).satisfied_by_all_typevars())

    # If we choose list[Super] as the materialization, then (T = list[Super]) is a valid
    # specialization, which satisfies (T ≤ list[Super]).
    static_assert(ConstraintSet.range(Never, T, list[Super]).satisfied_by_all_typevars(inferable=tuple[T]))
    # No matter which materialization we choose, (T = list[Base]) is a valid specialization, which
    # does not satisfy (T ≤ list[Super]).
    static_assert(not ConstraintSet.range(Never, T, list[Super]).satisfied_by_all_typevars())

    # If we choose list[Base] as the materialization, then (T = list[Base]) is a valid
    # specialization, which satisfies (T ≤ list[Base]).
    static_assert(ConstraintSet.range(Never, T, list[Base]).satisfied_by_all_typevars(inferable=tuple[T]))
    # If we choose Base as the materialization, then all valid specializations must satisfy
    # (T ≤ list[Base]).
    static_assert(ConstraintSet.range(Never, T, list[Base]).satisfied_by_all_typevars())

    # If we choose list[Sub] as the materialization, then (T = list[Sub]) is a valid specialization,
    # which satisfies (T ≤ list[Sub]).
    static_assert(ConstraintSet.range(Never, T, list[Sub]).satisfied_by_all_typevars(inferable=tuple[T]))
    # No matter which materialization we choose, (T = list[Base]) is a valid specialization, which
    # does not satisfy (T ≤ list[Sub]).
    static_assert(not ConstraintSet.range(Never, T, list[Sub]).satisfied_by_all_typevars())

    # If we choose list[Unrelated] as the materialization, then (T = list[Unrelated]) is a valid
    # specialization, which satisfies (T ≤ list[Unrelated]).
    constraints = ConstraintSet.range(Never, T, list[Unrelated])
    static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
    # No matter which materialization we choose, (T = list[Base]) is a valid specialization, which
    # does not satisfy (T ≤ list[Unrelated]).
    static_assert(not constraints.satisfied_by_all_typevars())

    # If we choose list[Unrelated] as the materialization, then (T = list[Unrelated]) is a valid
    # specialization, which satisfies (T ≤ list[Unrelated] ∧ T ≠ Never).
    constraints = constraints & ~ConstraintSet.range(Never, T, Never)
    static_assert(constraints.satisfied_by_all_typevars(inferable=tuple[T]))
    # There is no materialization that we can choose to satisfy this constraint set in non-inferable
    # position. (T = Never) will be a valid assignment no matter what, and that does not satisfy
    # (T ≤ list[Unrelated] ∧ T ≠ Never).
    static_assert(not constraints.satisfied_by_all_typevars())
```
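
To restate the distinction the assertions above exercise: in inferable position the checker may pick the materialization of the gradual constraint, while in non-inferable position the constraint set must hold for every valid specialization, whatever the materialization. A condensed sketch of that contrast (the function name `recap` is only illustrative; it reuses the `ConstraintSet`/`static_assert` helpers and the `Base`/`Unrelated` classes from this test file):

```py
def recap[T: (list[Any], list[Any])]():
    bound = ConstraintSet.range(Never, T, list[Unrelated])
    # Inferable: we may materialize `list[Any]` as `list[Unrelated]`, so
    # (T ≤ list[Unrelated]) can be satisfied.
    static_assert(bound.satisfied_by_all_typevars(inferable=tuple[T]))
    # Non-inferable: (T = list[Base]) remains a valid specialization and does not
    # satisfy (T ≤ list[Unrelated]), so the set is not satisfied.
    static_assert(not bound.satisfied_by_all_typevars())
```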


@@ -295,6 +295,7 @@ impl ModuleName {
Self::from_identifier_parts(db, importing_file, module.as_deref(), *level)
}
/// Computes the absolute module name from the LHS components of `from LHS import RHS`
pub(crate) fn from_identifier_parts(
db: &dyn Db,
importing_file: File,
@@ -309,6 +310,16 @@ impl ModuleName {
.ok_or(ModuleNameResolutionError::InvalidSyntax)
}
}
/// Computes the absolute module name for the package this file belongs to.
///
/// i.e. this resolves `.`
pub(crate) fn package_for_file(
db: &dyn Db,
importing_file: File,
) -> Result<Self, ModuleNameResolutionError> {
Self::from_identifier_parts(db, importing_file, None, 1)
}
}
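
As a rough Python-side illustration of what these two helpers compute (hypothetical package layout, not taken from this diff):

```py
# Hypothetical file: mypackage/subpkg/__init__.py
#
# `package_for_file` resolves `.` for this file: `mypackage.subpkg`.
from . import helper          # LHS `.`       -> mypackage.subpkg
# `from_identifier_parts` combines the relative level (the leading dots) with
# the written module name to produce an absolute module name.
from .utils import thing      # LHS `.utils`  -> mypackage.subpkg.utils
from ..other import name      # LHS `..other` -> mypackage.other
```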
impl Deref for ModuleName {


@@ -6,12 +6,12 @@ use ruff_db::parsed::parsed_module;
use ruff_index::{IndexSlice, IndexVec};
use ruff_python_ast::NodeIndex;
use ruff_python_ast::name::Name;
use ruff_python_parser::semantic_errors::SemanticSyntaxError;
use rustc_hash::{FxHashMap, FxHashSet};
use salsa::Update;
use salsa::plumbing::AsId;
use crate::Db;
use crate::module_name::ModuleName;
use crate::node_key::NodeKey;
use crate::semantic_index::ast_ids::AstIds;
@@ -28,7 +28,6 @@ use crate::semantic_index::scope::{
use crate::semantic_index::symbol::ScopedSymbolId;
use crate::semantic_index::use_def::{EnclosingSnapshotKey, ScopedEnclosingSnapshotId, UseDefMap};
use crate::semantic_model::HasTrackedScope;
use crate::{Db, Module, resolve_module};
pub mod ast_ids;
mod builder;
@@ -84,65 +83,6 @@ pub(crate) fn imported_modules<'db>(db: &'db dyn Db, file: File) -> Arc<FxHashSe
semantic_index(db, file).imported_modules.clone()
}
/// Returns the set of relative submodules that are explicitly imported anywhere in
/// `importing_module`.
///
/// This set only considers `from...import` statements (but it could also include `import`).
/// It also only returns a non-empty result for `__init__.pyi` files.
/// See [`ModuleLiteralType::available_submodule_attributes`] for discussion
/// of why this analysis is intentionally limited.
///
/// This function specifically implements the rule that if an `__init__.pyi` file
/// contains a `from...import` that imports a direct submodule of the package,
/// that submodule should be available as an attribute of the package.
///
/// While we endeavour to accurately model import side-effects for `.py` files, we intentionally
/// limit them for `.pyi` files to encourage more intentional API design. The standard escape
/// hatches for this are the `import x as x` idiom or listing them in `__all__`, but in practice
/// some other idioms are popular.
///
/// In particular, many packages have their `__init__` include lines like
/// `from . import subpackage`, with the intent that `mypackage.subpackage` should be
/// available for anyone who only does `import mypackage`.
#[salsa::tracked(returns(deref), heap_size=ruff_memory_usage::heap_size)]
pub(crate) fn imported_relative_submodules_of_stub_package<'db>(
db: &'db dyn Db,
importing_module: Module<'db>,
) -> Box<[ModuleName]> {
let Some(file) = importing_module.file(db) else {
return Box::default();
};
if !file.is_package_stub(db) {
return Box::default();
}
semantic_index(db, file)
.maybe_imported_modules
.iter()
.filter_map(|import| {
let mut submodule = ModuleName::from_identifier_parts(
db,
file,
import.from_module.as_deref(),
import.level,
)
.ok()?;
// We only care if this is a direct submodule of the package,
// so this part should be exactly the importing module.
let importing_module_name = importing_module.name(db);
if importing_module_name != &submodule {
return None;
}
submodule.extend(&ModuleName::new(import.submodule.as_str())?);
// Throw out the result if this doesn't resolve to an actual module.
// This is quite expensive, but we've already jumped through a lot of hoops to
// get here, so it won't happen too often.
resolve_module(db, &submodule)?;
// Return only the relative part
submodule.relative_to(importing_module_name)
})
.collect()
}
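
For reference, the idiom the removed helper's doc comment describes looks roughly like this (hypothetical package names, for illustration only):

```py
# mypackage/__init__.pyi  (stub for the package)
from . import subpackage      # direct submodule, not renamed

# Downstream code that relies on the rule above:
import mypackage
mypackage.subpackage          # usable as an attribute of the package
```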
/// Returns the use-def map for a specific `scope`.
///
/// Using [`use_def_map`] over [`semantic_index`] has the advantage that
@@ -284,9 +224,6 @@ pub(crate) struct SemanticIndex<'db> {
/// The set of modules that are imported anywhere within this file.
imported_modules: Arc<FxHashSet<ModuleName>>,
/// `from...import` statements within this file that might import a submodule.
maybe_imported_modules: FxHashSet<MaybeModuleImport>,
/// Flags about the global scope (code usage impacting inference)
has_future_annotations: bool,
@@ -300,16 +237,6 @@ pub(crate) struct SemanticIndex<'db> {
generator_functions: FxHashSet<FileScopeId>,
}
/// A `from...import` that may be an import of a module
///
/// Later analysis will determine if it is.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, get_size2::GetSize)]
pub(crate) struct MaybeModuleImport {
level: u32,
from_module: Option<Name>,
submodule: Name,
}
impl<'db> SemanticIndex<'db> {
/// Returns the place table for a specific scope.
///


@@ -26,8 +26,8 @@ use crate::semantic_index::definition::{
AnnotatedAssignmentDefinitionNodeRef, AssignmentDefinitionNodeRef,
ComprehensionDefinitionNodeRef, Definition, DefinitionCategory, DefinitionNodeKey,
DefinitionNodeRef, Definitions, ExceptHandlerDefinitionNodeRef, ForStmtDefinitionNodeRef,
ImportDefinitionNodeRef, ImportFromDefinitionNodeRef, MatchPatternDefinitionNodeRef,
StarImportDefinitionNodeRef, WithItemDefinitionNodeRef,
ImportDefinitionNodeRef, ImportFromDefinitionNodeRef, ImportFromSubmoduleDefinitionNodeRef,
MatchPatternDefinitionNodeRef, StarImportDefinitionNodeRef, WithItemDefinitionNodeRef,
};
use crate::semantic_index::expression::{Expression, ExpressionKind};
use crate::semantic_index::place::{PlaceExpr, PlaceTableBuilder, ScopedPlaceId};
@@ -47,9 +47,7 @@ use crate::semantic_index::symbol::{ScopedSymbolId, Symbol};
use crate::semantic_index::use_def::{
EnclosingSnapshotKey, FlowSnapshot, ScopedEnclosingSnapshotId, UseDefMapBuilder,
};
use crate::semantic_index::{
ExpressionsScopeMap, MaybeModuleImport, SemanticIndex, VisibleAncestorsIter,
};
use crate::semantic_index::{ExpressionsScopeMap, SemanticIndex, VisibleAncestorsIter};
use crate::semantic_model::HasTrackedScope;
use crate::unpack::{EvaluationMode, Unpack, UnpackKind, UnpackPosition, UnpackValue};
use crate::{Db, Program};
@@ -113,7 +111,7 @@ pub(super) struct SemanticIndexBuilder<'db, 'ast> {
definitions_by_node: FxHashMap<DefinitionNodeKey, Definitions<'db>>,
expressions_by_node: FxHashMap<ExpressionNodeKey, Expression<'db>>,
imported_modules: FxHashSet<ModuleName>,
maybe_imported_modules: FxHashSet<MaybeModuleImport>,
seen_submodule_imports: FxHashSet<String>,
/// Hashset of all [`FileScopeId`]s that correspond to [generator functions].
///
/// [generator functions]: https://docs.python.org/3/glossary.html#term-generator
@@ -151,7 +149,7 @@ impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
definitions_by_node: FxHashMap::default(),
expressions_by_node: FxHashMap::default(),
maybe_imported_modules: FxHashSet::default(),
seen_submodule_imports: FxHashSet::default(),
imported_modules: FxHashSet::default(),
generator_functions: FxHashSet::default(),
@@ -1266,7 +1264,6 @@ impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
self.scopes_by_node.shrink_to_fit();
self.generator_functions.shrink_to_fit();
self.enclosing_snapshots.shrink_to_fit();
self.maybe_imported_modules.shrink_to_fit();
SemanticIndex {
place_tables,
@@ -1279,7 +1276,6 @@ impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
scopes_by_node: self.scopes_by_node,
use_def_maps,
imported_modules: Arc::new(self.imported_modules),
maybe_imported_modules: self.maybe_imported_modules,
has_future_annotations: self.has_future_annotations,
enclosing_snapshots: self.enclosing_snapshots,
semantic_syntax_errors: self.semantic_syntax_errors.into_inner(),
@@ -1453,6 +1449,53 @@ impl<'ast> Visitor<'ast> for SemanticIndexBuilder<'_, 'ast> {
self.current_use_def_map_mut()
.record_node_reachability(NodeKey::from_node(node));
// If we see:
//
// * `from .x.y import z` (or `from whatever.thispackage.x.y`)
// * And we are in an `__init__.py(i)` (hereafter `thispackage`)
// * And this is the first time we've seen `from .x` in this module
// * And we're in the global scope
//
// We introduce a local definition `x = <module 'thispackage.x'>` that occurs
// before the `z = ...` declaration the import introduces. This models the fact
// that the *first* time you import 'thispackage.x', the Python runtime creates
// `x` as a variable in the global scope of `thispackage`.
//
// This is not a perfect simulation of actual runtime behaviour for *various*
// reasons, but it works well for most practical purposes. In particular, it's nice
// that `x` can be freely overwritten, and that we don't assume that an import
// in one function is visible in another function.
let mut is_self_import = false;
if self.file.is_package(self.db)
&& let Ok(module_name) = ModuleName::from_identifier_parts(
self.db,
self.file,
node.module.as_deref(),
node.level,
)
&& let Ok(thispackage) = ModuleName::package_for_file(self.db, self.file)
{
// Record whether this is equivalent to `from . import ...`
is_self_import = module_name == thispackage;
if node.module.is_some()
&& let Some(relative_submodule) = module_name.relative_to(&thispackage)
&& let Some(direct_submodule) = relative_submodule.components().next()
&& !self.seen_submodule_imports.contains(direct_submodule)
&& self.current_scope().is_global()
{
self.seen_submodule_imports
.insert(direct_submodule.to_owned());
let direct_submodule_name = Name::new(direct_submodule);
let symbol = self.add_symbol(direct_submodule_name);
self.add_definition(
symbol.into(),
ImportFromSubmoduleDefinitionNodeRef { node },
);
}
}
let mut found_star = false;
for (alias_index, alias) in node.names.iter().enumerate() {
if &alias.name == "*" {
@@ -1559,20 +1602,15 @@ impl<'ast> Visitor<'ast> for SemanticIndexBuilder<'_, 'ast> {
}
let (symbol_name, is_reexported) = if let Some(asname) = &alias.asname {
// It's re-exported if it's `from ... import x as x`
(&asname.id, asname.id == alias.name.id)
} else {
(&alias.name.id, false)
// As a non-standard rule to handle stubs in the wild, we consider
// `from . import x` and `from whatever.thispackage import x` in an
// `__init__.pyi` to re-export `x` (as long as it wasn't renamed)
(&alias.name.id, is_self_import)
};
// If there's no alias or a redundant alias, record this as a potential import of a submodule
if alias.asname.is_none() || is_reexported {
self.maybe_imported_modules.insert(MaybeModuleImport {
level: node.level,
from_module: node.module.clone().map(Into::into),
submodule: alias.name.clone().into(),
});
}
// Look for imports `from __future__ import annotations`, ignore `as ...`
// We intentionally don't enforce the rules about location of `__future__`
// imports here, we assume the user's intent was to apply the `__future__`

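Taken together, the two comments in this hunk describe behaviour along these lines (hypothetical package `thispackage`, for illustration only):

```py
# thispackage/__init__.pyi

# Binds `z`; because this is the first `from .x ...` seen at global scope in
# this `__init__`, it also binds `x` to the submodule `thispackage.x`.
from .x.y import z

# Resolves the same as `from . import util`; as a self-import in an
# `__init__.pyi` it is treated as re-exporting `util` even without an
# `as util` alias.
from thispackage import util
```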

@@ -209,6 +209,7 @@ impl<'db> DefinitionState<'db> {
pub(crate) enum DefinitionNodeRef<'ast, 'db> {
Import(ImportDefinitionNodeRef<'ast>),
ImportFrom(ImportFromDefinitionNodeRef<'ast>),
ImportFromSubmodule(ImportFromSubmoduleDefinitionNodeRef<'ast>),
ImportStar(StarImportDefinitionNodeRef<'ast>),
For(ForStmtDefinitionNodeRef<'ast, 'db>),
Function(&'ast ast::StmtFunctionDef),
@@ -290,6 +291,12 @@ impl<'ast> From<ImportFromDefinitionNodeRef<'ast>> for DefinitionNodeRef<'ast, '
}
}
impl<'ast> From<ImportFromSubmoduleDefinitionNodeRef<'ast>> for DefinitionNodeRef<'ast, '_> {
fn from(node_ref: ImportFromSubmoduleDefinitionNodeRef<'ast>) -> Self {
Self::ImportFromSubmodule(node_ref)
}
}
impl<'ast, 'db> From<ForStmtDefinitionNodeRef<'ast, 'db>> for DefinitionNodeRef<'ast, 'db> {
fn from(value: ForStmtDefinitionNodeRef<'ast, 'db>) -> Self {
Self::For(value)
@@ -357,7 +364,10 @@ pub(crate) struct ImportFromDefinitionNodeRef<'ast> {
pub(crate) alias_index: usize,
pub(crate) is_reexported: bool,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ImportFromSubmoduleDefinitionNodeRef<'ast> {
pub(crate) node: &'ast ast::StmtImportFrom,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct AssignmentDefinitionNodeRef<'ast, 'db> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'db>)>,
@@ -427,7 +437,6 @@ impl<'db> DefinitionNodeRef<'_, 'db> {
alias_index,
is_reexported,
}),
DefinitionNodeRef::ImportFrom(ImportFromDefinitionNodeRef {
node,
alias_index,
@@ -437,6 +446,11 @@ impl<'db> DefinitionNodeRef<'_, 'db> {
alias_index,
is_reexported,
}),
DefinitionNodeRef::ImportFromSubmodule(ImportFromSubmoduleDefinitionNodeRef {
node,
}) => DefinitionKind::ImportFromSubmodule(ImportFromSubmoduleDefinitionKind {
node: AstNodeRef::new(parsed, node),
}),
DefinitionNodeRef::ImportStar(star_import) => {
let StarImportDefinitionNodeRef { node, symbol_id } = star_import;
DefinitionKind::StarImport(StarImportDefinitionKind {
@@ -562,7 +576,7 @@ impl<'db> DefinitionNodeRef<'_, 'db> {
alias_index,
is_reexported: _,
}) => (&node.names[alias_index]).into(),
Self::ImportFromSubmodule(ImportFromSubmoduleDefinitionNodeRef { node }) => node.into(),
// INVARIANT: for an invalid-syntax statement such as `from foo import *, bar, *`,
// we only create a `StarImportDefinitionKind` for the *first* `*` alias in the names list.
Self::ImportStar(StarImportDefinitionNodeRef { node, symbol_id: _ }) => node
@@ -661,6 +675,7 @@ impl DefinitionCategory {
pub enum DefinitionKind<'db> {
Import(ImportDefinitionKind),
ImportFrom(ImportFromDefinitionKind),
ImportFromSubmodule(ImportFromSubmoduleDefinitionKind),
StarImport(StarImportDefinitionKind),
Function(AstNodeRef<ast::StmtFunctionDef>),
Class(AstNodeRef<ast::StmtClassDef>),
@@ -687,6 +702,7 @@ impl DefinitionKind<'_> {
match self {
DefinitionKind::Import(import) => import.is_reexported(),
DefinitionKind::ImportFrom(import) => import.is_reexported(),
DefinitionKind::ImportFromSubmodule(_) => false,
_ => true,
}
}
@@ -704,6 +720,7 @@ impl DefinitionKind<'_> {
DefinitionKind::Import(_)
| DefinitionKind::ImportFrom(_)
| DefinitionKind::StarImport(_)
| DefinitionKind::ImportFromSubmodule(_)
)
}
@@ -719,6 +736,7 @@ impl DefinitionKind<'_> {
match self {
DefinitionKind::Import(import) => import.alias(module).range(),
DefinitionKind::ImportFrom(import) => import.alias(module).range(),
DefinitionKind::ImportFromSubmodule(import) => import.import(module).range(),
DefinitionKind::StarImport(import) => import.alias(module).range(),
DefinitionKind::Function(function) => function.node(module).name.range(),
DefinitionKind::Class(class) => class.node(module).name.range(),
@@ -756,6 +774,7 @@ impl DefinitionKind<'_> {
match self {
DefinitionKind::Import(import) => import.alias(module).range(),
DefinitionKind::ImportFrom(import) => import.alias(module).range(),
DefinitionKind::ImportFromSubmodule(import) => import.import(module).range(),
DefinitionKind::StarImport(import) => import.import(module).range(),
DefinitionKind::Function(function) => function.node(module).range(),
DefinitionKind::Class(class) => class.node(module).range(),
@@ -846,6 +865,7 @@ impl DefinitionKind<'_> {
| DefinitionKind::Comprehension(_)
| DefinitionKind::WithItem(_)
| DefinitionKind::MatchPattern(_)
| DefinitionKind::ImportFromSubmodule(_)
| DefinitionKind::ExceptHandler(_) => DefinitionCategory::Binding,
}
}
@@ -991,6 +1011,16 @@ impl ImportFromDefinitionKind {
self.is_reexported
}
}
#[derive(Clone, Debug, get_size2::GetSize)]
pub struct ImportFromSubmoduleDefinitionKind {
node: AstNodeRef<ast::StmtImportFrom>,
}
impl ImportFromSubmoduleDefinitionKind {
pub fn import<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::StmtImportFrom {
self.node.node(module)
}
}
#[derive(Clone, Debug, get_size2::GetSize)]
pub struct AssignmentDefinitionKind<'db> {
@@ -1121,6 +1151,12 @@ impl From<&ast::Alias> for DefinitionNodeKey {
}
}
impl From<&ast::StmtImportFrom> for DefinitionNodeKey {
fn from(node: &ast::StmtImportFrom) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::StmtFunctionDef> for DefinitionNodeKey {
fn from(node: &ast::StmtFunctionDef) -> Self {
Self(NodeKey::from_node(node))

Some files were not shown because too many files have changed in this diff.